Learning feed-forward one-shot learners

Luca Bertinetto* (University of Oxford, luca@robots.ox.ac.uk), João F. Henriques* (University of Oxford, joao@robots.ox.ac.uk), Philip H. S. Torr (University of Oxford, philip.torr@eng.ox.ac.uk), Jack Valmadre* (University of Oxford, jvlmdr@robots.ox.ac.uk), Andrea Vedaldi (University of Oxford, vedaldi@robots.ox.ac.uk)

Abstract

One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.

1 Introduction

Deep learning methods have taken by storm areas such as computer vision, natural language processing and speech recognition. One of their key strengths is the ability to leverage large quantities of labelled data and extract meaningful and powerful representations from them. However, this capability is also one of their most significant limitations, since using large datasets to train deep neural networks is not just an option, but a necessity. It is well known, in fact, that these models are prone to overfitting. Thus, deep networks seem less useful when the goal is to learn a new concept on the fly, from a few or even a single example, as in one-shot learning. These problems are usually tackled by using generative models [18, 13] or, in a discriminative setting, using ad-hoc solutions such as exemplar support vector machines (SVMs) [14]. Perhaps the most common discriminative approach to one-shot learning is to learn off-line a deep embedding function and then to define on-line simple classification rules such as nearest neighbors in the embedding space [5, 16]. However, computing an embedding is a far cry from learning a model of the new object. In this paper, we take a very different approach and ask whether we can induce, from a single supervised example, a full, deep discriminative model to recognize other instances of the same object class. Furthermore, we do not want our solution to require a lengthy optimization process, but to be computable on-the-fly, efficiently and in one go. We formulate this problem as the one of learning a deep neural network, called a learnet, that, given a single exemplar of a new object class, predicts the parameters of a second network that can recognize other objects of the same type.

* The first three authors contributed equally, and are listed in alphabetical order.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Our model has several elements of interest. Firstly, if we consider learning to be any process that maps a set of images to the parameters of a model, then it can be seen as a "learning to learn" approach.
Clearly, learning from a single exemplar is only possible given sufficient prior knowledge of the learning domain. This prior knowledge is incorporated in the learnet in an off-line phase by solving millions of small one-shot learning tasks and back-propagating errors end-to-end. Secondly, our learnet provides a feed-forward learning algorithm that extracts from the available exemplar the final model parameters in one go. This is different from iterative approaches such as exemplar SVMs or complex inference processes in generative modeling. It also demonstrates that deep neural networks can learn at the "meta-level" of predicting filter parameters for a second network, which we consider to be an interesting result in its own right. Thirdly, our method provides a competitive, efficient, and practical way of performing one-shot learning using discriminative methods.

1.1 Related work

Our work is related to several others in the literature. However, we believe ours is the first to look at methods that can learn the parameters of complex discriminative models in one shot. One-shot learning has been widely studied in the context of generative modeling, which unlike our work is often not focused on solving discriminative tasks. One very recent example is by Rezende et al. [18], which uses a recurrent spatial attention model to generate images, and learns by optimizing a measure of reconstruction error using variational inference [9]. They demonstrate results by sampling images of novel classes from this generative model, not by solving discriminative tasks. Another notable work is by Lake et al. [13], which instead uses a probabilistic program as a generative model. This model constructs written characters as compositions of pen strokes, so although more general programs can be envisioned, they demonstrate it only on Optical Character Recognition (OCR) applications. A different approach to one-shot learning is to learn an embedding space, which is typically done with a siamese network [2]. Given an exemplar of a novel category, classification is performed in the embedding space by a simple rule such as nearest-neighbor. Training is usually performed by classifying pairs according to distance [5], or by enforcing a distance ranking with a triplet loss [16]. Our work departs from the paradigms of generative modeling and similarity learning, instead predicting the parameters of a neural network from a single exemplar image. It can be seen as a network that effectively "learns to learn", generalizing across tasks defined by different exemplars. The idea of parameter prediction was, to our knowledge, first explored by Schmidhuber [20] in a recurrent architecture with one network that modifies the weights of another. Parameter prediction has also been used for zero-shot learning (as opposed to one-shot learning), which is the related problem of learning a new object class without a single example image, based solely on a description such as binary attributes or text. Whereas it is usually framed as a modality transfer problem and solved through transfer learning [21], Noh et al. [15] recently employed parameter prediction to induce the weights of an image classifier from text for the problem of visual question answering. Denil et al. [4] investigated the redundancy of neural network parameters, showing that it is possible to linearly predict as many as 95% of the parameters in a layer given the remaining 5%.
This is a vastly different proposition from ours, which is to predict all of the parameters of a layer given an external exemplar image, and to do so non-linearly.

2 One-shot learning as dynamic parameter prediction

Since we consider one-shot learning as a discriminative task, our starting point is standard discriminative learning. It generally consists of finding the parameters $W$ that minimize the average loss $\mathcal{L}$ of a predictor function $\varphi(x; W)$, computed over a dataset of $n$ samples $x_i$ and corresponding labels $\ell_i$:

$$\min_W \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}(\varphi(x_i; W), \ell_i). \qquad (1)$$

Unless the model space is very small, generalization also requires constraining the choice of model, usually via regularization. However, in the extreme case in which the goal is to learn $W$ from a single exemplar $z$ of the class of interest, called one-shot learning, even regularization may be insufficient and additional prior information must be injected into the learning process. The main challenge in discriminative one-shot learning is to find a mechanism to incorporate domain-specific information in the learner, i.e. learning to learn. Another challenge, which is of practical importance in applications of one-shot learning, is to avoid a lengthy optimization process such as eq. (1).

Figure 1: Our proposed architectures predict the parameters of a network from a single example, replacing static convolutions (green) with dynamic convolutions (red). The siamese learnet predicts the parameters of an embedding function that is applied to both inputs, whereas the single-stream learnet predicts the parameters of a function that is applied to the other input. Linear layers are denoted by ∗ and nonlinear layers by σ. Dashed connections represent parameter sharing.

We propose to address both challenges by learning the parameters $W$ of the predictor from a single exemplar $z$ using a meta-prediction process, i.e. a non-iterative feed-forward function $\omega$ that maps $(z; W')$ to $W$. Since in practice this function will be implemented using a deep neural network, we call it a learnet. The learnet depends on the exemplar $z$, which is a single representative of the class of interest, and contains parameters $W'$ of its own. Learning to learn can now be posed as the problem of optimizing the learnet meta-parameters $W'$ using an objective function defined below. Furthermore, the feed-forward learnet evaluation is much faster than solving the optimization problem (1). In order to train the learnet, we require it to produce good predictors given any possible exemplar $z$, which is empirically evaluated as an average over $n$ training samples $z_i$:

$$\min_{W'} \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}(\varphi(x_i; \omega(z_i; W')), \ell_i). \qquad (2)$$

In this expression, the performance of the predictor extracted by the learnet from the exemplar $z_i$ is assessed on a single "validation" pair $(x_i, \ell_i)$, comprising another exemplar and its label $\ell_i$. Hence, the training data consists of triplets $(x_i, z_i, \ell_i)$. Notice that the meaning of the label $\ell_i$ is subtly different from eq. (1) since the class of interest changes depending on the exemplar $z_i$: $\ell_i$ is positive when $x_i$ and $z_i$ belong to the same class and negative otherwise. Triplets are sampled uniformly with respect to these two cases. Importantly, the parameters of the original predictor $\varphi$ of eq. (1) now change dynamically with each exemplar $z_i$.
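As a concrete illustration of eq. (2), the following minimal Python sketch spells out how a batch of training triplets would be scored; `omega`, `phi` and `loss` are hypothetical stand-ins for the learnet, the pupil network and the loss $\mathcal{L}$, not the paper's actual implementation.

```python
def one_shot_objective(omega, phi, loss, triplets):
    """Empirical objective of eq. (2) over triplets (x_i, z_i, l_i) (sketch).

    omega(z) predicts the pupil parameters W from the exemplar z in one
    feed-forward pass; phi(x, W) evaluates the pupil on the validation input x.
    """
    total = 0.0
    for x, z, l in triplets:
        W = omega(z)                  # feed-forward parameter prediction
        total += loss(phi(x, W), l)   # assess the predicted model on (x, l)
    return total / len(triplets)
```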
Note that the training data is reminiscent of that of siamese networks [2], which also learn from labeled sample pairs. However, siamese networks apply the same model $\varphi(x; W)$ with shared weights $W$ to both $x_i$ and $z_i$, and compute their inner product to produce a similarity score:

$$\min_W \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}(\langle \varphi(x_i; W), \varphi(z_i; W) \rangle, \ell_i). \qquad (3)$$

There are two key differences with our model. First, we treat $x_i$ and $z_i$ asymmetrically, which results in a different objective function. Second, and most importantly, the output of $\omega(z; W')$ is used to parametrize linear layers that determine the intermediate representations in the network $\varphi$. This is significantly different from computing a single inner product in the last layer (eq. (3)). Eq. (2) specifies the optimization objective of one-shot learning as dynamic parameter prediction. By application of the chain rule, backpropagating derivatives through the computational blocks of $\varphi(x; W)$ and $\omega(z; W')$ is no more difficult than through any other standard deep network. Nevertheless, when we dive into concrete implementations of such models we face a peculiar challenge, discussed next.

2.1 The challenge of naive parameter prediction

In order to analyse the practical difficulties of implementing a learnet, we begin with one-shot prediction of a fully-connected layer, as it is simpler to analyse. This is given by

$$y = Wx + b, \qquad (4)$$

given an input $x \in \mathbb{R}^d$, output $y \in \mathbb{R}^k$, weights $W \in \mathbb{R}^{k \times d}$ and biases $b \in \mathbb{R}^k$. We now replace the weights and biases with their functional counterparts, $w(z)$ and $b(z)$, representing two outputs of the learnet $\omega(z; W')$ given the exemplar $z \in \mathbb{R}^m$ as input (to avoid clutter, we omit the implicit dependence on $W'$):

$$y = w(z)\,x + b(z). \qquad (5)$$

While eq. (5) seems to be a drop-in replacement for linear layers, careful analysis reveals that it scales extremely poorly. The main cause is the unusually large output space of the learnet $w: \mathbb{R}^m \to \mathbb{R}^{k \times d}$. For a comparable number of input and output units in a linear layer ($d \simeq k$), the output space of the learnet grows quadratically with the number of units. While this may seem to be a concern only for large networks, it is actually problematic even for networks with few units. Consider a simple linear learnet $w(z) = W'z$. Even for a very small fully-connected layer of only 100 units ($d = k = 100$), and an exemplar $z$ with 100 features ($m = 100$), the learnet already contains 1M parameters that must be learned. Overfitting as well as space and time costs make learning such a regressor infeasible. Furthermore, reducing the number of features in the exemplar can only achieve a small constant-size reduction in the total number of parameters. The bottleneck is the quadratic size $dk$ of the output space, not the size $m$ of the input space.

2.2 Factorized linear layers

A simple way to reduce the size of the output space is to consider a factorized set of weights, replacing eq. (5) with:

$$y = M' \operatorname{diag}(w(z))\, M x + b(z). \qquad (6)$$

The product $M' \operatorname{diag}(w(z))\, M$ can be seen as a factorized representation of the weights, analogous to the Singular Value Decomposition. The matrix $M \in \mathbb{R}^{d \times d}$ projects $x$ into a space where the elements of $w(z)$ represent disentangled factors of variation. The second projection $M' \in \mathbb{R}^{k \times d}$ maps the result back from this space. Both $M$ and $M'$ contain additional parameters to be learned, but they are modest in size compared to the case discussed in sect. 2.1. Importantly, the one-shot branch $w(z)$ now only has to predict a set of diagonal elements (see eq. (6)), so its output space grows linearly with the number of units in the layer (i.e. $w(z): \mathbb{R}^m \to \mathbb{R}^d$).
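A minimal NumPy sketch of the factorized dynamic linear layer of eq. (6); `predict_diag` and `predict_bias` are hypothetical callables standing in for the one-shot branches $w(z)$ and $b(z)$ of the learnet.

```python
import numpy as np

def factorized_dynamic_linear(x, z, M, M_prime, predict_diag, predict_bias):
    """y = M' diag(w(z)) M x + b(z), eq. (6) (sketch).

    x: (d,) input; M: (d, d) and M_prime: (k, d) are learned static projections;
    predict_diag(z) -> (d,) diagonal factors; predict_bias(z) -> (k,) biases.
    Only d numbers are predicted instead of the full k*d weight matrix.
    """
    w = predict_diag(z)
    b = predict_bias(z)
    return M_prime @ (w * (M @ x)) + b
```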
2.3 Factorized convolutional layers

The factorization of eq. (6) can be generalized to convolutional layers as follows. Given an input tensor $x \in \mathbb{R}^{r \times c \times d}$, weights $W \in \mathbb{R}^{f \times f \times d \times k}$ (where $f$ is the filter support size), and biases $b \in \mathbb{R}^k$, the output $y \in \mathbb{R}^{r' \times c' \times k}$ of the convolutional layer is given by

$$y = W * x + b, \qquad (7)$$

where $*$ denotes convolution, and the biases $b$ are applied to each of the $k$ channels. Projections analogous to $M$ and $M'$ in eq. (6) can be incorporated in the filter bank in different ways, and it is not obvious which one to pick. Here we take the view that $M$ and $M'$ should disentangle the feature channels (i.e. the third dimension of $x$) so that the predicted filters $w(z)$ can operate on each channel independently. As such, we consider the following factorization:

$$y = M' * w(z) *_d M * x + b(z), \qquad (8)$$

where $M \in \mathbb{R}^{1 \times 1 \times d \times d}$, $M' \in \mathbb{R}^{1 \times 1 \times d \times k}$, and $w(z) \in \mathbb{R}^{f \times f \times d}$. Convolution with subscript $d$ denotes independent filtering of $d$ channels, i.e. each channel of $x *_d y$ is simply the convolution of the corresponding channel in $x$ and $y$. In practice, this can be achieved with filter tensors that are diagonal in the third and fourth dimensions, or using $d$ filter groups [12], each group containing a single filter. An illustration is given in fig. 2.

Figure 2: Factorized convolutional layer (eq. (8)). The channels of the input x are projected to the factorized space by M (a 1 × 1 convolution), the resulting channels are convolved independently with a corresponding filter prediction from w(z), and finally projected back using M′.

The predicted filters $w(z)$ can be interpreted as a filter basis, as described in the supplementary material (sec. A). Notice that, under this factorization, the number of elements to be predicted by the one-shot branch $w(z)$ is only $f^2 d$ (the filter size $f$ is typically very small, e.g. 3 or 5 [5, 23]). Without the factorization, it would be $f^2 dk$ (the number of elements of $W$ in eq. (7)). Similarly to the case of fully-connected layers (sect. 2.2), when $d \simeq k$ this keeps the number of predicted elements from growing quadratically with the number of channels, allowing them to grow only linearly.

Figure 3: The predicted filters and the output of a dynamic convolutional layer in a single-stream learnet trained for the OCR task. Different exemplars z define different filters w(z). Applying the filters of each exemplar to the same input x yields different responses. Best viewed in colour.

Figure 4: The predicted filters and the output of a dynamic convolutional layer in a siamese learnet trained for the object tracking task. Best viewed in colour.

Examples of filters that are predicted by learnets are shown in figs. 3 and 4. The resulting activations confirm that the networks induced by different exemplars do indeed possess different internal representations of the same input.
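The "d filter groups, one filter per group" trick mentioned above can be written compactly in PyTorch. This is a hedged sketch of eq. (8), not the authors' MatConvNet code; `learnet` is a hypothetical callable returning the predicted per-channel filters and biases.

```python
import torch.nn.functional as F

def factorized_dynamic_conv(x, z, M, M_prime, learnet):
    """y = M' * (w(z) *_d (M * x)) + b(z), eq. (8) (sketch).

    x:        (1, d, r, c) input tensor
    M:        (d, d, 1, 1) static 1x1 projection into the factorized space
    M_prime:  (k, d, 1, 1) static 1x1 projection back
    learnet:  z -> (w, b) with w of shape (d, 1, f, f) and b of shape (k,)
    """
    w, b = learnet(z)
    h = F.conv2d(x, M)                                  # project channels
    h = F.conv2d(h, w, groups=h.shape[1],               # d groups, one filter each
                 padding=w.shape[-1] // 2)              # = channel-wise conv *_d
    return F.conv2d(h, M_prime) + b.view(1, -1, 1, 1)   # project back, add biases
```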
3 Experiments

We evaluate learnets against baseline one-shot architectures (sect. 3.1) on two one-shot learning problems in Optical Character Recognition (OCR; sect. 3.2) and visual object tracking (sect. 3.3). All experiments were performed using MatConvNet [22].

3.1 Architectures

As noted in sect. 2, the closest competitors to our method in discriminative one-shot learning are embedding learning methods using siamese architectures. Therefore, we structure the experiments to compare against this baseline. In particular, we choose to implement learnets using similar network topologies for a fairer comparison. The baseline siamese architecture comprises two parallel streams $\varphi(x; W)$ and $\varphi(z; W)$ composed of a number of layers, such as convolution, max-pooling, and ReLU, sharing parameters $W$ (fig. 1.a). The outputs of the two streams are compared by a layer $\Gamma(\varphi(x; W), \varphi(z; W))$ computing a measure of similarity or dissimilarity. We consider in particular: the dot product $\langle a, b \rangle$ between vectors $a$ and $b$, the Euclidean distance $\|a - b\|$, and the weighted $\ell_1$-norm $\|w \odot a - w \odot b\|_1$ (where $w$ is a vector of learnable weights and $\odot$ the Hadamard product); a sketch of these comparison functions is given at the end of this section.

The first modification to the siamese baseline is to use a learnet to predict some of the intermediate shared stream parameters (fig. 1.b). In this case $W = \omega(z; W')$ and the siamese architecture writes $\Gamma(\varphi(x; \omega(z; W')), \varphi(z; \omega(z; W')))$. Note that the siamese parameters are still the same in the two streams, whereas the learnet is an entirely new subnetwork whose purpose is to map the exemplar image to the shared weights. We call this model the siamese learnet. The second modification is a single-stream learnet configuration, using only one stream $\varphi$ of the siamese architecture and predicting its parameters using the learnet $\omega$. In this case, the comparison block $\Gamma$ is reinterpreted as the last layer of the stream $\varphi$ (fig. 1.c). Note that: i) the single predicted stream and the learnet are asymmetric, with different parameters, and ii) the learnet predicts both the final comparison layer parameters $\Gamma$ as well as intermediate filter parameters. The single-stream learnet architecture can be understood to predict a discriminant function from one example, and the siamese learnet architecture to predict an embedding function for the comparison of two images. These two variants demonstrate the versatility of the dynamic convolutional layer from eq. (8). Finally, in order to ensure that any difference in performance is not simply due to the asymmetry of the learnet architecture or to the induced filter factorizations (sect. 2.2 and sect. 2.3), we also compare unshared siamese nets, which use distinct parameters for each stream, and factorized siamese nets, where convolutions are replaced by factorized convolutions as in the learnet.
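For concreteness, the three comparison functions $\Gamma$ considered above can be written as follows; this is a sketch, and the sign convention (larger values mean more similar) is our assumption.

```python
import numpy as np

def dot_product(a, b):
    return a @ b                              # <a, b>

def neg_euclidean(a, b):
    return -np.linalg.norm(a - b)             # -||a - b||

def neg_weighted_l1(a, b, w):
    return -np.sum(np.abs(w * a - w * b))     # -||w . a - w . b||_1, w is learned
```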
3.2 Character recognition in foreign alphabets

This section describes our experiments in one-shot learning on OCR. For this, we use the Omniglot dataset [13], which contains images of handwritten characters from 50 different alphabets. These alphabets are divided into 30 background and 20 evaluation alphabets. The associated one-shot learning problem is to develop a method for determining whether, given any single exemplar of a character in an evaluation alphabet, any other image in that alphabet represents the same character or not. Importantly, all methods are trained using only background alphabets and tested on the evaluation alphabets.

Dataset and evaluation protocol. Character images are resized to 28 × 28 pixels in order to be able to explore efficiently several variants of the proposed architectures. There are exactly 20 sample images for each character, and an average of 32 characters per alphabet. The dataset contains a total of 19,280 images in the background alphabets and 13,180 in the evaluation alphabets. Algorithms are evaluated on a series of recognition problems. Each recognition problem involves identifying the image in a set of 20 that shows the same character as an exemplar image (there is always exactly one match). All of the characters in a single problem belong to the same alphabet. At test time, given a collection of characters $(x_1, \dots, x_m)$, the function is evaluated on each pair $(z, x_i)$ and the candidate with the highest score is declared the match. In the case of the learnet architectures, this can be interpreted as obtaining the parameters $W = \omega(z; W')$ and then evaluating a static network $\varphi(x_i; W)$ for each $x_i$.
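The test-time protocol just described amounts to the following sketch, where `omega` and `phi` are hypothetical callables for the learnet and the induced static network.

```python
import numpy as np

def one_shot_match(omega, phi, z, candidates):
    """Return the index of the candidate matching exemplar z (one per problem)."""
    W = omega(z)                               # predict the parameters once
    scores = [phi(x, W) for x in candidates]   # evaluate the static network phi(.; W)
    return int(np.argmax(scores))              # highest score is declared the match
```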
Architecture. The baseline stream $\varphi$ for the siamese, siamese learnet, and single-stream learnet architectures consists of 3 convolutional layers, with 2 × 2 max-pooling layers of stride 2 between them. The filter sizes are 5 × 5 × 1 × 16, 5 × 5 × 16 × 64 and 4 × 4 × 64 × 512. For both the siamese learnet and the single-stream learnet, $\omega$ consists of the same layers as $\varphi$, except that the number of outputs is 1600, one for each element of the 64 predicted filters (of size 5 × 5). To keep the experiments simple, we only predict the parameters of one convolutional layer. We conducted cross-validation to choose the predicted layer and found that the second convolutional layer yields the best results for both of the proposed variants. Siamese nets have previously been applied to this problem by Koch et al. [10] using much deeper networks applied to images of size 105 × 105. However, we have restricted this investigation to relatively shallow networks to enable a thorough exploration of the parameter space. A more powerful algorithm for one-shot learning, Hierarchical Bayesian Program Learning [13], is able to achieve human-level performance. However, this approach involves computationally expensive inference at test time, and leverages extra information at training time that describes the strokes drawn by the human author.

Learning. Learning involves minimizing the objective function specific to each method (e.g. eq. (2) for the learnet and eq. (3) for siamese architectures) and uses stochastic gradient descent (SGD) in all cases. As noted in sect. 2, the objective is obtained by sampling triplets $(z_i, x_i, \ell_i)$ where exemplars $z_i$ and $x_i$ are congruous ($\ell_i = +1$) or incongruous ($\ell_i = -1$) with 50% probability. We consider 100,000 random pairs for training per epoch, and train for 60 epochs. We conducted a random search to find the best hyper-parameters for each algorithm (initial learning rate and geometric decay, standard deviation of Gaussian parameter initialization, and weight decay).

Table 1: Error rate for character recognition in foreign alphabets (chance is 95%).

                                  Inner-product (%)   Euclidean dist. (%)   Weighted l1 dist. (%)
Siamese (shared)                        48.5                37.3                  41.8
Siamese (unshared)                      47.0                41.0                  34.6
Siamese (unshared, factorized)          48.4                 –                    33.6
Siamese learnet (shared)                51.0                39.8                  31.4
Learnet                                 43.7                36.7                  28.6

Modified Hausdorff distance (baseline): 43.2

Results and discussion. Tab. 1 shows the classification error obtained using variants of each architecture. A dash indicates a failure to converge given a large range of hyper-parameters. The two learnet architectures combined with the weighted $\ell_1$ distance are able to achieve significantly better results than other methods. The best architecture reduced the error from 37.3% for a siamese network with shared parameters to 28.6% for a single-stream learnet. While the Euclidean distance gave the best results for siamese networks with shared parameters, better results were achieved by learnets (and siamese networks with unshared parameters) using a weighted $\ell_1$ distance. In fact, none of the alternative architectures are able to achieve lower error under the Euclidean distance than the shared siamese net. The dot product was, in general, less effective than the other two metrics. The introduction of the factorization in the convolutional layer might be expected to improve the quality of the estimated model by reducing the number of parameters, or to worsen it by diminishing the capacity of the hypothesis space. For this relatively simple task of character recognition, the factorization did not seem to have a large effect.

3.3 Object tracking

The task of single-target object tracking requires locating an object of interest in a sequence of video frames. A video frame can be seen as a collection $F = \{w_1, \dots, w_K\}$ of image windows; then, in a one-shot setting, given an exemplar $z \in F_1$ of the object in the first frame $F_1$, the goal is to identify the same window in the other frames $F_2, \dots, F_M$.

Datasets. The method is trained using the ImageNet Large Scale Visual Recognition Challenge 2015 [19], with 3,862 videos totalling more than one million annotated frames. Instances of objects of thirty different classes (mostly vehicles and animals) are annotated throughout each video with bounding boxes. For tracking, instance labels are retained but object class labels are ignored. We use 90% of the videos for training, while the other 10% are held out to monitor validation error during network training. Testing uses the VOT 2015 benchmark [11].

Architecture. We experiment with siamese and siamese learnet architectures (fig. 1) where the learnet $\omega$ predicts the parameters of the second (dynamic) convolutional layer of the siamese streams. Each siamese stream has five convolutional layers and we test three variants of those: variant (A) has the same configuration as AlexNet [12] but with stride 2 in the first layer, and variants (B) and (C) reduce to 50% the number of filters in the first two convolutional layers and, respectively, to 25% and 12.5% the number of filters in the last layer.

Training. In order to train the architecture efficiently from many windows, the data is prepared as follows (a sketch is given after this paragraph). Given an object bounding box sampled at random, a crop $z$ of twice that size is extracted from the corresponding frame, padding with the average image color when needed. The border is included in order to incorporate some visual context around the exemplar object. Next, $\ell \in \{+1, -1\}$ is sampled at random with 75% probability of being positive. If $\ell = -1$, an image $x$ is extracted by choosing at random a frame that does not contain the object. Otherwise, a second frame containing the same object and within 50 temporal samples of the first is selected at random. From that, a patch $x$ centered around the object and four times bigger is extracted. In this way, $x$ contains both subwindows that do and do not match $z$. Images $z$ and $x$ are resized to 127 × 127 and 255 × 255 pixels, respectively, and the triplet $(z, x, \ell)$ is formed. All 127 × 127 subwindows in $x$ are considered not to match $z$ except for the central 2 × 2 ones when $\ell = +1$.
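The data preparation above can be summarized in pseudocode. All helpers here (`crop_around` and the dataset/video accessors) are hypothetical; only the crop geometry and sampling probabilities follow the description in the text.

```python
import random

def sample_tracking_triplet(dataset):
    """Sample one (z, x, l) training triplet (sketch of the procedure above)."""
    video, frame, box = dataset.random_annotated_box()       # hypothetical accessor
    z = crop_around(frame, box, scale=2.0, size=127)         # exemplar with context,
                                                             # mean-colour padding
    l = +1 if random.random() < 0.75 else -1                 # 75% positive
    if l == +1:
        frame2, box2 = video.nearby_frame(box, max_gap=50)   # within 50 frames
        x = crop_around(frame2, box2, scale=4.0, size=255)   # 4x the object size
    else:
        frame2 = dataset.random_frame_without_object()
        x = crop_around(frame2, None, scale=4.0, size=255)
    return z, x, l
```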
Table 2: Tracking accuracy and number of tracking failures in the VOT 2015 benchmark, as reported by the toolkit [11]. Architectures are grouped by size of the main network (see text). For each group, the best entry for each column is in bold. We also report the scores of 5 recent trackers.

Method                              Accuracy   Failures
Siamese (φ=B)                         0.465       105
Siamese (φ=B; unshared)               0.447       131
Siamese (φ=B; factorized)             0.444       138
Siamese learnet (φ=B; ω=A)            0.500        87
Siamese learnet (φ=B; ω=B)            0.497        93
DAT [17]                              0.442       113
SO-DLT [23]                           0.540       108

Method                              Accuracy   Failures
Siamese (φ=C)                         0.466       120
Siamese (φ=C; factorized)             0.435       132
Siamese learnet (φ=C; ω=A)            0.483       105
Siamese learnet (φ=C; ω=C)            0.491       106
DSST [3]                              0.483       163
MEEM [24]                             0.458       107
MUSTer [7]                            0.471       132

All networks are trained from scratch using SGD for 50 epochs of 50,000 sample triplets $(z_i, x_i, \ell_i)$. The multiple windows contained in $x$ are compared to $z$ efficiently by making the comparison layer $\Gamma$ convolutional (fig. 1), accumulating a logistic loss across spatial locations (see the sketch after this section). The same hyper-parameters (learning rate of $10^{-2}$ geometrically decaying to $10^{-5}$, weight decay of 0.005, and small mini-batches of size 8) are used for all experiments, which we found to work well for both the baseline and proposed architectures. The weights are initialized using the improved Xavier method [6], and we use batch normalization [8] after all linear layers.

Testing. Adopting the initial crop as exemplar, the object is sought in a new frame within a radius of the previous position, proceeding sequentially. This is done by evaluating the pupil net convolutionally, as well as searching at five possible scales in order to track the object through scale space. The approach is described in more detail in Bertinetto et al. [1].

Results and discussion. Tab. 2 compares the methods in terms of the official metrics (accuracy and number of failures) for the VOT 2015 benchmark [11]. The ranking plot produced by the VOT toolkit is provided in the supplementary material (fig. B.1). From tab. 2, it can be observed that factorizing the filters in the siamese architecture significantly diminishes its performance, but using a learnet to predict the filters in the factorization recovers this gap and in fact achieves better performance than the original siamese net. The performance of the learnet architectures is not adversely affected by using the slimmer prediction networks B and C (with fewer channels). An elementary tracker based on the learnet compares favourably against recent tracking systems, which make use of different features and online model update strategies: DAT [17], DSST [3], MEEM [24], MUSTer [7] and SO-DLT [23]. SO-DLT in particular is a good example of direct adaptation of standard batch deep learning methodology to online learning, as it uses SGD during tracking to fine-tune an ensemble of deep convolutional networks. However, the online adaptation of the model comes at a big computational cost and affects the speed of the method, which runs at 5 frames per second (FPS) on a GPU. Due to the feed-forward nature of our one-shot learnets, they can track objects in real time at frame rates in excess of 60 FPS, while achieving fewer tracking failures. We consider, however, that our implementation serves mostly as a proof of concept, using tracking as an interesting demonstration of one-shot learning, and is orthogonal to many technical improvements found in the tracking literature [11].
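The convolutional comparison layer referenced in the training paragraph can be realized as a cross-correlation of the two feature maps. The following is our own sketch of that idea, with `phi_x` and `phi_z` denoting precomputed stream outputs:

```python
import torch.nn.functional as F

def comparison_score_map(phi_x, phi_z):
    """Evaluate Gamma for every subwindow of the search image in one pass (sketch).

    phi_x: (1, c, H, W) features of the search crop x;
    phi_z: (1, c, h, w) features of the exemplar z, used as a correlation filter.
    Returns a (1, 1, H-h+1, W-w+1) map of inner-product scores; during training,
    a logistic loss would be accumulated across these spatial locations.
    """
    return F.conv2d(phi_x, phi_z)
```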
4 Conclusions

In this work, we have shown that it is possible to obtain the parameters of a deep neural network using a single, feed-forward prediction from a second network. This approach is desirable when iterative methods are too slow, and when large sets of annotated training samples are not available. We have demonstrated the feasibility of feed-forward parameter prediction in two demanding one-shot learning tasks in OCR and visual tracking. Our results hint at a promising avenue of research in "learning to learn" by solving millions of small discriminative problems in an offline phase. Possible extensions include domain adaptation and sharing a single learnet between different pupil networks.

Acknowledgements

This research was supported by Apical Ltd. and ERC grants ERC-2012-AdG 321162-HELIOS, HELIOS-DFR00200 and "Integrated and Detailed Image Understanding" (EP/L024683/1).

References

[1] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr. Fully-convolutional siamese networks for object tracking. 2016.
[2] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 1993.
[3] M. Danelljan, G. Häger, F. Khan, and M. Felsberg. Accurate scale estimation for robust visual tracking. In BMVC, 2014.
[4] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.
[5] H. Fan, Z. Cao, Y. Jiang, Q. Yin, and C. Doudou. Learning deep face representation. arXiv, 2014.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.
[7] Z. Hong, Z. Chen, C. Wang, X. Mei, D. Prokhorov, and D. Tao. Multi-store tracker (MUSTer): A cognitive psychology inspired approach to object tracking. In CVPR, 2015.
[8] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015.
[9] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv, 2013.
[10] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML 2015 Deep Learning Workshop, 2015.
[11] M. Kristan, J. Matas, A. Leonardis, M. Felsberg, L. Cehovin, G. Fernandez, T. Vojir, G. Hager, G. Nebehay, and R. Pflugfelder. The VOT2015 challenge results. In ICCV Workshop, 2015.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[13] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[14] T. Malisiewicz, A. Gupta, and A. A. Efros. Ensemble of exemplar-SVMs for object detection and beyond. In ICCV, 2011.
[15] H. Noh, P. Hongsuck Seo, and B. Han. Image question answering using convolutional neural network with dynamic parameter prediction. In CVPR, 2016.
[16] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In BMVC, 2015.
[17] H. Possegger, T. Mauthner, and H. Bischof. In defense of color-based model-free tracking. In CVPR, 2015.
[18] D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. arXiv, 2016.
[19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[20] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
[21] R. Socher, M. Ganjoo, C. D. Manning, and A. Ng. Zero-shot learning through cross-modal transfer. In NIPS, 2013.
[22] A. Vedaldi and K. Lenc. MatConvNet – Convolutional neural networks for MATLAB. In Proceedings of the ACM Int. Conf. on Multimedia, 2015.
[23] N. Wang, S. Li, A. Gupta, and D.-Y. Yeung. Transferring rich feature hierarchies for robust visual tracking. arXiv, 2015.
[24] J. Zhang, S. Ma, and S. Sclaroff. MEEM: Robust tracking via multiple experts using entropy minimization. In ECCV, 2014.
Mixed vine copulas as joint models of spike counts and local field potentials

Arno Onken (Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy, arno.onken@iit.it), Stefano Panzeri (Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy, stefano.panzeri@iit.it)

Abstract

Concurrent measurements of neural activity at multiple scales, sometimes performed with multimodal techniques, become increasingly important for studying brain function. However, statistical methods for their concurrent analysis are currently lacking. Here we introduce such techniques in a framework based on vine copulas with mixed margins to construct multivariate stochastic models. These models can describe detailed mixed interactions between discrete variables such as neural spike counts, and continuous variables such as local field potentials. We propose efficient methods for likelihood calculation, inference, sampling and mutual information estimation within this framework. We test our methods on simulated data and demonstrate applicability on mixed data generated by a biologically realistic neural network. Our methods hold the promise to considerably improve statistical analysis of neural data recorded simultaneously at different scales.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

The functions of the brain likely rely on the concerted interaction of its microscopic, mesoscopic and macroscopic systems. Concurrent recordings of signals at different scales, such as simultaneous measurements of field potential and single-cell spiking activity, or other multimodal measures such as concurrent electrophysiological and fMRI measures, are leading to rapid advances in understanding brain dynamics [16]. Analysis of these concurrent data is complicated by the great difference in nature (e.g. discrete vs. continuous) and signal-to-noise ratio of each type of neural signal. To take full advantage of these data, flexible statistical models that take into account many variables with different statistics and their dependencies are needed. Recently, the construction of multivariate statistical models based on the concept of copulas has attracted a lot of attention [9]. Intuitively, a copula represents a particular relationship between a set of random variables that, together with separate margin models of the individual elements, can be used to construct a joint statistical model. This approach has become an indispensable tool in economics, finance and risk management in both theoretical and practical applications [9, 13, 11]. Yet, despite their promise, application to neuroscience has been limited [10, 14, 19]. The copula approach is general and, in principle, applicable to model mixed discrete and continuous statistics. Specific cases of mixed discrete and continuous copula-based models with parametric distributions were recently applied in clinical applications [24, 7]. Racine [17] proposed nonparametric mixed copula distributions based on kernel density estimators. In most studies, however, the elements of the copula-based multivariate distributions are all continuous [9, 13, 11]. A reason for this is that in the general case, likelihood calculation has exponential complexity in the number of discrete elements, limiting the usefulness of the models. In particular, these methods are impractical for likelihood-based estimation of information-theoretic quantities, which requires many likelihood evaluations. Smith and Khaled [23] recently proposed a copula-based framework with quadratic complexity, but limited to fully discrete distributions.
For valuable applications in neuroscience settings, however, we need a framework that can overcome these limitations and cope with elements (i.e. numbers of neurons, activity sites) that have different statistical properties - some continuous and others discrete - while still allowing efficient likelihood calculation. Here, we develop a framework to accomplish these goals by means of vine copulas with mixed discrete and continuous margins. We describe methods to make numeric model selection, parameter fitting and sampling scale efficiently with the number of elements, and apply these methods to estimate information-theoretic quantities. To demonstrate our framework, we draw samples from mixed models and simulate mixed activity in a biologically realistic neural network. We then apply our methods to these data and show that our methods outperform corresponding mixed independent and fully continuous models.

2 Mixed vine copulas

Our goal is to construct multivariate distributions with arbitrary mixed margins and a wide range of possible dependence structures. To accomplish this goal, we apply an approach that individuates the margin part and the dependence part. The dependence is represented by a copula. Briefly, a copula is defined as a multivariate distribution function with support on the unit hypercube and uniform margins [13]. We will denote multivariate random variables by $X$ with elements $X_i$. We denote the cumulative distribution function (CDF) of $X$ by $F_X$ with margin CDFs $F_i$. For consistency of notation, we will denote probability density functions as well as probability mass functions by $f_X$ with margins $f_i$.

2.1 Mixed copula-based models

The great strength of copulas is their utility for constructing and decomposing multivariate distributions. Sklar's Theorem [21, 13] lays out the theoretical foundations for this. According to this theorem, every CDF $F_X$ can be decomposed into margins $F_1, \dots, F_d$ and a copula $C$ such that

$$F_X(x_1, \dots, x_d) = C(F_1(x_1), \dots, F_d(x_d)) \qquad (1)$$

and, conversely, margins $F_1, \dots, F_d$, a copula $C$ and eq. (1) can be used to construct a CDF $F_X$. In this decomposition, $C$ is unique on the range of $X$. Sklar's Theorem holds for mixed discrete and continuous distributions and thus provides a method to construct multivariate mixed distributions based on CDFs of copulas and margins. The important point here is that the approach yields a cumulative distribution function $F_X$ of a multivariate random variable $X$, not its likelihood $f_X$, which we need for inference and other tasks (cf. Section 2.5). Thus, we need to calculate the likelihood $f_X$ based on the cumulative distribution function $F_X$. W.l.o.g., let $X_1, \dots, X_n$ be discrete and $X_{n+1}, \dots, X_d$ be continuous. By calculating the mixed derivative of eq. (1), we obtain the probability density function of the mixed distribution of $X$:

$$f_X(x_1, \dots, x_d) = \sum_{m_1 = 0,1} \cdots \sum_{m_n = 0,1} (-1)^{m_1 + \cdots + m_n} \frac{\partial^{\,d-n} C\big(F_1(x_1 - m_1), \dots, F_n(x_n - m_n), F_{n+1}(x_{n+1}), \dots, F_d(x_d)\big)}{\partial u_{n+1} \cdots \partial u_d} \prod_{i=n+1}^{d} f_i(x_i). \qquad (2)$$

Note that the number of terms in the sum grows exponentially with the number of discrete variables. In general, the exponential number of terms prevents us from a direct evaluation of this equation. Nevertheless, we will see in the next section that we need to calculate the probability density function for likelihood-based estimation of differential entropy and mutual information. Therefore, we need an efficient way to calculate the probability density function that is tractable for many discrete variables. We will introduce methods to accomplish this in Section 2.5.
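For intuition, eq. (2) can be evaluated directly in the smallest mixed case ($d = 2$, $n = 1$): the derivative over the continuous margin yields the copula's conditional CDF, and the sum performs inclusion-exclusion over the discrete margin. The sketch below uses a Gaussian copula with illustrative Poisson/normal margins; all concrete distributional choices are our assumptions, not the paper's.

```python
import numpy as np
from scipy import stats

def gauss_copula_dC_du2(u1, u2, rho):
    """dC(u1, u2)/du2 for the Gaussian copula: the conditional CDF of U1 given U2."""
    a, b = stats.norm.ppf(u1), stats.norm.ppf(u2)
    return stats.norm.cdf((a - rho * b) / np.sqrt(1.0 - rho ** 2))

def mixed_density_2d(x1, x2, rho, f1=stats.poisson(5), f2=stats.norm()):
    """Eq. (2) with n = 1, d = 2: X1 discrete (Poisson), X2 continuous (normal)."""
    total = 0.0
    for m in (0, 1):  # inclusion-exclusion over the discrete margin
        total += (-1) ** m * gauss_copula_dC_du2(f1.cdf(x1 - m), f2.cdf(x2), rho)
    return total * f2.pdf(x2)
```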
2.2 Information estimation with copulas and mixed margins

For continuous as well as mixed multivariate distributions, differential entropy $h(X)$ is defined as $h(X) = -\int f_X(x) \log_2 f_X(x)\, dx$, where $f_X$ is a multivariate density which can also have mixed margins like the one in eq. (2) [6, 20]. With this, the mutual information $I(X; Y)$ between two multivariate random variables $X$ and $Y$ with potentially mixed margins is given by $I(X; Y) = h(X) + h(Y) - h(X, Y)$, where $h(X, Y)$ is the joint differential entropy of the joint distribution $(X, Y)$ with joint density $f_{X,Y}$ [6, 20]. For high-dimensional distributions, evaluation of the integral over the support of $f_X$ is unfeasible. However, we can estimate the differential entropy, and thereby the mutual information, by means of classical Monte Carlo (MC) estimation [18]. We express the entropy as an expectation over $f_X$ and approximate the expectation by the empirical average, by producing a large number of samples $x_1, \dots, x_k$ from $X$:

$$h(X) = \mathbb{E}_{f_X}[-\log_2 f_X(X)] \approx \hat{h}_k := -\frac{1}{k} \sum_{j=1}^{k} \log_2\big(f_X(x_j)\big). \qquad (3)$$

By the strong law of large numbers, $\hat{h}_k$ converges almost surely to $h(X)$. Moreover, we can assess the convergence of $\hat{h}_k$ by estimating its sample variance: $\widehat{\mathrm{Var}}[\hat{h}_k] = \frac{1}{k(k+1)} \sum_{j=1}^{k} \big(-\log_2(f_X(x_j)) - \hat{h}_k\big)^2$. With this estimate, the term $\frac{\hat{h}_k - h(X)}{\sqrt{\widehat{\mathrm{Var}}[\hat{h}_k]}}$ is approximately standard normal distributed, allowing us to obtain confidence intervals for our differential entropy estimate [18]. This shows that there are two requisites for the MC procedure to estimate entropy and mutual information for a mixed distribution: 1) an efficient sampling procedure to produce samples $x_j$ from $X$, and 2) a tractable method for calculating the density $f_X(x_j)$. We will introduce the former in Section 2.4 and the latter in Section 2.5. In the next section we will describe a copula decomposition that makes these efficient methods possible.
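The MC scheme of eq. (3) is straightforward to implement once the sampling and density routines of Sections 2.4 and 2.5 are available. A sketch, where the two callables are assumptions:

```python
import numpy as np

def entropy_mc(sample, log2_density, k=100000, z=1.96):
    """Estimate h(X) in bits with an approximate normal confidence interval.

    sample(k) draws k samples from the mixed vine (Section 2.4); log2_density(x)
    returns log2 f_X(x) for each sample (Section 2.5).
    """
    neg_log = -log2_density(sample(k))         # shape (k,)
    h = neg_log.mean()
    sem = neg_log.std(ddof=1) / np.sqrt(k)     # standard error of the estimate
    return h, (h - z * sem, h + z * sem)

def mutual_information_mc(h_x, h_y, h_xy):
    """I(X; Y) = h(X) + h(Y) - h(X, Y), each entropy estimated by entropy_mc."""
    return h_x + h_y - h_xy
```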
2.3 Pair copula constructions

The number of available high-dimensional copula families is quite limited, while there is an abundance of bivariate copula families. The pair copula construction provides a flexible way to construct higher-dimensional copulas from bivariate copulas [1]. The idea of pair copula models is to factorize the multivariate distribution into conditional distributions and to describe these conditional distributions by means of bivariate copulas, modeling the dependence of two variables at a time. Special pair copula constructions, called regular vine copula structures, assume conditional independence between specific elements of the distribution, allowing us to circumvent the curse of dimensionality in likelihood evaluation and sampling. More specifically, a vine can be represented as a hierarchical set of trees where each node corresponds to a conditional distribution function and each edge corresponds to a pair copula. The nodes of the lowest tree are the unconditional distribution margins with empty conditioning sets. Each tree in the hierarchy incorporates additional variables into the conditioning sets by means of its pair copulas. The results of these couplings then form the nodes of the next tree in the hierarchy, thus extending the conditioning sets from tree to tree. Here we focus on the canonical vine or C-vine, in which each tree in the hierarchy has a unique node that is connected to all other nodes [1]. In this section, $F(x_i | x_{j_1}, \dots, x_{j_k})$ denotes the conditional cumulative distribution function of $X_i$ given $X_{j_1}, \dots, X_{j_k}$. In the C-vine, the multivariate model likelihood is factorized as follows [1]:

$$f_X(x_1, \dots, x_d) = \prod_{k=1}^{d} f(x_k) \prod_{j=1}^{d-1} \prod_{i=1}^{d-j} c_{j,\, i+j \,|\, 1, \dots, j-1}\big(F(x_j | x_1, \dots, x_{j-1}),\, F(x_{i+j} | x_1, \dots, x_{j-1})\big). \qquad (4)$$

The C-vine is a good choice if there are outstanding variables with important dependencies to many other variables [2]. Such situations are commonly encountered in electrophysiology recordings, where the same electrode might record a local field potential (LFP, acting as the outstanding variable) and statistically dependent spikes from nearby neurons.

2.4 Sampling from mixed canonical vines

For a vine with mixed margins, we sample from the corresponding continuous vine and apply the inversion method with the inverse of the margin cumulative distribution function to obtain mixed discrete and continuous samples. In the following, $\frac{\partial C}{\partial u_1}$ denotes the partial derivative of the copula $C$ with respect to its first argument and $\frac{\partial C}{\partial u_2}$ denotes the partial derivative with respect to the second argument. For mixed C-vine sampling, we take the algorithm for sampling from a continuous C-vine copula with uniform margins [1] and extend it by means of the inversion method to attach arbitrary continuous and discrete margins. The algorithm requires $(d-2)(d-1)/2 + d$ cumulative distribution function evaluations:

1. Sample $w_1, \dots, w_d$ i.i.d. uniform on $[0, 1]$.
2. $v_{1,1} = w_1$.
3. $x_1 = F_1^{-1}(v_{1,1})$.
4. For $i = 2, \dots, d$:
   (a) $v_{i,1} = w_i$.
   (b) For $k = i-1, i-2, \dots, 1$: $v_{i,1} \leftarrow F_{i|1,\dots,k}^{-1}(v_{i,1}, v_{k,k})$, where $F_{i|1,\dots,k} = \frac{\partial C_{k,i|1,\dots,k-1}}{\partial u_1}$.
   (c) $x_i = F_i^{-1}(v_{i,1})$.
   (d) If $i < d$, then for $j = 1, \dots, i-1$: $v_{i,j+1} \leftarrow F_{i|1,\dots,j}(v_{i,j}, v_{j,j})$, where $F_{i|1,\dots,j} = \frac{\partial C_{j,i|1,\dots,j-1}}{\partial u_1}$.
5. The result is $x_1, \dots, x_d$.

The algorithm has quadratic complexity and is thus applicable to estimate information-theoretic quantities following the scheme outlined in Section 2.2.
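A Python sketch of this sampling algorithm follows; the tables of pair-copula h-functions `h` and their inverses `h_inv` are assumptions standing in for the conditional CDFs of the fitted copula families, and the margins are assumed to be scipy-style frozen distributions.

```python
import numpy as np

def sample_mixed_cvine(margins, h, h_inv):
    """Draw one mixed sample from a C-vine (sketch of the algorithm above).

    margins[i].ppf: inverse margin CDF F_i^{-1};
    h[(j, i)](u, v): conditional CDF F_{i|1..j} evaluated at (u, v);
    h_inv[(k, i)]: its inverse in the first argument.
    """
    d = len(margins)
    w = np.random.uniform(size=d)
    v = np.empty((d + 1, d + 1))        # 1-indexed to match the listing
    v[1, 1] = w[0]
    x = [margins[0].ppf(v[1, 1])]
    for i in range(2, d + 1):
        v[i, 1] = w[i - 1]
        for k in range(i - 1, 0, -1):                  # step 4(b): invert conditioning
            v[i, 1] = h_inv[(k, i)](v[i, 1], v[k, k])
        x.append(margins[i - 1].ppf(v[i, 1]))          # step 4(c): inversion method
        if i < d:
            for j in range(1, i):                      # step 4(d): propagate conditionals
                v[i, j + 1] = h[(j, i)](v[i, j], v[j, j])
    return np.array(x)
```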
2.5 Tractable algorithm for calculating mixed canonical vine densities

Panagiotelis et al. [15] introduced an algorithm for calculating the likelihood of specific discrete pair-copula decompositions. Notably, this algorithm has quadratic complexity in the number of elements in the multivariate distribution. Here, we generalize this algorithm to the mixed margins case and apply it to the C-vine. We apply a dynamic programming approach and build the likelihood in a bottom-up fashion from vine level $T_0$ to level $T_d$. The algorithm has quadratic complexity and computes the density of a C-vine with mixed discrete and continuous margins. We abbreviate $F^{+}_{i|A} := F^{c}_{i|A} := P(X_i \le x_i \mid X_A = x_A)$ and $F^{-}_{i|A} := P(X_i \le x_i - 1 \mid X_A = x_A)$. We write $f_{i|A} := f(X_i = x_i \mid X_A = x_A)$ if $X_i$ is continuous and $f_{i|A} := P(X_i = x_i \mid X_A = x_A)$ if $X_i$ is discrete. Moreover, $\forall a, b \in \{+, -, c\}$: $C^{ab}_{i,j|A} := C_{i,j|A}(F^a_{i|A}, F^b_{j|A})$. $\frac{\partial C}{\partial u_1}$ is the partial derivative of the copula $C$ with respect to its first argument and $\frac{\partial C}{\partial u_2}$ is the partial derivative with respect to $C$'s second argument. Consequently, for $w \in \{u_1, u_2\}$ we write $\frac{\partial C^{ab}_{i,j|A}}{\partial w} := \frac{\partial C_{i,j|A}}{\partial w}(F^a_{i|A}, F^b_{j|A})$.

1. Level $T_0$: For $i = 1, \ldots, d$: evaluate $f_i = F_i^{+} - F_i^{-}$ if $X_i$ is discrete and $f_i = f_i(x_i)$ if $X_i$ is continuous.

2. Levels $T_1, T_2, \ldots, T_{d-1}$: For $t = 1, \ldots, d-1$ and $i = t+1, \ldots, d$: Let $I_t = \{1, \ldots, t\}$. Let $L_t = \{1, \ldots, t-1\}$ if $t > 1$, and $L_t = \emptyset$ if $t = 1$.

(a) Evaluate
   - $C^{ab}_{t,i|L_t}$ for all $a, b \in \{+, -\}$ if $X_t$ and $X_i$ are discrete,
   - $C^{ac}_{t,i|L_t}$ and $\frac{\partial C^{ac}_{t,i|L_t}}{\partial u_2}$ for all $a \in \{+, -\}$ if $X_t$ is discrete and $X_i$ continuous,
   - $C^{cb}_{t,i|L_t}$ and $\frac{\partial C^{cb}_{t,i|L_t}}{\partial u_1}$ for all $b \in \{+, -\}$ if $X_t$ is continuous and $X_i$ discrete,
   - $\frac{\partial C^{cc}_{t,i|L_t}}{\partial u_1}$ and $\frac{\partial^2 C^{cc}_{t,i|L_t}}{\partial u_1 \partial u_2}$ if $X_t$ and $X_i$ are continuous. $\qquad (5)$

(b) Evaluate
   - if $X_i$ is discrete:
     $$f_{i|I_t} = F^{+}_{i|I_t} - F^{-}_{i|I_t}, \qquad (6)$$
     where
     - if $X_t$ is discrete:
       $$F^{+}_{i|I_t} = \frac{C^{++}_{t,i|L_t} - C^{-+}_{t,i|L_t}}{f_{t|L_t}}, \qquad F^{-}_{i|I_t} = \frac{C^{+-}_{t,i|L_t} - C^{--}_{t,i|L_t}}{f_{t|L_t}}. \qquad (7)$$
     - if $X_t$ is continuous:
       $$F^{+}_{i|I_t} = \frac{\partial C^{c+}_{t,i|L_t}}{\partial u_1}, \qquad F^{-}_{i|I_t} = \frac{\partial C^{c-}_{t,i|L_t}}{\partial u_1}. \qquad (8)$$
   - if $X_t$ is discrete and $X_i$ continuous:
     $$F^{c}_{i|I_t} = \frac{C^{+c}_{t,i|L_t} - C^{-c}_{t,i|L_t}}{f_{t|L_t}}, \qquad f_{i|I_t} = \frac{\partial F^{c}_{i|I_t}}{\partial x_i} = \left(\frac{\partial C^{+c}_{t,i|L_t}}{\partial u_2} - \frac{\partial C^{-c}_{t,i|L_t}}{\partial u_2}\right)\frac{f_{i|L_t}}{f_{t|L_t}}. \qquad (9)$$
   - if $X_t$ is continuous and $X_i$ continuous:
     $$F^{c}_{i|I_t} = \frac{\partial C^{cc}_{t,i|L_t}}{\partial u_1}, \qquad f_{i|I_t} = \frac{\partial F^{c}_{i|I_t}}{\partial x_i} = \frac{\partial^2 C^{cc}_{t,i|L_t}}{\partial u_1 \partial u_2} f_{i|L_t}. \qquad (10)$$

3. The result is $f_{1,\ldots,d} = f_1 \prod_{i=2}^{d} f_{i|1,\ldots,i-1}$.

Like the sampling algorithm in Section 2.4, the likelihood algorithm has quadratic complexity and is thus applicable to estimate information-theoretic quantities following the scheme outlined in Section 2.2.

2.6 Inference

We can apply maximum likelihood methods to estimate model parameters, because we can directly calculate the full likelihood of the model, even for high dimensions, following the procedure outlined in Section 2.5. Let $L(\theta, \eta_1, \ldots, \eta_d) = \sum_{j=1}^{k} \log f_X(x_j; \theta, \eta_1, \ldots, \eta_d)$ denote the log likelihood of the joint probability density function, where $\theta$ denotes the parameters of the chosen copula families. We can now apply the so-called inference for margins (IFM) method to estimate the parameters [11]. The idea of this method is to break the joint optimization of all parameters up into smaller optimization problems. For $i = 1, \ldots, d$, let $L_i(\eta_i) = \sum_{j=1}^{k} \log f_i(x_{i,j}; \eta_i)$ denote the sum of log likelihoods of the marginal distribution $f_i(x_{i,j}; \eta_i)$, where $\eta_1, \ldots, \eta_d$ are the parameters of the chosen family of margins. The method proceeds in two steps. In the first step, the margin likelihoods are maximized separately: $\forall i = 1, \ldots, d$: $\hat{\eta}_i = \arg\max_{\eta_i}\{L_i(\eta_i)\}$. In the second step, the full likelihood is maximized given the estimated margin parameters as $\hat{\theta} = \arg\max_{\theta}\{L(\theta, \hat{\eta}_1, \ldots, \hat{\eta}_d)\}$. Each of the individual optimization problems can be solved by means of a general multivariate optimization algorithm such as the trust-region-reflective algorithm [4]. Joe and Xu [11] showed that the IFM estimator is asymptotically efficient. The method is particularly attractive if the ratio of margin parameters to copula parameters is big. If the number of copula parameters is too big to be estimated in a single joint optimization, then the complexity of the copula model can be reduced by truncating the vine tree of the C-vine (truncated vine [1]). This corresponds to an independence assumption for higher vine levels and the validity of this simplification should be confirmed [22]. The families of margin and copula distributions can be selected using the Akaike information criterion (AIC) [3]: each combination of family selections is scored by means of its AIC value and then the best combination is chosen.
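The two IFM steps amount to two rounds of numerical optimization. A minimal sketch follows, with hypothetical `neg_log_lik_margin` and `neg_log_lik_copula` functions standing in for the mixed vine likelihood of Section 2.5; the trust-region-reflective algorithm of [4] can be substituted for the `L-BFGS-B` method used here for simplicity.

```python
import numpy as np
from scipy.optimize import minimize

def fit_ifm(data, neg_log_lik_margin, neg_log_lik_copula, eta0, theta0):
    """Two-step inference-for-margins (IFM) estimation, Section 2.6.

    data:   array of shape (k, d)
    neg_log_lik_margin(eta_i, column)        -> -L_i(eta_i)
    neg_log_lik_copula(theta, data, eta_hat) -> -L(theta, eta_hat_1..d)
    eta0:   list of d initial margin parameter vectors
    theta0: initial copula parameter vector
    """
    d = data.shape[1]
    # Step 1: maximize each margin likelihood separately.
    eta_hat = [minimize(neg_log_lik_margin, eta0[i],
                        args=(data[:, i],), method="L-BFGS-B").x
               for i in range(d)]
    # Step 2: maximize the full likelihood with the margins held fixed.
    theta_hat = minimize(neg_log_lik_copula, theta0,
                         args=(data, eta_hat), method="L-BFGS-B").x
    return theta_hat, eta_hat
```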
3 Validation on artificial data

We validated our framework by sampling from mixed vine-based models of different dimensionality and by evaluating performance of various alternative models. Fig. 1 illustrates a 3-dimensional example vine-based model with two continuous margins and one discrete margin. In the top row, we show the probability density functions of the 2-dimensional margins obtained by integrating over one margin each. One can appreciate the mixed distribution from the step-wise changes in probability density in margin 2 and the smooth changes in margins 1 and 3. The bottom row shows scatter plots of 3-dimensional samples projected onto each pair of margins. The distributions of samples nicely reflect the corresponding densities. We drew samples from this and other mixed vine distributions and fitted various models to these samples. For model selection, we used normal and gamma distributions as options for continuous margins, Poisson, binomial and negative binomial distributions as options for discrete margins, and Gaussian, Student, Clayton and rotated (90°, 180°, 270°) Clayton copula families as options for pair copula constructions.

Figure 1: Characteristics of a 3D mixed vine example. Margin 1 is standard normal distributed, margin 2 is Poisson distributed with mean 5 and margin 3 is gamma distributed with shape 2 and scale 4. The pairwise copulas are Gaussian with parameter 0.5, Student with correlation 0.5 and 2 degrees of freedom and Clayton with parameter 5 for margin pairs (1,2), (1,3) and (2,3) respectively. Top row: Probability density functions of 2D margins. The lighter the color the higher is the density. Bottom row: 2D margin scatter plots of 300 samples.

To quantify the gain of using a vine-based mixed model instead of a mixed independent model, we drew samples from the vine-based mixed model and calculated the cross-validated likelihood ratio (LR) statistic for nested models as $D = 2(\log(L_{vine}) - \log(L_{ind}))$, where $L_{vine}$ denotes the likelihood of separate test-set samples under the vine-based model and $L_{ind}$ denotes the likelihood of the samples under the corresponding independent model.

Figure 2: Model fit and entropy of simulated vine samples. Ground truth models are mixed vines of different dimensionality (range 2 to 6 shown as dark brown to light brown lines) with margins and copulas up to the respective dimension. Margins 1 to 3 and associated pairwise copulas are the same as in Fig. 1. Margin 4 is binomial distributed with N = 6 and p = 0.4, margin 5 is negative binomial distributed with N = 6 and p = 0.4 and margin 6 is standard normal distributed. The pairwise copulas are Clayton survival, independent and Clayton rotated 90° for margin pairs (1,4), (2,4) and (3,4) respectively, and Clayton rotated 270°, independent, Gaussian with parameter 0.5 and independent for margin pairs (1,5), (2,5), (3,5) and (4,5) respectively, and independent, independent, Gaussian with parameter 0.5, independent and Student with parameters 0.5 and 2 for margin pairs (1,6), (2,6), (3,6), (4,6) and (5,6) respectively, and with parameter 5 for all Clayton based copulas. (A-C) Cross-validated LR statistic between the ground truth model and the mixed vine-based model (A), independent model (B) or mixed Gaussian model (C). (D,E) Normalized entropy difference between the ground truth model and the independent model (D) or fully continuous vine-based model (E). Lines denote averages over 30 repetitions as a function of the number of samples. Shaded areas denote standard error.
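For reference, the cross-validated LR statistic used throughout this section is a one-liner once held-out log likelihoods are available; the sketch below assumes hypothetical fitted model objects that expose a `logpdf` method:

```python
import numpy as np

def lr_statistic(vine_model, indep_model, test_data):
    """Cross-validated D = 2 (log L_vine - log L_ind) on held-out samples."""
    log_l_vine = np.sum(vine_model.logpdf(test_data))
    log_l_ind = np.sum(indep_model.logpdf(test_data))
    return 2.0 * (log_l_vine - log_l_ind)
```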
Fig. 2A shows the LR statistic between the ground truth and the best-fitting mixed vine-based model as a function of the number of samples for different dimensionality. The statistics were low in all cases but increased with increasing dimensionality. The gain as quantified by the LR statistic of using the full mixed vine-based model instead of the independent model, on the other hand, was moderate for the bivariate model (D < 0.5) while being substantial for the 6-dimensional model (D ≈ 7). Wilks' LR test on non-cross-validated data was highly significant whenever we used at least 32 samples (p < 0.01). We also evaluated the fit of the multivariate Gaussian copula with mixed margins, which is nested in our mixed vine-based models and obtained by restricting all pairwise copula families to be Gaussian. The LR statistics indicated substantially better fit than for the independent model, but the statistics were below those of the mixed vine-based model for most tested dimensions (Fig. 2C). Unfortunately, a vine-based mixed model and the corresponding best-fitting fully continuous vine-based model are not directly comparable in this way due to the different weighting of discrete and continuous elements (i.e. mass vs. density). Nevertheless, in an actual application it is easy to determine which margins are discrete and which margins are continuous. Appropriate discrete or continuous margins can therefore be selected easily. To extend our comparison to fully continuous vine-based models, we estimated entropies of the mixed vine-based model, the corresponding independent model and of the best-fitting fully continuous model. We calculated the entropy differences between these models and normalized with the entropy of the mixed vine-based model. Fig. 2D shows the normalized entropy difference between the mixed vine-based model and the independent model. The relative results are similar to those of the likelihood ratio statistic (Fig. 2B), suggesting that in this case the entropy comparison is indicative of the performance gain. In Fig. 2E, we plot the normalized entropy difference between the mixed vine-based model and the best-fitting fully continuous model. Overall, the normalized differences of these models were smaller than for the independent model. Similarly to the independent model, though, we found increasing differences for increasing dimensionality of the models. All in all, our results suggest that our framework can yield substantial advantages in terms of goodness of fit and in terms of estimated entropy, in particular for high-dimensional problems.

4 Application to simulated network activity

To evaluate our framework in a typical neuroscience setting, we applied our mixed vine-based model to a biologically realistic neural network model. We simulated network activity with the Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX) [25] with network parameters as in VERTEX tutorial 2. Briefly, the model contained a total of 5000 neurons, with 85% of the cell models representing layer 2/3 pyramidal neurons and 15% representing basket interneurons. The spiking dynamics followed an adaptive exponential model. To simulate two different stimulus conditions, we used random input currents with different means. We presented each stimulus condition an equal number of times (corresponding to 1/2 probability of occurrence of either stimulus). The network generated network oscillations in both conditions.
To simulate a typical recording situation, we recorded LFPs with two randomly placed electrodes and collected spike counts from the four neurons closest to those electrodes. For each input condition, we ran the network 128 times and collected one 6-dimensional mixed vector with the LFPs (continuous) and spike counts (discrete) collected in a 100 ms interval from each network run. We then fitted the full mixed vine-based model, the mixed independent model and the fully continuous vine-based model to these data. Importantly, we fitted separate models for each stimulus condition and varied the number of samples per stimulus condition between 8 and 128. This allowed us to estimate mutual information following the procedure outlined in Section 2.2. Similarly to Figs. 2B,C, Fig. 3A depicts the LR statistic between the best-fitting mixed vine-based model and the corresponding independent model or mixed Gaussian model. We found relatively small statistics for all sample sizes (D < 1). Nevertheless, Wilks' LR test indicated highly significant improvement whenever we used at least 64 samples (p < 0.01). To evaluate the importance of the mixed vine-based model when performing an information-theoretic analysis of the network activity, we estimated mutual information between the modeled network activity (LFP and spike counts) and the two stimulus conditions. Fig. 3B shows mutual information estimates that we obtained based on the mixed independent, mixed Gaussian, continuous vine-based and mixed vine-based models. The mixed Gaussian model yielded information estimates that were close to those of the mixed vine-based model. Estimates based on the independent model and fully continuous model, on the other hand, were both substantially different (overestimating and underestimating information, respectively) from estimates that we obtained from the mixed vine-based model. The latter model is the most faithful one with the most accurate information estimates. The overestimation of the independent model suggests that spike counts and LFPs carry partly redundant information. The big differences in information estimates further indicate that it can be important to take mixed margins and dependencies into account for estimating mutual information, even if the LR statistic is low.
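For completeness, here is a sketch of the mutual information estimate used in this section. With equiprobable conditions and one fitted model per condition, $I(S;X) = h(X) - \frac{1}{2}\big[h(X \mid s_1) + h(X \mid s_2)\big]$, where $h(X)$ is the entropy of the equal-weight mixture; all entropies are obtained with the MC scheme of Section 2.2. The model objects with `sample` and `log2_pdf` methods are hypothetical stand-ins for the fitted mixed vines:

```python
import numpy as np

def mi_stimulus_response(models, k=10000):
    """I(S;X) for equiprobable conditions, models = [one model per condition].

    Each model must provide sample(k) and log2_pdf(xs).
    """
    p = 1.0 / len(models)
    h_cond = 0.0          # sum_s p(s) h(X|s)
    h_marg = 0.0          # h(X) under the equal-weight mixture
    for m in models:
        xs = m.sample(k)
        h_cond += p * (-m.log2_pdf(xs)).mean()
        # Mixture density log2( sum_s p(s) f(x|s) ) at the same draws.
        log2_mix = np.log2(sum(p * np.exp2(mm.log2_pdf(xs)) for mm in models))
        h_marg += p * (-log2_mix).mean()
    return h_marg - h_cond
```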
Figure 3: Analysis of simulated neural network activity obtained from the VERTEX tool [25]. Data samples are formed by the average LFP within 200-300 ms after simulation onset from two randomly chosen electrodes and spike counts from the four neurons in closest proximity to those electrodes. One simulation run provided one sample only. The network was simulated with two different input conditions: input currents following an Ornstein-Uhlenbeck process had a mean value of 330 pA for the excitatory population and 190 pA for the inhibitory population in condition 1, and 300 pA for the excitatory population and 40 pA for the inhibitory population in condition 2. In both conditions, the standard deviation was 90 pA for the excitatory population and 50 pA for the inhibitory population. (A) LR statistic between the best-fitting mixed vine-based model and the best-fitting mixed independent model (blue) or mixed Gaussian model (red) as a function of the number of samples (i.e. number of simulations in each condition) averaged over stimulus conditions. (B) Mutual information between the neural activity and the two input conditions estimated from the mixed independent model (blue), mixed Gaussian model (red), continuous vine-based model (green) or mixed vine-based model (black) as a function of the number of samples. Lines denote averages over 30 repetitions. Shaded areas denote standard error.

5 Discussion

We developed a complete framework based on vine copulas for modeling multivariate data that are partly discrete and partly continuous. Our framework includes methods for sampling, likelihood calculation and inference. We combined these procedures to estimate entropy and mutual information by means of MC integration. In particular, our methods provide the possibility to construct joint statistical models of LFPs and spike counts. In a biologically realistic network simulation we demonstrated that our mixed vine-based model provides a fit that is better than that of the corresponding independent model and showed that mutual information estimates of fully continuous and mixed independent models can strongly differ even if the likelihood ratio statistic suggests otherwise. For LFP and spike count data, a mixed model with detailed dependence structures can make full use of all available statistical data. This also makes it possible to construct optimal Bayesian decoders for inferring the presented stimulus from both LFPs and spike counts. Moreover, our model provides the possibility to investigate the statistical dependencies between LFPs and spike counts. Contrary to other analysis methods for analyzing mixed LFPs and spiking [12, 5], our framework follows a purely data-driven approach. Even high-dimensional distributions can be fitted, because all inference operations have quadratic complexity. However, entropy and MI estimation can be problematic, because MC integration can become unfeasible for very high-dimensional problems. One possible remedy is to use our models for maximum likelihood decoding and then estimate information based on decoding performance [8]. We note that our models are based on pair-constructions and thus cannot model arbitrary higher-order dependencies. We stress, however, that higher-order correlations do occur in the vine tree and depend on both the vine-tree selection and the copula families. Thus, selecting the right vine tree and copula families can, to a limited extent, account for higher-order correlations. In general, however, limited sample numbers make it difficult to reliably estimate higher-order correlations in real neuroscience applications. The parametric nature of our model framework also makes it possible to introduce dependencies on external variables. Directions for future research include applications to experimentally recorded data and detailed evaluation of observed dependency structures.

Acknowledgments. This work was supported by the European Commission's Horizon 2020 Programme (H2020-MSCA-IF-2014) under grant agreement number 659227 ("STOMMAC").

References

[1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 44(2):182-198, 2009.
[2] E. F. Acar, C. Genest, and J. Nešlehová. Beyond simplified pair-copula constructions. Journal of Multivariate Analysis, 110:74-90, 2012.
[3] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, 1974.
[4] R. H. Byrd, M. E. Hribar, and J. Nocedal. An interior point algorithm for large-scale nonlinear programming.
SIAM Journal on Optimization, 9(4):877-900, 1999.
[5] D. E. Carlson, J. S. Borg, K. Dzirasa, and L. Carin. On the relations of LFPs & neural spike trains. In Advances in Neural Information Processing Systems, pages 2060-2068, 2014.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory. New York: Wiley, second edition, 2006.
[7] A. R. de Leon and B. Wu. Copula-based regression models for a bivariate mixed discrete and continuous outcome. Statistics in Medicine, 30(2):175-185, 2011.
[8] R. A. A. Ince, S. Panzeri, and C. Kayser. Neural codes formed by small and temporally precise populations in auditory cortex. The Journal of Neuroscience, 33(46):18277-18287, 2013.
[9] P. Jaworski, F. Durante, and W. K. Härdle. Copulae in mathematical and quantitative finance. Lecture Notes in Statistics, Proceedings, 213, 2013.
[10] R. L. Jenison and R. A. Reale. The shape of neural dependence. Neural Computation, 16:665-672, 2004.
[11] H. Joe and J. J. Xu. The estimation method of inference functions for margins for multivariate models. Technical Report 166, Department of Statistics, University of British Columbia, 1996.
[12] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee. Local field potentials indicate network state and account for neuronal response variability. Journal of Computational Neuroscience, 29(3):567-579, 2010.
[13] R. B. Nelsen. An Introduction to Copulas. Springer, New York, second edition, 2006.
[14] A. Onken, S. Grünewälder, M. H. J. Munk, and K. Obermayer. Analyzing short-term noise dependencies of spike-counts in macaque prefrontal cortex using copulas and the flashlight transformation. PLoS Computational Biology, 5(11):e1000577, 2009.
[15] A. Panagiotelis, C. Czado, and H. Joe. Pair copula constructions for multivariate discrete data. Journal of the American Statistical Association, 107(499):1063-1072, 2012.
[16] S. Panzeri, J. H. Macke, J. Gross, and C. Kayser. Neural population coding: combining insights from microscopic and mass signals. Trends in Cognitive Sciences, 19(3):162-172, 2015.
[17] J. S. Racine. Mixed data kernel copulas. Empirical Economics, 48(1):37-59, 2015.
[18] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. New York: Springer, second edition, 2004.
[19] L. Sacerdote, M. Tamborrino, and C. Zucca. Detecting dependencies between spike trains of pairs of neurons through copulas. Brain Research, 1434:243-256, 2012.
[20] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379-423, 1948.
[21] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8:229-231, 1959.
[22] M. Smith, A. Min, C. Almeida, and C. Czado. Modeling longitudinal data using a pair-copula decomposition of serial dependence. Journal of the American Statistical Association, 105(492), 2010.
[23] M. S. Smith and M. A. Khaled. Estimation of copula models with discrete margins via Bayesian data augmentation. Journal of the American Statistical Association, 107(497):290-303, 2012.
[24] P. X. K. Song, M. Li, and Y. Yuan. Joint regression analysis of correlated data using Gaussian copulas. Biometrics, 65(1):60-68, 2009.
[25] R. J. Tomsett, M. Ainsworth, A. Thiele, M. Sanayei, X. Chen, M. A. Gieselmann, M. A. Whittington, M. O. Cunningham, and M. Kaiser. Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): comparing multi-electrode recordings from simulated and biological mammalian cortical tissue. Brain Structure and Function, 220(4):2333-2353, 2015.
Recognition-based Segmentation of On-line Hand-printed Words

M. Schenkel*, H. Weissman, I. Guyon, C. Nohl, D. Henderson
AT&T Bell Laboratories, Holmdel, NJ 07733
* Swiss Federal Institute of Technology, CH-8092 Zurich

Abstract

This paper reports on the performance of two methods for recognition-based segmentation of strings of on-line hand-printed capital Latin characters. The input strings consist of a time-ordered sequence of X-Y coordinates, punctuated by pen-lifts. The methods were designed to work in "run-on mode" where there is no constraint on the spacing between characters. While both methods use a neural network recognition engine and a graph-algorithmic post-processor, their approaches to segmentation are quite different. The first method, which we call INSEG (for input segmentation), uses a combination of heuristics to identify particular pen-lifts as tentative segmentation points. The second method, which we call OUTSEG (for output segmentation), relies on the empirically trained recognition engine for both recognizing characters and identifying relevant segmentation points.

1 INTRODUCTION

We address the problem of writer independent recognition of hand-printed words from an 80,000-word English dictionary. Several levels of difficulty in the recognition of hand-printed words are illustrated in figure 1. The examples were extracted from our databases (table 1). Except in the cases of boxed or clearly spaced characters, segmenting characters independently of the recognition process yields poor recognition performance. This has motivated us to explore recognition-based segmentation techniques.

Table 1: Databases used for training and testing. DB2 contains words one to five letters long, but only four and five letter words are constrained to be legal English words. DB3 contains legal English words of any length from an 80,000 word dictionary.

  uppercase database     DB1             DB2           DB3
  data nature            boxed letters   short words   English words
  pad used               AT&T            Grid          Wacom
  training set size      9000            8000          -
  test set size          1500            1000          600
  approx. # of donors    250             400           25

Figure 1: Examples of styles that can be found in our databases: (a) boxed (DB1); (b) spaced (DB2); (c) pen-lifts and (d) connected (DB2 and DB3). The line thickness or darkness is alternated at each pen-lift.

The basic principle of recognition-based segmentation is to present to the recognizer many "tentative characters". The recognition scores ultimately determine the string segmentation. We have investigated two different recognition-based segmentation methods which differ in their definition of the tentative characters, but have very similar recognition engines. The data collection device provides pen trajectory information as a sequence of (x, y) coordinates at regular time intervals (10-15 ms). We use a preprocessing technique which preserves this information by keeping a finely sampled sequence of feature vectors along the pen trajectory (Guyon et al. 1991, Weissman et al. 1992). The recognizer is a Time Delay Neural Network (TDNN) (Lang and Hinton 1988, Waibel et al. 1989, Guyon et al. 1991). There is one output per class, in this case 26 outputs, providing a score for all the capital letters of the Latin alphabet. The critical step in the segmentation process is the postprocessing which disentangles various word hypotheses using the character recognition scores provided by the TDNN.
For this purpose, we use conventional dynamic programming algorithms. In addition we use a dictionary that checks the solution and returns a list of similar legal words. The best word hypothesis, subject to this list, is again chosen by dynamic programming algorithms. Recognition-based segmentation relies on the recognizer to give low confidence scores for wrong tentative characters corresponding to a segmentation mistake. Recognizers trained only on valid characters usually perform poorly on such a task. We use "segmentation-driven training" techniques which allow the training of wrong tentative characters, produced by the segmentation engine itself, as negative examples. This additional training has reduced our error rates by more than a factor of two. In section 2 we describe the INSEG method, which uses tentative characters delineated by heuristic segmentation points. It is expected to be most appropriate for hand-printed capital letters since nearly all writers separate these letters by pen-lifts. This method was inspired by a similar technique used for Optical Character Recognition (OCR) (Burges et al. 1992). In section 3 we present an alternative method, OUTSEG, which expects the recognition engine to learn empirically (learning by examples) both to recognize characters and to identify relevant segmentation points. This second method bears similarities with the OCR methods proposed by Matan et al. (1991) or Keeler et al. (1991). In section 4 we compare the two methods and present experimental results.

2 SEGMENTATION IN INPUT SPACE

Figure 2 shows the different steps of the INSEG process. Module 1 is used to define "tentative characters" delineated by "tentative cuts" (spaces or pen-lifts). The tentative characters are then handed to module 2, which performs the preprocessing and the scoring of the characters with a TDNN. The recognition results are then gathered into an interpretation graph. In module 3 the best path through that graph is found with the Viterbi algorithm.

Figure 2: Processing steps of the INSEG method (pen input, stroke detector & grouper, tentative characters, preprocessor & TDNN, interpretation graph, best path search).

In figure 3 we show a simplified representation of an interpretation graph built by our system. Each tentative character (denoted {i, j}) has a double index: the tentative cut i at the character starting point and the tentative cut j at the character end point. We denote by X{i, j} the node associated to the score of letter X for the tentative character {i, j}. A path through the graph starts at a node X{0, .} and ends at a node Y{., m}, where 0 is the word starting point and m the last pen-lift. In between, only transitions of the kind X{., i} -> Y{i, .} are allowed to prevent character overlapping. To avoid searching through too complex a graph, we need to perform some pruning. The spatial relationship between strokes is used to discard unlikely tentative cuts. For instance, strokes with a large horizontal overlap are bundled. The remaining tentative characters are then grouped in different ways to form alternative tentative characters. Tentative characters separated by a large horizontal spatial interval are never considered for grouping.
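The module-3 search is a best-path computation over the acyclic interpretation graph, since a node X{i, j} can only follow a node ending at cut i. A minimal sketch under the assumption that tentative characters arrive as (i, j, letter, score) tuples with recognizer log-probabilities as scores (a hypothetical input format for illustration):

```python
def best_path(tentatives, m):
    """Best path through the INSEG interpretation graph.

    tentatives: list of (i, j, letter, score) with 0 <= i < j <= m, where
                i, j index tentative cuts and score is a recognizer
                log-probability; transitions X{., i} -> Y{i, .} only.
    Returns (best_score, word).
    """
    NEG = float("-inf")
    best = [NEG] * (m + 1)   # best[i]: best score of a path ending at cut i
    word = [""] * (m + 1)
    best[0] = 0.0
    # Processing edges by increasing end cut j makes this a one-pass DP
    # over the acyclic graph.
    for i, j, letter, score in sorted(tentatives, key=lambda t: t[1]):
        if best[i] > NEG and best[i] + score > best[j]:
            best[j] = best[i] + score
            word[j] = word[i] + letter
    return best[m], word[m]
```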
Figure 3: Graph obtained with the input segmentation method. The grey shading in each box indicates the recognition scores (the darker, the stronger the recognition score and the higher the recognition confidence).

In table 2 we present the results obtained with the TDNN recognizer used by Guyon et al. (1991), with 4 convolutional layers and 6,252 weights. Characters are preprocessed individually, which provides the network with a fixed dimension input.

3 SEGMENTATION IN OUTPUT SPACE

In contrast with INSEG, the OUTSEG method does not rely on human designed segmentation hints: the neural network learns both recognition and segmentation features from examples. Tentative characters are produced simply in that a window is swept over the input sequence in small steps. At each step the content of the window is taken to be a tentative character. Successive characters usually overlap considerably.

Figure 4: TDNN outputs of the OUTSEG system. The grey curve indicates the best path through the graph, using duration modeling. The word "LOOP" was correctly recognized in spite of the ligatures which prevent segmentation on the basis of pen-lifts.

In figure 4, we show the outputs of our TDNN recognizer when the word "LOOP" is processed. The main matrix is a simplified representation of our interpretation graph. Tentative character numbers i (i in {1, 2, ..., m}) run along the time direction. Each column contains the scores of all possible interpretations X (X in {A, B, C, ..., Z, nil}) of a given tentative character. The bottom line is the nil interpretation score, which approximates the probability that the present input is not a character (meaningless character): P(nil{i} | input) = 1 - (P(A{i} | input) + P(B{i} | input) + ... + P(Z{i} | input)). The connections between nodes reflect a model of character durations. A simple way of enforcing duration is to allow only the following transitions: X{i} -> X{i+1}, nil{i} -> nil{i+1}, X{i} -> nil{i+1}, nil{i} -> X{i+1}, where X stands for a certain letter. A character interpretation can be followed by the same interpretation but cannot be followed immediately by another character interpretation: they must be separated by nil. This permits distinguishing between letter duration and letter repetition (such as the double "O" in our example). The best path in the graph is found by the Viterbi algorithm. In fact, this simple pattern of connections corresponds to a Markov model of duration, with exponential decay. We implemented a slightly fancier model which allows the generation of any duration distribution (Weissman et al. 1992) to help prevent character omission or insertion. In our experiments, we selected two Poisson distributions to model the character and the nil-class durations respectively. We use a TDNN recognizer with 3 layers and 10,817 weights. The sequence of recognition scores is obtained by sweeping the neural network over the input. Because of the convolutional structure of the TDNN, there are many identical computations between two successive calls of the recognizer and only about one sixth of the network connections have to be reevaluated for each new tentative character. As a consequence, although the OUTSEG system processes many more tentative characters than the INSEG system does, the overall computation time is about the same.
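A minimal sketch of this Viterbi pass over the OUTSEG lattice follows, using the simple exponential-decay transition pattern described above rather than the fancier duration model; the (m x 27) score array is a hypothetical stand-in for the swept TDNN outputs, with the last column playing the role of nil:

```python
import numpy as np

def outseg_viterbi(scores):
    """Viterbi decoding of the OUTSEG lattice with the transitions
    X->X, nil->nil, X->nil, nil->X (a letter never directly follows
    a different letter).

    scores: (m, 27) array of window scores; columns 0..25 are 'A'..'Z'
            and column 26 is nil. Returns the decoded word.
    """
    m, n = scores.shape
    NIL = n - 1
    logp = np.log(np.clip(scores, 1e-12, None))
    delta = logp[0].copy()                 # best log-score per state
    back = np.zeros((m, n), dtype=int)
    for t in range(1, m):
        new = np.empty(n)
        for s in range(n):
            preds = range(n) if s == NIL else (s, NIL)
            p = max(preds, key=lambda q: delta[q])
            back[t, s] = p
            new[s] = delta[p] + logp[t, s]
        delta = new
    path = [int(np.argmax(delta))]         # backtrack from the best end state
    for t in range(m - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    word, prev = [], NIL                   # collapse: one run = one character
    for s in path:
        if s != NIL and s != prev:
            word.append(chr(ord("A") + s))
        prev = s
    return "".join(word)
```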
4 COMPARISON OF RESULTS AND CONCLUSIONS

Table 2: Comparison of the performance of the two segmentation methods using a TDNN recognizer.

                     Error without dictionary    Error with dictionary
                     % char.    % word           % char.    % word
  on DB2: INSEG         9         18                8.5       15
  on DB2: OUTSEG       10         21                8         17
  on DB3: INSEG         8         33                5         13
  on DB3: OUTSEG       11         48                7         21

We summarize in table 2 the results obtained with our two segmentation methods. To complement the results obtained with database DB2, we used (without retraining) database DB3 as a control, containing words of any length from the English dictionary. In our current versions, INSEG performs better than OUTSEG. The OUTSEG method can handle connected letters (such as in the example of the word "LOOP" in figure 4), while the INSEG method, which relies on pen lifts, cannot. But, we discovered that very few people did not separate their characters by pen lifts in the data we collected. On the other hand, an advantage of the INSEG method is that it can easily be used with recognizers other than the TDNN, whereas the OUTSEG method relies heavily on the convolutional structure of the TDNN for computational efficiency. For comparison, we substituted two other neural network recognizers for the TDNN. These networks use alternative input representations. The OCR-net was designed for Optical Character Recognition (Le Cun et al. 1989) and uses pixel map inputs. Its first layer performs local line orientation detection. The orientation-net has an architecture similar to that of the OCR-net, but its first layer is removed and local line orientation information, directly extracted from the pen trajectory, is transmitted to the second layer (Weissbuch and Le Cun 1992). Without a dictionary, the OCR-net has an error rate more than twice that of the TDNN, but the orientation-net performs similarly. With a dictionary, the orientation-net has a 25% lower error rate than the TDNN. This improvement is attributed to better second and third best recognition choices, which facilitates dictionary use. Our best results to date (table 3) were obtained with the INSEG method, using two recognizers combined with a voting scheme: the TDNN and the orientation-net. For comparison purposes we mention the results obtained by a commercial recognizer on the same data. One should notice that our dictionary is the same as the one from which the data was drawn and is probably a larger dictionary than the one used by the commercial system. Our results are substantially better than those of the commercial system. On an absolute scale they are quite satisfactory if we take into account that the test data was not cleaned at all and that more than 20% of the errors have been identified to be patterns written in cursive, misspelled or totally illegible. We expect the OUTSEG method to work best for cursive handwriting, which does not exhibit trivial segmentation hints, but we do not have any direct evidence to support this expectation as yet. Rumelhart (1992) had success with a version of OUTSEG. Work is in progress to extend the capabilities of our systems to cursive writing.
Table 3: Performance of our best system. For comparison, we mention in parentheses the performances obtained by a commercial recognizer on the same data. The performances of the commercial system with dictionary (marked with a *) are penalized because DB2 and DB3 include words not contained in its dictionary.

            Error without dictionary    Error with dictionary
  Method    % char.    % word           % char.    % word
  DB2       7 (18)     13 (29)          7 (17*)    10 (32*)
  DB3       6 (20)     23 (61)          5 (18*)    11 (49*)

Acknowledgments

We wish to thank the entire Neural Network group at Bell Labs Holmdel for their supportive discussions. Helpful suggestions with the editing of this paper by L. Jackel and B. Boser are gratefully acknowledged. We are grateful to Anne Weissbuch, Yann Le Cun and Jan Ben for giving us their neural networks to try on our INSEG method. We are indebted to Howard Page for providing comparison figures with the commercial recognizer. The experiments were performed with the neural network simulators of B. Boser, Y. Le Cun and L. Bottou, whom we thank for their help and advice.

References

I. Guyon, P. Albrecht, Y. Le Cun, J. Denker and W. Hubbard. Design of a neural network character recognizer for a touch terminal. Pattern Recognition, 24(2), 1991.
H. Weissman, M. Schenkel, I. Guyon, C. Nohl and D. Henderson. Recognition-based segmentation of on-line run-on handprinted words: Input vs. output segmentation. Submitted to Pattern Recognition, October 1992.
K. J. Lang and G. E. Hinton. A time delay neural network architecture for speech recognition. Technical Report CMU-CS-88-152, Carnegie Mellon University, Pittsburgh PA, 1988.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:328-339, March 1989.
C. J. C. Burges, O. Matan, Y. Le Cun, J. Denker, L. D. Jackel, C. E. Stenard, C. R. Nohl and J. I. Ben. Shortest path segmentation: A method for training neural networks to recognize character strings. In IJCNN'92, volume 3, Baltimore, 1992. IEEE.
O. Matan, C. J. C. Burges, Y. Le Cun and J. Denker. Multi-digit recognition using a Space Displacement Neural Network. In J. E. Moody et al., editor, Advances in Neural Information Processing Systems 4, Denver, 1992. Morgan Kaufmann.
J. Keeler, D. E. Rumelhart and W-K. Leow. Integrated segmentation and recognition of hand-printed numerals. In R. Lippmann et al., editor, Advances in Neural Information Processing Systems 3, pages 557-563, Denver, 1991. Morgan Kaufmann.
Y. Le Cun, L. D. Jackel, B. Boser, J. S. Denker, H. P. Graf, I. Guyon, D. Henderson, R. E. Howard and W. Hubbard. Handwritten digit recognition: Application of neural network chips and automatic learning. IEEE Communications Magazine, pages 41-46, November 1989.
A. Weissbuch and Y. Le Cun. Private communication, 1992.
D. Rumelhart et al. Integrated segmentation and recognition of cursive handwriting. In Third NEC Symposium on Computational Learning and Cognition, Princeton, New Jersey, 1992 (to appear).
Asynchronous Parallel Greedy Coordinate Descent

Yang You†‡, Xiangru Lian†§, Ji Liu§, Hsiang-Fu Yu¶, Inderjit S. Dhillon¶, James Demmel‡, Cho-Jui Hsieh*
† equally contributed
‡ University of California, Berkeley; § University of Rochester; ¶ University of Texas, Austin; * University of California, Davis
youyang@cs.berkeley.edu, xiangru@yandex.com, jliu@cs.rochester.edu, {rofuyu,inderjit}@cs.utexas.edu, demmel@eecs.berkeley.edu, chohsieh@cs.ucdavis.edu

Abstract

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous (each core does not need to idle and wait for the other cores), the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.

1 Introduction

Asynchronous parallel optimization has recently become a popular way to speed up machine learning algorithms using multiple processors. The key idea of asynchronous parallel optimization is to allow machines to work independently without waiting for the synchronization points. It has many successful applications including linear SVM [13, 19], deep neural networks [7, 15], matrix completion [19, 31], linear programming [26], and its theoretical behavior has been deeply studied in the past few years [1, 9, 16]. The most widely used asynchronous optimization algorithms are the stochastic gradient method (SG) [7, 9, 19] and coordinate descent (CD) [1, 13, 16], where the workers keep selecting a sample or a variable randomly and conduct the corresponding update asynchronously. Although these stochastic algorithms have been studied deeply, in some important machine learning problems a "greedy" approach can achieve much faster convergence speed. A very famous example is greedy coordinate descent: instead of randomly choosing a variable, at each iteration the algorithm selects the most important variable to update. If this selection step can be implemented efficiently, greedy coordinate descent can often make bigger progress compared with stochastic coordinate descent, leading to a faster convergence speed. For example, the decomposition method (a variant of greedy coordinate descent) is widely known as the best solver for kernel SVM [14, 21], and is implemented in LIBSVM and SVMLight. Other successful applications can be found in [8, 11, 29]. In this paper, we study an asynchronous greedy coordinate descent algorithm framework. The variables are partitioned into subsets, and each worker asynchronously conducts greedy coordinate descent in one of the blocks. To our knowledge, this is the first paper to present a theoretical analysis or practical applications of this asynchronous parallel algorithm. In the first part of the paper, we formally define the asynchronous greedy coordinate descent procedure, and prove a linear convergence rate under mild assumptions. In the second part of the paper, we discuss how to apply this algorithm to solve the kernel SVM problem on multi-core machines.
Our algorithm achieves linear speedup with the number of cores, and performs better than other multi-core SVM solvers.

The rest of the paper is outlined as follows. The related work is discussed in Section 2. We propose the asynchronous greedy coordinate descent algorithm in Section 3 and derive the convergence rate in the same section. In Section 4 we show the details of how to apply this algorithm for training kernel SVM, and the experimental comparisons are presented in Section 5.

2 Related Work

Coordinate Descent. Coordinate descent (CD) has been extensively studied in the optimization community [2], and has become widely used in machine learning. At each iteration, only one variable is chosen and updated while all the other variables remain fixed. CD can be classified into stochastic coordinate descent (SCD), cyclic coordinate descent (CCD) and greedy coordinate descent (GCD) based on the variable selection scheme. In SCD, variables are chosen randomly based on some distribution, and this simple approach has been successfully applied in solving many machine learning problems [10, 25]. The theoretical analysis of SCD has been discussed in [18, 22]. Cyclic coordinate descent updates variables in a cyclic order, and has also been applied to several applications [4, 30].

Greedy Coordinate Descent (GCD). The idea of GCD is to select a good, instead of random, coordinate that can yield better reduction of the objective function value. This can often be measured by the magnitude of the gradient, projected gradient (for constrained minimization) or proximal gradient (for composite minimization). Since the variable is carefully selected, at each iteration GCD can reduce the objective function more than SCD or CCD, which leads to faster convergence in practice. Unfortunately, selecting a variable with larger gradient is often time consuming, so one needs to carefully organize the computation to avoid the overhead, and this is often problem dependent. The most famous application of GCD is the decomposition method [14, 21] used in kernel SVM. By exploiting the structure of quadratic programming, selecting the variable with largest gradient magnitude can be done without any overhead; as a result GCD has become the dominant technique for solving kernel SVM, and is implemented in LIBSVM [5] and SVMLight [14]. There are also other applications of GCD, such as non-negative matrix factorization [11] and large-scale linear SVM [29], and [8] proposed an approximate way to select variables in GCD. Recently, [20] proved an improved convergence bound for greedy coordinate descent. We focus on parallelizing the GS-r rule in this paper, but our analysis can potentially be extended to the GS-q rule mentioned in that paper. To the best of our knowledge, the only literature discussing how to parallelize GCD is [23, 24]. Thread-greedy/block-greedy coordinate descent is a synchronized parallel GCD for L1-regularized empirical risk minimization. At each iteration, each thread randomly selects a block of coordinates from a pre-partitioned block partition and proposes the best coordinate from this block along with its increment (i.e., step size). Then all the threads are synchronized to perform the actual update to the variables. However, the method can potentially diverge; indeed, this is mentioned in [23] about the potential divergence when the number of threads is large. [24] establishes sub-linear convergence for this algorithm.
Asynchronous Parallel Optimization Algorithms. In a synchronous algorithm, each worker conducts local updates, and at the end of each round all workers have to stop and communicate to obtain the new parameters. This is not efficient when scaling to large problems due to the curse of the last reducer (all the workers have to wait for the slowest one). In contrast, in asynchronous algorithms there is no synchronization point, so the throughput is much higher than in a synchronized system. As a result, much recent work has focused on developing asynchronous parallel algorithms for machine learning as well as providing theoretical guarantees for them [1, 7, 9, 13, 15, 16, 19, 28, 31]. In distributed systems, asynchronous algorithms are often implemented using the concept of parameter servers [7, 15, 28]. In such a setting, each machine asynchronously communicates with the server to read or write the parameters. In our experiments, we focus on another setting, multi-core shared memory, where multiple cores in a single machine conduct updates independently and asynchronously, and communication happens implicitly by reading/writing the parameters stored in the shared memory space. This was first discussed in [19] for the stochastic gradient method, and recently proposed for parallelizing stochastic coordinate descent [13, 17]. Ours is the first work proposing an asynchronous greedy coordinate descent framework. The closest work to ours is [17] on asynchronous stochastic coordinate descent (ASCD). In their algorithm, each worker asynchronously conducts the following updates: (1) randomly select a variable, (2) compute the update and write it to memory or the server. In our Asy-GCD algorithm, each worker instead selects the best variable to update within a block, which leads to faster convergence. We also compare with ASCD in the experimental results for solving the kernel SVM problem.

3 Asynchronous Greedy Coordinate Descent

We consider the following constrained minimization problem:
$$\min_{x \in \Omega} f(x), \qquad (1)$$
where $f$ is convex and smooth, $\Omega \subseteq \mathbb{R}^N$ is the constraint set, $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_N$, and each $\Omega_i$, $i = 1, 2, \ldots, N$, is a closed subinterval of the real line.

Notation: We denote by $S$ the optimal solution set of (1) and by $P_S(x)$, $P_\Omega(x)$ the Euclidean projections of $x$ onto $S$ and $\Omega$, respectively. We also denote by $f^*$ the optimal objective function value of (1).

We propose the following Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) for solving (1). Assume the $N$ coordinates are divided into $n$ non-overlapping sets $S_1 \cup \ldots \cup S_n$. Let $k$ be the global counter of the total number of updates. In Asy-GCD, each processor repeatedly runs the following GCD updates:

- Randomly select a set $S_k \in \{S_1, \ldots, S_n\}$ and pick the coordinate $i_k \in S_k$ whose projected gradient (defined in (2)) has the largest absolute value.
- Update the parameter by $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(\hat{x}_k))$, where $\gamma$ is the step size.

Here the projected gradient, defined by
$$\nabla^+_{i_k} f(\hat{x}_k) := x_k - P_\Omega\big(x_k - \gamma \nabla_{i_k} f(\hat{x}_k)\big), \qquad (2)$$
is a measure of the optimality of each variable, where $\hat{x}_k$ is the current point stored in memory and used to calculate the update. The processors run concurrently without synchronization. In order to analyze Asy-GCD, we capture the system-wide global view in Algorithm 1.

Algorithm 1 Asynchronous Parallel Greedy Coordinate Descent (Asy-GCD)
Input: $x_0 \in \Omega$, $\gamma$, $K$. Output: $x_{K+1}$.
1: Initialize $k \leftarrow 0$;
2: while $k \le K$ do
3:   Choose $S_k$ from $\{S_1, \ldots, S_n\}$ with equal probability;
4:   Pick $i_k = \arg\max_{i \in S_k} \|\nabla^+_i f(\hat{x})\|$;
5:   $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(\hat{x}_k))$;
6:   $k \leftarrow k + 1$;
7: end while
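For intuition, the per-worker loop of Algorithm 1 can be sketched as follows (a structural illustration only, with our own names: grad_i(x, i) is assumed to compute the i-th partial gradient; the write to x[i] is assumed atomic, and CPython threads serialize on the GIL, so this shows the concurrency pattern rather than a real speedup - the authors' setting is C++ with shared memory):

    import threading
    import numpy as np

    def asygcd_worker(x, grad_i, blocks, lo, hi, step, num_iters, rng):
        # x is shared across all workers; reads are unsynchronized, which
        # corresponds to the "inconsistent read" model of Assumption 1.
        for _ in range(num_iters):
            S = blocks[rng.integers(len(blocks))]        # random block S_k
            g = np.array([grad_i(x, i) for i in S])      # partials on S_k
            pg = x[S] - np.clip(x[S] - step * g, lo[S], hi[S])
            j = int(np.argmax(np.abs(pg)))               # greedy pick in S_k
            i = S[j]
            x[i] = np.clip(x[i] - step * g[j], lo[i], hi[i])

    # n workers sharing the same array x, with step = 1 / (3 * L_max):
    # threads = [threading.Thread(target=asygcd_worker,
    #            args=(x, grad_i, blocks, lo, hi, step, K // n,
    #                  np.random.default_rng(t))) for t in range(n)]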
The update in the $k$-th iteration is $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(\hat{x}_k))$, where $i_k$ is the coordinate selected in the $k$-th iteration, $\hat{x}_k$ is the point used to calculate the gradient, and $\nabla_{i_k} f(\hat{x}_k)$ is a vector that is zero everywhere except that its $i_k$-th coordinate equals the corresponding coordinate of the gradient of $f$ at $\hat{x}_k$. Note that $\hat{x}_k$ may not equal the current value of the optimization variable $x_k$ due to asynchrony. Later, in the theoretical analysis, we will need to assume $\hat{x}_k$ is close to $x_k$ using the bounded delay assumption. In the following we prove the convergence behavior of Asy-GCD. We first make some commonly used assumptions:

Assumption 1.
1. (Bounded Delay) There is a set $J(k) \subseteq \{k-1, \ldots, k-T\}$ for each iteration $k$ such that
$$\hat{x}_k := x_k - \sum_{j \in J(k)} (x_{j+1} - x_j), \qquad (3)$$
where $T$ is the upper bound on the staleness. In this "inconsistent read" model, we assume some of the latest $T$ updates have not yet been written back to memory. This model is also used in previous papers [1, 17] and is more general than the "consistent read" model, which assumes $\hat{x}_k$ equals some previous iterate.
2. For simplicity, we assume each set $S_i$, $i \in \{1, \ldots, n\}$, has $m$ coordinates.
3. (Lipschitzian Gradient) The gradient of the objective, $\nabla f(\cdot)$, is Lipschitzian, that is,
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \forall x, \forall y. \qquad (4)$$
Under the Lipschitzian gradient assumption, we can define three more constants $L_{res}$, $L_s$ and $L_{max}$. Define $L_{res}$ to be the restricted Lipschitz constant satisfying
$$\|\nabla f(x) - \nabla f(x + t e_i)\| \le L_{res} |t| \qquad (5)$$
for all $i \in \{1, 2, \ldots, N\}$ and $t \in \mathbb{R}$ with $x, x + t e_i \in \Omega$. Let $\nabla_i$ be the operator returning a vector that is zero except that its $i$-th coordinate equals the $i$-th coordinate of the gradient. Define $L_{(i)}$ for $i \in \{1, 2, \ldots, N\}$ as the minimal constant satisfying
$$\|\nabla_i f(x) - \nabla_i f(x + \delta e_i)\| \le L_{(i)} |\delta|. \qquad (6)$$
Define $L_{max} := \max_{i \in \{1,\ldots,N\}} L_{(i)}$. It can be seen that $L_{max} \le L_{res} \le L$. Let $s$ be any positive integer bounded by $N$. Define $L_s$ to be the minimal constant satisfying the following inequality for all $x \in \Omega$ and $S \subseteq \{1, 2, \ldots, N\}$ with $|S| \le s$:
$$\Big\| \nabla f(x) - \nabla f\Big(x + \sum_{i \in S} \delta_i e_i\Big) \Big\| \le L_s \Big\| \sum_{i \in S} \delta_i e_i \Big\|.$$
4. (Global Error Bound) We assume that our objective $f$ has the following property: when $\gamma = \frac{1}{3 L_{max}}$, there exists a constant $\kappa$ such that
$$\|x - P_S(x)\| \le \kappa \|\bar{x} - x\|, \quad \forall x \in \Omega, \qquad (7)$$
where $\bar{x}$ is defined by $\bar{x} := \arg\min_{x' \in \Omega} \langle \nabla f(x), x' - x \rangle + \frac{1}{2\gamma} \|x' - x\|^2$. This is satisfied by strongly convex objectives and some weakly convex objectives. For example, it is proved in [27] that the kernel SVM problem (9) satisfies the global error bound even when the kernel is not strictly positive definite.
5. (Independence) All random variables in $\{S_k\}_{k=0,1,\ldots,K}$ in Algorithm 1 are independent of each other.

We then have the following convergence result:

Theorem 2 (Convergence). Choose $\gamma = 1/(3 L_{max})$ in Algorithm 1. Suppose $n \ge 6$ and that the upper bound $T$ on the staleness satisfies the condition
$$T(T+1) \le \frac{\sqrt{n}\, L_{max}}{4 e L_{res}}. \qquad (8)$$
Under Assumption 1, we have the following convergence rate for Algorithm 1:
$$\mathbb{E}\big(f(x_k) - f^*\big) \le \Big(1 - \frac{2 L_{max}\, b}{L \kappa^2 n}\Big)^{k} \big(f(x_0) - f^*\big),$$
where $b$ is defined as
$$b = \Big(\frac{L_T^2}{18 \sqrt{n}\, L_{max} L_{res}} + 2\Big)^{-1}.$$
This theorem indicates a linear convergence rate under the global error bound and the condition $T^2 \le O(\sqrt{n})$. Since $T$ is usually proportional to the total number of cores involved in the computation, this result suggests that one can obtain linear speedup as long as the total number of cores is smaller than $O(n^{1/4})$.
Note that for $n = N$, Algorithm 1 reduces to the standard asynchronous coordinate descent algorithm (ASCD), and our result is essentially consistent with the one in [17], although they use the optimal strong convexity assumption for $f(\cdot)$; optimal strong convexity is a condition similar to the global error bound assumption [32]. Here we briefly discuss the constants involved in the convergence rate. Taking Gaussian kernel SVM on covtype as a concrete example: $L_{max} = 1$ for the Gaussian kernel, $L_{res}$ is the maximum norm of the columns of the kernel matrix ($\approx 3.5$), $L$ is the 2-norm of $Q$ (21.43 for covtype), and the condition number is $\kappa \approx 1190$. As the number of samples increases, the condition number $\kappa$ becomes the dominant term, and it also appears in the rate of serial greedy coordinate descent. In terms of speedup when increasing the number of threads $T$: although $L_T$ may grow, it only appears in $b = \big(\frac{L_T^2}{18\sqrt{n}\, L_{max} L_{res}} + 2\big)^{-1}$, and the first term inside $b$ is usually small since there is a $\sqrt{n}$ in the denominator. Therefore $b \approx \frac{1}{2}$ in most cases, which means the convergence rate does not slow down much when we increase $T$.

4 Application to Multi-core Kernel SVM

In this section, we demonstrate how to apply asynchronous parallel greedy coordinate descent to solve kernel SVM [3, 6]. We follow the conventional notation for kernel SVM, where the variables of the dual form are $\alpha \in \mathbb{R}^\ell$ (instead of $x$ as in the previous section). Given training samples $\{a_i\}_{i=1}^{\ell}$ with corresponding labels $y_i \in \{+1, -1\}$, kernel SVM solves the following quadratic minimization problem:
$$\min_{\alpha \in \mathbb{R}^\ell} \ \frac{1}{2} \alpha^T Q \alpha - e^T \alpha =: f(\alpha) \quad \text{s.t.} \quad 0 \le \alpha \le C, \qquad (9)$$
where $Q$ is an $\ell \times \ell$ symmetric matrix with $Q_{ij} = y_i y_j K(a_i, a_j)$ and $K(a_i, a_j)$ is the kernel function. The Gaussian kernel is a widely used kernel function, with $K(a_i, a_j) = e^{-\gamma \|a_i - a_j\|^2}$. Greedy coordinate descent is the most popular way to solve kernel SVM. In the following, we first introduce greedy coordinate descent for kernel SVM, and then discuss the detailed update rule and implementation issues when applying our proposed Asy-GCD algorithm on multi-core machines.

4.1 Kernel SVM and greedy coordinate descent

When we apply coordinate descent to the dual form of kernel SVM (9), the single-variable update rule for any index $i$ can be computed as
$$\alpha_i^* = P_{[0,C]}\big(\alpha_i - \nabla_i f(\alpha) / Q_{ii}\big), \qquad (10)$$
where $P_{[0,C]}$ is the projection onto the interval $[0, C]$ and the gradient is $\nabla_i f(\alpha) = (Q\alpha)_i - 1$. Note that this update rule differs slightly from (2) in that the step size is set to $\gamma = 1/Q_{ii}$. For quadratic functions this step size leads to faster convergence because the $\alpha_i^*$ obtained by (10) is the closed-form solution of $\arg\min_{\delta} f(\alpha + \delta e_i)$, where $e_i$ is the $i$-th indicator vector. As in Algorithm 1, we choose the best coordinate based on the magnitude of the projected gradient, which in this case is, by definition,
$$\nabla^+_i f(\alpha) = \alpha_i - P_{[0,C]}\big(\alpha_i - \nabla_i f(\alpha)\big). \qquad (11)$$
The success of GCD in solving kernel SVM is mainly due to the maintenance of the gradient $g := \nabla f(\alpha) = Q\alpha - 1$. Consider the update rule (10): it requires $O(\ell)$ time to compute $(Q\alpha)_i$, which is the per-update cost of stochastic or cyclic coordinate descent. However, in the following we show that GCD has the same per-update time complexity by using the trick of maintaining $g$ during the whole procedure. If $g$ is available in memory, each element of the projected gradient (11) can be computed in $O(1)$ time, so selecting the variable with the maximum projected-gradient magnitude only costs $O(\ell)$ time. The single-variable update (10) can then be computed in $O(1)$ time.
After the update $\alpha_i \leftarrow \alpha_i + \delta^*$, the vector $g$ has to be updated by $g \leftarrow g + \delta^* q_i$, where $q_i$ is the $i$-th column of $Q$. This also costs $O(\ell)$ time. Therefore, each GCD update costs only $O(\ell)$ using this trick of maintaining $g$, and so, for solving kernel SVM, GCD is faster than SCD and CCD since there is no additional cost for selecting the best variable to update. Note that in the above discussion we assume $Q$ can be stored in memory. Unfortunately, this is not the case for large-scale problems because $Q$ is an $\ell \times \ell$ dense matrix, where $\ell$ can be in the millions. We discuss how to deal with this issue in Section 4.3. With the trick of maintaining $g = Q\alpha - 1$, GCD for solving (9) can be summarized in Algorithm 2.

Algorithm 2 Greedy Coordinate Descent (GCD) for Dual Kernel SVM
1: Initialize $g = -1$, $\alpha = 0$
2: For $k = 1, 2, \ldots$
3:   step 1: Pick $i = \arg\max_i |\nabla^+_i f(\alpha)|$ using $g$ (see eq. (11))
4:   step 2: Compute $\delta^*$ by eq. (10)
5:   step 3: $g \leftarrow g + \delta^* q_i$
6:   step 4: $\alpha_i \leftarrow \alpha_i + \delta^*$

4.2 Asynchronous greedy coordinate descent

Suppose we have $n$ threads in a multi-core shared memory machine, and the dual variables (and the corresponding training samples) are partitioned into the same number of blocks: $S_1 \cup S_2 \cup \cdots \cup S_n = \{1, 2, \ldots, \ell\}$ with $S_i \cap S_j = \emptyset$ for all $i \neq j$. We now apply the Asy-GCD algorithm to solve (9). For better memory allocation of the kernel cache (see Section 4.3), we bind each thread to a partition. The behavior of our algorithm still follows Asy-GCD because the sequence of updates is asynchronously random. The method is summarized in Algorithm 3.

Algorithm 3 Asy-GCD for Dual Kernel SVM
1: Initialize $g = -1$, $\alpha = 0$
2: Each thread $t$ repeatedly performs the following updates in parallel:
3:   step 1: Pick $i = \arg\max_{i \in S_t} |\nabla^+_i f(\alpha)|$ using $g$ (see eq. (11))
4:   step 2: Compute $\delta^*$ by eq. (10)
5:   step 3: For $j = 1, 2, \ldots, \ell$: $g_j \leftarrow g_j + \delta^* Q_{j,i}$, using atomic updates
6:   step 4: $\alpha_i \leftarrow \alpha_i + \delta^*$

Note that each thread reads the $\ell$-dimensional vector $g$ in steps 1 and 2 and updates $g$ in step 3 in shared memory. For the reads, we do not use any atomic operations. For the writes, we maintain the correctness of $g$ by atomic writes; otherwise some updates to $g$ might be overwritten by others and the algorithm cannot converge to the optimal solution. Theorem 2 suggests a linear convergence rate for our algorithm, and in the experimental results we will see that it is much faster than the widely used asynchronous stochastic coordinate descent (Asy-SCD) algorithm [17].
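To make the bookkeeping concrete, here is a minimal single-threaded sketch of Algorithm 2 (our own naming; Q is assumed to be precomputed and to fit in memory, which Section 4.3 relaxes). The parallel Algorithm 3 runs this same loop in every thread, restricted to its own block S_t and with atomic writes to g:

    import numpy as np

    def gcd_kernel_svm(Q, C, num_iters):
        """Greedy CD on f(a) = 0.5 * a'Qa - sum(a), 0 <= a <= C,
        maintaining g = grad f(a) = Q a - 1 so each step costs O(ell)."""
        ell = Q.shape[0]
        alpha = np.zeros(ell)
        g = -np.ones(ell)                      # gradient at alpha = 0
        for _ in range(num_iters):
            # Projected gradient, eq. (11): O(ell) given g.
            pg = alpha - np.clip(alpha - g, 0.0, C)
            i = int(np.argmax(np.abs(pg)))     # step 1: greedy selection
            # Step 2: closed-form single-variable update, eq. (10).
            delta = np.clip(alpha[i] - g[i] / Q[i, i], 0.0, C) - alpha[i]
            g += delta * Q[:, i]               # step 3: maintain g, O(ell)
            alpha[i] += delta                  # step 4
        return alpha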
4.3 Implementation Issues

In addition to the main algorithm, there are some practical issues we need to handle in order to make the Asy-GCD algorithm scale to large kernel SVM problems. We discuss them here.

Kernel Caching. The main difficulty in scaling kernel SVM to large datasets is the memory requirement for storing the $Q$ matrix, which takes $O(\ell^2)$ memory. In the GCD algorithm, step 2 (see eq. (10)) requires a diagonal element of $Q$, which can be precomputed and stored in memory. The main difficulty is step 3, where a column of $Q$ (denoted by $q_i$) is needed. If $q_i$ is in memory, the step only takes $O(\ell)$ time; if it is not, recomputing it from scratch takes $O(d\ell)$ time. As a result, how to keep the most important columns of $Q$ in memory is an important implementation issue in SVM software. In LIBSVM, the user can specify the amount of memory used for storing columns of $Q$. The columns of $Q$ are stored in a linked list in memory, and when memory space runs out, the least recently used column is evicted (the LRU strategy). In our implementation, instead of sharing the same LRU cache across all cores, we create an individual LRU cache for each core and keep the memory used by a core contiguous. As a result, remote memory accesses happen less often in a NUMA system when there is more than one CPU in the same computer. Using this technique, our algorithm is able to scale up on a multi-socket machine (see Figure 2).

Variable Partitioning. The theory of Asy-GCD allows any non-overlapping partition of the dual variables. However, we observe that a partition minimizing between-cluster connections often leads to faster convergence. This idea has been used in a divide-and-conquer SVM algorithm [12], and we use the same idea to obtain the partition. More specifically, we partition the data by running the kmeans algorithm on a subset of 20,000 training samples to obtain cluster centers $\{c_r\}_{r=1}^n$, and then assign each $i$ to the nearest center: $\pi(i) = \arg\min_r \|c_r - a_i\|$ (a small sketch of this step is given below). This step can be easily parallelized and costs less than 3 seconds on all the datasets used in the experiments. Note that we include this kmeans time in all our experimental comparisons.
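The partitioning step is easy to reproduce; the following sketch uses scikit-learn for illustration (the authors' own implementation is not specified, and all names here are ours):

    import numpy as np
    from sklearn.cluster import KMeans

    def partition_variables(A, n_blocks, n_subsample=20000, seed=0):
        """Cluster a subsample of the training points, then assign every
        sample (and hence its dual variable) to the nearest center,
        i.e. pi(i) = argmin_r ||c_r - a_i||."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(A.shape[0], size=min(n_subsample, A.shape[0]),
                         replace=False)
        km = KMeans(n_clusters=n_blocks, n_init=1).fit(A[idx])
        pi = km.predict(A)
        return [np.flatnonzero(pi == r) for r in range(n_blocks)]  # blocks S_r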
5 Experimental Results

We conduct experiments to show that the proposed Asy-GCD method achieves good speedup when parallelizing kernel SVM on multi-core systems. We consider three datasets: ijcnn1, covtype and webspam (see Table 1 for detailed information). We follow the parameter settings in [12], where $C$ and $\gamma$ are selected by cross validation.

Table 1: Data statistics. $\ell$ is the number of training samples, $d$ is the dimensionality, $\ell_t$ is the number of testing samples.

            $\ell$      $\ell_t$    $d$    $C$   $\gamma$
  ijcnn1    49,990      91,701      22     32    2
  covtype   464,810     116,202     54     32    32
  webspam   280,000     70,000      254    8     32

Figure 1: Comparison of Asy-GCD with 1-20 threads on the ijcnn1, covtype and webspam datasets. Panels: (a) ijcnn1 time vs. objective, (b) webspam time vs. objective, (c) covtype time vs. objective.

All experiments are run on the same system with 20 CPU cores and 256GB memory, where the machine has two sockets with 10 cores each. We allocate 64GB for kernel caching for all the algorithms. In our algorithm, the 64GB is distributed across the cores; for example, for Asy-GCD with 20 cores, each core has a 3.2GB kernel cache. We include the following algorithms/implementations in our comparison:

1. Asy-GCD: our proposed method, implemented in C++ with OpenMP. Note that the preprocessing time for computing the partition is included in all the timing results.
2. PSCD: we implement the asynchronous stochastic coordinate descent approach [17] for solving kernel SVM. Instead of forming the whole kernel matrix at the beginning (which cannot scale to the datasets we use), we use the same kernel caching technique as Asy-GCD to scale up PSCD.
3. LIBSVM (OMP): LIBSVM has an option to speed up the algorithm in a multi-core environment using OpenMP (see http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f432). This approach uses multiple cores when computing a column of the kernel matrix ($q_i$, used in step 3 of Algorithm 2).

All the implementations are modified from LIBSVM (e.g., they share a similar LRU cache class), so the comparison is fair. We conduct the following two sets of experiments. Note that another recently proposed solver, DC-SVM [12], is currently not parallelizable; however, since it is a meta-algorithm that requires training a series of SVM problems, our algorithm can naturally serve as a building block of DC-SVM.

5.1 Scaling with number of cores

In the first set of experiments, we test the speedup of our algorithm with a varying number of cores. The results are presented in Figure 1 and Figure 2. We make the following observations:

- Time vs. objective (for 1, 2, 4, 10, 20 cores). From Fig. 1 (a)-(c), we observe that the objective decreases faster when more CPU cores are used.
- Cores vs. speedup. From Fig. 2, we observe good strong scaling as we increase the number of threads. Note that our computer has two sockets with 10 cores each, and our algorithm often achieves a 13-15x speedup. This suggests our algorithm can scale to multiple sockets in a Non-Uniform Memory Access (NUMA) system. Previous asynchronous parallel algorithms such as HogWild [19] or PASSCoDe [13] often struggle when scaling to multiple sockets.

5.2 Comparison with other methods

We now compare the efficiency of our proposed algorithm with other multi-core parallel kernel SVM solvers on real datasets in Figure 3. All the algorithms in this comparison use 20 cores and 64GB of memory for kernel caching. Note that LIBSVM solves the kernel SVM problem with a bias term, so its objective function value is not shown in the figures. We make the following observations:

Figure 2: The scalability of Asy-GCD with up to 20 threads. Panels: (a) ijcnn1 cores vs. speedup, (b) webspam cores vs. speedup, (c) covtype cores vs. speedup.

Figure 3: Comparison among multi-core kernel SVM solvers; all solvers use 20 cores and the same amount of memory. Panels: (a) ijcnn1 time vs. accuracy, (b) covtype time vs. accuracy, (c) webspam time vs. accuracy, (d) ijcnn1 time vs. objective, (e) covtype time vs. objective, (f) webspam time vs. objective.

- Our algorithm achieves much faster convergence in terms of objective function value compared with PSCD. This is not surprising: using the trick of maintaining $g$ (see Section 4), the greedy approach can select the best variable to update, while the stochastic approach chooses variables randomly. In terms of accuracy, PSCD is sometimes good in the beginning but converges very slowly to the best accuracy. For example, on covtype the accuracy of PSCD remains at 93% after 4000 seconds, while our algorithm achieves 95% accuracy after 1500 seconds.
- LIBSVM (OMP) is slower than our method. The main reason is that it only uses multiple cores when computing kernel values, so the computational power is wasted whenever the needed kernel column ($q_i$) is already available in memory.

Conclusions. In this paper, we propose an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm and prove a linear convergence rate under mild conditions. We show that our algorithm is useful for parallelizing the greedy coordinate descent method for solving kernel SVM, and the resulting solver is much faster than existing multi-core SVM solvers.

Acknowledgement. XL and JL are supported by NSF grant CNS-1548078. HFY and ISD are supported by NSF grants CCF-1320746, IIS-1546459 and CCF-1564000. YY and JD are supported by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Award Number DE-SC0010200; by the U.S.
Department of Energy Office of Science, Office of Advanced Scientific Computing Research under Award Numbers DE-SC0008700 and AC02-05CH11231; by DARPA Award Number HR0011-12-2-0016; and by Intel, Google, HP, Huawei, LGE, Nokia, NVIDIA, Oracle, Samsung, MathWorks and Cray. CJH also thanks XSEDE and NVIDIA for their support.

References

[1] H. Avron, A. Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate through randomization. In IEEE International Parallel and Distributed Processing Symposium, 2014.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA 02178-9998, second edition, 1999.
[3] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In COLT, 1992.
[4] A. Canutescu and R. Dunbrack. Cyclic coordinate descent: A robotics algorithm for protein loop closure. Protein Science, 2003.
[5] C.-C. Chang and C.-J. Lin. LIBSVM: Introduction and benchmarks. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, 2000.
[6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273-297, 1995.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
[8] I. S. Dhillon, P. Ravikumar, and A. Tewari. Nearest neighbor based greedy coordinate descent. In NIPS, 2011.
[9] J. C. Duchi, S. Chaturapruek, and C. Ré. Asynchronous stochastic convex optimization. arXiv preprint arXiv:1508.00882, 2015.
[10] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, 2008.
[11] C.-J. Hsieh and I. S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. In KDD, 2011.
[12] C.-J. Hsieh, S. Si, and I. S. Dhillon. A divide-and-conquer solver for kernel support vector machines. In ICML, 2014.
[13] C.-J. Hsieh, H. F. Yu, and I. S. Dhillon. PASSCoDe: Parallel ASynchronous Stochastic dual Coordinate Descent. In International Conference on Machine Learning (ICML), 2015.
[14] T. Joachims. Making large-scale SVM learning practical. In Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998.
[15] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[16] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. 2014.
[17] J. Liu, S. J. Wright, C. Ré, and V. Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. In ICML, 2014.
[18] Y. E. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[19] F. Niu, B. Recht, C. Ré, and S. J. Wright. HOGWILD!: a lock-free approach to parallelizing stochastic gradient descent. In NIPS, pages 693-701, 2011.
[20] J. Nutini, M. Schmidt, I. H. Laradji, M. Friedlander, and H. Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In ICML, 2015.
[21] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, Cambridge, MA, 1998. MIT Press.
[22] P. Richtárik and M. Takáč.
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144:1-38, 2014.
[23] C. Scherrer, M. Halappanavar, A. Tewari, and D. Haglin. Scaling up coordinate descent algorithms for large l1 regularization problems. In ICML, 2012.
[24] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, 2012.
[25] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567-599, 2013.
[26] S. Sridhar, S. Wright, C. Ré, J. Liu, V. Bittorf, and C. Zhang. An approximate, efficient LP solver for LP rounding. NIPS, 2013.
[27] P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15:1523-1548, 2014.
[28] E. P. Xing, W. Dai, J. Kim, J. Wei, S. Lee, X. Zheng, P. Xie, A. Kumar, and Y. Yu. Petuum: A new platform for distributed machine learning on big data. In KDD, 2015.
[29] I. Yen, C.-F. Chang, T.-W. Lin, S.-W. Lin, and S.-D. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In KDD, 2013.
[30] H.-F. Yu, C.-J. Hsieh, S. Si, and I. S. Dhillon. Parallel matrix factorization for recommender systems. KAIS, 2013.
[31] H. Yun, H.-F. Yu, C.-J. Hsieh, S. Vishwanathan, and I. S. Dhillon. NOMAD: Non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. In VLDB, 2014.
[32] H. Zhang. The restricted strong convexity revisited: Analysis of equivalence to error bound and quadratic growth. ArXiv e-prints, 2015.
Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data

Xinghua Lou, Ken Kansky, Wolfgang Lehrach, CC Laan
Vicarious FPC Inc., San Francisco, USA
{xinghua,ken,wolfgang,cc}@vicarious.com

Bhaskara Marthi, D. Scott Phoenix, Dileep George
Vicarious FPC Inc., San Francisco, USA
{bhaskara,scott,dileep}@vicarious.com

Abstract

We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required by competing discriminative methods. In addition to transcribing text from challenging images, our method performs fine-grained instance segmentation of characters. We show that our model is more robust to both affine transformations and non-affine deformations compared to previous approaches.

1 Introduction

Classic optical character recognition (OCR) tools focus on reading text from well-prepared scanned documents. They perform poorly when used for reading text from images of real-world scenes [1]. Scene text exhibits very strong variation in font, appearance, and deformation, and image quality can be lowered by many factors, including noise, blur, illumination change and structured background. Fig. 1 shows some representative images from two major scene text datasets: International Conference on Document Analysis and Recognition (ICDAR) 2013 and Street View Text (SVT).

Figure 1: Examples of text in real-world scenes: ICDAR 2013 (left two columns) and SVT (right two columns). Unlike classic OCR, which handles well-prepared scanned documents, scene text recognition is difficult because of the strong variation in font, background, appearance, and distortion.

Despite these challenges, the machine learning and computer vision communities have recently witnessed a surging interest in developing novel approaches for scene text recognition. This is driven by numerous potential applications such as scene understanding for robotic control and augmented reality, street sign reading for autonomous driving, and image feature extraction for large-scale image search. In this paper we present a novel approach for robust scene text recognition. Specifically, we study the problem of text recognition in a cropped image that contains a single word, which is usually the output of some text localization method (see [2] for a thorough review of this topic). Our core contribution is a novel generative shape model that shows strong generalization capabilities. Unlike many previous approaches that are based on discriminative models and trained on millions of real-world images, our generative model only requires hundreds of training images, yet still effectively captures affine transformations and non-affine deformations. To cope with the strong variation of fonts in real scenes, we also propose a greedy approach for selecting representative fonts from a large database of fonts. Finally, we introduce a word parsing model that is trained using structured output learning. We evaluated our approach on ICDAR 2013 and SVT and achieved state-of-the-art performance despite using several orders of magnitude less training data.
Our results show that instead of relying on a massive amount of supervision to train a discriminative model, a generative model trained on uncluttered fonts with properly encoded invariance generalizes well to text in natural images and is more interpretable.

2 Related Work

We only consider literature on recognizing scene text in English. There are two paradigms for solving this problem: character detection followed by word parsing, and simultaneous character detection and word parsing.

Character detection followed by word parsing is the more popular paradigm. Essentially, a character detection method first finds candidate characters, and then a parsing model searches for the true sequence of characters by optimizing some objective function. Many previous works following this paradigm differ in their detection and parsing methods. Character detection methods can be patch-based or bottom-up. Patch-based detection first finds patches of (hopefully) single characters using over-segmentation [3] or the stroke width transformation [4], followed by running a character classifier on each patch. Bottom-up detection first creates an image-level representation using engineered or learned features and then finds instances of characters by aggregating image-level evidence and searching for strong activations at every pixel. Many different representations have been proposed, such as Strokelets [5], convolutional neural networks [6], region-based features [7], tree-structured deformable models [8] and simple shape templates [9]. Both patch-based and bottom-up character detection have flawed localization because they cannot provide accurate segmentation boundaries of the characters. Unlike detection, word parsing methods in the literature show strong similarity. They are generally sequence models that utilize attributes of individual candidate characters as well as of adjacent pairs. They differ in model order and inference technique. For example, [10] considered the problem as a high-order Markov model in a Bayesian inference framework. A classic pairwise conditional random field was also used by [8, 11, 4], and inference was carried out using message passing [11] and dynamic programming [8, 4]. Acknowledging that a pairwise model cannot encode features as useful as a high-order character n-gram, [3] proposed a patch-based sequence model that encodes up to 4th-order character n-grams and applied beam search to solve it.

A second paradigm is simultaneous character detection and word parsing, reading the text without an explicit step for detecting the characters. For example, [12] proposed a graphical model that jointly models the attributes, location, and class of characters as well as the language consistency of the word they constitute. Inference was carried out using weighted finite-state transducers (WFSTs). [13] took a drastically different approach: they used a lexicon of about 90k words to synthesize about 8 million images of text, which were used to train a CNN that predicts a character at each independent position. The main drawback of this "all-in-one" approach is weak invariance and insufficient robustness, since a change in any attribute, such as the spacing between characters, may cause the system to fail due to over-fitting to the training data.

3 Model

Our approach follows the first detection-parsing paradigm. First, candidate characters are detected using a novel generative shape model trained on clean character images.
Second, a parsing model is used to infer the true word, and this parser is trained using max-margin structured output learning.

3.1 Generative Shape Model for Fonts

Unlike vision problems such as distinguishing dogs from cats, where many local discriminative features can be informative, text in real scenes, printed or molded in some font, is not as easily distinguished by local features alone. For example, the curve at the bottom of "O" also exists in "G", "U" and "Q", and the vertical-stroke structure of "B" can also be found in "E", "F", "H", "P" and "R". Without a sense of the global structure, a naive accumulation of local features easily leads to false detections in the presence of noise or when characters are printed tightly. We aim to build a model that specifically accounts for the global structure, i.e. the entire shape of characters. Our model is generative, so at test time we obtain a segmentation together with a classification, making the final word parsing much easier due to better explaining-away.

Model Construction. During training we build a graph representation from rendered clean images of fonts, as shown in Fig. 2. Since we primarily care about shape, the basic image-level feature representation relies only on edges, making our model invariant to appearance properties such as color and texture. Specifically, given a clean font image we use 16 oriented filters to detect edges, followed by local suppression that keeps at most one edge orientation active per pixel (Fig. 2a). Then, "landmark" features are generated by repeatedly selecting one edge and suppressing all other edges within a fixed radius (Fig. 2b). We then create a pool variable centered around each landmark point, which allows translation pooling in a window around the landmark (Fig. 2b). To coordinate the pool choices between adjacent landmarks (and thus the shape of the letter), we add "lateral constraints" between neighboring pairs of pools that lie on the same edge contour (blue dashed lines in Fig. 2c). All lateral constraints are elastic, allowing for some degree of affine and non-affine deformation. This allows our model to generalize to different variations observed in real images such as noise, aspect change, and blur. In addition to contour laterals, we add lateral constraints between distant pairs of pixels (red dashed lines in Fig. 2c) to further constrain the shapes this model can represent. These distant laterals are greedily added one at a time, from shortest to longest, between pairs of features that, under the current constraints, can deform more than $\kappa$ times the deformation allowed by adding a direct constraint between the features (typically $\kappa \approx 3$).

Figure 2: Model construction process for our generative shape model for fonts. Given a clean character image, we detect 16 oriented edge features at every pixel (a). We perform a sparsification process that selects "landmark" features from the dense edge map and then add pooling windows (b). We then add lateral constraints to constrain the shape model (c). A factor graph representation of our model is partially shown in (d). (best viewed in color)

Formally, our model can be viewed as the factor graph shown in Fig. 2d. Each pool variable centered on a landmark feature is considered a random variable and is associated with unary factors corresponding to the translations of the landmark feature. Each lateral constraint is a pairwise factor. The unary factors give positive scores when matching features are found in the test image. The pairwise factors are parameterized by a single perturbation-radius parameter, defined as the largest allowed change in the relative position of features in the adjacent pools. This perturbation radius forbids extreme deformation, giving $-\infty$ log-probability if the lateral constraint is violated. During testing, the state space of each random variable is the pooling window, and lateral constraints are not allowed to be violated. During training, this model construction process is carried out independently for all letter images, and each letter is rendered in multiple fonts.
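As a rough illustration of the feature-extraction stage, the sketch below substitutes simple gradient-orientation binning for the 16 oriented filters (whose exact form the paper does not specify here); all names, radii and thresholds are ours:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def oriented_edges(img, n_orient=16):
        # Edge strength and one quantized orientation per pixel; keeping a
        # single orientation per pixel stands in for the local suppression.
        gy, gx = np.gradient(gaussian_filter(img.astype(float), 1.0))
        mag = np.hypot(gx, gy)
        ori = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * n_orient
        return mag, ori.astype(int) % n_orient

    def select_landmarks(mag, radius=5, thresh=0.1):
        # Greedy sparsification: repeatedly take the strongest remaining
        # edge and suppress everything within a fixed radius around it.
        m = mag.copy()
        landmarks = []
        while m.max() > thresh:
            y, x = np.unravel_index(np.argmax(m), m.shape)
            landmarks.append((y, x))
            m[max(0, y - radius):y + radius + 1,
              max(0, x - radius):x + radius + 1] = 0
        return landmarks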
Inference and Instance Detection. The letter models can be considered to tile an input image at all translations. Given a test image, finding all candidate character instances involves two steps: a forward pass and backtracing. The forward pass is a bottom-up procedure that accumulates evidence from the test image to compute the marginal distribution of the shape model at each pixel location, similar to an activation heatmap. To speed up the computation, we simplify our graph (Fig. 2c) into a minimum spanning tree, computed with edge weights equal to the pixel distance between features. Moreover, we make the pooling window as large as the entire image to avoid a scanning procedure. The marginals in the tree can be computed exactly and quickly with a single iteration of belief propagation. After non-maximum suppression, the few positions with the strongest activations are selected for backtracing. This process is guaranteed to overestimate the true marginals in the original loopy graphical model, so the forward pass admits some false positives. Such false positives occur more often when the image has a tight character layout or a prominent texture or background. Given the estimated positions of character instances, backtracing is performed in the original loopy graph to further reduce false positives and to output a segmentation (by connecting the "landmarks") of each instance in the test image. The backtracing procedure selects a single landmark feature, constrains its position to one of the local maxima in its marginal from the forward pass, and then performs MAP inference in the full loopy graph to estimate the positions of all other landmarks in the model, which provides the segmentation. Because this inference is more accurate than the forward pass, additional false positives can be pruned after backtracing. In both the forward and backward pass, classic loopy belief propagation was sufficient.

Greedy Font Model Selection. One challenge in scene text reading is covering the huge variation of fonts in uncontrolled, real-world images. It is not feasible to train on all fonts because doing so is computationally expensive and redundant. We resort to an automated greedy font selection approach. Briefly, for a given letter we render images of all fonts and then use the resulting images to train shape models. Each shape model is then tested on every other rendered image, yielding a compatibility score (the number of matching "landmark" features) between every pair of fonts of the same letter. One font is considered representable by another if their compatibility score exceeds a given threshold (0.8). For each letter, we keep the font that can represent the most other fonts and remove it from the candidate set together with all the fonts it represents. This selection process is repeated until 90% of all fonts are represented. Usually the remaining 10% of fonts are non-typical and rare in real scenes.
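Given the pairwise compatibility matrix, the greedy covering step itself is a few lines; the sketch below is our own rendering of the procedure just described:

    import numpy as np

    def greedy_font_selection(compat, thresh=0.8, coverage=0.9):
        # compat[i, j]: compatibility of the model trained on font i when
        # tested on font j (same letter). Keep the font that represents
        # the most unrepresented fonts until `coverage` is reached.
        n = compat.shape[0]
        represented = np.zeros(n, dtype=bool)
        kept = []
        while represented.mean() < coverage:
            gains = ((compat >= thresh) & ~represented[None, :]).sum(axis=1)
            best = int(np.argmax(gains))
            if gains[best] == 0:            # nothing left to cover
                break
            kept.append(best)
            represented |= compat[best] >= thresh
            represented[best] = True
        return kept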
3.2 Word Parsing using Structured Output Learning

Parsing Model. Our generative shape models were trained independently on all font images; therefore, no explaining-away is performed before parsing. The shape model has high invariance and sensitivity, yielding a rich list of candidate letters that contains many false positives. For example, an image of the letter "E" may also trigger the following: "F", "I", "L" and "c". Word parsing refers to inferring the true word from this list of candidate letters.

Figure 3: Our parsing model represented as a high-order factor graph. Given a test image, the shape model generates a list of candidate letters. A factor graph is created by adding edges between hypothetical neighboring letters and considering these edges as random variables. Four types of factors are defined: transition, smoothness, consistency, and singleton factors. The first two factors characterize the likelihood of a parsing path, while the latter two ensure valid parsing output.

Our parsing model can be represented as the high-order factor graph in Fig. 3. First, we build hypothetical edges between each candidate letter and every candidate letter on its right-hand side within some distance. Two pseudo letters "*" and "#" are created, indicating the start and end of the graph, respectively. Edges are created from start to all possible head letters and similarly from end to all possible tail letters. Each edge is considered a binary random variable which, if activated, indicates a pair of neighboring letters from the true word. We define four types of factors. Transition factors (green, unary) describe the likelihood that a hypothetical pair of neighboring letters is true. Similarly, smoothness factors (blue, pairwise) describe the likelihood of a triplet of consecutive letters. Two additional factors are added as constraints to ensure valid output. Consistency factors (red, high-order) ensure that if any candidate letter has an activated inward edge, it must have one activated outward edge; this is sometimes referred to as "flow consistency". Lastly, to satisfy the single-word constraint, a singleton factor (purple, high-order) is added such that there must be a single activated edge from "start". Examples of these factors are shown in Fig. 3.

Mathematically, assuming that the potentials on the factors are provided, inferring the state of the random variables in the parsing factor graph is equivalent to solving the following optimization problem:
$$z^\star = \arg\max_{z} \Big\{ \sum_{c \in C} \sum_{v \in Out(c)} \phi^T_v(w^T)\, z_v + \sum_{c \in C} \sum_{u \in In(c)} \sum_{v \in Out(c)} \phi^S_{u,v}(w^S)\, z_u z_v \Big\} \qquad (1)$$
$$\text{s.t.} \quad \sum_{u \in In(c)} z_u = \sum_{v \in Out(c)} z_v \quad \forall c \in C, \qquad (2)$$
$$\sum_{v \in Out(*)} z_v = 1, \qquad (3)$$
$$z_v \in \{0, 1\} \quad \forall c \in C,\ \forall v \in Out(c), \qquad (4)$$
where $z = \{z_v\}$ is the set of all binary random variables indexed by $v$; $C$ is the set of all candidate letters, and for a candidate letter $c \in C$, $In(c)$ and $Out(c)$ index the random variables corresponding to the inward and outward edges of $c$, respectively; $\phi^T_v(w^T)$ is the potential of the transition factor at $v$ (parameterized by weight vector $w^T$), and $\phi^S_{u,v}(w^S)$ is the potential of the smoothness factor from $u$ to $v$ (parameterized by weight vector $w^S$). Constraints (2)-(4) ensure flow consistency, the singleton constraint, and the binary nature of all random variables.
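Because of the flow and singleton constraints, every feasible z is exactly a start-to-end path through the candidate-letter DAG, so the transition part of (1) can be maximized by a simple dynamic program over left-to-right ordered candidates; the smoothness terms couple consecutive edges and require carrying one extra state (the second-order Viterbi algorithm mentioned below). A first-order sketch, with our own data layout:

    def best_parse(nodes, out_edges, score, start="*", end="#"):
        # Highest-scoring start-to-end path in the candidate-letter DAG.
        # `nodes` must be sorted left to right (a topological order);
        # out_edges[c] lists the candidates reachable from c, and
        # score[(u, v)] is the transition-factor potential of edge u -> v.
        best = {start: 0.0}
        back = {}
        for u in [start] + list(nodes):
            if u not in best:                  # unreachable candidate
                continue
            for v in out_edges.get(u, []):
                s = best[u] + score[(u, v)]
                if s > best.get(v, float("-inf")):
                    best[v], back[v] = s, u
        path, node = [], end
        while node != start:                   # recover the argmax path
            path.append(node)
            node = back[node]
        path.reverse()
        return path[:-1]                       # the parsed letters, minus '#'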
Parameter Learning. Another issue is the proper parameterization of the factor potentials, i.e. $\phi^T_v(w^T)$ and $\phi^S_{u,v}(w^S)$. Due to the complex nature of real-world images, high-dimensional parsing features are required. For one example, consecutive letters of the true word are usually evenly spaced. For another, a character n-gram model can be used to resolve ambiguous letter detections and improve parsing quality; we use Wikipedia as the source for building our character n-gram model. Both $\phi^T_v(w^T)$ and $\phi^S_{u,v}(w^S)$ are linear models of some features and a weight vector. To learn the weight vector that best maps the input-output dependency of the parsing factor graph, we used the maximum-margin structured output learning paradigm [14]. Briefly, maximum-margin structured output learning attempts to learn a direct functional dependency between structured input and output by maximizing the margin between the compatibility score of the ground-truth solution and that of the second-best solution. It is an extension of the classic support vector machine (SVM) paradigm. Usually, the compatibility score is a linear function of a so-called joint feature vector (the parsing features) and the feature weights to be learned ($w^T$ and $w^S$ here). We designed 18 parsing features, including the score of individual candidate letters, color consistency between hypothetical neighboring pairs, alignment of hypothetical consecutive triplets, and character n-grams up to third order.

Re-ranking. Lastly, the top-scoring words from the second-order Viterbi algorithm are re-ranked using statistical word frequencies from Wikipedia.
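The character n-gram features can be as simple as smoothed log-frequencies estimated from a large corpus; a sketch follows (the corpus handling, the add-alpha smoothing, and the vocabulary-size constant are our own choices):

    import math
    from collections import Counter

    def build_trigram_counts(words):
        # Character trigram counts with word-boundary padding, e.g. over
        # a dump of Wikipedia text.
        tri, bi = Counter(), Counter()
        for w in words:
            s = "^" + w.lower() + "$"
            for i in range(len(s) - 2):
                tri[s[i:i + 3]] += 1
                bi[s[i:i + 2]] += 1
        return tri, bi

    def trigram_logprob(tri, bi, u, v, w, alpha=1.0, vocab=28):
        # Smoothed log P(w | u, v): a smoothness feature for a triplet
        # of consecutive candidate letters (u, v, w).
        return math.log((tri[u + v + w] + alpha) / (bi[u + v] + alpha * vocab))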
Given a large font database of 492 fonts, this process leverages the representativeness of our generative shape model to significantly reduce the number of training images required to cover all fonts. Training Word Parsing Model Training the structured output prediction model is expensive in terms of supervision because every training sample consists of many random variables, and the state of every random variable has to be annotated (i.e. the entire parsing path). We prepared training data for our parsing model automatically using the ICDAR 2013 Segmentation dataset [1] that provides per-character segmentation of scene text. Briefly, we first detect characters and construct a parsing graph for each image. We then find the true path in the parsing graph (i.e. a sequence of activated random variables) by matching the detected characters to the ground truth segmentation. In total, we used 630 images for training the parser using PyStruct2 . Shape Model Invariance Study We studied the invariance of our model by testing on transformations of the training images. We considered scaling and rotation. For the former, our model performs robust fitting when the scaling varies between 130% and 70%. For the later, the angle of robust fitting is between -20 and +20 degrees. 4.3 Results and Comparison Character Detection We first tested our shape model on the ICDAR 2013 Segmentation dataset. Since this is pre-parsing and no explaining-away is performed, we specifically looked for high recall. A detected letter and a true segmented letter is considered a match only when the letter classes match and their segmentation masks strongly overlap with ? 0.8 IoU (intersection-over-union). Trained on fonts selected from Google Fonts, we obtained a very high 95.0% recall, which is significantly better than 68.7% by the best reported method on the dataset [1]. This attributes to the high invariance encoded in our model from the lateral constraints. The generative nature of the model gives a complete segmentation and classification instead of only letter classification (as most discriminative models do). Fig. 5 shows some instances of letters detected by our model. They exhibit strong variance in font and appearance. Note that two scales (?1, ?2) are used during testing. 1 2 https://www.google.com/fonts https://pystruct.github.io 6 Figure 5: Examples of detected and segmented characters (?A?, ?E? and ?R?) from the ICDAR 2013 Segmentation dataset. Despite obvious differences in font, appearance, and imaging condition and quality, our shape model shows high accuracy in localizing and segmenting them from the full image. (best viewed in color and when zoomed in) Word Parsing We compared our approach against top performing ones in the ICDAR 2013 Robust Reading Competition. Results are given in Table 4.3. Our model perform better than Google?s PhotoOCR[3] with a margin of 2.3%. However, a more important message is that we achieved this result using three orders of magnitude less training data: 1406 total images (776 letter font images for training the shape models and 630 word images for training the parser) versus 5 million by PhotoOCR. Two major factors attribute to our high efficiency. First, considering character detection, our model demonstrates strong generalization in practice. Data-intensive models like those in PhotoOCR impose weaker structural priors and require significantly more supervision. 
Second, considering word parsing, our generative model solves recognition and segmentation together, allowing the use of highly accurate parsing features. On the other hand, PhotoOCR's neural-network-based character classifier is incapable of generating accurate character segmentation boundaries, which bounds the achievable parsing quality. Our observations on the SVT dataset are similar: using exactly the same training data, we achieved state-of-the-art 80.7% accuracy. Note that all reported results in our experiments are case-sensitive. Fig. 6 demonstrates the robustness of our approach toward unusual fonts, noise, blur, and distracting backgrounds.

Table 1: Word recognition accuracy and amount of training data.

Method                  | ICDAR  | SVT    | Training Data Size
PicRead [1]             | 63.1%  | 72.9%  | N/A
Deep Struc. Learn. [16] | 81.8%  | 71.7%  | 8,000,000 (synthetic)
PhotoOCR [3]            | 84.3%  | 78.0%  | 7,900,000 (manually labeled + augmented)
This paper              | 86.2%  | 80.7%  | 1,406 (776 letter images + 630 word images)

Figure 6: Visualization of correctly parsed images from ICDAR (first two columns) and SVT (last column), including per-character segmentation and parsing path. The numbers therein are local potential values on the parsing factor graph. (Best viewed in color and when zoomed in.)

4.4 Further Analysis & Discussion

Failure Case Analysis. Fig. 7 shows some typical failure cases for our system (left) and PhotoOCR (right). Our system fails mostly when the image is severely corrupted by noise, blur, or over-exposure, or when the text is handwritten. PhotoOCR fails on some clean images where the text is easily readable. This reflects the limited generalization of data-intensive models, owing to the diminishing returns of more training data. The comparison also shows that our approach is more interpretable: we can quickly identify the reasons for a failure by viewing the letter segmentation boundaries overlaid on the raw image. For example, over-exposure and blur cause edge features to drop out and thus fail the shape model. By contrast, it is not so straightforward to explain why a discriminative model like PhotoOCR fails on some of the cases shown in Fig. 7.

Figure 7: Examples of failure cases for our system and PhotoOCR. Typically our system fails when the image is severely corrupted or contains handwriting. PhotoOCR is susceptible to failing on clean images where the text is easily readable.

Language Model. In our experiments, a language model plays two roles: in parsing, as character n-gram features, and in re-ranking, as word-level features. Ideally, a perfect perception system should be able to recognize most text in ICDAR without the need for a language model. We turned off the language model in our experiments and observed approximately a 15% performance drop. For PhotoOCR in the same setting, the performance drop is more than 40%. This is because PhotoOCR's core recognition model provides only a coarse understanding of the scene, and parsing is difficult without the high-quality character segmentation that our generative shape model provides.

Relation to Other Methods. Here we discuss the connections and differences between our shape model and two very popular vision models: deformable part models (DPM) [17] and convolutional neural networks (CNN) [18]. The first major distinction is that both DPM and CNN are discriminative while our model is generative. Only our model can generate segmentation boundaries without any additional ad-hoc processing. Second, CNN does not model any global shape structure, depending solely on local discriminative features (usually in a hierarchical fashion) to perform classification.
DPM accounts for some degree of global structure, as the relative positions of parts are encoded in a star graph or tree structure. Our model imposes stronger global structure by using short and long lateral constraints. Third, during model inference both CNN and DPM only perform a forward pass, while ours also performs backtracking for accurate segmentation. Finally, regarding invariance and generalization, we directly encode invariance into the model using the perturbation radius in lateral constraints. This proves very effective in capturing various deformations while still maintaining the stability of the overall shape. Neither DPM nor CNN encodes invariance directly; instead they rely on substantial data to learn model parameters.

5 Conclusion and Outlook

This paper presents a novel generative shape model for scene text recognition. Together with a parser trained using structured output learning, the proposed approach achieved state-of-the-art performance on the ICDAR and SVT datasets, despite using orders of magnitude fewer training images than many purely discriminative models require. This paper demonstrates that it is preferable to directly encode invariance and deformation priors in the form of lateral constraints. Following this principle, even a non-hierarchical model like ours can outperform deep discriminative models. In the future, we are interested in extending our model to a hierarchical version with reusable features. We are also interested in further improving the parsing model to account for missing edge evidence due to blur and over-exposure.

References

[1] Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Mikio Iwamura, Lluis Gomez i Bigorda, Sergi Robles Mestre, Jordi Mas, David Fernandez Mota, Jon Almazan Almazan, and Lluis-Pere de las Heras. ICDAR 2013 robust reading competition. In Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, pages 1484-1493. IEEE, 2013.
[2] Qixiang Ye and David Doermann. Text detection and recognition in imagery: A survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 37(7):1480-1500, 2015.
[3] Alessandro Bissacco, Mark Cummins, Yuval Netzer, and Hartmut Neven. PhotoOCR: Reading text in uncontrolled conditions. In Proceedings of the IEEE International Conference on Computer Vision, pages 785-792, 2013.
[4] Lukas Neumann and Jiri Matas. Scene text localization and recognition with oriented stroke detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 97-104, 2013.
[5] Cong Yao, Xiang Bai, Baoguang Shi, and Wenyu Liu. Strokelets: A learned multi-scale representation for scene text recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4042-4049, 2014.
[6] Adam Coates, Blake Carpenter, Carl Case, Sanjeev Satheesh, Bipin Suresh, Tao Wang, David J. Wu, and Andrew Y. Ng. Text detection and character recognition in scene images with unsupervised feature learning. In Document Analysis and Recognition (ICDAR), 2011 International Conference on, pages 440-445. IEEE, 2011.
[7] Chen-Yu Lee, Anurag Bhardwaj, Wei Di, Vignesh Jagadeesh, and Robinson Piramuthu. Region-based discriminative feature pooling for scene text recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4050-4057, 2014.
[8] Cunzhao Shi, Chunheng Wang, Baihua Xiao, Yang Zhang, Song Gao, and Zhong Zhang. Scene text recognition using part-based tree-structured character detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2961-2968, 2013.
[9] James M. Coughlan and Sabino J. Ferreira. Finding deformable shapes using loopy belief propagation. In European Conference on Computer Vision, pages 453-468. Springer, 2002.
[10] Jerod J. Weinman, Erik Learned-Miller, and Allen R. Hanson. Scene text recognition using similarity and a lexicon with sparse belief propagation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(10):1733-1746, 2009.
[11] Anand Mishra, Karteek Alahari, and C. V. Jawahar. Top-down and bottom-up cues for scene text recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2687-2694. IEEE, 2012.
[12] Tatiana Novikova, Olga Barinova, Pushmeet Kohli, and Victor Lempitsky. Large-lexicon attribute-consistent text recognition in natural images. In Computer Vision - ECCV 2012, pages 752-765. Springer, 2012.
[13] Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. arXiv preprint arXiv:1406.2227, 2014.
[14] Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the Twenty-First International Conference on Machine Learning, page 104. ACM, 2004.
[15] Kai Wang and Serge Belongie. Word spotting in the wild. Computer Vision - ECCV 2010, pages 591-604, 2010.
[16] Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep structured output learning for unconstrained text recognition. arXiv preprint arXiv:1412.5903, 2014.
[17] Pedro Felzenszwalb, David McAllester, and Deva Ramanan. A discriminatively trained, multiscale, deformable part model. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1-8. IEEE, 2008.
[18] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
The Power of Adaptivity in Identifying Statistical Alternatives

Kevin Jamieson, Daniel Haas, Ben Recht
University of California, Berkeley
Berkeley, CA 94720
{kjamieson,dhaas,brecht}@eecs.berkeley.edu

Abstract

This paper studies the trade-off between two different kinds of pure exploration: breadth versus depth. We focus on the most biased coin problem, asking how many total coin flips are required to identify a "heavy" coin from an infinite bag containing both "heavy" coins with mean θ₁ ∈ (0, 1), and "light" coins with mean θ₀ ∈ (0, θ₁), where heavy coins are drawn from the bag with proportion α ∈ (0, 1/2). When α, θ₀, θ₁ are unknown, the key difficulty of this problem lies in distinguishing whether the two kinds of coins have very similar means, or whether heavy coins are just extremely rare. While existing solutions to this problem require some prior knowledge of the parameters θ₀, θ₁, α, we propose an adaptive algorithm that requires no such knowledge yet still obtains near-optimal sample complexity guarantees. In contrast, we provide a lower bound showing that non-adaptive strategies require at least quadratically more samples. In characterizing this gap between adaptive and non-adaptive strategies, we make connections to anomaly detection and prove lower bounds on the sample complexity of differentiating between a single parametric distribution and a mixture of two such distributions.

1 Introduction

The trade-off between exploration and exploitation has been an ever-present trope in the online learning literature. In contrast, this paper studies the trade-off between two different kinds of pure exploration: breadth versus depth. Consider a bag that contains an infinite number of two kinds of biased coins: "heavy" coins with mean θ₁ ∈ (0, 1) and "light" coins with mean θ₀ ∈ (0, θ₁). When a player picks a coin from the bag, with probability α the coin is "heavy" and with probability (1 − α) the coin is "light." The player can flip any coin she picks from the bag as many times as she wants, and the goal is to identify a heavy coin using as few total flips as possible. When α, θ₀, θ₁ are unknown, the key difficulty of this problem lies in distinguishing whether the two kinds of coins have very similar means, or whether heavy coins are just extremely rare. That is, how does one balance flipping an individual coin many times to better estimate its mean against considering many new coins to maximize the probability of observing a heavy one. Previous work has only proposed solutions that rely on some or full knowledge of α, θ₀, θ₁, limiting their applicability. In this work we propose the first algorithm that requires no knowledge of α, θ₀, θ₁, is guaranteed to return a heavy coin with probability at least 1 − δ, and flips a total number of coins, in expectation, that nearly matches known lower bounds. Moreover, our fully adaptive algorithm supports more general sub-Gaussian sources in addition to just coins, and only ever has one "coin" outside the bag at a given time, a constraint of practical importance to some applications. In addition, we connect the most biased coin problem to anomaly detection and prove novel lower bounds on the difficulty of detecting the presence of a mixture versus just a single component of a known family of distributions (e.g. X ∼ (1 − α)g_θ₀ + α g_θ₁ versus X ∼ g_θ for some θ).
We show that in detecting the presence of a mixture distribution, there is a stark difference of difficulty between when the underlying distribution parameters are known (e.g. α, θ₀, θ₁) and when they are not. The most biased coin problem can be viewed as an online, adaptive mixture detection problem where source distributions arrive one at a time that are either g_θ₀ with probability (1 − α) or g_θ₁ with probability α (i.e. null or anomalous), and the player adaptively chooses how many samples to take from each distribution (to increase the signal-to-noise ratio) with the goal of identifying an anomalous distribution g_θ₁ using as few total samples as possible. This work draws a contrast between the power of adaptive versus non-adaptive (e.g. taking the same number of samples each time) approaches to this problem, specifically when α, θ₀, θ₁ are unknown.

1.1 Motivation and Related Work for the Most Biased Coin Problem

The most biased coin problem characterizes the inherent difficulty of real-world problems including anomaly and intrusion detection and discovery of vacant frequencies in the radio spectrum. Our interest in the problem stemmed from automated hiring of crowd workers: data labeling for machine learning applications is often performed by humans, and recent work in the crowdsourcing literature accelerates labeling by organizing workers into pools of labelers and paying them to wait for incoming data [4, 12]. Workers hired on marketplaces such as Amazon's Mechanical Turk [16] vary widely in skill, and identifying high-quality workers as quickly as possible is an important challenge. We can model each worker's performance (e.g. accuracy or speed) as a random variable so that selecting a good worker is equivalent to identifying a worker with a high mean. Since we do not observe a worker's expected performance directly, we must give them tasks from which we estimate it (like repeatedly flipping a biased coin). Arlotto et al. [3] proposed a strategy with some guarantees for a related problem but did not characterize the sample complexity of the problem, the focus of our work.

The most biased coin problem was first proposed by Chandrasekaran and Karp [8]. In that work, it was shown that if α, θ₀, θ₁ were known then there exists an algorithm based on the sequential probability ratio test (SPRT) that is optimal in that it minimizes the expected number of total flips to find a "heavy" coin whose posterior probability of being heavy is at least 1 − δ, and the expected sample complexity of this algorithm was upper-bounded by

    (16/(θ₁ − θ₀)²) · ( (1 − α)(1 − δ)/α + log(1/δ) ).    (1)

However, the practicality of the proposed algorithm is severely limited as it relies critically on knowing α, θ₀, and θ₁ exactly. In addition, the algorithm returns to coins it has previously flipped and thus requires more than one coin to be outside the bag at a time, ruling out some applications. Malloy et al. [15] addressed some of the shortcomings of [9] (a preprint of [8]) by considering both an alternative SPRT procedure and a sequential thresholding procedure. Both of these proposed algorithms only ever have one coin out of the bag at a time. However, the former requires knowledge of all relevant parameters α, θ₀, θ₁, and the latter requires knowledge of α, θ₀. Moreover, these results are only presented for the asymptotic case where δ → 0.
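For intuition, here is a minimal sketch of the per-coin sequential probability ratio test that underlies [8, 15], assuming θ₀ and θ₁ are known; the thresholds shown are the textbook Wald choices for error rate δ rather than the exact constants used in those papers.

```python
import math, random

def sprt_classify(flip, theta0, theta1, delta=0.05):
    """Classify one coin as heavy or light via an SPRT.
    flip() returns a single {0, 1} outcome from the current coin."""
    up = math.log(theta1 / theta0)                 # LLR increment for heads
    down = math.log((1 - theta1) / (1 - theta0))   # LLR increment for tails
    b = math.log((1 - delta) / delta)              # declare-heavy threshold
    llr = 0.0
    while -b < llr < b:                            # random walk until a boundary is hit
        llr += up if flip() == 1 else down
    return "heavy" if llr >= b else "light"

coin = lambda: int(random.random() < 0.6)          # a hypothetical heavy coin
print(sprt_classify(coin, theta0=0.5, theta1=0.6))
```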
The most biased coin problem can be viewed through the lens of multi-armed bandits. In the best-arm identification problem, the player has access to K distributions (arms) such that if arm i ∈ [K] is sampled (pulled), an iid random variable with mean μᵢ is observed; the objective is to identify the arm associated with the highest mean with probability at least 1 − δ using as few pulls as possible (see [14] for a short survey). In the infinite armed bandit problem, the player is not confined to K arms but has an infinite reservoir of arms, such that a draw from this reservoir results in an arm with a mean μ drawn from some distribution; the objective is to identify the highest mean possible after n total pulls, for any n > 0, with probability 1 − δ (see [7]). The most biased coin problem is an instance of this latter game with the arm reservoir distribution of means μ defined as P(μ ≥ θ₁ − ε) = α·1{ε ≥ 0} + (1 − α)·1{ε ≥ θ₁ − θ₀} for all ε. Previous work has focused on an alternative arm reservoir distribution that satisfies E·ε^β ≤ P(μ ≥ μ* − ε) ≤ E′·ε^β for some μ* ∈ [0, 1], where E, E′ are constants and β is known [5, 21, 6, 7]. Because neither arm reservoir distribution can be written in terms of the other, neither work subsumes the other. Note that one can always apply an algorithm designed for the infinite armed bandit problem to any finite K-armed bandit problem by defining the arm reservoir as placing a uniform distribution over the K arms. This is appealing when K is very large and one wishes to guarantee nontrivial performance when the number of pulls is much less than K.¹ The most biased coin problem is a special case of this K-armed reservoir distribution where one arm has mean θ₁ and K − 1 arms have mean θ₀, with α = 1/K.

¹ All algorithms for the K-armed bandit problem known to these authors begin by sampling each arm once, so that until the number of pulls exceeds K, performance is no better than random selection.
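The reservoir identity above is easy to sanity-check; the helper below (our own naming) evaluates P(μ ≥ θ₁ − ε) for the most biased coin reservoir.

```python
def reservoir_tail(eps, alpha, theta0, theta1):
    # P(mu >= theta1 - eps): mass alpha at theta1 and mass (1 - alpha) at theta0
    return alpha * (eps >= 0) + (1 - alpha) * (eps >= theta1 - theta0)

assert reservoir_tail(0.0, 0.1, 0.4, 0.5) == 0.1   # only heavy arms qualify
assert reservoir_tail(0.1, 0.1, 0.4, 0.5) == 1.0   # every arm qualifies
```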
We believe that our algorithm is the first solution to the most biased coin problem that does not require prior knowledge of the problem parameters and that the same approach can be reworked to solve more general instances of the infinite-armed bandit problem, including the -parameterized and K-armed reservoir cases described of above. Finally, if an algorithm is desired for arbitrary arm reservoir distributions, this work rules out an estimate-then-explore approach. 1.2 Problem Statement Let ? 2 ? index a family of single-parameter probability density functions g? and fix ?0 , ?1 2 ?, ? 2 [0, 1/2]. For any ? 2 ? assume that g? is known to the procedure. Note that in the most biased coin problem, g? =Bernoulli(?), but in general it is arbitrary (e.g. N (?, 1)). Consider a sequence of iid Bernoulli random variables ?i 2 {0, 1} for i = 1, 2, . . . where each P(?i = 1) = 1 P(?i = 0) = ?. Let Xi,j for j = 1, 2, . . . be a sequence of random variables drawn from g?1 if ?i = 1 and g?0 N i otherwise, and let {{Xi,j }M j=1 }i=1 represent the sampling history generated by a procedure for some N 2 N and (M1 , . . . , MN ) 2 NN . Any valid procedure behaves accordingly: Algorithm 1 The most biased coin problem definition. Only the last distribution drawn may be sampled or declared heavy, enforcing the rule that only one coin may be outside the bag at a time. Initialize an empty history (N = 1, M = (0, 0, . . . )). Repeat until heavy distribution declared: Choose one of 1. draw a sample from distribution N , MN MN + 1 2. draw a sample from the (N + 1)st distribution, MN +1 = 1, N N +1 3. declare distribution N as heavy Definition 1 We say a strategy for the most biased coin problem is -probably correct if for all (?, ?0 , ?1 ) it identifies a ?heavy? g?1 distribution with probability at least 1 . Definition 2 (Strategies for the most biased coin problem) An estimate-then-explore strategy is a strategy that, for any fixed m 2 N, begins by sampling each successive coin exactly m times for a number of coins that is at least the minimum necessary for any test to determine that ? 6= 0 with probability at least 1 , then optionally continues sampling with an arbitrary strategy that declares a heavy coin. An adaptive strategy is any strategy that is not an estimate-then-explore strategy. We study the estimate-then-explore strategy because there exist optimal algorithms [8, 15] for the most biased coin problem if ?, ?0 , ?1 are known, so it is natural to consider estimating these quantities then using one of these algorithms. Note that the algorithm of [7] for the -parameterized infinite armed bandit problem discussed above can be considered an estimate-then-explore strategy since it first estimates by sampling a fixed number of samples from a set of arms, and then uses this estimate to draw a fixed number of arms and applies a UCB-style algorithm to these arms. A contribution of this work is showing that such a strategy is infeasible for the most biased coin problem. 3 For all strategies that are -probably correct and follow the interface of Algorithm 1, our goal is PN to provide lower and upper bounds on the quantity E[T ] := E[ i=1 Mi ] for any (?, ?0 , ?1 ) if N denotes the final number of coins considered. 2 From Identifying Coins to Detecting Mixture Distributions Addressing the most biased coin problem, [15] analyzes perhaps the most natural strategy: fix an m 2 N and flip each successive coin exactly m times. 
The relevant questions are how large m has to be in order to guarantee correctness with probability 1 − δ, and, for a given m, how long one must wait to declare a "heavy" coin. The authors partially answer these questions and we improve upon them (see Section 3.2.1), which leads us to our study of the difficulty of detecting the presence of a mixture distribution. As an example of the kind of lower bounds shown in this work, if we observe a sequence of random variables X₁, . . . , X_n, consider the following hypothesis test:

  H0: ∀i, Xᵢ ∼ N(θ, σ²) for some θ ∈ ℝ
  H1: ∀i, Xᵢ ∼ (1 − α)N(θ₀, σ²) + α N(θ₁, σ²)    (P1)

which will henceforth be referred to as Problem P1 or just (P1). We can show that if θ₀, θ₁, α are known and θ = θ₀, then it is sufficient to observe just max{1/α, σ² log(1/δ)/(α²(θ₁ − θ₀)²)} samples to determine the correct hypothesis with probability at least 1 − δ. However, if θ₀, θ₁, α are unknown, then it is necessary to observe at least max{1/α, σ⁴ log(1/δ)/(α²(θ₁ − θ₀)⁴)} samples in expectation whenever (θ₁ − θ₀)²/σ² ≤ 1, and max{1/α, σ² log(1/δ)/(α²(θ₁ − θ₀)²)} otherwise (see Appendix C). Recognizing (θ₁ − θ₀)²/(2σ²) as the KL divergence between the two Gaussians of H1, we observe startling consequences for anomaly detection when the parameters of the underlying distributions are unknown: if the anomalous distribution is well separated from the null distribution, then detecting an anomalous component is only about as hard as observing just one anomalous sample (i.e. 1/α) multiplied by the inverse KL divergence between the null and anomalous distributions. However, when the two distributions are not well separated, the necessary sample complexity explodes to this latter quantity squared. In Section 4 we will investigate adaptive methods for dramatically decreasing this sample complexity.

Our lower bounds are based on the detection of the presence of a mixture of two distributions of an exponential family versus just a single distribution of the same family. There has been extensive work in the estimation of mixture distributions [13, 11], but this literature often assumes that the mixture coefficient α is bounded away from 0 and 1 to ensure a sufficient number of samples from each distribution. In contrast, we highlight the regime where α is arbitrarily small, as is the case in statistical anomaly detection [10, 20, 2]. Property testing, e.g. of unimodality [1], is relevant but can lack interpretability or strength in favor of generality. Considering the exponential family allows us to make interpretable statements about the relevant problem parameters in different regimes.

Preliminaries. Let P and Q be two probability distributions with densities p and q, respectively. For simplicity, assume p and q have the same support. Define the KL divergence between P and Q as KL(P, Q) = ∫ log(p(x)/q(x)) dP(x). Define the χ² divergence between P and Q as χ²(P, Q) = ∫ (p(x) − q(x))²/q(x) dx = ∫ p(x)²/q(x) dx − 1. Note that by Jensen's inequality

  KL(P, Q) = E_p[log(p/q)] ≤ log E_p[p/q] = log(χ²(P, Q) + 1) ≤ χ²(P, Q).    (2)

Examples: If P = N(θ₁, σ²) and Q = N(θ₀, σ²) then KL(P, Q) = (θ₁ − θ₀)²/(2σ²) and χ²(P, Q) = e^{(θ₁ − θ₀)²/σ²} − 1. If P = Bernoulli(θ₁) and Q = Bernoulli(θ₀) then KL(P, Q) = θ₁ log(θ₁/θ₀) + (1 − θ₁) log((1 − θ₁)/(1 − θ₀)) and χ²(P, Q) = (θ₁ − θ₀)²/(θ₀(1 − θ₀)).
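These closed forms are easy to verify numerically; the snippet below (ours) checks inequality (2) on the Bernoulli example.

```python
import math

def kl_bern(p, q):      # KL(Bernoulli(p), Bernoulli(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def chi2_bern(p, q):    # chi-squared divergence between the same pair
    return (p - q) ** 2 / (q * (1 - q))

theta0, theta1 = 0.4, 0.5
print(kl_bern(theta1, theta0))    # ~0.0204
print(chi2_bern(theta1, theta0))  # ~0.0417 >= KL, as inequality (2) predicts
```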
All proofs appear in the appendix.

3 Lower bounds

We present lower bounds on the sample complexity of δ-probably correct strategies for the most biased coin problem that follow the interface of Algorithm 1. Lower bounds are stated for any adaptive strategy in Section 3.1, for non-adaptive strategies that may have knowledge of the parameters but sample each distribution the same number of times in Section 3.2.1, and for estimate-then-explore strategies that do not have prior knowledge of the parameters in Section 3.2.2. Our lower bounds, with the exception of the adaptive strategy, are based on the difficulty of detecting the presence of a mixture distribution, and this reduction is explained in Section 3.2.

3.1 Adaptive strategies

The following theorem, reproduced from [15], describes the sample complexity of any δ-probably correct algorithm for the most biased coin identification problem. Note that this lower bound holds for any procedure, even if it returns to previously seen distributions to draw additional samples and even if it knows α, θ₀, θ₁.

Theorem 1 ([15, Theorem 2]). Fix δ ∈ (0, 1). Let T be the total number of samples taken by any procedure that is δ-probably correct in identifying a heavy distribution. Then

  E[T] ≥ c₁ max{ 1/α, (1 − δ)/(α KL(g_θ₀|g_θ₁)) }

whenever δ ≤ c₂, where c₁, c₂ ∈ (0, 1) are absolute constants.

The above theorem is directly applicable to the special case where g_θ is a Bernoulli distribution, implying a lower bound of max{1/α, (1 − δ) min{θ₀(1 − θ₀), θ₁(1 − θ₁)}/(2α(θ₁ − θ₀)²)} for the most biased coin problem. The upper bounds of our proposed procedures for the most biased coin problem presented later will be compared to this benchmark.

3.2 The detection of a mixture distribution and the most biased coin problem

First observe that identifying a specific distribution i ≤ N as heavy (i.e. Φᵢ = 1), or determining that α is strictly greater than 0, is at least as hard as detecting that any of the distributions up to distribution N is heavy. Thus, a lower bound on the total expected number of samples of all considered distributions for this strictly easier detection problem is also a lower bound for the estimate-then-explore strategy for the most biased coin identification problem. The estimate-then-explore strategy fixes an m ∈ ℕ prior to starting the game and then samples each distribution exactly m times, i.e. Mᵢ = m for all i ≤ N for some N. To simplify notation, let f_θ denote the distribution of the sufficient statistics of these m samples. In general f_θ is a product distribution, but when g_θ is a Bernoulli distribution, as in the biased coin problem, we can take f_θ to be a Binomial distribution with parameters (m, θ). Now our problem is more succinctly described as:

  H0: ∀i, Xᵢ ∼ f_θ for some θ ∈ Θ̃ ⊆ Θ
  H1: ∀i, Φᵢ ∼ Bernoulli(α); Xᵢ ∼ f_θ₀ if Φᵢ = 0 and Xᵢ ∼ f_θ₁ if Φᵢ = 1    (P2)

If θ₀ and θ₁ are close to each other, or if α is very small, it can be very difficult to decide between H0 and H1 even if α, θ₀, θ₁ are known a priori. Note that when the parameters are known, one can take Θ̃ = {θ₀}. However, when the parameters are unknown, one takes Θ̃ = Θ to prove a lower bound on the sample complexity of the estimate-then-explore algorithm, which is tasked with deciding whether or not samples are coming from a mixture of distributions or just a single distribution within the family. That is, lower bounds on the sample complexity when the parameters are known and unknown follow by analyzing a simple binary and composite hypothesis test, respectively.
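To see why this test is hard, the sketch below (our own, for the Bernoulli/Binomial case) draws coins under H1 and computes each coin's log-likelihood ratio of the known-parameter mixture against the single null f_θ₀; for small α or small θ₁ − θ₀ these ratios concentrate near zero, which is exactly what makes the two hypotheses hard to distinguish.

```python
import math, random

def p2_llrs(alpha, theta0, theta1, m, n_dists, rng=random.Random(0)):
    """Per-coin LLR of H1 (mixture) vs. H0 (single f_theta0) in Problem P2,
    using the Binomial sufficient statistic of m flips per coin."""
    def log_binom(s, theta):   # log Binomial(m, theta) pmf at s
        return (math.lgamma(m + 1) - math.lgamma(s + 1) - math.lgamma(m - s + 1)
                + s * math.log(theta) + (m - s) * math.log(1 - theta))
    llrs = []
    for _ in range(n_dists):
        theta = theta1 if rng.random() < alpha else theta0   # coin drawn under H1
        s = sum(rng.random() < theta for _ in range(m))
        h1 = math.log((1 - alpha) * math.exp(log_binom(s, theta0))
                      + alpha * math.exp(log_binom(s, theta1)))
        llrs.append(h1 - log_binom(s, theta0))
    return llrs

print(sum(p2_llrs(0.01, 0.49, 0.51, m=100, n_dists=1000)) / 1000)  # close to 0
```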
In what follows, for any event A, let Pᵢ(A) and Eᵢ[A] denote the probability and expectation of A under hypothesis Hᵢ for i ∈ {0, 1} (the specific value of θ in H0 will be clear from context). The next claim is instrumental in our ability to prove lower bounds on the difficulty of the hypothesis tests.

Claim 1. Any procedure that is δ-probably correct also satisfies P₀(N < ∞) ≤ δ whenever α = 0.

3.2.1 Sample complexity when parameters are known

Theorem 2. Fix δ ∈ (0, 1). Consider the hypothesis test of Problem P2 for any fixed θ ∈ Θ̃ ⊆ Θ. Let N be the random number of distributions considered before stopping and declaring a hypothesis. If a procedure satisfies P₀(N < ∞) ≤ δ and P₁(∪_{i=1}^{N} {Φᵢ = 1}) ≥ 1 − δ, then

  E₁[N] ≥ max{ (1 − δ)/α, log(1/δ)/KL(P₁|P₀) } ≥ max{ (1 − δ)/α, log(1/δ)/χ²(P₁|P₀) }.

In particular, if Θ̃ = {θ₀} then

  E₁[N] ≥ max{ 1/α, log(1/δ)/(α² χ²(f_θ₁|f_θ₀)) }.

The next corollary relates Theorem 2 to the most biased coin problem and is related to Malloy et al. [15, Theorem 4], which considers the limit as α → 0 and assumes m is sufficiently large (specifically, large enough for the Chernoff-Stein lemma to apply). In contrast, our result holds for all finite δ, α, m.

Corollary 1. Fix δ ∈ (0, 1). For any m ∈ ℕ consider a δ-probably correct strategy that flips each coin exactly m times. If N_m is the number of coins considered before declaring a coin as heavy, then

  min_{m∈ℕ} E[mN_m] ≳ (1 − δ) log(log(1/δ)/α) · θ₀(1 − θ₀) / (α(θ₁ − θ₀)²).

One can show the existence of such a strategy with a nearly matching upper bound when α, θ₀, θ₁ are known (see Appendix B.1). Note that this is at least log(1/α) larger than the sample complexity of (1) that can be achieved by an adaptive algorithm when the parameters are known.

3.2.2 Sample complexity when parameters are unknown

If α, θ₀, and θ₁ are unknown, we cannot test f_θ₀ against the mixture (1 − α)f_θ₀ + αf_θ₁. Instead, we have the general composite test of any individual distribution against any mixture, which is at least as hard as the hypothesis test of Problem P2 with Θ̃ = {θ} for some particular worst-case setting of θ. Without any specific form of f_θ, it is difficult to pick a worst-case θ that will produce a tight bound. Consequently, in this section we consider single-parameter exponential families (defined formally below) to provide us with a class of distributions in which we can reason about different possible values for θ. Since exponential families include Bernoulli, Gaussian, exponential, and many other distributions, the following theorem is general enough to be useful in a wide variety of settings. The constant C referred to in the next theorem is an absolute constant under certain conditions that we outline in the following remark and corollary; its explicit form is given in the proof.

Theorem 3. Suppose f_θ for θ ∈ Θ ⊆ ℝ is a single-parameter exponential family, so that f_θ(x) = h(x) exp(η(θ)x − b(η(θ))) for some scalar functions h, b, η where η is strictly increasing. If Θ̃ = {θ*} where θ* = η⁻¹((1 − α)η(θ₀) + αη(θ₁)) and N is the stopping time of any procedure that satisfies P₀(N < ∞) ≤ δ and P₁(∪_{i=1}^{N} {Φᵢ = 1}) ≥ 1 − δ, then

  E₁[N] ≥ max{ (1 − δ)/α, log(1/δ) / ( C ( ½ α(1 − α)(η(θ₁) − η(θ₀))² )² ) },

where C is a constant that may depend on α, θ₀, θ₁.

The following remark and corollary apply Theorem 3 to the special cases of Gaussian mixture model detection and the most biased coin problem, respectively.

Remark 1. When α, θ₀, θ₁ are unknown, any procedure has no knowledge of Θ̃ in Problem P2 and consequently it cannot rule out θ = θ* for H0, where θ*
is defined in Theorem 3. If f_θ = N(θ, σ²) for known σ, then whenever (θ₁ − θ₀)²/σ² ≤ 1 the constant C in Theorem 3 is an absolute constant and, consequently, E₁[N] = Ω(σ⁴ log(1/δ)/(α²(θ₁ − θ₀)⁴)). Conversely, when α, θ₀, θ₁ are known, then we simply need to determine whether samples came from N(θ₀, σ²) or (1 − α)N(θ₀, σ²) + αN(θ₁, σ²), and we show that it is sufficient to take just O(σ² log(1/δ)/(α²(θ₁ − θ₀)²)) samples (see Appendix C).

Corollary 2. Fix δ ∈ [0, 1] and assume θ₀, θ₁ are bounded sufficiently far from {0, 1} such that 2(θ₁ − θ₀) ≤ min{θ₀(1 − θ₀), θ₁(1 − θ₁)}. For any m, let N_m be the number of coins considered by a δ-probably correct estimate-then-explore strategy that flips each coin m times in the exploration step. Then

  mE[N_m] ≳ c₀ min{1/m, θ*(1 − θ*)} · θ*(1 − θ*) · log(1/δ) / (α²(θ₁ − θ₀)⁴)

whenever m ≥ θ*(1 − θ*)/((θ₁ − θ₀)² α(1 − α)), where c₀ is an absolute constant and θ* = η⁻¹((1 − α)η(θ₀) + αη(θ₁)) ∈ [θ₀, θ₁].
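Theorem 3's worst-case null is easy to compute in the Bernoulli case, where the natural parameter is η(θ) = log(θ/(1 − θ)); the helper below (ours) mixes in natural-parameter space and maps back with the inverse logit.

```python
import math

def theta_star(alpha, theta0, theta1):
    """theta* = eta^{-1}((1 - alpha) * eta(theta0) + alpha * eta(theta1))
    for Bernoulli coins, with eta(t) = log(t / (1 - t))."""
    eta = lambda t: math.log(t / (1 - t))
    eta_star = (1 - alpha) * eta(theta0) + alpha * eta(theta1)
    return 1.0 / (1.0 + math.exp(-eta_star))   # inverse logit

print(theta_star(0.01, 0.45, 0.55))  # ~0.451: barely above theta0, hard to rule out
```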
Draw k1 distributions and sample them each k2 times. Estimate ?b0 = mini=1,...,k1 ? bi,k2 , ? = ?b0 + ?0 /2. Repeat for i = 1, . . . , n: Draw distribution i. Repeat for j = 1, . . . , m: Sample distribution i and observe Xi,j . Pj If k=1 (Xi,k ? ) > B: Declare distribution i to be heavy and Output distribution i. Pj Else if k=1 (Xi,k ? ) < A: break. Output null. Theorem 4 If Algorithm 2 is run with 2 (0, 1/4), ?0 2 (0, 1/2), ?0 2 (0, 1), then the expected number of total samples taken by the algorithm is no more than c0 ? log(1/?0 ) + c00 log ?0 ?20 7 1 (3) for some absolute constants c0 ,c00 , and all of the following hold: 1) with probability at least 1 , a light distribution is not returned, 2) if ?0 ? ?1 ?0 and ?0 ? ?, then with probability 45 a heavy 0 )) distribution is returned, and 3) the procedure takes no more than c log(1/(? total samples. ? 0 ?2 0 The second claim of the theorem holds only with constant probability (versus with probability 1 ) since the probability of observing a heavy distribution among the n = d2 log(4)/?0 e distributions only occurs with constant probability. One can show that if the outer loop of algorithm is allowed to run indefinitely (with m and n defined as is), ?0 = ?1 ?0 , ?0 = ?, and ?b0 = ?0 , then a heavy coin is returned with probability at least 1 and the expected number of samples is bounded by (3). If a tight lower bound is known on either ? = ?1 ?0 or ?, there is only one parameter that is unknown and the ?doubling trick?, along with Theorem 4, can be used to identify a heavy coin with 2 1 )/ ) )/ ) just log(log(? and log(log(? samples, respectively (see Appendix B.3). ??2 ??2 Now consider Algorithm 3 that assumes no prior knowledge of ?, ?0 , ?1 , the first result for this setting that we are aware of. We remark that while the placing of ?landmarks? (?k , ?k ) throughout the search space as is done in Algorithm 3 appears elementary in hindsight, it is surprising that so few can cover this two dimensional space since one has to balance the exploration of ? and ?. We believe similar a similar approach may be generalized for more generic infinite armed bandit problems. Algorithm 3 Adaptive strategy for heavy distribution identification with unknown parameters Given > 0. Initialize ` = 1, heavy distribution h = null. Repeat until h is not null: Set ` = 2` , ` = /(2`3 ) Repeat for k = 0, . . . , `: q k Set ?k = 2 ` , ?k = 2?1k ` Run Algorithm 2 with ?0 = ?k , ?0 = ?k , = ` and Set h to its output. If h is not null break Set ` = ` + 1 Output h Theorem 5 (Unknown ?, ?0 , ?1 ) Fix 2 (0, 1). If Algorithm 3 is run with then with probability at least 1 a heavy distribution is returned and the expected number of total samples taken is bounded by log2 ( ??1 2 ) (? log2 ( ?12 ) + log(log2 ( ??1 2 )) + log(1/ )) ??2 for an absolute constant c. c 5 Conclusion While all prior works have required at least partial knowledge of ?, ?0 , ?1 to solve the most biased coin problem, our algorithm requires no knowledge of these parameters yet obtain the near-optimal sample complexity. In addition, we have proved lower bounds on the sample complexity of detecting the presence of a mixture distribution when the parameters are known or unknown, with consequences for any estimate-then-explore strategy, an approach previously proposed for an infinite armed bandit problem. Extending our adaptive algorithm to arbitrary arm reservoir distributions is of significant interest. 
We believe a successful algorithm in this vein could have a significant impact on how researchers think about sequential decision processes in both finite and uncountable action spaces. Acknowledgments Kevin Jamieson is generously supported by ONR awards N00014-15-1-2620, and N0001413-1-0129. This research is supported in part by NSF CISE Expeditions Award CCF-1139158, DOE Award SN10040 DE-SC0012463, and DARPA XData Award FA8750-12-2-0331, and gifts from Amazon Web Services, Google, IBM, SAP, The Thomas and Stacey Siebel Foundation, Apple Inc., Arimo, Blue Goji, Bosch, Cisco, Cray, Cloudera, Ericsson, Facebook, Fujitsu, Guavus, HP, Huawei, Intel, Microsoft, Pivotal, Samsung, Schlumberger, Splunk, State Farm and VMware. 8 References [1] Jayadev Acharya, Constantinos Daskalakis, and Gautam C Kamath. Optimal testing for properties of distributions. In Advances in Neural Information Processing Systems, pages 3577?3598, 2015. [2] Deepak Agarwal. Detecting anomalies in cross-classified streams: a bayesian approach. Knowledge and Information Systems, 11(1):29?44, 2006. [3] Alessandro Arlotto, Stephen E Chick, and Noah Gans. Optimal hiring and retention policies for heterogeneous workers who learn. Management Science, 60(1):110?129, 2013. [4] Michael S Bernstein, Joel Brandt, Robert C Miller, and David R Karger. Crowds in two seconds: enabling realtime crowd-powered interfaces. UIST, 2011. [5] Donald A. Berry, Robert W. Chen, Alan Zame, David C. Heath, and Larry A. Shepp. Bandit problems with infinitely many arms. Ann. Statist., 25(5):2103?2116, 10 1997. [6] Thomas Bonald and Alexandre Proutiere. Two-target algorithms for infinite-armed bandits with bernoulli rewards. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2184?2192. Curran Associates, Inc., 2013. [7] Alexandra Carpentier and Michal Valko. Simple regret for infinitely many armed bandits. arXiv preprint arXiv:1505.04627, 2015. [8] Karthekeyan Chandrasekaran and Richard Karp. Finding a most biased coin with fewest flips. In Proceedings of The 27th Conference on Learning Theory, pages 394?407, 2014. [9] Karthekeyan Chandrasekaran and Richard M. Karp. Finding the most biased coin with fewest flips. CoRR, abs/1202.3639, 2012. URL http://arxiv.org/abs/1202.3639. [10] Eleazar Eskin. Anomaly detection over noisy data using learned probability distributions. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML ?00, pages 255?262, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. [11] Yoav Freund and Yishay Mansour. Estimating a mixture of two product distributions. In Proceedings of the twelfth annual conference on Computational learning theory, pages 53?62. ACM, 1999. [12] Daniel Haas, Jiannan Wang, Eugene Wu, and Michael J. Franklin. Clamshell: Speeding up crowds for low-latency data labeling. Proc. VLDB Endow., 9(4):372?383, December 2015. ISSN 2150-8097. [13] Moritz Hardt and Eric Price. Sharp bounds for learning a mixture of two gaussians. ArXiv e-prints, 1404, 2014. [14] Kevin Jamieson and Robert Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In Information Sciences and Systems (CISS), pages 1?6. IEEE, 2014. [15] Matthew L Malloy, Gongguo Tang, and Robert D Nowak. Quickest search for a rare distribution. In Information Sciences and Systems (CISS), pages 1?6. IEEE, 2012. [16] MTurk. Amazon Mechanical Turk. https://www.mturk.com/. [17] David Pollard. 
[18] David Siegmund. Sequential analysis: tests and confidence intervals. Springer Science & Business Media, 2013.
[19] Robert Spira. Calculation of the gamma function by Stirling's formula. Mathematics of Computation, pages 317-322, 1971.
[20] Gautam Thatte, Urbashi Mitra, and John Heidemann. Parametric methods for anomaly detection in aggregate traffic. IEEE/ACM Trans. Netw., 19(2):512-525, April 2011. ISSN 1063-6692.
[21] Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1729-1736. Curran Associates, Inc., 2009.
5,606
6,073
Designing smoothing functions for improved worst-case competitive ratio in online optimization

Reza Eghbali
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
eghbali@uw.edu

Maryam Fazel
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
mfazel@uw.edu

Abstract

Online optimization covers problems such as online resource allocation, online bipartite matching, adwords (a central problem in e-commerce and advertising), and adwords with separable concave returns. We analyze the worst-case competitive ratio of two primal-dual algorithms for a class of online convex (conic) optimization problems that contains the previous examples as special cases defined on the positive orthant. We derive a sufficient condition on the objective function that guarantees a constant worst-case competitive ratio (greater than or equal to $\frac{1}{2}$) for monotone objective functions. We provide new examples of online problems on the positive orthant that satisfy the sufficient condition. We show how smoothing can improve the competitive ratio of these algorithms, and in particular for separable functions, we show that the optimal smoothing can be derived by solving a convex optimization problem. This result allows us to directly optimize the competitive ratio bound over a class of smoothing functions, and hence design effective smoothing customized for a given cost function.

1 Introduction

Given a proper convex cone $K \subseteq \mathbb{R}^n$, let $\psi : K \to \mathbb{R}$ be an upper semi-continuous concave function. Consider the optimization problem
$$\begin{array}{ll} \text{maximize} & \psi\left(\sum_{t=1}^{m} A_t x_t\right) \\ \text{subject to} & x_t \in F_t, \ \forall t \in [m], \end{array} \tag{1}$$
where for all $t \in [m] := \{1, 2, \ldots, m\}$, $x_t \in \mathbb{R}^l$ are the optimization variables and $F_t$ are compact convex constraint sets. We assume $A_t \in \mathbb{R}^{n \times l}$ maps $F_t$ to $K$; for example, when $K = \mathbb{R}^n_+$ and $F_t \subseteq \mathbb{R}^l_+$, this assumption is satisfied if $A_t$ has nonnegative entries. We consider problem (1) in the online setting, where it can be viewed as a sequential game between a player (online algorithm) and an adversary. At each step $t$, the adversary reveals $A_t, F_t$ and the algorithm chooses $\hat{x}_t \in F_t$. The performance of the algorithm is measured by its competitive ratio, i.e., the ratio of the objective value at $\hat{x}_1, \ldots, \hat{x}_m$ to the offline optimum. Problem (1) covers (convex relaxations of) various online combinatorial problems, including online bipartite matching [14], the "adwords" problem [16], and the secretary problem [15]. More generally, it covers online linear programming (LP) [6], online packing/covering with convex cost [3, 4, 7], and a generalization of adwords [8]. In this paper, we study the case where $\partial\psi(u) \subseteq K^*$ for all $u \in K$, i.e., $\psi$ is monotone with respect to the cone $K$.

The competitive performance of online algorithms has been studied mainly under the worst-case model (e.g., in [16]) or stochastic models (e.g., in [15]). In the worst-case model one is interested in lower bounds on the competitive ratio that hold for any $(A_1, F_1), \ldots, (A_m, F_m)$. In stochastic models, the adversary chooses a probability distribution from a family of distributions to generate $(A_1, F_1), \ldots, (A_m, F_m)$, and the competitive ratio is calculated using the expected value of the algorithm's objective value. Online bipartite matching and its generalization, the adwords problem, are the two main problems that have been studied under the worst-case model.
The greedy algorithm achieves a competitive ratio of $1/2$, while the optimal algorithm achieves a competitive ratio of $1 - 1/e$ (as the bid to budget ratio goes to zero) [16, 5, 14, 13]. A more general version of adwords in which each agent (advertiser) has a concave cost has been studied in [8]. The majority of algorithms proposed for the problems mentioned above rely on a primal-dual framework [5, 6, 3, 8, 4]. The differentiating point among the algorithms is the method of updating the dual variable at each step, since once the dual variable is updated the primal variable can be assigned using a simple complementarity condition. A simple and efficient method of updating the dual variable is through a first order online learning step. For example, the algorithm stated in [9] for online linear programming uses mirror descent with entropy regularization (the multiplicative weight updates algorithm) once written in the primal-dual language. Recently, the work in [9] was independently extended to the random permutation model in [12, 2, 11]. In [2], the authors provide a competitive difference bound for online convex optimization under the random permutation model as a function of the regret bound for the online learning algorithm applied to the dual.

In this paper, we consider two versions of the greedy algorithm for problem (1): a sequential update and a simultaneous update algorithm. The simultaneous update algorithm, Algorithm 2, provides a direct saddle-point representation of what has been described informally in the literature as "continuous updates" of primal and dual variables. This saddle-point representation allows us to generalize this type of updates to non-smooth functions. In Section 2, we bound the competitive ratios of the two algorithms. A sufficient condition on the objective function that guarantees a non-trivial worst-case competitive ratio is introduced. We show that the competitive ratio is at least $\frac{1}{2}$ for a monotone non-decreasing objective function. Examples that satisfy the sufficient condition (on the positive orthant and the positive semidefinite cone) are given. In Section 3, we derive optimal algorithms, as variants of the greedy algorithm applied to a smoothed version of $\psi$. For example, Nesterov smoothing provides an optimal algorithm for the adwords problem. The main contribution of this paper is to show how one can derive the optimal smoothing function (or, from the dual point of view, the optimal regularization function) for separable $\psi$ on the positive orthant by solving a convex optimization problem. This gives an implementable algorithm that achieves the optimal competitive ratio derived in [8]. We also show how this convex optimization can be modified for the design of smoothing functions specifically for the sequential algorithm. In contrast, [8] only considers continuous updates. The algorithms considered in this paper and their general analysis are the same as those we considered in [10]. In [10], the focus is on non-monotone functions and online problems on the positive semidefinite cone, whereas the focus of this paper is on monotone functions on the positive orthant. Furthermore, in [10] we only considered Nesterov smoothing and only derived competitive ratio bounds for the simultaneous algorithm.

Notation. Given a function $\psi : \mathbb{R}^n \to \mathbb{R}$, $\psi^*$ denotes the concave conjugate of $\psi$, defined as $\psi^*(y) = \inf_u \langle y, u \rangle - \psi(u)$ for all $y \in \mathbb{R}^n$. For a concave function $\psi$, $\partial\psi(u)$ denotes the set of supergradients of $\psi$ at $u$, i.e., the set of all $y \in \mathbb{R}^n$ such that $\forall u' \in \mathbb{R}^n : \psi(u') \le \langle y, u' - u \rangle + \psi(u)$. The set $\partial\psi$ is related to the concave conjugate function $\psi^*$ as follows: for an upper semi-continuous concave function $\psi$ we have $\partial\psi(u) = \operatorname{argmin}_y \langle y, u \rangle - \psi^*(y)$. A differentiable function $\psi$ has a Lipschitz continuous gradient with respect to $\|\cdot\|$ with continuity parameter $1/\mu > 0$ if $\|\nabla\psi(u') - \nabla\psi(u)\|^* \le (1/\mu)\|u - u'\|$ for all $u, u' \in \mathbb{R}^n$, where $\|\cdot\|^*$ is the dual norm to $\|\cdot\|$. The dual cone $K^*$ of a cone $K \subseteq \mathbb{R}^n$ is defined as $K^* = \{y \mid \langle y, u \rangle \ge 0 \ \forall u \in K\}$. Two examples of self-dual cones are the positive orthant $\mathbb{R}^n_+$ and the cone of $n \times n$ positive semidefinite matrices $S^n_+$. A proper cone (a pointed convex cone with nonempty interior) $K$ induces a partial ordering on $\mathbb{R}^n$, denoted by $\preceq_K$ and defined as $x \preceq_K y \iff y - x \in K$.

1.1 Two primal-dual algorithms

The (Fenchel) dual problem for problem (1) is given by
$$\text{minimize} \quad \sum_{t=1}^{m} \sigma_t(A_t^T y) - \psi^*(y), \tag{2}$$
where the optimization variable is $y \in \mathbb{R}^n$, and $\sigma_t$ denotes the support function of the set $F_t$, defined as $\sigma_t(z) = \sup_{x \in F_t} \langle x, z \rangle$. A pair $(x^\star, y^\star) \in (F_1 \times \ldots \times F_m) \times K^*$ is an optimal primal-dual pair if and only if
$$x^\star_t \in \operatorname*{argmax}_{x \in F_t} \langle x, A_t^T y^\star \rangle, \qquad y^\star \in \partial\psi\Big(\sum_{t=1}^{m} A_t x^\star_t\Big), \qquad \forall t \in [m].$$
Based on these optimality conditions, we consider two algorithms. Algorithm 1 updates the primal and dual variables sequentially, by maintaining a dual variable $\hat{y}_t$ and using it to assign $\hat{x}_t \in \operatorname{argmax}_{x \in F_t} \langle x, A_t^T \hat{y}_t \rangle$. The algorithm then updates the dual variable based on the second optimality condition. By the assignment rule, we have $A_t \hat{x}_t \in \partial\sigma_t(\hat{y}_t)$, and the dual variable update can be viewed as $\hat{y}_{t+1} \in \operatorname{argmin}_y \langle \sum_{s=1}^{t} A_s \hat{x}_s, y \rangle - \psi^*(y)$. Therefore, the dual update is the same as the update in dual averaging [18] or the Follow The Regularized Leader (FTRL) [20, 19, 1] algorithm with regularization $-\psi^*(y)$.

Algorithm 1 Sequential Update
  Initialize $\hat{y}_1 \in \partial\psi(0)$
  for $t \leftarrow 1$ to $m$ do
    Receive $A_t, F_t$
    $\hat{x}_t \in \operatorname{argmax}_{x \in F_t} \langle x, A_t^T \hat{y}_t \rangle$
    $\hat{y}_{t+1} \in \partial\psi(\sum_{s=1}^{t} A_s \hat{x}_s)$
  end for

Algorithm 2 updates the primal and dual variables simultaneously, ensuring that
$$\hat{x}_t \in \operatorname*{argmax}_{x \in F_t} \langle x, A_t^T \hat{y}_t \rangle, \qquad \hat{y}_t \in \partial\psi\Big(\sum_{s=1}^{t} A_s \hat{x}_s\Big).$$
This algorithm is inherently more complicated than Algorithm 1, since finding $\hat{x}_t$ involves solving a saddle-point problem. This can be solved by a first order method such as the mirror descent algorithm for saddle-point problems. In contrast, the primal and dual updates in Algorithm 1 solve two separate maximization and minimization problems.¹

Algorithm 2 Simultaneous Update
  for $t \leftarrow 1$ to $m$ do
    Receive $A_t, F_t$
    $(\hat{y}_t, \hat{x}_t) \leftarrow \arg\min_y \max_{x \in F_t} \langle y, A_t x + \sum_{s=1}^{t-1} A_s \hat{x}_s \rangle - \psi^*(y)$
  end for

¹ Also, if the original problem is a convex relaxation of an integer program, meaning that each $F_t = \operatorname{conv} \bar{F}_t$ with $\bar{F}_t \subseteq \mathbb{Z}^l$, then $\hat{x}_t$ can always be chosen to be integral, while integrality may not hold for the solution of the second algorithm.
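To make the sequential scheme concrete, the following is a minimal Python sketch of Algorithm 1 (ours, not from the paper; the oracle and gradient interfaces are illustrative choices). It assumes access to a supergradient of $\psi$ and a linear-maximization oracle for each $F_t$, and instantiates them for a toy adwords instance with $\psi(u) = \sum_i \min(u_i, 1)$ and simplex constraint sets.

```python
import numpy as np

def sequential_update(As, grad_psi, argmax_linear):
    """Algorithm 1: greedy primal step + FTRL/dual-averaging dual step."""
    n = As[0].shape[0]
    u = np.zeros(n)                      # running sum  sum_s A_s x_s
    y = grad_psi(u)                      # initialize  y_1  in  dpsi(0)
    xs = []
    for A_t in As:                       # A_t, F_t revealed online
        x_t = argmax_linear(A_t.T @ y)   # x_t in argmax_{x in F_t} <x, A_t^T y_t>
        xs.append(x_t)
        u += A_t @ x_t
        y = grad_psi(u)                  # y_{t+1} in dpsi(sum_{s<=t} A_s x_s)
    return xs, u

# Toy adwords instance: psi(u) = sum_i min(u_i, 1), F_t the simplex in R^l.
grad_psi = lambda u: (u < 1.0).astype(float)   # a valid supergradient of psi
def argmax_simplex(c):                         # best vertex of the simplex
    x = np.zeros_like(c)
    x[np.argmax(c)] = 1.0
    return x

rng = np.random.default_rng(0)
As = [np.diag(rng.uniform(0, 0.1, size=4)) for _ in range(200)]
xs, u = sequential_update(As, grad_psi, argmax_simplex)
print(np.minimum(u, 1.0).sum())                # online objective psi(sum_t A_t x_t)
```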
2 Competitive ratio bounds and examples for $\psi$

In this section, we derive bounds on the competitive ratios of Algorithms 1 and 2 by bounding their respective duality gaps. We begin by stating a sufficient condition on $\psi$ that leads to non-trivial competitive ratios, and we assume this condition holds in the rest of the paper. Roughly, one can interpret this assumption as having "diminishing returns" with respect to the ordering induced by a cone. Examples of functions that satisfy this assumption appear later in this section.

Assumption 1 Whenever $u \preceq_K v$, there exists $y \in \partial\psi(u)$ that satisfies $y \succeq_{K^*} z$ for all $z \in \partial\psi(v)$.

When $\psi$ is differentiable, Assumption 1 simplifies to $u \preceq_K v \Rightarrow \nabla\psi(u) \succeq_{K^*} \nabla\psi(v)$. That is, the gradient, as a map from $\mathbb{R}^n$ (equipped with $\preceq_K$) to $\mathbb{R}^n$ (equipped with $\preceq_{K^*}$), is order-reversing. When $\psi$ is twice differentiable, Assumption 1 is equivalent to $\langle w, \nabla^2\psi(u)\, v \rangle \le 0$ for all $u, v, w \in K$. For example, when $K = \mathbb{R}^n_+$ this is equivalent to the Hessian being element-wise non-positive.

Let $\hat{y}_{m+1}$ be the minimum element of $\partial\psi(\sum_{t=1}^{m} A_t \hat{x}_t)$ with respect to the ordering $\preceq_{K^*}$ (such an element exists in the superdifferential by Assumption 1). Let $P_{seq} = \psi(\sum_{t=1}^{m} A_t \hat{x}_t)$ and $P_{sim} = \psi(\sum_{t=1}^{m} A_t \hat{x}_t)$ denote the primal objective values for the primal solutions produced by Algorithms 1 and 2, respectively, and let $D_{seq} = \sum_{t=1}^{m} \sigma_t(A_t^T \hat{y}_t) - \psi^*(\hat{y}_{m+1})$ and $D_{sim} = \sum_{t=1}^{m} \sigma_t(A_t^T \hat{y}_t) - \psi^*(\hat{y}_{m+1})$ denote the corresponding dual objective values. The next lemma provides a lower bound on the duality gaps of both algorithms.

Lemma 1 The duality gaps for the two algorithms can be lower bounded as
$$P_{sim} - D_{sim} \ge \psi^*(\hat{y}_{m+1}) + \psi(0), \qquad P_{seq} - D_{seq} \ge \psi^*(\hat{y}_{m+1}) + \psi(0) + \sum_{t=1}^{m} \langle A_t \hat{x}_t, \hat{y}_{t+1} - \hat{y}_t \rangle.$$
Furthermore, if $\psi$ has a Lipschitz continuous gradient with parameter $1/\mu$ with respect to $\|\cdot\|$,
$$P_{seq} - D_{seq} \ge \psi^*(\hat{y}_{m+1}) + \psi(0) - \frac{1}{2\mu}\sum_{t=1}^{m} \|A_t \hat{x}_t\|^2. \tag{3}$$

Note that the right hand side of (3) is exactly the regret bound of the FTRL algorithm (with a negative sign) [19]. The proof is given in the appendix. To simplify the notation in the rest of the paper, we assume $\psi(0) = 0$ by replacing $\psi(u)$ with $\psi(u) - \psi(0)$. To quantify the competitive ratio of the algorithms, we define $\alpha_\psi$ as
$$\alpha_\psi = \sup\,\{c \mid \psi^*(y) \ge c\,\psi(u), \ y \in \partial\psi(u), \ u \in K\}. \tag{4}$$
Since $\psi^*(y) + \psi(u) = \langle y, u \rangle$ for all $y \in \partial\psi(u)$, the definition of $\alpha_\psi$ is equivalent to
$$\alpha_\psi = \sup\{c \mid \langle y, u \rangle \ge (c+1)\,\psi(u), \ y \in \partial\psi(u), \ u \in K\}. \tag{5}$$
Note that $-1 \le \alpha_\psi \le 0$, since for any $u \in K$ and $y \in \partial\psi(u)$, by concavity of $\psi$ and the fact that $y \in K^*$, we have $0 \le \langle y, u \rangle \le \psi(u) - \psi(0)$. If $\psi$ is a linear function then $\alpha_\psi = 0$, while if $0 \in \partial\psi(u)$ for some $u \in K$, then $\alpha_\psi = -1$. The next theorem provides lower bounds on the competitive ratio of the two algorithms.

Theorem 1 If Assumption 1 holds, we have
$$P_{sim} \ge \frac{1}{1 - \alpha_\psi}\, D^\star, \qquad P_{seq} \ge \frac{1}{1 - \alpha_\psi}\Big(D^\star + \sum_{t=1}^{m} \langle A_t \hat{x}_t, \hat{y}_{t+1} - \hat{y}_t \rangle\Big),$$
where $D^\star$ is the dual optimal objective. If $\psi$ has a Lipschitz continuous gradient with parameter $1/\mu$ with respect to $\|\cdot\|$,
$$P_{seq} \ge \frac{1}{1 - \alpha_\psi}\Big(D^\star - \frac{1}{2\mu}\sum_{t=1}^{m} \|A_t \hat{x}_t\|^2\Big). \tag{6}$$

Proof: Consider the simultaneous update algorithm. We have $\sum_{s=1}^{t} A_s \hat{x}_s \preceq_K \sum_{s=1}^{m} A_s \hat{x}_s$ for all $t$, since $A_s F_s \subseteq K$ for all $s$. Since $\hat{y}_t \in \partial\psi(\sum_{s=1}^{t} A_s \hat{x}_s)$ and $\hat{y}_{m+1}$ was picked to be the minimum element in $\partial\psi(\sum_{s=1}^{m} A_s \hat{x}_s)$ with respect to $\preceq_{K^*}$, by Assumption 1 we have $\hat{y}_t \succeq_{K^*} \hat{y}_{m+1}$. Since $A_t x \in K$ for all $x \in F_t$, we get $\langle A_t x, \hat{y}_t \rangle \ge \langle A_t x, \hat{y}_{m+1} \rangle$; therefore, $\sigma_t(A_t^T \hat{y}_t) \ge \sigma_t(A_t^T \hat{y}_{m+1})$. Thus
$$D_{sim} = \sum_{t=1}^{m} \sigma_t(A_t^T \hat{y}_t) - \psi^*(\hat{y}_{m+1}) \ge \sum_{t=1}^{m} \sigma_t(A_t^T \hat{y}_{m+1}) - \psi^*(\hat{y}_{m+1}) \ge D^\star.$$
Now Lemma 1 and the definition of $\alpha_\psi$ give the desired result. The proof for Algorithm 1 follows similar steps. ∎

We now consider examples of $\psi$ that satisfy Assumption 1 and derive lower bounds on $\alpha_\psi$ for those examples.

Examples on the positive orthant. Let $K = \mathbb{R}^n_+$ and note that $K^* = K$. To simplify the notation we use $\preceq$ instead of $\preceq_{\mathbb{R}^n_+}$. Assumption 1 is satisfied by a twice differentiable function if and only if the Hessian is element-wise non-positive over $\mathbb{R}^n_+$. If $\psi$ is separable, i.e., $\psi(u) = \sum_{i=1}^{n} \psi_i(u_i)$, Assumption 1 is satisfied, since by concavity each $\partial\psi_i$ is non-increasing: $\partial\psi_i(u_i) \succeq \partial\psi_i(v_i)$ when $u_i \le v_i$. In the basic adwords problem, for all $t$, $F_t = \{x \in \mathbb{R}^l_+ \mid 1^T x \le 1\}$, $A_t$ is a diagonal matrix with non-negative entries, and
$$\psi(u) = \sum_{i=1}^{n} u_i - \sum_{i=1}^{n} (u_i - 1)_+, \tag{7}$$
where $(\cdot)_+ = \max\{\cdot, 0\}$. In this problem, $\psi^*(y) = 1^T(y - 1)$. Since $0 \in \partial\psi(1)$ we have $\alpha_\psi = -1$ by (5); therefore, the competitive ratio of Algorithm 2 is $\frac{1}{2}$. Let $r = \max_{t,i,j} A_{t,i,j}$; then we have $\sum_{t=1}^{m} \langle A_t \hat{x}_t, \hat{y}_{t+1} - \hat{y}_t \rangle \ge -nr$. Therefore, the competitive ratio of Algorithm 1 goes to $\frac{1}{2}$ as $r$ (the bid to budget ratio) goes to zero. In adwords with concave returns, studied in [8], $A_t$ is diagonal for all $t$ and $\psi$ is separable.

For any $p \ge 1$ let $B_p$ be the $\ell_p$-norm ball. We can rewrite the penalty function $-\sum_{i=1}^{n} (u_i - 1)_+$ in the adwords objective using the distance from $B_\infty$: we have $\sum_{i=1}^{n} (u_i - 1)_+ = d_1(u, B_\infty)$, where $d_1(\cdot, C)$ is the $\ell_1$-norm distance from the set $C$. For $p \in [1, \infty)$ the function $-d_1(u, B_p)$, although not separable, satisfies Assumption 1. The proof is given in the supplementary material.

Examples on the positive semidefinite cone. Let $K = S^n_+$ and note that $K^* = K$. Two examples that satisfy Assumption 1 are $\psi(U) = \log\det(U + A_0)$ and $\psi(U) = \operatorname{tr}(U^p)$ with $p \in (0, 1)$. We refer the reader to [10] for examples of online problems with $\log\det$ in the objective function and a competitive ratio analysis of the simultaneous algorithm for these problems.

3 Smoothing of $\psi$ for improved competitive ratio

The technique of "smoothing" a (potentially non-smooth) objective function, or equivalently adding a strongly convex regularization term to its conjugate function, has been used in several areas. In convex optimization, a general version of this is due to Nesterov [17], and has led to faster convergence rates of first order methods for non-smooth problems. In this section, we study how replacing $\psi$ with an appropriately smoothed function $\psi_S$ helps improve the performance of the two algorithms discussed in Section 1.1, and show that it provides the optimal competitive ratio for two of the problems mentioned in Section 2, adwords and online LP. We then show how to maximize the competitive ratio of both algorithms for a separable $\psi$ and compute the optimal smoothing by solving a convex optimization problem. This allows us to design the most effective smoothing customized for a given $\psi$: we maximize the bound on the competitive ratio over a set of smooth functions (see Subsection 3.2 for details).

Let $\psi_S$ denote an upper semi-continuous concave function (a smoothed version of $\psi$), and suppose $\psi_S$ satisfies Assumption 1. The algorithms we consider in this section are the same as Algorithms 1 and 2, but with $\psi_S$ replacing $\psi$. Note that the competitive ratio is computed with respect to the original problem; that is, the offline primal and dual optimal values are still the same $P^\star$ and $D^\star$ as before. From Lemma 1, we have that $D_{sim} \le \psi_S(\sum_{t=1}^{m} A_t \hat{x}_t) - \psi^*(\hat{y}_{m+1})$ and $D_{seq} \le \psi_S(\sum_{t=1}^{m} A_t \hat{x}_t) - \psi^*(\hat{y}_{m+1}) - \sum_{t=1}^{m} \langle A_t \hat{x}_t, \hat{y}_{t+1} - \hat{y}_t \rangle$. To simplify the notation, assume $\psi_S(0) = 0$ as before. Define
$$\alpha_{\psi,\psi_S} = \sup\{c \mid \psi^*(y) \ge \psi_S(u) + (c - 1)\,\psi(u), \ y \in \partial\psi_S(u), \ u \in K\}.$$
Then the conclusion of Theorem 1 for Algorithms 1 and 2 applied to the smoothed function holds with $\alpha_\psi$ replaced by $\alpha_{\psi,\psi_S}$.

3.1 Nesterov Smoothing

We first consider Nesterov smoothing [17], and apply it to examples on the non-negative orthant. Given a proper upper semi-continuous concave function $\phi : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty\}$, let $\psi_S = (\psi^* + \phi^*)^*$. Note that $\psi_S$ is the supremal convolution of $\psi$ and $\phi$. If $\psi$ and $\phi$ are separable, then $\psi_S$ satisfies Assumption 1 for $K = \mathbb{R}^n_+$.
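As a quick numerical illustration of this construction (ours, not from the paper), the supremal convolution can be evaluated on a grid directly from the conjugates, $\psi_S(u) = \inf_y \langle u, y \rangle - \psi^*(y) - \phi^*(y)$. The sketch below smooths the scalar adwords utility $\psi(u) = \min(u, 1)$, whose conjugate $\psi^*(y) = y - 1$ on $[0, 1]$ was derived above, with the illustrative choice $\phi^*(y) = -\frac{\mu}{2} y^2$ (our own choice, for which a closed form is easy to check); the result is the familiar quadratically smoothed hinge.

```python
import numpy as np

mu = 0.2
ys = np.linspace(0.0, 1.0, 10001)     # dual grid; psi* is finite on [0, 1]
psi_star = ys - 1.0                   # concave conjugate of psi(u) = min(u, 1)
phi_star = -0.5 * mu * ys**2          # a strongly concave perturbation (our choice)

def psi_smooth(u):
    # supremal convolution via conjugates:  inf_y  u*y - psi*(y) - phi*(y)
    return np.min(u * ys - psi_star - phi_star)

for u in [0.0, 0.5, 1.0, 2.0]:
    # closed form for this choice: 1 + min_{y in [0,1]} (u-1)*y + (mu/2)*y^2
    y_opt = np.clip((1.0 - u) / mu, 0.0, 1.0)
    closed = 1.0 + (u - 1.0) * y_opt + 0.5 * mu * y_opt**2
    print(u, psi_smooth(u), closed)   # the two columns agree to grid accuracy
```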
Here we provide an example of Nesterov smoothing for functions on the non-negative orthant.

Adwords: The optimal competitive ratio for the adwords problem is $1 - e^{-1}$. This is achieved by smoothing $\psi$ with $\phi^*(y) = \sum_{i=1}^{n} \big(y_i - \frac{e}{e-1}\big)\log(e - (e-1)y_i) - 2y_i$, which gives
$$\psi_{S,i}(u_i) - \psi_{S,i}(0) = \begin{cases} \dfrac{e\,u_i - \exp(u_i) + 1}{e - 1}, & u_i \in [0, 1], \\[4pt] \dfrac{1}{e - 1}, & u_i > 1. \end{cases}$$²

² Note that in this case one can remove the assumption that $\partial\psi_i \subseteq \mathbb{R}_+$, since if $\hat{y}_{t,i} = 0$ for some $t$ and $i$, then $\hat{x}_{s,i} = 0$ for all $s \ge t$.

3.2 Computing the optimal smoothing for separable functions on $\mathbb{R}^n_+$

We now tackle the problem of finding the optimal smoothing for separable functions on the positive orthant, which, as we show in an example at the end of this section, is not necessarily given by Nesterov smoothing. Given a separable monotone $\psi(u) = \sum_{i=1}^{n} \psi_i(u_i)$ and $\psi_S(u) = \sum_{i=1}^{n} \psi_{S,i}(u_i)$ on $\mathbb{R}^n_+$, we have $\alpha_{\psi,\psi_S} \ge \min_i \alpha_{\psi_i,\psi_{S,i}}$. To simplify the notation, drop the index $i$ and consider $\psi : \mathbb{R}_+ \to \mathbb{R}$. We formulate the problem of finding $\psi_S$ to maximize $\alpha_{\psi,\psi_S}$ as an optimization problem. In Section 4 we discuss the relation between this optimization method and the optimal algorithm presented in [8]. We set $\psi_S(u) = \int_0^u y(s)\,ds$ with $y$ a continuous function ($y \in C[0, \infty)$), and state the infinite-dimensional convex optimization problem with $y$ as a variable:
$$\begin{array}{ll} \text{minimize} & \beta \\ \text{subject to} & \int_0^u y(s)\,ds - \psi^*(y(u)) \le \beta\,\psi(u), \quad \forall u \in [0, \infty), \\ & y \in C[0, \infty), \end{array} \tag{8}$$
where $\beta = 1 - \alpha_{\psi,\psi_S}$ (Theorem 1 describes the dependence of the competitive ratios on this parameter). Note that we have not imposed any condition on $y$ to be non-increasing (i.e., on the corresponding $\psi_S$ to be concave). The next lemma establishes that every feasible solution to problem (8) can be turned into a non-increasing solution.

Lemma 2 Let $(y, \beta)$ be a feasible solution for problem (8) and define $\bar{y}(t) = \inf_{s \le t} y(s)$. Then $(\bar{y}, \beta)$ is also a feasible solution for problem (8). In particular, if $(y, \beta)$ is an optimal solution, then so is $(\bar{y}, \beta)$.

The proof is given in the supplement. Revisiting the adwords problem, we observe that the optimal solution is given by $y(u) = \big(\frac{e - \exp(u)}{e - 1}\big)_+$, which is the derivative of the smooth function we derived using Nesterov smoothing in Section 3.1. The optimality of this $y$ can be established by providing a dual certificate, a measure $\nu$ corresponding to the inequality constraint, that together with $y$ satisfies the optimality conditions. If we set $d\nu = \exp(1 - u)/(e - 1)\,du$, the optimality conditions are satisfied with $\beta = (1 - 1/e)^{-1}$. Also note that if $\psi$ plateaus (e.g., as in the adwords objective), then one can replace problem (8) with a problem over a finite horizon.

Theorem 2 Suppose $\psi(t) = c$ on $[u_0, \infty)$ ($\psi$ plateaus). Then problem (8) is equivalent to
$$\begin{array}{ll} \text{minimize} & \beta \\ \text{subject to} & \int_0^u y(s)\,ds - \psi^*(y(u)) \le \beta\,\psi(u), \quad \forall u \in [0, u_0], \\ & y(u_0) = 0, \quad y \in C[0, u_0]. \end{array} \tag{9}$$

So for a function $\psi$ with a plateau, one can discretize problem (9) to get a finite-dimensional problem:
$$\begin{array}{ll} \text{minimize} & \beta \\ \text{subject to} & h\sum_{s=1}^{t} y[s] - \psi^*(y[t]) \le \beta\,\psi(ht), \quad \forall t \in [d], \\ & y[d] = 0, \end{array} \tag{10}$$
where $h = u_0/d$ is the discretization step.

Figure 1a shows the optimal smoothing for the piecewise linear function $\psi(u) = \min(.75, u, .5u + .25)$, obtained by solving problem (10). We point out that the optimal smoothing for this function is not given by Nesterov's smoothing (even though the optimal smoothing can be derived by Nesterov's smoothing for a piecewise linear function with only two pieces, like the adwords cost function). Figure 1d shows the difference between the conjugate of the optimal smoothing function and $\psi^*$ for the piecewise linear function, which we can see is not concave. We simulated the performance of the simultaneous algorithm on a dataset with $n = m$, $F_t$ the simplex, and $A_t$ diagonal. We varied $m$ in the range from 1 to 30 and for each $m$ calculated the smallest competitive ratio achieved by the algorithm over $(10m)^2$ random permutations of $A_1, \ldots, A_m$. Figure 1i depicts this quantity vs. $m$ for the optimal smoothing and the Nesterov smoothing. For the Nesterov smoothing we used the function $\phi^*(y) = \big(y - \frac{\sqrt{e}}{\sqrt{e}-1}\big)\log(\sqrt{e} - (\sqrt{e}-1)y) - \frac{3}{2}y$.

In cases where a bound $u_{max}$ on $\sum_{t=1}^{m} A_t F_t$ is known, we can restrict $t$ to $[0, u_{max}]$ and discretize problem (8) over this interval. However, the conclusion of Lemma 2 does not hold for a finite horizon, and we need to impose additional linear constraints $y[t] \le y[t-1]$ to ensure the monotonicity of $y$. We find the optimal smoothing for two examples of this kind: $\psi(u) = \log(1 + u)$ over $[0, 100]$ (Figure 1b), and $\psi(u) = \sqrt{u}$ over $[0, 100]$ (Figure 1c). Figure 1e shows the competitive ratio achieved with the optimal smoothing of $\psi(u) = \log(1 + u)$ over $[0, u_{max}]$ as a function of $u_{max}$. Figure 1f depicts this quantity for $\psi(u) = \sqrt{u}$.

3.3 Competitive ratio bound for the sequential algorithm

In this section we provide a lower bound on the competitive ratio of the sequential algorithm (Algorithm 1). We then modify problem (8) to find a smoothing function that optimizes this competitive ratio bound for the sequential algorithm.

Theorem 3 Suppose $\psi_S$ is differentiable on an open set containing $K$ and satisfies Assumption 1. In addition, suppose there exists $c \in K$ such that $A_t F_t \preceq_K c$ for all $t$. Then
$$P_{seq} \ge \frac{1}{1 - \alpha_{\psi,\psi_S} + \theta_{c,\psi,\psi_S}}\, D^\star,$$
where $\theta$ is given by
$$\theta_{c,\psi,\psi_S} = \inf\{r \mid \langle c, \nabla\psi_S(0) - \nabla\psi_S(u) \rangle \le r\,\psi(u), \ u \in K\}.$$

Proof: Since $\psi_S$ satisfies Assumption 1, we have $\hat{y}_{t+1} \preceq_{K^*} \hat{y}_t$. Therefore, we can write
$$\sum_{t=1}^{m} \langle A_t \hat{x}_t, \hat{y}_t - \hat{y}_{t+1} \rangle \le \sum_{t=1}^{m} \langle c, \hat{y}_t - \hat{y}_{t+1} \rangle = \langle c, \hat{y}_1 - \hat{y}_{m+1} \rangle. \tag{11}$$
Now, by combining the duality gap given by Lemma 1 with (11), we get $D_{seq} \le \psi_S(\sum_{t=1}^{m} A_t \hat{x}_t) - \psi^*(\hat{y}_{m+1}) + \langle c, \nabla\psi_S(0) - \nabla\psi_S(\sum_{t=1}^{m} A_t \hat{x}_t) \rangle$. The conclusion follows from the definitions of $\alpha_{\psi,\psi_S}$ and $\theta_{c,\psi,\psi_S}$ and the fact that $D_{seq} \ge D^\star$. ∎

Based on the result of the previous theorem, we can modify the optimization problem set up in Section 3.2 for separable functions on $\mathbb{R}^n_+$ to maximize the lower bound on the competitive ratio of the sequential algorithm. Note that when $\psi$ and $\psi_S$ are separable, we have $\theta_{c,\psi,\psi_S} \le \max_i \theta_{c_i,\psi_i,\psi_{S,i}}$. Therefore, as in the previous section, to simplify the notation we drop the index $i$ and assume $\psi$ is a function of a scalar variable. The optimization problem for finding $\psi_S$ that minimizes $\theta_{c,\psi,\psi_S} - \alpha_{\psi,\psi_S}$ is as follows:
$$\begin{array}{ll} \text{minimize} & \beta \\ \text{subject to} & \int_0^u y(s)\,ds + c\,(y(0) - y(u)) - \psi^*(y(u)) \le \beta\,\psi(u), \quad \forall u \in [0, \infty), \\ & y \in C[0, \infty). \end{array} \tag{12}$$
For adwords, the optimal solution is given by $\beta = \frac{1}{1 - \exp(-\frac{1}{1+c})}$ and $y(u) = \beta\big(1 - \exp\big(\frac{u-1}{c+1}\big)\big)_+$, which gives a competitive ratio of $1 - \exp\big(\frac{-1}{c+1}\big)$. In Figure 1h we have plotted the competitive ratio achieved by solving problem (12) for $\psi(u) = \log(1 + u)$ with $u_{max} = 100$ as a function of $c$. Figure 1g shows the competitive ratio as a function of $c$ for the piecewise linear function $\psi(u) = \min(.75, u, .5u + .25)$.
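To make the discretization concrete, here is a small sketch (ours; it assumes the cvxpy package) that solves problem (10) for the adwords utility $\psi(u) = \min(u, 1)$, for which $\psi^*(y) = \min(y - 1, 0)$ on $y \ge 0$, so the discretized problem reduces to a linear program. The recovered $\beta$ approaches $e/(e-1) \approx 1.582$, i.e., the $1 - 1/e$ ratio, and $y$ approaches the closed-form solution given above.

```python
import numpy as np
import cvxpy as cp

d = 200
h = 1.0 / d                        # psi plateaus at u0 = 1, so discretize [0, 1]
u = h * np.arange(1, d + 1)
psi = np.minimum(u, 1.0)

y = cp.Variable(d, nonneg=True)    # y[t] ~ psi_S'(ht); y >= 0 keeps psi* finite
beta = cp.Variable()

# psi*(y) = min(y - 1, 0), so  -psi*(y) = max(1 - y, 0)  is convex (cp.pos).
constraints = [h * cp.cumsum(y) + cp.pos(1 - y) <= beta * psi,
               y[-1] == 0]         # Theorem 2: y(u0) = 0 at the plateau
cp.Problem(cp.Minimize(beta), constraints).solve()

print(beta.value)                  # ~ e/(e-1) = 1.582  =>  ratio ~ 1 - 1/e
y_closed = np.maximum((np.e - np.exp(u)) / (np.e - 1.0), 0.0)
print(np.abs(y.value - y_closed).max())   # O(h) discretization error
```

For problem (12), the additional term $c\,(y(0) - y(u))$ is affine in $y$, so the same script extends by adding it to the left-hand side of the constraint.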
4 Discussion and Related Work

We discuss results and papers from two communities, computer science theory and machine learning, related to this work.

Online optimization. In [8], the authors proposed an optimal algorithm for adwords with differentiable concave returns (see the examples in Section 2). Here, "optimal" means that they construct an instance of the problem for which the competitive ratio bound cannot be improved, hence showing that the bound is tight. The algorithm is stated and analyzed for a twice differentiable, separable $\psi(u)$. The assignment rule for the primal variables in their proposed algorithm is explained as a continuous process. A closer look reveals that this algorithm falls in the framework of Algorithm 2, with the only difference being that at each step, $(\hat{x}_t, \hat{y}_t)$ are chosen such that
$$\hat{x}_t \in \operatorname*{argmax}_{x \in F_t} \langle x, A_t^T \hat{y}_t \rangle, \qquad \forall i \in [n]: \ \hat{y}_{t,i} = \nabla\psi_i(v_i(u_i)), \qquad u_i = \Big(\sum_{s=1}^{t} A_s \hat{x}_s\Big)_i,$$
where $v_i : \mathbb{R}_+ \to \mathbb{R}_+$ is an increasing differentiable function given as the solution of a nonlinear differential equation that involves $\psi_i$ and may not necessarily have a closed form. The competitive ratio is also given based on the differential equation. They prove that this gives the optimal competitive ratio for the instances where $\psi_1 = \psi_2 = \ldots = \psi_m$. Note that this is equivalent to setting $\psi_{S,i}(u_i) = \psi_i(v_i(u_i))$. Since $v_i$ is nondecreasing, $\psi_{S,i}$ is a concave function. On the other hand, given a concave function with $\psi_{S,i}(\mathbb{R}_+) \subseteq \psi_i(\mathbb{R}_+)$, we can set $v_i : \mathbb{R}_+ \to \mathbb{R}_+$ as $v_i(u) = \inf\{z \mid \psi_i(z) \ge \psi_{S,i}(u)\}$. Our formulation in Section 3.2 provides a constructive way of finding the optimal smoothing. It also applies to non-smooth $\psi$.

[Figure 1: Optimal smoothing for $\psi(u) = \min(.75, u, .5u+.25)$ (a), $\psi(u) = \log(1+u)$ over $[0, 100]$ (b), and $\psi(u) = \sqrt{u}$ over $[0, 100]$ (c). The competitive ratio achieved by the optimal smoothing as a function of $u_{max}$ for $\psi(u) = \log(1+u)$ (e) and $\psi(u) = \sqrt{u}$ (f). $\psi_S^* - \psi^*$ for the piecewise linear function (d). The competitive ratio achieved by the optimal smoothing for the sequential algorithm as a function of $c$ for $\psi(u) = \min(.75, u, .5u+.25)$ (g) and $\psi(u) = \log(1+u)$ with $u_{max} = 100$ (h). (i) Competitive ratio of the simultaneous algorithm for $\psi(u) = \min(.75, u, .5u+.25)$ as a function of $m$ with optimal smoothing and Nesterov smoothing (see text).]

Online learning. As mentioned before, the dual update in Algorithm 1 is the same as in the Follow-the-Regularized-Leader (FTRL) algorithm with $-\psi^*$ as the regularization. This primal-dual perspective has been used in [20] for the design and analysis of online learning algorithms. In the online learning literature, the goal is to derive a bound on regret that optimally depends on the horizon $m$. The goal in the current paper is to provide a competitive ratio for the algorithm that depends on the function $\psi$. Regret provides a bound on the duality gap, and in order to get a competitive ratio the regularization function should be crafted based on $\psi$. A general choice of regularization which yields an optimal regret bound in terms of $m$ is not enough for a competitive ratio argument; therefore, existing results in online learning do not address our aim.

References

[1] Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, pages 263–274, 2008.
[2] Shipra Agrawal and Nikhil R Devanur. Fast algorithms for online stochastic convex programming. arXiv preprint arXiv:1410.7596, 2014.
[3] Yossi Azar, Ilan Reuven Cohen, and Debmalya Panigrahi. Online covering with convex objectives and applications. arXiv preprint arXiv:1412.3507, 2014.
[4] Niv Buchbinder, Shahar Chen, Anupam Gupta, Viswanath Nagarajan, et al. Online packing and covering framework with convex objectives. arXiv preprint arXiv:1412.8347, 2014.
[5] Niv Buchbinder, Kamal Jain, and Joseph Seffi Naor. Online primal-dual algorithms for maximizing ad-auctions revenue. In Algorithms–ESA 2007, pages 253–264. Springer, 2007.
[6] Niv Buchbinder and Joseph Naor. Online primal-dual algorithms for covering and packing. Mathematics of Operations Research, 34(2):270–286, 2009.
[7] TH Chan, Zhiyi Huang, and Ning Kang. Online convex covering and packing problems. arXiv preprint arXiv:1502.01802, 2015.
[8] Nikhil R Devanur and Kamal Jain. Online matching with concave returns. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 137–144. ACM, 2012.
[9] Nikhil R Devanur, Kamal Jain, Balasubramanian Sivan, and Christopher A Wilkens. Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 29–38. ACM, 2011.
[10] R. Eghbali, M. Fazel, and M. Mesbahi. Worst case competitive analysis for online conic optimization. In 55th IEEE Conference on Decision and Control (CDC). IEEE, 2016.
[11] Reza Eghbali, Jon Swenson, and Maryam Fazel. Exponentiated subgradient algorithm for online optimization under the random permutation model. arXiv preprint arXiv:1410.7171, 2014.
[12] Anupam Gupta and Marco Molinaro. How the experts algorithm can help solve LPs online. arXiv preprint arXiv:1407.5298, 2014.
[13] Bala Kalyanasundaram and Kirk R Pruhs. An optimal deterministic algorithm for online b-matching. Theoretical Computer Science, 233(1):319–325, 2000.
[14] Richard M Karp, Umesh V Vazirani, and Vijay V Vazirani. An optimal algorithm for on-line bipartite matching. In Proceedings of the twenty-second annual ACM symposium on Theory of computing, pages 352–358. ACM, 1990.
[15] Robert Kleinberg. A multiple-choice secretary algorithm with applications to online auctions. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, pages 630–631. Society for Industrial and Applied Mathematics, 2005.
[16] Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani. Adwords and generalized online matching. Journal of the ACM (JACM), 54(5):22, 2007.
[17] Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[18] Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[19] Shai Shalev-Shwartz and Yoram Singer. Online learning: Theory, algorithms, and applications. 2007.
[20] Shai Shalev-Shwartz and Yoram Singer. A primal-dual perspective of online learning algorithms. Machine Learning, 69(2-3):115–142, 2007.
Proximal Deep Structured Models

Shenlong Wang
University of Toronto
slwang@cs.toronto.edu

Sanja Fidler
University of Toronto
fidler@cs.toronto.edu

Raquel Urtasun
University of Toronto
urtasun@cs.toronto.edu

Abstract

Many problems in real-world applications involve predicting continuous-valued random variables that are statistically related. In this paper, we propose a powerful deep structured model that is able to learn complex non-linear functions which encode the dependencies between continuous output variables. We show that inference in our model using proximal methods can be efficiently solved as a feed-forward pass of a special type of deep recurrent neural network. We demonstrate the effectiveness of our approach on the tasks of image denoising, depth refinement and optical flow estimation.

1 Introduction

Many problems in real-world applications involve predicting a collection of random variables that are statistically related. Over the past two decades, graphical models have been widely exploited to encode these interactions in domains such as computer vision, natural language processing and computational biology. However, these models are shallow, and only a log-linear combination of hand-crafted features is learned [34]. This limits the ability to learn complex patterns, which is particularly important nowadays, as large amounts of data are available to facilitate learning. In contrast, deep learning approaches learn complex data abstractions by composing simple non-linear transformations. In recent years, they have produced state-of-the-art results in many applications such as speech recognition [17], object recognition [21], stereo estimation [38], and machine translation [33]. In some tasks, they have been shown to outperform humans, e.g., fine-grained categorization [7] and object classification [15].

Deep neural networks are typically trained using simple loss functions. Cross entropy or hinge loss are used when dealing with discrete outputs, and squared loss when the outputs are continuous. Multi-task approaches are popular, where the hope is that dependencies of the output will be captured by sharing intermediate layers among tasks [9]. Deep structured models attempt to learn complex features by taking into account the dependencies between the output variables. A variety of methods have been developed in the context of predicting discrete outputs [7, 3, 31, 39]. Several techniques unroll inference and show how the forward and backward passes of these deep structured models can be expressed as a set of standard layers [1, 14, 31, 39]. This allows for fast end-to-end training on GPUs. However, little to no attention has been given to deep structured models with continuous-valued output variables. One of the main reasons is that inference (even in the shallow model) is much less well studied, and very few solutions exist. An exception are Markov random fields (MRFs) with Gaussian potentials, where exact inference is possible (via message passing) if the precision matrix is positive semi-definite and satisfies the spectral radius condition [36]. A family of popular approaches converts the continuous inference problem into a discrete task using particle methods [18, 32]. Specific solvers have also been designed for certain types of potentials, e.g., polynomials [35] and piecewise convex functions [37].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Proximal methods are a popular solution for performing inference in continuous MRFs when the potentials are non-smooth and non-differentiable functions of the outputs [26]. In this paper, we show that proximal methods are a special type of recurrent neural network. This allows us to efficiently train a wide family of deep structured models with continuous output variables end-to-end on the GPU. We show that learning can simply be done via back-propagation for any differentiable loss function. We demonstrate the effectiveness of our algorithm on the tasks of image denoising, depth refinement and optical flow, and show superior results over competing algorithms on these tasks.

2 Proximal Deep Structured Networks

In this section, we first introduce continuous-valued deep structured models and briefly review proximal methods. We then propose proximal deep structured models and discuss how to do efficient inference and learning in these models. Finally, we discuss the relationship with previous work.

2.1 Continuous-valued Deep Structured Models

Given an input $x \in \mathcal{X}$, let $y = (y_1, \ldots, y_N)$ be the set of random variables that we are interested in predicting. The output space is a product space of all the elements, $y \in \mathcal{Y} = \prod_{i=1}^{N} \mathcal{Y}_i$, and the domain of each individual variable $y_i$ is a closed subset of the real line, i.e., $\mathcal{Y}_i \subseteq \mathbb{R}$. Let $E(x, y; w) : \mathcal{X} \times \mathcal{Y} \times \mathbb{R}^K \to \mathbb{R}$ be an energy function which encodes the problem that we are interested in solving. Without loss of generality, we assume that the energy decomposes into a sum of functions, each depending on a subset of variables:
$$E(x, y; w) = \sum_i f_i(y_i, x; w_u) + \sum_\alpha f_\alpha(y_\alpha, x; w_\alpha), \tag{1}$$
where $f_i(y_i, x; w) : \mathcal{Y}_i \times \mathcal{X} \to \mathbb{R}$ is a function that depends on a single variable (i.e., a unary term) and $f_\alpha(y_\alpha) : \mathcal{Y}_\alpha \times \mathcal{X} \to \mathbb{R}$ depends on a subset of variables $y_\alpha = (y_i)_{i \in \alpha}$ defined on a domain $\mathcal{Y}_\alpha \subseteq \mathcal{Y}$. Note that, unlike standard MRF models, the functions $f_i$ and $f_\alpha$ are non-linear functions of the parameters. The energy function is parameterized in terms of a set of weights $w$, and learning aims at finding the value of these weights which minimizes a loss function.

Given an input $x$, inference aims at finding the best configuration by minimizing the energy function:
$$y^* = \operatorname*{argmin}_{y \in \mathcal{Y}} \sum_i f_i(y_i, x; w_u) + \sum_\alpha f_\alpha(y_\alpha, x; w_\alpha). \tag{2}$$
Finding the best scoring configuration $y^*$ is equivalent to maximizing the posterior distribution $p(y|x; w) = \frac{1}{Z(x;w)}\exp(-E(x, y; w))$, with $Z(x; w)$ the partition function.

Standard multi-variate deep networks (e.g., FlowNet [11]) have potential functions which depend on a single output variable. In this simple case, inference corresponds to a forward pass that predicts the value of each variable independently. This can be interpreted as inference in a graphical model with only unary potentials $f_i$. In the general case, performing inference in MRFs with continuous variables involves solving a very challenging numerical optimization problem. Depending on the structure and properties of the potential functions, various methods have been proposed. For instance, particle methods perform approximate inference by performing message passing on a series of discrete MRFs [18, 32]. Exact inference is possible for a certain type of MRF, i.e., Gaussian MRFs with a positive semidefinite precision matrix. Efficient dedicated algorithms exist for a restricted family of functions, e.g., polynomials [35]. If certain conditions are satisfied, inference is often tackled by a group of algorithms called proximal methods [26]. In this section, we will focus on this family of inference algorithms and show that they are a particular type of recurrent net. We will use this fact to efficiently train deep structured models with continuous outputs.

2.2 A Review of Proximal Methods

Next, we briefly discuss proximal methods and refer the reader to [26] for a thorough review. Proximal algorithms are very generally applicable, but they are particularly successful at solving non-smooth, non-differentiable, or constrained problems. Their base operation is evaluating the proximal operator of a function, which involves solving a small convex optimization problem that often admits a closed-form solution. In particular, the proximal operator $\operatorname{prox}_f(x_0) : \mathbb{R} \to \mathbb{R}$ of a function $f$ is defined as
$$\operatorname{prox}_f(x_0) = \operatorname*{argmin}_{y}\,(y - x_0)^2 + f(y).$$
If $f$ is convex, the fixed points of the proximal operator of $f$ are precisely the minimizers of $f$. In other words, $\operatorname{prox}_f(x^*) = x^*$ iff $x^*$ minimizes $f$. This fixed-point property motivates the simplest proximal method, called the proximal point algorithm, which iterates $x^{(n+1)} = \operatorname{prox}_f(x^{(n)})$. All the proximal algorithms used here are based on this fixed-point property. Note that even if the function $f(\cdot)$ is not differentiable (e.g., the $\ell_1$-norm), there might exist a closed-form or easy-to-compute proximal operator. While the original proximal operator was designed for the purpose of obtaining the global optimum in convex optimization, recent work has shown that proximal methods work well for non-convex optimization as long as the proximal operator exists [20, 30, 5].

For multi-variate optimization problems the proximal operator might not be trivial to obtain (e.g., when having high-order potentials). In this case, a widely used solution is to decompose the high-order terms into small problems that can be solved through proximal operators. Examples of this family of algorithms are half-quadratic splitting [13], the alternating direction method of multipliers [12] and primal-dual methods [2]. In this work, we focus on the non-convex multi-variate case.

2.3 Proximal Deep Structured Models

In order to apply proximal algorithms to tackle the inference problem defined in Eq. (2), we require the energy functions $f_i$ and $f_\alpha$ to satisfy the following conditions:

1. There exist functions $h_i$ and $g_i$ such that $f_i(y_i, x; w) = g_i(y_i, h_i(x, w))$, where $g_i$ is a distance function;¹
2. There exists a closed-form proximal operator for $g_i(y_i, h_i(x; w))$ w.r.t. $y_i$;
3. There exist functions $h_\alpha$ and $g_\alpha$ such that $f_\alpha(y_\alpha, x; w)$ can be re-written as $f_\alpha(y_\alpha, x; w) = h_\alpha(x; w)\, g_\alpha(w_\alpha^T y_\alpha)$;
4. There exists a proximal operator for either the dual or the primal form of $g_\alpha(\cdot)$.

A fairly general family of deep structured models satisfies these conditions. Our experimental evaluation will demonstrate the applicability to a wide variety of tasks, including depth refinement, image denoising as well as optical flow. If our potential functions satisfy the conditions above, we can rewrite our objective function as follows:
$$E(x, y; w) = \sum_i g_i(y_i, h_i(x; w)) + \sum_\alpha h_\alpha(x; w)\, g_\alpha(w_\alpha^T y_\alpha). \tag{3}$$

In this paper, we make the important observation that each iteration of most existing proximal solvers contains five sub-steps: (i) compute the locally linear part; (ii) compute the proximal operator $\operatorname{prox}_{g_i}$; (iii) deconvolve; (iv) compute the proximal operator $\operatorname{prox}_{g_\alpha}$; (v) update the result through a gradient descent step.
Due to space restrictions, we show primal-dual solvers in this section, and refer the reader to the supplementary material for ADMM, half-quadratic splitting and the proximal gradient method. The general idea of primal-dual solvers is to introduce auxiliary variables $z$ to decompose the high-order terms. We can then minimize over $z$ and $y$ alternately by computing their proximal operators. In particular, we can transform the primal problem in Eq. (3) into the following saddle-point problem:
$$\min_{y \in \mathcal{Y}} \max_{z \in \mathcal{Z}} \ \sum_i g_i(y_i, h_i(x, w_u)) - \sum_\alpha h_\alpha(x, w)\, g^*_\alpha(z_\alpha) + \sum_\alpha h_\alpha(x, w)\, \langle w_\alpha^T y_\alpha, z_\alpha \rangle, \tag{4}$$
where $g^*_\alpha(\cdot)$ is the convex conjugate of $g_\alpha(\cdot)$: $g^*_\alpha(z_\alpha) = \sup\{\langle z_\alpha, z \rangle - g_\alpha(z) \mid z \in \mathcal{Z}\}$, and the convex conjugate of $g^*_\alpha$ is $g_\alpha$ itself if $g_\alpha(\cdot)$ is convex.

¹ A function $g : \mathcal{Y} \times \mathcal{Y} \to [0, \infty)$ is called a distance function iff it satisfies non-negativity, identity of indiscernibles, symmetry and the triangle inequality.

[Figure 1: The whole architecture (top) and one iteration block (bottom) of our proximal deep structured model.]

The primal-dual method solves the problem in Eq. (4) by iterating the following steps: (i) fix $y$ and minimize the energy w.r.t. $z$; (ii) fix $z$ and minimize the energy w.r.t. $y$; (iii) conduct a Nesterov extrapolation gradient step. These iterative computation steps are:
$$\begin{aligned} z_\alpha^{(t+1)} &= \operatorname{prox}_{g^*_\alpha}\Big(z_\alpha^{(t)} + \sigma_\alpha\, h_\alpha(x; w)\, w_\alpha^T \bar{y}_\alpha^{(t)}\Big), \\ y_i^{(t+1)} &= \operatorname{prox}_{g_i, h_i(x,w)}\Big(y_i^{(t)} - \sigma_\beta \sum_\alpha h_\alpha(x; w)\, w_{\alpha,i}^T z_\alpha^{(t+1)}\Big), \\ \bar{y}_i^{(t+1)} &= y_i^{(t+1)} + \sigma_{ex}\,\big(y_i^{(t+1)} - y_i^{(t)}\big), \end{aligned} \tag{5}$$
where $y^{(t)}$ is the solution at the $t$-th iteration, $z^{(t)}$ is an auxiliary variable, $h(x, w_u)$ is the deep unary network, and $\sigma_\alpha, \sigma_\beta, \sigma_{ex}$ are gradient steps. Note that different functions $g_i$ and $g_\alpha$ in (3) have different proximal operators.

It is not difficult to see that the inference process in Eq. (5) can be written as a feed-forward pass in a recurrent neural network by stacking multiple computation blocks. In particular, the first step is a convolution layer and the third step can be considered as a deconvolution layer sharing weights with the first step. The proximal operators are non-linear activation layers and the gradient descent step is a weighted sum. We also rewrite the scalar multiplication as a $1 \times 1$ convolution. We refer the reader to Fig. 1 for an illustration. The lower figure depicts one iteration of inference, while the whole inference process as a recurrent net is shown in the top figure. Note that the whole inference process has two stages: first we compute the unaries $h(x; w_u)$ with a forward pass; then we perform MAP inference through our recurrent network.

The first non-linearity for the primal-dual method is the proximal operator of the dual function of $f_\alpha$. This changes for other types of proximal methods. In the case of the alternating direction method of multipliers (ADMM), the nonlinearity corresponds to the proximal operator of $f_\alpha$; for half-quadratic splitting it is the proximal operator of $f_\alpha$'s primal form, while the second non-linearity is a least-squares solver; if $f_i$ or $f_\alpha$ reduces to a quadratic function of $y$, the algorithm is simplified, as the proximal operator of a quadratic function is a linear function [5]. We refer the reader to the supplementary material for more details on other proximal methods.
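A minimal sketch of one iteration block of Eq. (5) follows (our own simplification: a single high-order term, a dense matrix standing in for the convolution, and the proximal operators passed in as callables).

```python
import numpy as np

def pd_block(y, y_bar, z, unary, W, h_alpha, s_a, s_b, s_ex, prox_g_star, prox_g):
    """One feed-forward iteration block of Eq. (5).

    W plays the role of w_alpha (the convolution); W.T is the shared
    deconvolution; prox_g_star / prox_g are the two nonlinear activations;
    s_a, s_b, s_ex are the gradient steps (learnable 1x1 convolutions).
    """
    z = prox_g_star(z + s_a * h_alpha * (W @ y_bar))      # conv + activation
    y_new = prox_g(y - s_b * h_alpha * (W.T @ z), unary)  # deconv + activation
    y_bar = y_new + s_ex * (y_new - y)                    # extrapolation step
    return y_new, y_bar, z
```

Unrolling several such blocks, optionally with per-block weights (cf. the non-shared weights discussion in Sec. 2.4), yields the inference network of Fig. 1.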
2.4 Learning

Given training pairs composed of inputs $\{x_n\}_{n=1}^{N}$ and their corresponding outputs $\{y_n^{gt}\}_{n=1}^{N}$, learning aims at finding the parameters which minimize a regularized loss function:
$$w^* = \operatorname*{argmin}_{w} \sum_n \ell(y_n^*, y_n^{gt}) + \gamma\, r(w),$$
where $\ell(\cdot)$ is the loss, $r(\cdot)$ is a regularizer of the weights (we use the $\ell_2$-norm in practice), $y_n^*$ is the minimizer of Eq. (3) for the $n$-th example, and $\gamma$ is a scalar. Given that both $\operatorname{prox}_{f_i}$ and $\operatorname{prox}_{g_\alpha}$ (or $\operatorname{prox}_{g^*_\alpha}$) are sub-differentiable w.r.t. $w$ and $y$, back-propagation can be used to compute the gradient efficiently. We refer the reader to Fig. 2 for an illustration of our learning algorithm. Parameters such as the gradient steps $\sigma_\alpha, \sigma_\beta, \sigma_{ex}$ in Eq. (5) are considered hyper-parameters in proximal methods and are typically set manually. In contrast, we can learn them, as they are $1 \times 1$ convolution weights.

Algorithm: Learning Continuous-Valued Deep Structured Models
  Repeat until stopping criteria:
    1. Forward pass to compute $h_i(x, w)$ and $h_\alpha(x, w)$
    2. Compute $y^*$ via the forward pass in Eq. (5)
    3. Compute the gradient via the backward pass
    4. Parameter update
Figure 2: Algorithm for learning proximal deep structured models.

Non-shared weights: The weights and gradient steps for the high-order potentials are shared among all the iteration blocks in the inference network, which guarantees that the feed-forward pass explicitly minimizes the energy function in Eq. (2). In practice we found that removing the weight-sharing and fixed gradient step constraints gives extra flexibility to our model, boosting the final performance. This observation is consistent with the findings of shrinkage fields [30] and inference machines [27].

Multi-loss: Intermediate layer outputs $y^{(t)}$ should gradually converge towards the final output. Motivated by this fact, we include a loss over the intermediate computations to accelerate convergence.

2.5 Discussion and Related Work

Our approach can be considered as a continuous-valued extension of deep structured models [3, 31, 39]. Unlike previous methods, where the output lies in a discrete domain and inference is conducted through a specially designed message passing layer, the output of the proposed method lies in a continuous domain and inference is done by stacking convolution and non-linear activation layers. Without deep unary potentials, our model reduces to a generalized version of fields-of-experts [28]. The idea of stacking shrinkage functions and convolutions, as well as learning iteration-specific weights, was exploited in the learned iterative shrinkage algorithm (LISTA) [14]. LISTA can be considered as a special case of our proposed model, with sparse coding as the energy function and proximal gradient as the inference algorithm. Our approach is also closely related to the recent structured prediction energy networks (SPEN) [1], where our unary network is analogous to the feature net in SPEN and the whole energy model is analogous to the energy net. Both SPEN and our proposed method can be considered as special cases of optimization-based learning [8]. However, SPEN utilizes plain gradient descent for inference, while our network is motivated by proximal algorithms. Previous methods have tried to learn multi-variate regression networks for optical flow [11] and stereo [24], but none of these approaches model the interactions between output variables. Thus, they can be considered a special case of our model, where only the unary functions $f_i$ are present.

3 Experiments

We demonstrate the effectiveness of our approach in three different applications: image denoising, depth refinement and optical flow estimation. We employ mxnet [4] with cuDNN v4 acceleration to implement the networks, which we train end-to-end. Our experiments are conducted on a Xeon 3.2 GHz machine with a Titan X GPU.
3.1 Image Denoising

We first evaluate our method on the task of image denoising (i.e., a shallow unary) using the BSDS image dataset [23]. We corrupt each image with Gaussian noise with standard deviation σ = 25, and use the energy function typically employed for image denoising:

$$\hat{y} = \arg\min_{y \in \mathcal{Y}} \sum_i \|y_i - x_i\|_2^2 + \lambda \sum_\alpha \|w_{ho,\alpha}^T y_\alpha\|_1 \qquad (6)$$

According to the primal-dual algorithm, the activation function for the first nonlinearity is the proximal operator of the dual function of the ℓ1-norm, $\mathrm{prox}(z) = \min(|z|, 1) \cdot \mathrm{sign}(z)$, which is the projection onto an ℓ∞-norm ball; in practice we encode this function as $\mathrm{prox}(z) = \max(\min(z, 1), -1)$. The second nonlinearity is the proximal operator of the primal function of the ℓ2-norm, which is a weighted sum: $\mathrm{prox}_{\ell_2}(y, \lambda) = \frac{x + \lambda y}{1 + \lambda}$.

For training, we select 244 images, following the configuration of [30]. We randomly crop 128 × 128 clean patches from the training images and obtain the noisy input by adding random noise. We use the mean squared error as the loss function and set a weight decay strength of 0.0004 for all settings. Note that for all convolution and deconvolution layers the bias is set to zero. MSRA initialization [16] is used for the convolution parameters, and the initial gradient step for each iteration is set to 0.02. We use adam [19] with a learning rate of 0.02 and hyper-parameters β₁ = 0.9 and β₂ = 0.999 as in Kingma et al. [19]. The learning rate is divided by 2 every 50 epochs, and we use a mini-batch size of 32.

We compare against a number of recent state-of-the-art techniques [6, 40, 22, 30, 29].² The Peak Signal-to-Noise Ratio (PSNR) is used as the performance measure. As shown in Tab. 1, our proposed method outperforms all competitors in terms of both accuracy and speed. The second best performing method is RTF [29], which is two orders of magnitude slower than our approach. Our GPU implementation achieves real-time performance with more than 90 frames/second. Note that a GPU version of CSF is reported to run at 0.92 s on a 512 × 512 image on a GTX 480 [30]; however, since that GPU implementation is not available online, we cannot make a proper comparison. Tab. 2 shows performance with different hyper-parameters (filter size, number of filters per layer). As we can see, larger receptive fields and more convolution filters slightly boost the performance. Fig. 3 depicts qualitative results of our model for the denoising task.

²We chose the model with the best performance for each competing algorithm. For the CSF method, we use CSF₅^{7×7}; for RTF we use RTF₅; for our method, we pick the 7 × 7 × 64 high-order structured network.

Table 1: Natural image denoising on the BSDS dataset [23] with noise variance σ = 25.

Method     | BM3D [6] | EPLL [40] | LSSC [22] | CSF [30] | RTF [29] | Ours  | Ours GPU
PSNR       | 28.56    | 28.68     | 28.70     | 28.72    | 28.75    | 28.79 | 28.79
Time (sec) | 2.57     | 108.72    | 516.48    | 5.10     | 69.25    | 0.23  | 0.011

Table 2: Performance of the proposed model with different hyper-parameters (rows: filter size; columns: number of filters per layer).

     | 16    | 32    | 64
3×3  | 28.43 | 28.48 | 28.49
5×5  | 28.57 | 28.64 | 28.68
7×7  | 28.68 | 28.76 | 28.79

Figure 3: Qualitative results for image denoising. Left to right: noisy input, ground-truth, our result.
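For the denoising energy of Eq. (6), both nonlinearities are one-liners; a small sketch in the same conventions as the block in Section 3 above (the helper names are ours):

import numpy as np

def prox_l1_dual(z):
    # Proximal operator of the dual of the l1-norm: projection onto the
    # l-infinity ball, i.e. max(min(z, 1), -1).
    return np.clip(z, -1.0, 1.0)

def make_prox_l2(x, lam):
    # Proximal operator of the quadratic data term: the weighted average
    # (x + lam * y) / (1 + lam) of the estimate y and the noisy input x.
    return lambda y: (x + lam * y) / (1.0 + lam)

These plug into pd_block as prox_dual and prox_primal respectively; as lam approaches 0 the primal prox snaps the estimate back to the noisy observation x.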
3.2 Depth Refinement

Due to specularities and intensity changes in structured-light imaging, the sensor's output depth is often noisy. Thus, refining the depth to generate a cleaner, more accurate depth image is an important task. We conduct the depth refinement experiment on the 7 Scenes dataset [25]. We follow the configuration of [10], where the ground-truth depth was computed using KinectFusion [25]. The noise [10] has a Poisson-like distribution and is depth-dependent, which is very different from the image denoising experiment, where the noise was Gaussian.

We use the same architecture as for the task of natural image denoising. The multi-stage mean squared error is used as the loss function and the weight decay strength is set to 0.0004. Adam (β₁ = 0.9 and β₂ = 0.999) is used as the optimizer with a learning rate of 0.01. Data augmentation, including random cropping, flipping and rotation, is used to avoid overfitting. We use a mini-batch size of 16. We train our model on 1000 frames of the Chess scene and test on the other scenes. PSNR is used to evaluate the performance. As shown in Tab. 3, our approach outperforms all competing algorithms. This shows that our deep structured network is able to handle non-additive, non-Gaussian noise. Qualitative results are shown in Fig. 4: compared to the competing approaches, our method recovers better depth estimates, particularly along depth discontinuities.

Figure 4: Qualitative results for depth refinement. Left to right: input, ground-truth, Wiener filter, bilateral filter, BM3D, Filter Forest, ours.

Table 3: Performance of depth refinement on the dataset of [10].

Method | Wiener | Bilateral | LMS   | BM3D [6] | FilterForest [10] | Ours
PSNR   | 32.29  | 30.95     | 24.37 | 35.46    | 35.63             | 36.31

3.3 Optical Flow

We evaluate the task of optical flow estimation on the Flying Chairs dataset [11]. The size of the training images is 512 × 384. We formulate the energy as follows:

$$\hat{y} = \arg\min_{y \in \mathcal{Y}} \sum_i \|y_i - f_i(x_l, x_r; w_u)\|_1 + \lambda \sum_\alpha \|w_{ho,\alpha}^T y_\alpha\|_1 \qquad (7)$$

where $f_i(x_l, x_r; w_u)$ is a Flownet model [11], a fully-convolutional encoder-decoder network that predicts 2D optical flow per pixel; it has 11 encoding layers and 11 deconv layers with skip connections. $x_l$ and $x_r$ are the left and right input images respectively, and y is the desired optical flow output. Note that we use the ℓ1-norm for both the data and the regularization term. The first nonlinear activation function is the proximal operator of the ℓ1-norm's dual function, $\mathrm{prox}(z) = \min(|z|, 1) \cdot \mathrm{sign}(z)$, and the second is the proximal operator of the ℓ1-norm's primal form, $\mathrm{prox}_{\ell_1, x}(y, \lambda) = x + \mathrm{sign}(y - x) \cdot \max(|y - x| - \lambda, 0)$, which is a soft shrinkage function [26].

Figure 5: Optical flow. Left to right: first and second input, ground-truth, Flownet [11], ours.

Table 4: Performance of optical flow on the Flying Chairs dataset [11].

Method          | Flownet | Flownet + TV-ℓ1 | Our proposed
End-point error | 4.98    | 4.96            | 4.91

We build a deep structured model with 5 iteration blocks. Each iteration block has 32 convolution filters of size 7 × 7 for both the convolution and deconvolution layers, resulting in 10 convolution/deconvolution layers and 10 non-linearities. The multi-stage mean squared error is used as the loss function and the weight decay strength is set to 0.0004. Training is conducted on the training subset of the Flying Chairs dataset. Our unary model is initialized with pre-trained Flownet parameters; the high-order term is initialized with MSRA random initialization [16]. The hyper-parameter λ is pre-set to 10 in this experiment. We use random flipping, cropping and color-tuning for data augmentation, and employ the adam optimizer with the same configuration as before (β₁ = 0.9 and β₂ = 0.999) and a learning rate of 0.005. The learning rate is divided by 2 every 10 epochs and the mini-batch size is set to 12.
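The soft-shrinkage prox above completes the set of nonlinearities used in our experiments; a sketch in the same conventions (the closed form is the standard soft-thresholding identity, and the helper name is ours):

import numpy as np

def make_prox_l1_data(f, lam):
    # Proximal operator of lam * ||y - f||_1: soft shrinkage toward the
    # unary (Flownet) prediction f. It moves y toward f by at most lam
    # and snaps it to f wherever |y - f| <= lam.
    return lambda y: f + np.sign(y - f) * np.maximum(np.abs(y - f) - lam, 0.0)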
We evaluate all approaches on the test set of the Flying Chairs dataset. End-point error is used as the measure of performance. The unary-only model (i.e., plain Flownet) is used as a baseline, and we also compare against a plain TV-ℓ1 model with four pre-set gradient operators as post-processing. As shown in Tab. 4, our method outperforms all the baselines. From Fig. 5 we can see that our method is less noisy than Flownet's output and better preserves the boundaries. Note that our current model is isotropic; incorporating anisotropic filtering, such as bilateral filtering, to further boost the performance is an interesting future direction.

4 Conclusion

We have proposed a deep structured model that learns non-linear functions encoding complex dependencies between continuous output variables. We have shown that inference in our model using proximal methods can be efficiently solved as a feed-forward pass on a special type of deep recurrent neural network. We demonstrated our approach on the tasks of image denoising, depth refinement and optical flow. In the future we plan to investigate other proximal methods and a wider variety of applications.

References
[1] David Belanger and Andrew McCallum. Structured prediction energy networks. In ICML, 2016.
[2] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. JMIV, 2011.
[3] L. Chen, A. Schwing, A. Yuille, and R. Urtasun. Learning deep structured models. In ICML, 2015.
[4] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv, 2015.
[5] Y. Chen, W. Yu, and T. Pock. On learning optimized reaction diffusion processes for effective image restoration. In CVPR, 2015.
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. TIP, 2007.
[7] J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-scale object classification using label relation graphs. In ECCV, 2014.
[8] Justin Domke. Generic methods for optimization-based modeling. In AISTATS, 2012.
[9] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015.
[10] S. Fanello, C. Keskin, P. Kohli, S. Izadi, J. Shotton, A. Criminisi, U. Pattacini, and T. Paek. Filter forests for learning data-dependent convolutional kernels. In CVPR, 2014.
[11] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. FlowNet: Learning optical flow with convolutional networks. In CVPR, 2015.
[12] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 1976.
[13] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. TIP, 1995.
[14] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In ICML, 2010.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv, 2015.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.
[17] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, et al.
Deep neural networks for acoustic modeling in speech recognition. SPM, IEEE, 2012.
[18] A. Ihler and D. McAllester. Particle belief propagation. In AISTATS, 2009.
[19] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv, 2014.
[20] D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In NIPS, 2009.
[21] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[22] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In ICCV, 2009.
[23] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001.
[24] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. arXiv, 2015.
[25] R. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In ISMAR, 2011.
[26] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 2014.
[27] S. Ross, D. Munoz, M. Hebert, and J. Bagnell. Learning message-passing inference machines for structured prediction. In CVPR, 2011.
[28] S. Roth and M. Black. Fields of experts: A framework for learning image priors. In CVPR, 2005.
[29] U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother. Cascades of regression tree fields for image restoration. PAMI, 2013.
[30] U. Schmidt and S. Roth. Shrinkage fields for effective image restoration. In CVPR, 2014.
[31] A. Schwing and R. Urtasun. Fully connected deep structured networks. arXiv, 2015.
[32] E. Sudderth, A. Ihler, M. Isard, W. Freeman, and A. Willsky. Nonparametric belief propagation. Communications of the ACM, 2010.
[33] I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[34] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[35] S. Wang, A. Schwing, and R. Urtasun. Efficient inference of continuous Markov random fields with polynomial potentials. In NIPS, 2014.
[36] Y. Weiss and W. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 2001.
[37] C. Zach and P. Kohli. A convex discrete-continuous approach for Markov random fields. In ECCV, 2012.
[38] J. Zbontar and Y. LeCun. Computing the stereo matching cost with a convolutional neural network. In CVPR, 2015.
[39] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.
[40] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.
Single Pass PCA of Matrix Products

Shanshan Wu, The University of Texas at Austin, shanshan@utexas.edu
Srinadh Bhojanapalli, Toyota Technological Institute at Chicago, srinadh@ttic.edu
Sujay Sanghavi, The University of Texas at Austin, sanghavi@mail.utexas.edu
Alexandros G. Dimakis, The University of Texas at Austin, dimakis@austin.utexas.edu

Abstract

In this paper we present a new algorithm for computing a low rank approximation of the product AᵀB by taking only a single pass over the two matrices A and B. The straightforward way to do this is to (a) first sketch A and B individually, and then (b) find the top components using PCA on the sketch. Our algorithm, in contrast, retains additional summary information about A and B (e.g. row and column norms) and uses this additional information to obtain an improved approximation from the sketches. Our main analytical result establishes a comparable spectral norm guarantee to existing two-pass methods; in addition we also provide results from an Apache Spark implementation¹ that shows better computational and statistical performance on real-world and synthetic evaluation datasets.

1 Introduction

Given two large matrices A and B, we study the problem of finding a low rank approximation of their product AᵀB, using only one pass over the matrix elements. This problem has many applications in machine learning and statistics. For example, if A = B, then this general problem reduces to Principal Component Analysis (PCA). Another example is a low rank approximation of a co-occurrence matrix from large logs; e.g., A may be a user-by-query matrix and B may be a user-by-ad matrix, so AᵀB computes the joint counts for each query-ad pair. The matrices A and B can also be two large bag-of-words matrices, in which case each entry of AᵀB is the number of times a pair of words co-occurred together. As a fourth example, AᵀB can be a cross-covariance matrix between two sets of variables; e.g., A and B may be genotype and phenotype data collected on the same set of observations, and a low rank approximation of the product matrix is useful for Canonical Correlation Analysis (CCA) [3]. For all these examples, AᵀB captures pairwise variable interactions, and a low rank approximation is a way to efficiently represent the significant pairwise interactions in sub-quadratic space.

Let A and B be matrices of size d × n (d ≫ n) assumed too large to fit in main memory. To obtain a rank-r approximation of AᵀB, a naive way is to compute AᵀB first, and then perform a truncated singular value decomposition (SVD) of AᵀB. This algorithm needs O(n²d) time and O(n²) memory to compute the product, followed by an SVD of the n × n matrix. An alternative option is to directly run the power method on AᵀB without explicitly computing the product. Such an algorithm needs to access the data matrices A and B multiple times, and the disk I/O overhead of loading the matrices into memory multiple times will be the major performance bottleneck. For this reason, a number of recent papers introduce randomized algorithms that require only a few passes over the data, approximately linear memory, and also provide spectral norm guarantees.

¹The code can be found at https://github.com/wushanshan/MatrixProductPCA

The key step in these algorithms is to compute a smaller representation of the data.
This can be achieved by two different methods: (1) dimensionality reduction, i.e., matrix sketching [15, 5, 14, 6]; (2) random sampling [7, 1]. The recent results of Cohen et al. [6] provide the strongest spectral norm guarantee of the former. They show that a sketch size of O(r̃/ε²) suffices for the sketched matrices Ã and B̃ to achieve a spectral error of ε, where r̃ is the maximum stable rank of A and B. Note that ÃᵀB̃ is not the desired rank-r approximation of AᵀB. On the other hand, [1] is a recent sampling method with very good performance guarantees. The authors consider entrywise sampling based on column norms, followed by a matrix completion step to compute low rank approximations. There is also a lot of interesting work on streaming PCA, but none of it can be directly applied to the general case when A is different from B (see Figure 4(c)). Please refer to Appendix D for more discussion of related work.

Despite the significant volume of prior work, there is no method that computes a rank-r approximation of AᵀB when the entries of A and B are streaming in a single pass.² Bhojanapalli et al. [1] consider a two-pass algorithm which computes column norms in the first pass and uses them to sample in a second pass over the matrix elements. In this paper, we combine ideas from the sketching and sampling literature to obtain the first algorithm that requires only a single pass over the data.

Contributions: We propose a one-pass algorithm SMP-PCA (which stands for Streaming Matrix Product PCA) that computes a rank-r approximation of AᵀB in time $O\big((\mathrm{nnz}(A) + \mathrm{nnz}(B))\, \kappa^2 r^3 \tilde{r} / \epsilon^2 + n r^6 \kappa^4 \tilde{r}^3 / \epsilon^4\big)$. Here nnz(·) is the number of non-zero entries, κ is the condition number, r̃ is the maximum stable rank, and ε measures the spectral norm error. Existing two-pass algorithms such as [1] typically have a longer runtime than our algorithm (see Figure 3(a)). We also compare our algorithm with the simple idea that first sketches A and B separately and then performs SVD on the product of their sketches. We show that our algorithm always achieves better accuracy and can perform arbitrarily better if the column vectors of A and B come from a cone (see Figures 2, 4(b), 3(b)).

The central idea of our algorithm is a novel rescaled JL embedding that combines information from matrix sketches and vector norms. This allows us to get better estimates of dot products of high dimensional vectors compared to previous sketching approaches. We explain the benefit compared to a naive JL embedding in Figure 2 and the related discussion; we believe it may be of more general interest beyond low rank matrix approximations. We prove that our algorithm recovers a low rank approximation of AᵀB up to an error that depends on ‖AᵀB − (AᵀB)_r‖ and ‖AᵀB‖, decaying with increasing sketch size and number of samples (Theorem 3.1). The first term is a consequence of low rank approximation and vanishes if AᵀB is exactly rank-r. The second term results from matrix sketching and subsampling; the bounds have similar dependencies as in [6].

We implement SMP-PCA in Apache Spark and perform several distributed experiments on synthetic and real datasets. Our distributed implementation uses several design innovations described in Section 4 and Appendix C.5, and it is the only Spark implementation that we are aware of that can handle matrices that are large in both dimensions. Our experiments show that we improve by approximately a factor of 2×
in running time compared to the previous state of the art, and scale gracefully as the cluster size increases. The source code is available at [18]. In addition to better performance, our algorithm offers another advantage: it is possible to compute low-rank approximations to AᵀB even when the entries of the two matrices arrive in some arbitrary order (as would be the case in streaming logs). We can therefore discover significant correlations even when the original datasets cannot be stored, for example due to storage or privacy limitations.

2 Problem setting and algorithms

Consider the following problem: given two matrices A ∈ ℝ^{d×n₁} and B ∈ ℝ^{d×n₂} that are stored on disk, find a rank-r approximation of their product AᵀB. In particular, we are interested in the setting where A, B and AᵀB are all too large to fit into memory. This is common for modern large-scale machine learning applications. For this setting, we develop a single-pass algorithm, SMP-PCA, that computes the rank-r approximation without explicitly forming the entire matrix AᵀB.²

²One straightforward idea is to sketch each matrix individually and perform SVD on the product of the sketches. We compare against that scheme and show that we can perform arbitrarily better using our rescaled JL embedding.

Notation. Throughout the paper, we use A(i, j) or A_{ij} to denote the (i, j) entry of a matrix A. Let A_i and A^j denote the i-th column vector and the j-th row vector. We use ‖A‖_F for the Frobenius norm, and ‖A‖ for the spectral (or operator) norm. The optimal rank-r approximation of a matrix A is A_r, which can be found by SVD. For any positive integer n, let [n] denote the set {1, 2, ..., n}. Given a set Ω ⊆ [n₁] × [n₂] and a matrix A ∈ ℝ^{n₁×n₂}, we define P_Ω(A) ∈ ℝ^{n₁×n₂} as the projection of A on Ω, i.e., P_Ω(A)(i, j) = A(i, j) if (i, j) ∈ Ω and 0 otherwise.

2.1 SMP-PCA

Our algorithm SMP-PCA (Streaming Matrix Product PCA) takes four parameters as input: the desired rank r, the number of samples m, the sketch size k, and the number of iterations T. A performance guarantee involving these parameters is provided in Theorem 3.1. As illustrated in Figure 1, our algorithm has three main steps: 1) compute sketches and side information in one pass over A and B; 2) given this partial information about A and B, estimate important entries of AᵀB; 3) compute a low rank approximation given estimates of a few entries of AᵀB. We now explain each step in detail.

Figure 1: An overview of our algorithm. A single pass is performed over the data to produce the sketched matrices Ã, B̃ and the column norms ‖A_i‖, ‖B_j‖, for i ∈ [n₁] and j ∈ [n₂]. We then compute the sampled matrix P_Ω(M̃) through a biased sampling process, where P_Ω(M̃)(i, j) = M̃(i, j) if (i, j) ∈ Ω and zero otherwise. Here Ω represents the set of sampled entries. The (i, j)-th entry of M̃ is given in Eq. (2). Performing matrix completion on P_Ω(M̃) gives the desired rank-r approximation.

Algorithm 1 SMP-PCA: Streaming Matrix Product PCA
1: Input: A ∈ ℝ^{d×n₁}, B ∈ ℝ^{d×n₂}; desired rank: r; sketch size: k; number of samples: m; number of iterations: T.
2: Construct a random matrix Π ∈ ℝ^{k×d}, where Π(i, j) ∼ N(0, 1/k), ∀(i, j) ∈ [k] × [d]. Perform a single pass over A and B to obtain: Ã = ΠA, B̃ = ΠB, and ‖A_i‖, ‖B_j‖, for i ∈ [n₁] and j ∈ [n₂].
3: Sample each entry (i, j) ∈ [n₁] × [n₂] independently with probability q̂_{ij} = min{1, q_{ij}}, where q_{ij} is defined in Eq. (1); maintain a set Ω ⊆ [n₁] × [n₂] which stores all the sampled pairs (i, j).
4: Define M̃ ∈ ℝ^{n₁×n₂}, where M̃(i, j) is given in Eq. (2). Calculate P_Ω(M̃) ∈ ℝ^{n₁×n₂}, where P_Ω(M̃)(i, j) = M̃(i, j) if (i, j) ∈ Ω and zero otherwise.
5: Run WAltMin(P_Ω(M̃), Ω, r, q̂, T); see Appendix A for more details.
6: Output: Û ∈ ℝ^{n₁×r} and V̂ ∈ ℝ^{n₂×r}.
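As a reference for the step-by-step description below, here is a minimal NumPy sketch (ours, not the Spark/SRHT implementation of Section 4) of line 2 of Algorithm 1, i.e. the single pass that produces the sketches and column norms:

import numpy as np

def one_pass_sketch(A, B, k, seed=0):
    # Line 2 of Algorithm 1. A and B are (d, n1) and (d, n2) arrays;
    # here they sit in memory for simplicity, but each column only
    # needs to be seen once, so the same pass works on streamed data.
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    Pi = rng.standard_normal((k, d)) / np.sqrt(k)  # Pi(i, j) ~ N(0, 1/k)
    A_sk, B_sk = Pi @ A, Pi @ B                    # sketches: tilde-A, tilde-B
    normsA = np.linalg.norm(A, axis=0)             # ||A_i|| for i in [n1]
    normsB = np.linalg.norm(B, axis=0)             # ||B_j|| for j in [n2]
    return A_sk, B_sk, normsA, normsB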
Step 1: Compute sketches and side information in one pass over A and B. In this step we compute the sketches Ã := ΠA and B̃ := ΠB, where Π ∈ ℝ^{k×d} is a random matrix with i.i.d. N(0, 1/k) entries. It is known that Π satisfies an "oblivious Johnson-Lindenstrauss (JL) guarantee" [15][17] and it helps preserve the top row spaces of A and B [5]. Note that any sketching matrix Π that is an oblivious subspace embedding can be considered here, e.g., the sparse JL transform or the randomized Hadamard transform (see [6] for more discussion). Besides Ã and B̃, we also compute the ℓ2 norms of all column vectors, i.e., ‖A_i‖ and ‖B_j‖, for i ∈ [n₁] and j ∈ [n₂]. We use this additional information to design better estimates of AᵀB in the next step, and also to determine important entries of ÃᵀB̃ to sample. Note that this is the only step that needs a pass over the data.

Step 2: Estimate important entries of AᵀB by rescaled JL embedding. In this step we use the partial information obtained from the previous step to compute a few important entries of ÃᵀB̃. We first determine which entries of ÃᵀB̃ to sample, and then propose a novel rescaled JL embedding for estimating those entries. We sample entry (i, j) of AᵀB independently with probability q̂_{ij} = min{1, q_{ij}}, where

$$q_{ij} = m \cdot \left( \frac{\|A_i\|^2}{2 n_2 \|A\|_F^2} + \frac{\|B_j\|^2}{2 n_1 \|B\|_F^2} \right). \qquad (1)$$

Let Ω ⊆ [n₁] × [n₂] be the set of sampled entries (i, j). Since $E(\sum_{i,j} q_{ij}) = m$, the expected number of sampled entries is roughly m. The special form of q_{ij} ensures that we can draw m samples in O(n₁ + m log(n₂)) time; we show how to do this in Appendix C.5.

Note that q_{ij} intuitively captures important entries of AᵀB by giving higher weight to heavy rows and columns. We show in Section 3 that this sampling indeed generates a good approximation to the matrix AᵀB. The biased sampling distribution of Eq. (1) was first proposed by Bhojanapalli et al. [1]. However, their algorithm [1] needs a second pass to compute the sampled entries, while we propose a novel way of estimating the dot products, using information obtained in the first step.

Define M̃ ∈ ℝ^{n₁×n₂} as

$$\tilde{M}(i, j) = \|A_i\| \cdot \|B_j\| \cdot \frac{\tilde{A}_i^T \tilde{B}_j}{\|\tilde{A}_i\| \cdot \|\tilde{B}_j\|}. \qquad (2)$$

Note that we will not compute and store M̃; instead, we only calculate M̃(i, j) for (i, j) ∈ Ω. This matrix is denoted as P_Ω(M̃), where P_Ω(M̃)(i, j) = M̃(i, j) if (i, j) ∈ Ω and 0 otherwise.

Figure 2: (a) The rescaled JL embedding (red dots) captures the dot products with smaller variance than the JL embedding (blue triangles). Mean squared error: 0.053 versus 0.129. (b) The lower figure illustrates how to construct unit-norm vectors from a cone with angle θ: let x be a fixed unit-norm vector and let t be a random Gaussian vector with expected norm tan(θ/2); we set y to either x + t or −(x + t) with probability one half, and then normalize it. The upper figure plots the ratio of spectral norm errors ‖AᵀB − ÃᵀB̃‖ / ‖AᵀB − M̃‖ when the column vectors of A and B are unit vectors drawn from a cone with angle θ. Clearly, M̃ has better accuracy than ÃᵀB̃ for all possible values of θ, especially when θ is small.
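A small sketch (ours) of the two estimators compared in Figure 2(a), reusing one_pass_sketch from above; plain_jl is the ÃᵢᵀB̃ⱼ estimate and rescaled_jl implements Eq. (2):

import numpy as np

def plain_jl(A_sk, B_sk, i, j):
    # Standard JL estimate of entry (i, j) of A^T B.
    return A_sk[:, i] @ B_sk[:, j]

def rescaled_jl(A_sk, B_sk, normsA, normsB, i, j):
    # Eq. (2): keep the angle measured in sketch space, but rescale by
    # the exact column norms, removing the norm distortion of the sketch.
    cos_t = A_sk[:, i] @ B_sk[:, j] / (
        np.linalg.norm(A_sk[:, i]) * np.linalg.norm(B_sk[:, j]))
    return normsA[i] * normsB[j] * cos_t

When two columns are exactly parallel (cos θ = ±1), the sketched vectors are parallel as well, so the rescaled estimate is exact; this matches the behavior near ±1 in Figure 2(a).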
We now explain the intuition behind Eq. (2), and why M̃ is a better estimator than ÃᵀB̃. To estimate the (i, j) entry of AᵀB, a straightforward way is to use ÃᵢᵀB̃ⱼ = ‖Ãᵢ‖ · ‖B̃ⱼ‖ · cos θ̃_{ij}, where θ̃_{ij} is the angle between the vectors Ãᵢ and B̃ⱼ. Since we already know the actual column norms, a potentially better estimator is ‖A_i‖ · ‖B_j‖ · cos θ̃_{ij}. This removes the uncertainty that comes from distorted column norms.³

Figure 2(a) compares the two estimators ÃᵢᵀB̃ⱼ (JL embedding) and M̃(i, j) (rescaled JL embedding) for dot products. We plot simulation results on pairs of unit-norm vectors with different angles. The vectors have dimension 1,000 and the sketching matrix has dimension 10-by-1,000. Clearly, rescaling by the actual norms helps reduce the estimation uncertainty. This phenomenon is more prominent when the true dot products are close to ±1, which makes sense because cos θ has a small slope when it approaches ±1, and hence the uncertainty from the angles may produce smaller distortion than that from the norms. In the extreme case of cos θ = ±1, the rescaled JL embedding can perfectly recover the true dot product.⁴

In the lower part of Figure 2(b) we illustrate how to construct unit-norm vectors from a cone with angle θ. Given a fixed unit-norm vector x, and a random Gaussian vector t with expected norm tan(θ/2), we construct a new vector y by randomly picking one of the two possible choices x + t and −(x + t), and then renormalizing it. Supposing the columns of A and B are unit vectors randomly drawn from a cone with angle θ, we plot the ratio of spectral norm errors ‖AᵀB − ÃᵀB̃‖ / ‖AᵀB − M̃‖ in Figure 2(b). We observe that M̃ always outperforms ÃᵀB̃ and can be much better when θ approaches zero, which agrees with the trend indicated in Figure 2(a).

Step 3: Compute a low rank approximation given estimates of a few entries of AᵀB. Finally, we compute the low rank approximation of AᵀB from the samples using alternating least squares:

$$\min_{U, V \in \mathbb{R}^{n \times r}} \sum_{(i,j) \in \Omega} w_{ij} \big( e_i^T U V^T e_j - \tilde{M}(i, j) \big)^2, \qquad (3)$$

where w_{ij} = 1/q̂_{ij} denotes the weights, and e_i, e_j are standard basis vectors. This is a popular technique for low rank recovery and matrix completion (see [1] and the references therein). After T iterations, we obtain a rank-r approximation of M̃, presented in the convenient factored form. This subroutine is quite standard, so we defer the details to Appendix A.

3 Analysis

We now present the main theoretical result. Theorem 3.1 characterizes the interaction between the sketch size k, the sampling complexity m, the number of iterations T, and the spectral error ‖(AᵀB)_r − \widehat{AᵀB}_r‖, where \widehat{AᵀB}_r is the output of SMP-PCA, and (AᵀB)_r is the optimal rank-r approximation of AᵀB. Note that the following theorem assumes that A and B have the same size; for the general case of n₁ ≠ n₂, Theorem 3.1 is still valid with n = max{n₁, n₂}.

Theorem 3.1. Given matrices A ∈ ℝ^{d×n} and B ∈ ℝ^{d×n}, let (AᵀB)_r be the optimal rank-r approximation of AᵀB. Define r̃ = max{‖A‖_F²/‖A‖², ‖B‖_F²/‖B‖²} as the maximum stable rank, and κ = σ₁*/σ_r* as the condition number of (AᵀB)_r, where σ_i* is the i-th singular value of AᵀB. Let \widehat{AᵀB}_r be the output of Algorithm SMP-PCA. If the input parameters k, m, and T satisfy

$$k \geq C_1 \, \frac{\|A\|^2 \|B\|^2}{\|A^T B\|_F^2} \cdot \frac{\kappa^2 r^3 \max\{\tilde{r}, 2\log(n)\} + \log(3/\delta)}{\epsilon^2}, \qquad (4)$$

$$m \geq C_2 \, \frac{\tilde{r}^2 \big(\|A\|_F^2 + \|B\|_F^2\big)}{\|A^T B\|_F} \cdot \frac{n r^3 \kappa^2 \log(n)\, T^2}{\epsilon^2}, \qquad (5)$$

$$T \geq \log\!\left( \frac{\|A\|_F + \|B\|_F}{\epsilon} \right), \qquad (6)$$

where C₁ and C₂ are global constants independent of A and B, then with probability at least 1 − δ we have

$$\big\| (A^T B)_r - \widehat{A^T B}_r \big\| \leq \epsilon \, \|A^T B - (A^T B)_r\|_F + \epsilon + \epsilon\, \tilde{r}. \qquad (7)$$
³We also tried using the cosine rule for computing the dot product, and another sketching method specifically designed for preserving angles [2], but empirically those methods perform worse than our current estimator.

⁴See http://wushanshan.github.io/files/RescaledJL_project.pdf for more results.

Remark 1. Compared to the two-pass algorithm proposed by [1], we notice that Eq. (7) contains an additional error term εr̃. This extra term captures the cost incurred when we approximate entries of AᵀB by Eq. (2) instead of using the actual values. The exact tradeoff between ε and k is given by Eq. (4): on one hand, we want a small k so that the sketched matrices can fit into memory; on the other hand, the parameter k controls how much information is lost during sketching, and a larger k gives a more accurate estimation of the inner products.

Remark 2. The dependence on (‖A‖_F² + ‖B‖_F²)/‖AᵀB‖_F captures one difficult situation for our algorithm. If ‖AᵀB‖_F is much smaller than ‖A‖_F or ‖B‖_F, which could happen, e.g., when many column vectors of A are orthogonal to those of B, then SMP-PCA requires many samples to work. This is reasonable: imagine that AᵀB is close to an identity matrix; it may then be hard to tell it apart from an all-zero matrix without enough samples. Nevertheless, removing this dependence is an interesting direction for future research.

Remark 3. For the special case of A = B, SMP-PCA computes a rank-r approximation of the matrix AᵀA in a single pass. Theorem 3.1 provides an error bound in spectral norm for the residual matrix (AᵀA)_r − \widehat{AᵀA}_r. Most results in the online PCA literature use the Frobenius norm as the performance measure. Recently, [10] provided an online PCA algorithm with a spectral norm guarantee. They achieve a spectral norm bound of εσ₁ + σ_{r+1}, which is stronger than ours. However, their algorithm requires a target dimension of O(r log n/ε²), i.e., the output is a matrix of size n-by-O(r log n/ε²), while the output of SMP-PCA is simply n-by-r.

Remark 4. We defer our proofs to Appendix C. The proof proceeds in three steps. In Appendix C.2, we show that the sampled matrix provides a good approximation of the actual matrix AᵀB. In Appendix C.3, we show that there is a geometric decrease in the distance between the computed subspaces Û, V̂ and the optimal ones U*, V* at each iteration of the WAltMin algorithm. The spectral norm bound in Theorem 3.1 is then proved using the results from the previous two steps.

Computation complexity. We now analyze the computation complexity of SMP-PCA. In Step 1, we compute the sketched matrices of A and B, which requires O(nnz(A)k + nnz(B)k) flops, where nnz(·) denotes the number of non-zero entries. The main job of Step 2 is to sample a set Ω and calculate the corresponding inner products, which takes O(m log(n) + mk) flops; here we define n as max{n₁, n₂} for simplicity. According to Eq. (4), we have log(n) = O(k), so Step 2 takes O(mk) flops. In Step 3, we run alternating least squares on the sampled matrix, which can be completed in O((mr² + nr³)T) flops. Since Eq. (5) implies nr = O(m), the computation complexity of Step 3 is O(mr²T). Therefore, SMP-PCA has a total computation complexity of O(nnz(A)k + nnz(B)k + mk + mr²T).

4 Numerical Experiments

Spark implementation. We implement SMP-PCA in Apache Spark 1.6.2 [19]. For the purpose of comparison, we also implement a two-pass algorithm, LELA [1], in Spark.⁵
The matrices A and B are stored as a resilient distributed dataset (RDD) on disk (by setting the StorageLevel to DISK_ONLY). We implement the two passes of LELA using the treeAggregate method. For SMP-PCA, we choose the subsampled randomized Hadamard transform (SRHT) [16] as the sketching matrix. The biased sampling procedure is performed using binary search (see Appendix C.5 for how to sample m elements in O(m log n) time). After obtaining the sampled matrix, we run ALS (alternating least squares) to get the desired low-rank matrices. More details can be found at [18].

⁵To the best of our knowledge, this is the first distributed implementation of LELA.

Figure 3: (a) Spark 1.6.2 running time on a 150GB dataset. All nodes are m3.2xlarge EC2 instances; see [18] for more details. (b) Spectral norm error achieved by three algorithms on two datasets: SIFT10K (left) and NIPS-BW (right). SMP-PCA outperforms SVD(ÃᵀB̃) by a factor of 1.8 for SIFT10K and 1.1 for NIPS-BW. The error of SMP-PCA keeps decreasing as the sketch size k grows.

Description of datasets. We test our algorithm on synthetic datasets and three real datasets: SIFT10K [9], NIPS-BW [11], and URL-reputation [12]. For the synthetic data, we generate matrices A and B as GD, where G has entries independently drawn from the standard Gaussian distribution, and D is a diagonal matrix with D_{ii} = 1/i. SIFT10K is a dataset of 10,000 images, each represented by 128 features. We set A as the image-by-feature matrix. The task here is to compute a low rank approximation of AᵀA, which is a standard PCA task. The NIPS-BW dataset contains bag-of-words features extracted from 1,500 NIPS papers. We divide the papers into two subsets, and let A and B be the corresponding word-by-paper matrices, so AᵀB computes the counts of co-occurring words between the two sets of papers. The original URL-reputation dataset has 2.4 million
As shown in Figure 3(b) and Table 1, LELA always achieves a smaller spectral norm error than SMP-PCA, which makes sense because LELA takes two passes and hence has more chances exploring the data. Besides, we observe that as the sketch size increases, the error of SMP-PCA keeps decreasing and gets closer to that of LELA. In Figure 3(a) we compare the runtime of SMP-PCA and LELA using a 150GB synthetic dataset on m3.2xlarge Amazon EC2 instances6 . The matrices A and B have dimension n = d = 100, 000. The sketch dimension is set as k = 2, 000. We observe that the speedup achieved by SMP-PCA is more prominent for small clusters (e.g., 56 mins versus 34 mins on a cluster of size two). This is possibly due to the increasing spark overheads at larger clusters, see [8] for more related discussion. eT B). e In Figure 4(b) we repeat the experiment in Section 2 Comparison of SMP-PCA and SVD(A eT B) e refers to by generating column vectors of A and B from a cone with angle ?. Here SVD(A 6 Each machine has 8 cores, 30GB memory, and 2?80GB SSD. 7 105 Ratio of errors vs theta Spectral norm error Spectral norm error 0.5 k = 400 0.4 k = 800 0.3 0.2 1 2 3 4 # Samples / nrlogn (a) 100 0 ? /4 ? /2 3 ? /4 (b) ? 1 0.8 ATr Br SMP-PCA 0.6 0.4 0.2 200 400 600 800 1000 Sketch size (k) (c) Figure 4: (a) A phase transition occurs when the sample complexity m = ?(nr log n). (b) This eT B) e over that of SMP-PCA. The columns of figure plots the ratio of spectral norm error of SVD(A A and B are unit vectors drawn from a cone with angle ?. We see that the ratio of errors scales to infinity as the cone angle shrinks. (c) If the top r left singular vectors of A are orthogonal to those of B, the product ATr Br is a very poor low rank approximation of AT B. eT B) e computing SVD on the sketched matrices7 . We plot the ratio of the spectral norm error of SVD(A over that of SMP-PCA, as a function of ?. Note that this is different from Figure 2(b), as now we take the effect of random sampling and SVD into account. However, the trend in both figures are the eT B) e and can be arbitrarily better as ? goes to zero. same: SMP-PCA always outperforms SVD(A eT B) e on two real datasets SIFK10K and NIPS-BW. In Figure 3(b) we compare SMP-PCA and SVD(A [ [ T B ||/||AT B||, where A T B is The y-axis represents spectral norm error, defined as ||AT B A r r the rank-r approximation found by a specific algorithm. We observe that SMP-PCA outperforms eT B) e by a factor of 1.8 for SIFT10K and 1.1 for NIPS-BW. SVD(A eT B). e The reasons are Now we explain why SMP-PCA produces a more accurate result than SVD(A f is a better estimator for AT B than A eT B e (Figure 2). twofold. First, our rescaled JL embedding M f, and Second, the noise due to sampling is relatively small compared to the benefit obtained from M T e f e hence the final result computed using P? (M ) still outperforms SVD(A B). Comparison of SMP-PCA and ATr Br . Let Ar and Br be the optimal rank-r approximation of A and B, we show that even if one could use existing methods (e.g., algorithms for streaming PCA) to estimate Ar and Br , their product ATr Br can be a very poor low rank approximation of AT B. This is demonstrated in Figure 4(c), where we intentionally make the top r left singular vectors of A orthogonal to those of B. 5 Conclusion We develop a novel one-pass algorithm SMP-PCA that directly computes a low rank approximation of matrix product, using ideas of matrix sketching and entrywise sampling. 
As a subroutine of our algorithm, we propose rescaled JL for estimating entries of AT B, which has smaller error compared ? This we believe can be extended to other applications. Moreover, to the standard estimator A?T B. SMP-PCA allows the non-zero entries of A and B to be presented in any arbitrary order, and hence can be used for steaming applications. We design a distributed implementation for SMP-PCA. Our eT B), e and is experimental results show that SMP-PCA can perform arbitrarily better than SVD(A significantly faster compared to algorithms that require two or more passes over the data. Acknowledgements We thank the anonymous reviewers for their valuable comments. This research has been supported by NSF Grants CCF 1344179, 1344364, 1407278, 1422549, 1302435, 1564000, and ARO YIP W911NF-14-1-0258. 7 This can be done by standard power iteration based method, without explicitly forming the product matrix T e e A B, whose size is too big to fit into memory according to our assumption. 8 References [1] S. Bhojanapalli, P. Jain, and S. Sanghavi. Tighter low-rank approximation via sampling the leveraged element. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 902?920. SIAM, 2015. [2] P. T. Boufounos. Angle-preserving quantized phase embeddings. In SPIE Optical Engineering+ Applications. International Society for Optics and Photonics, 2013. [3] X. Chen, H. Liu, and J. G. Carbonell. Structured sparse canonical correlation analysis. In International Conference on Artificial Intelligence and Statistics, pages 199?207, 2012. [4] Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Completing any low-rank matrix, provably. arXiv preprint arXiv:1306.2979, 2013. [5] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, pages 81?90. ACM, 2013. [6] M. B. Cohen, J. Nelson, and D. P. Woodruff. Optimal approximate matrix product in terms of stable rank. arXiv preprint arXiv:1507.02268, 2015. [7] P. Drineas, R. Kannan, and M. W. Mahoney. Fast monte carlo algorithms for matrices ii: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36(1):158?183, 2006. [8] A. Gittens, A. Devarakonda, E. Racah, M. F. Ringenburg, L. Gerhardt, J. Kottalam, J. Liu, K. J. Maschhoff, S. Canon, J. Chhugani, P. Sharma, J. Yang, J. Demmel, J. Harrell, V. Krishnamurthy, M. W. Mahoney, and Prabhat. Matrix factorization at scale: a comparison of scientific data analytics in spark and C+MPI using three case studies. arXiv preprint arXiv:1607.01335, 2016. [9] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(1):117?128, 2011. [10] Z. Karnin and E. Liberty. Online pca with spectral bounds. In Proceedings of The 28th Conference on Learning Theory (COLT), volume 40, pages 1129?1140, 2015. [11] M. Lichman. UCI machine learning repository. http://archive.ics.uci.edu/ml, 2013. [12] J. Ma, L. K. Saul, S. Savage, and G. M. Voelker. Identifying suspicious urls: an application of large-scale online learning. In Proceedings of the 26th annual international conference on machine learning, pages 681?688. ACM, 2009. [13] Z. Ma, Y. Lu, and D. Foster. Finding linear structure in large datasets with scalable canonical correlation analysis. arXiv preprint arXiv:1506.08170, 2015. [14] A. Magen and A. Zouzias. 
Low rank matrix-valued chernoff bounds and approximate matrix multiplication. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pages 1422? 1436. SIAM, 2011. [15] T. Sarlos. Improved approximation algorithms for large matrices via random projections. In Foundations of Computer Science, 2006. FOCS?06. 47th Annual IEEE Symposium on, pages 143?152. IEEE, 2006. [16] J. A. Tropp. Improved analysis of the subsampled randomized hadamard transform. Advances in Adaptive Data Analysis, pages 115?126, 2011. [17] D. P. Woodruff. Sketching as a tool for numerical linear algebra. arXiv preprint arXiv:1411.4357, 2014. [18] S. Wu, S. Bhojanapalli, S. Sanghavi, and A. Dimakis. Github repository for "single-pass pca of matrix products". https://github.com/wushanshan/MatrixProductPCA, 2016. [19] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation, 2012. 9
6075 |@word repository:2 mr2:3 stronger:1 norm:38 loading:1 disk:3 kbkf:2 simulation:2 tried:1 covariance:2 decomposition:1 reduction:1 liu:2 contains:2 lichman:1 woodruff:3 ours:1 franklin:1 outperforms:5 existing:3 ka:4 com:2 current:1 savage:1 chicago:1 happen:1 numerical:2 benign:1 remove:1 plot:5 designed:1 v:2 half:1 intelligence:2 core:1 alexandros:1 provides:3 quantized:1 node:1 c2:2 symposium:5 qij:8 prove:1 suspicious:1 focs:1 overhead:2 combine:2 introduce:1 privacy:1 ofword:1 pairwise:2 expected:3 roughly:1 jegou:1 decreasing:2 actual:4 increasing:2 spain:1 discover:1 notation:1 provided:1 estimating:3 moreover:1 bhojanapalli:6 what:1 dimakis:3 finding:2 guarantee:7 runtime:3 exactly:1 k2:2 control:1 unit:8 grant:1 positive:1 engineering:1 io:2 consequence:1 despite:1 approximately:2 therein:1 co:8 factorization:1 smc:1 analytics:1 lost:1 implement:4 kat:7 procedure:1 nnz:8 significantly:1 projection:2 convenient:1 word:4 refers:1 magen:1 get:4 cannot:1 close:2 operator:1 storage:1 demonstrated:1 reviewer:1 sarlos:1 straightforward:3 go:1 independently:3 shanshan:2 spark:8 recovery:1 simplicity:1 amazon:1 identifying:1 factored:1 estimator:6 rule:1 embedding:14 handle:1 srht:1 racah:1 krishnamurthy:1 imagine:1 tan:2 suppose:1 user:2 exact:1 target:1 us:3 element:4 trend:2 preprint:5 capture:5 calculate:3 ensures:1 decrease:1 rescaled:10 technological:1 valuable:1 intuition:1 vanishes:1 complexity:8 algebra:1 triangle:1 drineas:1 joint:1 represented:2 jain:1 fast:1 monte:1 demmel:1 query:2 artificial:1 tell:1 quite:1 whose:1 kai:7 larger:2 voelker:1 distortion:1 valued:1 otherwise:5 statistic:2 ward:1 transform:4 final:1 online:4 advantage:1 analytical:1 propose:4 aro:1 interaction:3 product:29 douze:1 uci:2 hadamard:3 networked:1 achieve:1 description:1 frobenius:2 normalize:1 cluster:6 produce:3 generating:1 help:2 illustrate:1 develop:2 completion:3 nearest:1 ij:3 job:1 eq:10 come:2 direction:1 liberty:1 kb:1 dii:1 require:2 resilient:2 maschhoff:1 suffices:1 anonymous:1 tighter:1 exploring:1 considered:1 ic:1 bj:1 rdd:1 major:1 achieves:3 purpose:1 estimation:2 bag:2 utexas:3 individually:2 agrees:2 establishes:1 tool:1 eti:1 clearly:2 always:4 gaussian:3 nr3:2 ej:6 rank:46 indicates:1 contrast:1 sense:2 abstraction:1 streaming:7 typically:1 entire:1 wij:2 subroutine:2 interested:1 provably:1 sketched:5 colt:1 denoted:1 kottalam:1 art:1 special:2 yip:1 aware:1 construct:4 karnin:1 sampling:13 chernoff:1 represents:2 future:1 sanghavi:5 few:4 oblivious:2 modern:1 randomly:2 subsampled:2 phase:3 bw:6 n1:15 maintain:1 interest:1 chowdhury:1 evaluation:1 mahoney:2 photonics:1 extreme:1 genotype:1 accurate:2 closer:1 partial:2 orthogonal:3 unless:1 mccauley:1 divide:1 desired:5 bk2f:1 renormalize:1 theoretical:1 mk:3 instance:1 column:15 ar:3 w911nf:1 retains:1 cost:1 entry:28 subset:3 johnson:1 too:3 stored:3 dependency:1 gerhardt:1 synthetic:8 gd:1 international:3 randomized:4 ec2:2 siam:5 picking:1 together:1 sketching:11 squared:1 central:1 rn1:5 choose:1 possibly:1 leveraged:1 worse:1 rescaling:1 account:1 rn2:1 sec:1 satisfy:1 explicitly:3 ad:2 depends:1 performed:2 stoica:1 lot:1 analyze:1 characterizes:1 red:1 decaying:1 option:1 recover:1 slope:1 defer:2 implementation1:1 contribution:1 square:3 accuracy:2 variance:1 efficiently:1 none:1 carlo:1 lu:1 dave:1 explain:4 strongest:1 sixth:1 against:1 intentionally:1 proof:2 spie:1 recovers:1 sampled:9 proved:1 dataset:9 popular:1 knowledge:1 dimensionality:1 actually:1 higher:1 improved:3 entrywise:2 done:1 shrink:1 
correlation:4 sketch:23 hand:3 tropp:1 ei:3 aj:1 indicated:2 scientific:1 believe:2 grows:1 effect:1 true:3 ccf:1 former:1 hence:4 alternating:3 illustrated:1 during:1 please:1 kak:1 cosine:1 mpi:1 prominent:2 pdf:1 performs:1 image:3 novel:4 recently:1 smp:41 common:1 apache:3 overview:1 empirically:1 cohen:2 volume:2 jl:16 million:2 occurred:2 shenker:1 significant:3 refer:1 ai:2 rd:6 sujay:1 fk:2 ssd:1 dot:10 access:1 stable:4 longer:1 etc:1 base:1 recent:3 store:2 binary:1 arbitrarily:4 fault:1 preserving:3 canon:1 additional:4 zouzias:1 determine:2 sharma:1 ii:1 multiple:2 reduces:1 faster:1 cross:2 offer:1 involving:1 regression:1 scalable:1 arxiv:10 iteration:6 represent:1 achieved:3 c1:2 addition:2 want:1 separately:1 singular:4 source:1 malicious:1 biased:3 extra:1 rest:1 archive:1 pass:4 file:1 comment:1 integer:1 prabhat:1 yang:1 enough:1 embeddings:1 fit:4 perfectly:1 reduce:1 idea:5 inner:2 tradeoff:1 br:6 texas:3 bottleneck:1 pca:55 url:6 gb:4 clarkson:1 remark:4 kbk2f:2 useful:1 chhugani:1 http:4 generate:1 canonical:3 nsf:1 notice:1 estimated:1 blue:1 discrete:2 key:1 four:1 nevertheless:1 kbj:7 drawn:4 cone:8 run:4 angle:12 fourth:1 uncertainty:3 distorted:1 soda:1 arrive:1 throughout:1 reasonable:1 wu:2 draw:1 appendix:9 vb:2 comparable:1 cca:2 bound:6 completing:1 followed:2 quadratic:1 annual:5 optic:1 infinity:1 generates:1 min:5 performing:1 optical:1 relatively:1 speedup:1 structured:1 according:2 poor:2 smaller:6 gittens:1 intuitively:1 previously:1 count:2 r3:1 know:1 available:1 observe:5 spectral:23 occurrence:1 alternative:1 original:2 top:4 running:2 subsampling:1 denotes:2 assumes:1 completed:1 giving:1 especially:1 approximating:1 society:1 already:1 occurs:2 dependence:2 kak2:3 nr:5 diagonal:1 subspace:2 distance:1 thank:1 atr:4 gracefully:1 kak2f:2 carbonell:1 mail:1 nelson:1 collected:1 reason:2 kannan:1 code:2 besides:2 ratio:6 innovation:1 difficult:1 potentially:1 implementation:6 design:4 twenty:2 perform:8 upper:1 observation:1 datasets:12 truncated:1 situation:1 flop:4 extended:1 rn:1 arbitrary:2 usenix:1 ttic:1 bk:3 pair:4 specified:1 barcelona:1 nip:8 beyond:1 proceeds:1 pattern:1 sparsity:1 max:4 memory:9 power:2 residual:1 scheme:1 improve:1 github:4 theta:1 axis:1 naive:2 extract:1 schmid:1 prior:1 literature:2 l2:1 geometric:1 kf:1 acknowledgement:1 multiplication:1 kakf:3 interesting:2 limitation:1 zaharia:1 versus:2 foundation:1 incurred:1 foster:1 heavy:1 row:4 austin:4 summary:1 repeat:1 supported:1 aij:1 side:2 harrell:1 steaming:1 institute:1 neighbor:1 saul:1 taking:1 sparse:2 benefit:2 distributed:6 dimension:6 world:1 stand:1 lindenstrauss:1 computes:8 valid:1 author:1 xlarge:2 transition:2 adaptive:1 transaction:1 approximate:2 keep:2 ml:1 global:1 tolerant:1 assumed:1 search:2 reputation:2 why:2 table:2 obtaining:1 da:1 main:5 big:1 noise:1 n2:21 sub:1 toyota:1 srinadh:2 theorem:8 rk:2 removing:1 specific:1 r2:1 quantization:1 illustrates:1 chen:2 phenotype:1 kbk2:3 eij:2 simply:1 forming:2 satisfies:1 chance:1 extracted:1 acm:5 ma:3 identity:1 goal:1 twofold:1 hard:1 specifically:1 principal:1 boufounos:1 total:1 pas:23 svd:20 experimental:2 m3:2 bkf:3 phenomenon:1
5,609
6,076
Learning values across many orders of magnitude

Hado van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, David Silver
Google DeepMind

Abstract

Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.

1 Introduction

Many machine-learning algorithms rely on a-priori access to data to properly tune relevant hyper-parameters [Bergstra et al., 2011, Bergstra and Bengio, 2012, Snoek et al., 2012]. It is much harder to learn efficiently from a stream of data when we do not know the magnitude of the function we seek to approximate beforehand, or if these magnitudes can change over time, as is typically the case in reinforcement learning when the policy of behavior improves over time.

Our main motivation is the work by Mnih et al. [2015], in which Q-learning [Watkins, 1989] is combined with a deep convolutional neural network [cf. LeCun et al., 2015]. The resulting deep Q network (DQN) algorithm learned to play a varied set of Atari 2600 games from the Arcade Learning Environment (ALE) [Bellemare et al., 2013], which was proposed as an evaluation framework to test general learning algorithms on solving many different interesting tasks. DQN was proposed as a singular solution, using a single set of hyperparameters.

The magnitudes and frequencies of rewards vary wildly between different games. For instance, in Pong the rewards are bounded by $-1$ and $+1$ while in Ms. Pac-Man eating a single ghost can yield a reward of up to $+1600$. To overcome this hurdle, rewards and temporal-difference errors were clipped to $[-1, 1]$, so that DQN would perceive any positive reward as $+1$, and any negative reward as $-1$. This is not a satisfying solution for two reasons. First, the clipping introduces domain knowledge. Most games have sparse non-zero rewards. Clipping results in optimizing the frequency of rewards, rather than their sum. This is a fairly reasonable heuristic in Atari, but it does not generalize to many other domains. Second, and more importantly, the clipping changes the objective, sometimes resulting in qualitatively different policies of behavior.

We propose a method to adaptively normalize the targets used in the learning updates. If these targets are guaranteed to be normalized it is much easier to find suitable hyperparameters. The proposed technique is not specific to DQN or to reinforcement learning and is more generally applicable in supervised learning and reinforcement learning. There are several reasons such normalization can be desirable. First, sometimes we desire a single system that is able to solve multiple different problems with varying natural magnitudes, as in the Atari domain. Second, for multi-variate functions the
normalization can be used to disentangle the natural magnitude of each component from its relative importance in the loss function. This is particularly useful when the components have different units, such as when we predict signals from sensors with different modalities. Finally, adaptive normalization can help deal with non-stationarity. For instance, in reinforcement learning the policy of behavior can change repeatedly during learning, thereby changing the distribution and magnitude of the values.

1.1 Related work

Input normalization has long been recognized as important to efficiently learn non-linear approximations such as neural networks [LeCun et al., 1998], leading to research on how to achieve scale-invariance on the inputs [e.g., Ross et al., 2013, Ioffe and Szegedy, 2015, Desjardins et al., 2015]. Output or target normalization has not received as much attention, probably because in supervised learning data sets are commonly available before learning commences, making it straightforward to determine appropriate normalizations or to tune hyper-parameters. However, this assumes the data is available a priori, which is not true in online (potentially non-stationary) settings. Natural gradients [Amari, 1998] are invariant to reparameterizations of the function approximation, thereby avoiding many scaling issues, but these are computationally expensive for functions with many parameters such as deep neural networks. This is why approximations are regularly proposed, typically trading off accuracy to computation [Martens and Grosse, 2015], and sometimes focusing on a certain aspect such as input normalization [Desjardins et al., 2015, Ioffe and Szegedy, 2015]. Most such algorithms are not fully invariant to the scale of the target function.

In the Atari domain several algorithmic variants and improvements for DQN have been proposed [van Hasselt et al., 2016, Bellemare et al., 2016, Schaul et al., 2016, Wang et al., 2016], as well as alternative solutions [Liang et al., 2016, Mnih et al., 2016]. However, none of these address the clipping of the rewards or explicitly discuss the impacts of clipping on performance or behavior.

1.2 Preliminaries

Concretely, we consider learning from a stream of data $\{(X_t, Y_t)\}_{t=1}^{\infty}$ where the inputs $X_t \in \mathbb{R}^n$ and targets $Y_t \in \mathbb{R}^k$ are real-valued tensors. The aim is to update parameters $\theta$ of a function $f_\theta : \mathbb{R}^n \to \mathbb{R}^k$ such that the output $f_\theta(X_t)$ is (in expectation) close to the target $Y_t$ according to some loss $l_t(f_\theta)$, for instance defined as a squared difference: $l_t(f_\theta) = \frac{1}{2}(f_\theta(X_t) - Y_t)^\top (f_\theta(X_t) - Y_t)$. A canonical update is stochastic gradient descent (SGD). For a sample $(X_t, Y_t)$, the update is then $\theta_{t+1} = \theta_t - \alpha \nabla_\theta l_t(f_\theta)$, where $\alpha \in [0, 1]$ is a step size. The magnitude of this update depends on both the step size and the loss, and it is hard to pick suitable step sizes when nothing is known about the magnitude of the loss.

An important special case is when $f_\theta$ is a neural network [McCulloch and Pitts, 1943, Rosenblatt, 1962]. These are often trained with a form of SGD [Rumelhart et al., 1986], with hyperparameters that interact with the scale of the loss. Especially for deep neural networks [LeCun et al., 2015, Schmidhuber, 2015] large updates may harm learning, because these networks are highly non-linear and such updates may "bump" the parameters to regions with high error.

2 Adaptive normalization with Pop-Art

We propose to normalize the targets $Y_t$, where the normalization is learned separately from the approximating function.
We consider an affine transformation of the targets
$\tilde{Y}_t = \Sigma_t^{-1}(Y_t - \mu_t)\,,$  (1)
where $\Sigma_t$ and $\mu_t$ are scale and shift parameters that are learned from data. The scale matrix $\Sigma_t$ can be dense, diagonal, or defined by a scalar $\sigma_t$ as $\Sigma_t = \sigma_t I$. Similarly, the shift vector $\mu_t$ can contain separate components, or be defined by a scalar $\mu_t$ as $\mu_t = \mu_t \mathbf{1}$. We can then define a loss on a normalized function $g(X_t)$ and the normalized target $\tilde{Y}_t$. The unnormalized approximation for any input $x$ is then given by $f(x) = \Sigma g(x) + \mu$, where $g$ is the normalized function and $f$ is the unnormalized function.

At first glance it may seem we have made little progress. If we learn $\Sigma$ and $\mu$ using the same algorithm as used for the parameters of the function $g$, then the problem has not become fundamentally different or easier; we would have merely changed the structure of the parameterized function slightly. Conversely, if we consider tuning the scale and shift as hyperparameters then tuning them is not fundamentally easier than tuning other hyperparameters, such as the step size, directly.

Fortunately, there is an alternative. We propose to update $\Sigma$ and $\mu$ according to a separate objective with the aim of normalizing the updates for $g$. Thereby, we decompose the problem of learning an appropriate normalization from learning the specific shape of the function. The two properties that we want to simultaneously achieve are
(ART) to update scale $\Sigma$ and shift $\mu$ such that $\Sigma^{-1}(Y - \mu)$ is appropriately normalized, and
(POP) to preserve the outputs of the unnormalized function when we change the scale and shift.
We discuss these properties separately below. We refer to algorithms that combine output-preserving updates and adaptive rescaling as Pop-Art algorithms, an acronym for "Preserving Outputs Precisely, while Adaptively Rescaling Targets".

2.1 Preserving outputs precisely

Unless care is taken, repeated updates to the normalization might make learning harder rather than easier because the normalized targets become non-stationary. More importantly, whenever we adapt the normalization based on a certain target, this would simultaneously change the output of the unnormalized function for all inputs. If there is little reason to believe that other unnormalized outputs were incorrect, this is undesirable and may hurt performance in practice, as illustrated in Section 3. We now first discuss how to prevent these issues, before we discuss how to update the scale and shift.

The only way to avoid changing all outputs of the unnormalized function whenever we update the scale and shift is by changing the normalized function $g$ itself simultaneously. The goal is to preserve the outputs from before the change of normalization, for all inputs. This prevents the normalization from affecting the approximation, which is appropriate because its objective is solely to make learning easier, and to leave solving the approximation itself to the optimization algorithm.

Without loss of generality the unnormalized function can be written as
$f_{\theta,\Sigma,\mu,W,b}(x) \;\equiv\; \Sigma\, g_{\theta,W,b}(x) + \mu \;\equiv\; \Sigma(W h_\theta(x) + b) + \mu\,,$  (2)
where $h_\theta$ is a parametrized (non-linear) function, and $g_{\theta,W,b}(x) = W h_\theta(x) + b$ is the normalized function. It is not uncommon for deep neural networks to end in a linear layer, and then $h_\theta$ can be the output of the last (hidden) layer of non-linearities. Alternatively, we can always add a square linear layer to any non-linear function $h_\theta$ to ensure this constraint, for instance initialized as $W_0 = I$ and $b_0 = 0$.
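To make the decomposition in (2) concrete, here is a minimal Python/NumPy sketch of the scalar-normalization case; the torso h, the sizes, and all names are our own illustrative assumptions rather than the authors' implementation.

import numpy as np

m = 8                                # torso output size (assumed)
W = np.eye(1, m)                     # square-style top linear layer, W0 = I-like
b = np.zeros(1)                      # b0 = 0
sigma, mu = 1.0, 0.0                 # scalar scale and shift

def h(x):                            # stand-in for the non-linear torso h_theta
    return np.tanh(x)

def g(x):                            # normalized function: g(x) = W h(x) + b
    return W @ h(x) + b

def f(x):                            # unnormalized prediction: f(x) = sigma*g(x) + mu
    return sigma * g(x) + mu

The POP property then amounts to adjusting W and b whenever sigma and mu change, so that f(x) is left untouched for every x, as formalized next.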
The following proposition shows that we can update the parameters $W$ and $b$ to fulfill the second desideratum of preserving outputs precisely for any change in normalization.

Proposition 1. Consider a function $f : \mathbb{R}^n \to \mathbb{R}^k$ defined as in (2) as
$f_{\theta,\Sigma,\mu,W,b}(x) \equiv \Sigma(W h_\theta(x) + b) + \mu\,,$
where $h_\theta : \mathbb{R}^n \to \mathbb{R}^m$ is any non-linear function of $x \in \mathbb{R}^n$, $\Sigma$ is a $k \times k$ matrix, $\mu$ and $b$ are $k$-element vectors, and $W$ is a $k \times m$ matrix. Consider any change of the scale and shift parameters from $\Sigma$ to $\Sigma_{\mathrm{new}}$ and from $\mu$ to $\mu_{\mathrm{new}}$, where $\Sigma_{\mathrm{new}}$ is non-singular. If we then additionally change the parameters $W$ and $b$ to $W_{\mathrm{new}}$ and $b_{\mathrm{new}}$, defined by
$W_{\mathrm{new}} = \Sigma_{\mathrm{new}}^{-1} \Sigma W$ and $b_{\mathrm{new}} = \Sigma_{\mathrm{new}}^{-1}(\Sigma b + \mu - \mu_{\mathrm{new}})\,,$
then the outputs of the unnormalized function $f$ are preserved precisely in the sense that
$f_{\theta,\Sigma,\mu,W,b}(x) = f_{\theta,\Sigma_{\mathrm{new}},\mu_{\mathrm{new}},W_{\mathrm{new}},b_{\mathrm{new}}}(x)\,, \;\; \forall x\,.$

This and later propositions are proven in the appendix. For the special case of scalar scale and shift, with $\Sigma \equiv \sigma I$ and $\mu \equiv \mu \mathbf{1}$, the updates to $W$ and $b$ become $W_{\mathrm{new}} = (\sigma/\sigma_{\mathrm{new}})W$ and $b_{\mathrm{new}} = (\sigma b + \mu - \mu_{\mathrm{new}})/\sigma_{\mathrm{new}}$.

After updating the scale and shift we can update the output of the normalized function $g_{\theta,W,b}(X_t)$ toward the normalized output $\tilde{Y}_t$, using any learning algorithm. Importantly, the normalization can be updated first, thereby avoiding harmful large updates just before they would otherwise occur. This observation is made more precise in Proposition 2 in Section 2.2.

Algorithm 1  SGD on squared loss with Pop-Art
  For a given differentiable function $h_\theta$, initialize $\theta$.
  Initialize $W = I$, $b = 0$, $\Sigma = I$, and $\mu = 0$.
  while learning do
    Observe input $X$ and target $Y$
    Use $Y$ to compute new scale $\Sigma_{\mathrm{new}}$ and new shift $\mu_{\mathrm{new}}$
    $W \leftarrow \Sigma_{\mathrm{new}}^{-1}\Sigma W$, $b \leftarrow \Sigma_{\mathrm{new}}^{-1}(\Sigma b + \mu - \mu_{\mathrm{new}})$  (rescale $W$ and $b$)
    $\Sigma \leftarrow \Sigma_{\mathrm{new}}$, $\mu \leftarrow \mu_{\mathrm{new}}$  (update scale and shift)
    $h \leftarrow h_\theta(X)$  (store output of $h_\theta$)
    $J \leftarrow (\nabla_\theta h_{\theta,1}(X), \ldots, \nabla_\theta h_{\theta,m}(X))^\top$  (compute Jacobian of $h_\theta$)
    $\delta \leftarrow Wh + b - \Sigma^{-1}(Y - \mu)$  (compute normalized error)
    $\theta \leftarrow \theta - \alpha J^\top W^\top \delta$  (compute SGD update for $\theta$)
    $W \leftarrow W - \alpha\, \delta\, h^\top$  (compute SGD update for $W$)
    $b \leftarrow b - \alpha\, \delta$  (compute SGD update for $b$)
  end while

Algorithm 1 is an example implementation of SGD with Pop-Art for a squared loss. It can be generalized easily to any other loss by changing the definition of $\delta$. Notice that $W$ and $b$ are updated twice: first to adapt to the new scale and shift to preserve the outputs of the function, and then by SGD. The order of these updates is important because it allows us to use the new normalization immediately in the subsequent SGD update.

2.2 Adaptively rescaling targets

A natural choice is to normalize the targets to approximately have zero mean and unit variance. For clarity and conciseness, we consider scalar normalizations. It is straightforward to extend to diagonal or dense matrices. If we have data $\{(X_i, Y_i)\}_{i=1}^{t}$ up to some time $t$, we then may desire
$\frac{1}{t}\sum_{i=1}^{t} (Y_i - \mu_t)/\sigma_t = 0$ and $\frac{1}{t}\sum_{i=1}^{t} (Y_i - \mu_t)^2/\sigma_t^2 = 1\,,$
such that
$\mu_t = \frac{1}{t}\sum_{i=1}^{t} Y_i$ and $\sigma_t^2 = \nu_t - \mu_t^2$, where $\nu_t = \frac{1}{t}\sum_{i=1}^{t} Y_i^2\,.$  (3)
This can be generalized to incremental updates
$\mu_t = (1 - \beta_t)\,\mu_{t-1} + \beta_t Y_t$ and $\nu_t = (1 - \beta_t)\,\nu_{t-1} + \beta_t Y_t^2\,.$  (4)
Here $\nu_t$ estimates the second moment of the targets and $\beta_t \in [0, 1]$ is a step size. If $\nu_t - \mu_t^2$ is positive initially then it will always remain so, although to avoid issues with numerical precision it can be useful to enforce a lower bound explicitly by requiring $\nu_t - \mu_t^2 \geq \epsilon$ with $\epsilon > 0$. For full equivalence to (3) we can use $\beta_t = 1/t$. If $\beta_t = \beta$ is constant we get exponential moving averages, placing more weight on recent data points, which is appropriate in non-stationary settings. A constant $\beta$ has the additional benefit of never becoming negligibly small.
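A minimal Python/NumPy reading of Algorithm 1 combined with the incremental statistics (4), for the scalar case, is sketched below. The one-hidden-layer tanh torso, the default step sizes, the variance floor eps, and all names are our own assumptions; this is an illustration of the pseudocode, not the authors' implementation.

import numpy as np

def popart_sgd(stream, n, m, alpha=1e-2, beta=1e-3, eps=1e-8, seed=0):
    """Scalar Pop-Art SGD on a squared loss (Algorithm 1) with a
    one-hidden-layer tanh torso h_A(x) = tanh(A x)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(m, n))   # torso parameters (theta)
    w = np.ones(m) / m                       # top layer W (here 1 x m)
    b = 0.0
    mu, nu, sigma = 0.0, 1.0, 1.0            # normalization statistics
    for x, y in stream:                      # y is a scalar target
        # ART: move the statistics toward the new target, as in eq. (4)
        mu_new = (1 - beta) * mu + beta * y
        nu = (1 - beta) * nu + beta * y * y
        sigma_new = np.sqrt(max(nu - mu_new ** 2, eps))
        # POP: rescale the top layer so unnormalized outputs are unchanged (Prop. 1)
        w = (sigma / sigma_new) * w
        b = (sigma * b + mu - mu_new) / sigma_new
        mu, sigma = mu_new, sigma_new
        # SGD on the normalized squared loss, using the new normalization
        h = np.tanh(A @ x)                   # torso output
        delta = w @ h + b - (y - mu) / sigma # normalized error (scalar)
        grad_h = delta * w * (1 - h ** 2)    # backprop through tanh
        A -= alpha * np.outer(grad_h, x)     # update torso (theta)
        w -= alpha * delta * h               # update W
        b -= alpha * delta                   # update b
    return A, w, b, mu, sigma

The sketch makes the ordering stressed in the text explicit: the statistics and the POP rescale are applied before the SGD step, so the new normalization is used immediately. A caller would feed popart_sgd any iterable of (feature vector, scalar target) pairs.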
Consider the first time a target is observed that is much larger than all previously observed targets. If $\beta_t$ is small, our statistics would adapt only slightly, and the resulting update may be large enough to harm the learning. If $\beta_t$ is not too small, the normalization can adapt to the large target before updating, potentially making learning more robust. In particular, the following proposition holds.

Proposition 2. When using updates (4) to adapt the normalization parameters $\sigma$ and $\mu$, the normalized targets are bounded for all $t$ by
$-\sqrt{(1-\beta_t)/\beta_t} \;\leq\; (Y_t - \mu_t)/\sigma_t \;\leq\; \sqrt{(1-\beta_t)/\beta_t}\,.$

For instance, if $\beta_t = \beta = 10^{-4}$ for all $t$, then the normalized target is guaranteed to be in $(-100, 100)$. Note that Proposition 2 does not rely on any assumptions about the distribution of the targets. This is an important result, because it implies we can bound the potential normalized errors before learning, without any prior knowledge about the actual targets we may observe.

Algorithm 2  Normalized SGD
  For a given differentiable function $h_\theta$, initialize $\theta$.
  while learning do
    Observe input $X$ and target $Y$
    Use $Y$ to compute new scale $\Sigma$
    $h \leftarrow h_\theta(X)$  (store output of $h_\theta$)
    $J \leftarrow (\nabla h_{\theta,1}(X), \ldots, \nabla h_{\theta,m}(X))^\top$  (compute Jacobian of $h_\theta$)
    $\delta \leftarrow Wh + b - Y$  (compute unnormalized error)
    $\theta \leftarrow \theta - \alpha J^\top (\Sigma^{-1}W)^\top \Sigma^{-1} \delta$  (update $\theta$ with scaled SGD)
    $W \leftarrow W - \alpha\, \delta\, h^\top$  (update $W$ with SGD)
    $b \leftarrow b - \alpha\, \delta$  (update $b$ with SGD)
  end while

It is an open question whether it is uniformly best to normalize by mean and variance. In the appendix we discuss other normalization updates, based on percentiles and mini-batches, and derive correspondences between all of these.

2.3 An equivalence for stochastic gradient descent

We now step back and analyze the effect of the magnitude of the errors on the gradients when using regular SGD. This analysis suggests a different normalization algorithm, which has an interesting correspondence to Pop-Art SGD. We consider SGD updates for an unnormalized multi-layer function of the form $f_{\theta,W,b}(X) = W h_\theta(X) + b$. The update for the weight matrix $W$ is
$W_t = W_{t-1} - \alpha_t\, \delta_t\, h_{\theta_t}(X_t)^\top\,,$
where $\delta_t = f_{\theta,W,b}(X_t) - Y_t$ is the gradient of the squared loss, which we here call the unnormalized error. The magnitude of this update depends linearly on the magnitude of the error, which is appropriate when the inputs are normalized, because then the ideal scale of the weights depends linearly on the magnitude of the targets.¹

Now consider the SGD update to the parameters of $h_\theta$,
$\theta_t = \theta_{t-1} - \alpha\, J_t^\top W_{t-1}^\top \delta_t\,,$
where $J_t = (\nabla h_{\theta,1}(X), \ldots, \nabla h_{\theta,m}(X))^\top$ is the Jacobian for $h_\theta$. The magnitudes of both the weights $W$ and the errors $\delta$ depend linearly on the magnitude of the targets. This means that the magnitude of the update for $\theta$ depends quadratically on the magnitude of the targets. There is no compelling reason for these updates to depend at all on these magnitudes, because the weights in the top layer already ensure appropriate scaling. In other words, for each doubling of the magnitudes of the targets, the updates to the lower layers quadruple for no clear reason.

This analysis suggests an algorithmic solution, which seems to be novel in and of itself, in which we track the magnitudes of the targets in a separate parameter $\sigma_t$, and then multiply the updates for all lower layers with a factor $\sigma_t^{-2}$. A more general version of this for matrix scalings is given in Algorithm 2. We prove an interesting, and perhaps surprising, connection to the Pop-Art algorithm.

Proposition 3. Consider two functions defined by
$f_{\theta,\Sigma,\mu,W,b}(x) = \Sigma(W h_\theta(x) + b) + \mu$ and $f_{\theta,W,b}(x) = W h_\theta(x) + b\,,$
where $h_\theta$
is the same differentiable function in both cases, and the functions are initialized identically, using $\Sigma_0 = I$ and $\mu_0 = 0$, and the same initial $\theta_0$, $W_0$ and $b_0$. Consider updating the first function using Algorithm 1 (Pop-Art SGD) and the second using Algorithm 2 (Normalized SGD). Then, for any sequence of non-singular scales $\{\Sigma_t\}_{t=1}^{\infty}$ and shifts $\{\mu_t\}_{t=1}^{\infty}$, the algorithms are equivalent in the sense that 1) the sequences $\{\theta_t\}_{t=0}^{\infty}$ are identical, and 2) the outputs of the functions are identical, for any input.

The proposition shows a duality between normalizing the targets, as in Algorithm 1, and changing the updates, as in Algorithm 2. This allows us to gain more intuition about the algorithm. In particular, in Algorithm 2 the updates in the top layer are not normalized, thereby allowing the last linear layer to adapt to the scale of the targets. This is in contrast to other algorithms that have some flavor of adaptive normalization, such as RMSprop [Tieleman and Hinton, 2012], AdaGrad [Duchi et al., 2011], and Adam [Kingma and Ba, 2015], which divide each component of the gradient by a square root of an empirical second moment of that component. That said, these methods are complementary, and it is straightforward to combine Pop-Art with other optimization algorithms than SGD.

¹ In general care should be taken that the inputs are well-behaved; this is exactly the point of recent work on input normalization [Ioffe and Szegedy, 2015, Desjardins et al., 2015].

Fig. 1a. Median RMSE on binary regression for SGD without normalization (red), with normalization but without preserving outputs (blue, labeled "Art"), and with Pop-Art (green). Shaded: 10th to 90th percentiles.
Fig. 1b. $\ell_2$ gradient norms for DQN during learning on 57 Atari games with actual unclipped rewards (left, red), clipped rewards (middle, blue), and using Pop-Art (right, green) instead of clipping. Shaded areas correspond to 95%, 90% and 50% of games.

3 Binary regression experiments

We first analyze the effect of rare events in online learning, when infrequently a much larger target is observed. Such events can for instance occur when learning from noisy sensors that sometimes capture an actual signal, or when learning from sparse non-zero reinforcements. We empirically compare three variants of SGD: without normalization, with normalization but without preserving outputs precisely (i.e., with "Art" but without "Pop"), and with Pop-Art.

The inputs are binary representations of integers drawn uniformly randomly between 0 and $n = 2^{10} - 1$. The desired outputs are the corresponding integer values. Every 1000 samples, we present the binary representation of $2^{16} - 1$ as input (i.e., all 16 inputs are 1) and as target $2^{16} - 1 = 65{,}535$. The approximating function is a fully connected neural network with 16 inputs, 3 hidden layers with 10 nodes per layer, and tanh internal activation functions. This simple setup allows extensive sweeps over hyper-parameters, to avoid bias towards any algorithm by the way we tune these. The step sizes $\alpha$ for SGD and $\beta$ for the normalization are tuned by a grid search over $\{10^{-5}, 10^{-4.5}, \ldots, 10^{-1}, 10^{-0.5}, 1\}$.

Figure 1a shows the root mean squared error (RMSE, log scale) for each of 5000 samples, before updating the function (so this is a test error, not a train error). The solid line is the median of 50 repetitions, and the shaded region covers the 10th to 90th percentiles.
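A minimal generator for the synthetic stream just described might look as follows; the function name and framing are our own, not part of the original experiments code.

import numpy as np

def binary_regression_stream(num_samples, bits=16, small_max=2**10 - 1, seed=0):
    """Yield (input, target) pairs: 16-bit encodings of uniform integers in
    [0, 2^10 - 1], with the all-ones input (target 2^16 - 1) every 1000 steps."""
    rng = np.random.default_rng(seed)
    for i in range(1, num_samples + 1):
        v = 2**bits - 1 if i % 1000 == 0 else int(rng.integers(0, small_max + 1))
        x = np.array([(v >> j) & 1 for j in range(bits)], dtype=float)
        yield x, float(v)

A stream like this can be fed directly to the popart_sgd sketch given earlier.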
The plotted results correspond to the best hyper-parameters according to the overall RMSE (i.e., area under the curve). The lines are slightly smoothed by averaging over each 10 consecutive samples.

SGD favors a relatively small step size ($\alpha = 10^{-3.5}$) to avoid harmful large updates, but this slows learning on the smaller updates; the error curve is almost flat in between spikes. SGD with adaptive normalization (labeled "Art") can use a larger step size ($\alpha = 10^{-2.5}$) and therefore learns faster, but has high error after the spikes because the changing normalization also changes the outputs of the smaller inputs, increasing the errors on these. In comparison, Pop-Art performs much better. It prefers the same step size as Art ($\alpha = 10^{-2.5}$), but Pop-Art can exploit a much faster rate for the statistics (best performance with $\beta = 10^{-0.5}$ for Pop-Art and $\beta = 10^{-4}$ for Art). The faster tracking of statistics protects Pop-Art from the large spikes, while the output preservation avoids invalidating the outputs for smaller targets. We ran experiments with RMSprop but left these out of the figure as the results were very similar to SGD.

4 Atari 2600 experiments

An important motivation for this work is reinforcement learning with non-linear function approximators such as neural networks (sometimes called deep reinforcement learning). The goal is to predict and optimize action values defined as the expected sum of future rewards. These rewards can differ arbitrarily from one domain to the next, and non-zero rewards can be sparse. As a result, the action values can span a varied and wide range which is often unknown before learning commences.

Mnih et al. [2015] combined Q-learning with a deep neural network in an algorithm called DQN, which impressively learned to play many games using a single set of hyper-parameters. However, as discussed above, to handle the different reward magnitudes with a single system all rewards were clipped to the interval $[-1, 1]$. This is harmless in some games, such as Pong where no reward is ever higher than $+1$ or lower than $-1$, but it is not satisfactory as this heuristic introduces specific domain knowledge, namely that optimizing reward frequencies is approximately as useful as optimizing the total score. Moreover, the clipping makes the DQN algorithm blind to differences between certain actions, such as the difference in reward between eating a ghost (reward ≥ 100) and eating a pellet (reward = 25) in Ms. Pac-Man.

We hypothesize that 1) overall performance decreases when we turn off clipping, because it is not possible to tune a step size that works on many games, and 2) that we can regain much of the lost performance by using Pop-Art. The goal is not to improve state-of-the-art performance, but to remove the domain-dependent heuristic that is induced by the clipping of the rewards, thereby uncovering the true rewards.

We ran the Double DQN algorithm [van Hasselt et al., 2016] in three versions: without changes, without clipping both rewards and temporal-difference errors, and without clipping but additionally using Pop-Art. The targets are the cumulation of a reward and the discounted value at the next state:
$Y_t = R_{t+1} + \gamma\, Q(S_{t+1}, \mathrm{argmax}_a\, Q(S_{t+1}, a; \theta); \theta^-)\,,$  (5)
where $Q(s, a; \theta)$ is the estimated action value of action $a$ in state $s$ according to current parameters $\theta$, and where $\theta^-$ is a more stable periodic copy of these parameters [cf. Mnih et al., 2015, van Hasselt et al., 2016, for more details]. This is a form of Double Q-learning [van Hasselt, 2010].
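As a hedged illustration of how the target (5) interacts with Pop-Art, consider the sketch below; q_online and q_target stand for the two Q-networks and are assumed interfaces (callables returning action-value vectors), not the authors' code.

import numpy as np

def double_q_popart_target(r, s_next, q_online, q_target, gamma, mu, sigma):
    """Compute the normalized Double Q-learning target from eq. (5):
    the action is selected by the online network, evaluated by the periodic
    target network, and the result is shifted/scaled by the Pop-Art statistics."""
    a_star = int(np.argmax(q_online(s_next)))   # argmax_a Q(S_{t+1}, a; theta)
    y = r + gamma * q_target(s_next)[a_star]    # unnormalized target Y_t
    return (y - mu) / sigma                     # normalized target for the update

The learning update would then regress the normalized prediction toward this value, with mu and sigma maintained as in Section 2.2.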
We roughly tuned the main step size and the step size for the normalization to $10^{-4}$. It is not straightforward to tune the unclipped version, for reasons that will become clear soon.

Figure 1b shows the $\ell_2$ norm of the gradient of Double DQN during learning as a function of the number of training steps. The left plot corresponds to no reward clipping, the middle to clipping (as in the original DQN and Double DQN), and the right to using Pop-Art instead of clipping. Each faint dashed line corresponds to the median norm (where the median is taken over time) on one game. The shaded areas correspond to 50%, 90%, and 95% of games.

Without clipping the rewards, Pop-Art produces a much narrower band within which the gradients fall. Across games, 95% of median norms range over less than two orders of magnitude (roughly between 1 and 20), compared to almost four orders of magnitude for clipped Double DQN, and more than six orders of magnitude for unclipped Double DQN without Pop-Art. The wide range for the latter shows why it is impossible to find a suitable step size with neither clipping nor Pop-Art: the updates are either far too small on some games or far too large on others.

After 200M frames, we evaluated the actual scores of the best performing agent in each game on 100 episodes of up to 30 minutes of play, and then normalized by human and random scores as described by Mnih et al. [2015]. Figure 2 shows the differences in normalized scores between (clipped) Double DQN and Double DQN with Pop-Art.

The main eye-catching result is that the distribution in performance drastically changed. On some games (e.g., Gopher, Centipede) we observe dramatic improvements, while on others (e.g., Video Pinball, Star Gunner) we see a substantial decrease. For instance, in Ms. Pac-Man the clipped Double DQN agent does not care more about ghosts than pellets, but Double DQN with Pop-Art learns to actively hunt ghosts, resulting in higher scores. Especially remarkable is the improved performance on games like Centipede and Gopher, but also notable is a game like Frostbite, which went from below 50% to a near-human performance level. Raw scores can be found in the appendix.

Figure 2: Differences between normalized scores for Double DQN with and without Pop-Art on 57 Atari games.

Some games fare worse with unclipped rewards because it changes the nature of the problem. For instance, in Time Pilot the Pop-Art agent learns to quickly shoot a mothership to advance to the next level of the game, obtaining many points in the process. The clipped agent instead shoots at anything that moves, ignoring the mothership. However, in the long run in this game more points are scored with the safer and more homogeneous strategy of the clipped agent.
One reason for the disconnect between the seemingly qualitatively good behavior and the lower scores is that the agents are fairly myopic: both use a discount factor of $\gamma = 0.99$, and therefore only optimize rewards that happen within a dozen or so seconds into the future.

On the whole, the results show that with Pop-Art we can successfully remove the clipping heuristic that has been present in all prior DQN variants, while retaining overall performance levels. Double DQN with Pop-Art performs slightly better than Double DQN with clipped rewards: on 32 out of 57 games performance is at least as good as clipped Double DQN, and the median (+0.4%) and mean (+34%) differences are positive.

5 Discussion

We have demonstrated that Pop-Art can be used to adapt to different and non-stationary target magnitudes. This problem was perhaps not previously commonly appreciated, potentially because in deep learning it is common to tune or normalize a priori, using an existing data set. This is not as straightforward in reinforcement learning, where the policy and the corresponding values may repeatedly change over time. This makes Pop-Art a promising tool for deep reinforcement learning, although it is not specific to this setting.

We saw that Pop-Art can successfully replace the clipping of rewards as done in DQN to handle the various magnitudes of the targets used in the Q-learning update. Now that the true problem is exposed to the learning algorithm we can hope to make further progress, for instance by improving the exploration [Osband et al., 2016], which can now be informed about the true unclipped rewards.

References
S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998. ISSN 0899-7667.
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253–279, 2013.
M. G. Bellemare, G. Ostrovski, A. Guez, P. S. Thomas, and R. Munos. Increasing the action gap: New operators for reinforcement learning. In AAAI, 2016.
J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
G. Desjardins, K. Simonyan, R. Pascanu, and K. Kavukcuoglu. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062–2070, 2015.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
Y. Liang, M. C. Machado, E. Talvitie, and M. H. Bowling. State of the art control of Atari games using shallow reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems, 2016.
J. Martens and R. B. Grosse.
Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 2408–2417, 2015.
W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. CoRR, abs/1602.04621, 2016.
F. Rosenblatt. Principles of Neurodynamics. Spartan, New York, 1962.
S. Ross, P. Mineiro, and J. Langford. Normalized online learning. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence, 2013.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1, pages 318–362. MIT Press, 1986.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In International Conference on Learning Representations, Puerto Rico, 2016.
J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
H. van Hasselt. Double Q-learning. Advances in Neural Information Processing Systems, 23:2613–2621, 2010.
H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with Double Q-learning. AAAI, 2016.
Z. Wang, N. de Freitas, T. Schaul, M. Hessel, H. van Hasselt, and M. Lanctot. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, New York, NY, USA, 2016.
C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.
Online Bayesian Moment Matching for Topic Modeling with Unknown Number of Topics

Wei-Shou Hsu and Pascal Poupart
David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1
{wwhsu,ppoupart}@uwaterloo.ca

Abstract

Latent Dirichlet Allocation (LDA) is a very popular model for topic modeling as well as many other problems with latent groups. It is both simple and effective. When the number of topics (or latent groups) is unknown, the Hierarchical Dirichlet Process (HDP) provides an elegant non-parametric extension; however, it is a complex model and it is difficult to incorporate prior knowledge since the distribution over topics is implicit. We propose two new models that extend LDA in a simple and intuitive fashion by directly expressing a distribution over the number of topics. We also propose a new online Bayesian moment matching technique to learn the parameters and the number of topics of those models based on streaming data. The approach achieves higher log-likelihood than batch and online HDP with fixed hyperparameters on several corpora. The code is publicly available at https://github.com/whsu/bmm.

1 Introduction

Latent Dirichlet Allocation (LDA) [3] recently emerged as the dominant framework for topic modeling as well as many other applications with latent groups. The Hierarchical Dirichlet Process (HDP) [18] provides an elegant extension to LDA when the number of topics (latent groups) is unknown. The non-parametric nature of HDPs is quite attractive since HDPs effectively allow an unbounded number of topics to be inferred from the data. There is also a rich mathematical theory underlying HDPs as well as attractive metaphors (e.g., the stick breaking process, the Chinese restaurant franchise) to ease the understanding by those less comfortable with non-parametric statistics [18].

That being said, HDPs are not perfect. They do not expose an explicit distribution over the topics that could allow practitioners to incorporate prior knowledge and to inspect the model's posterior confidence in different numbers of topics. Furthermore, the implicit distribution over the number of topics is restricted to a regime where the number of topics grows logarithmically with the amount of data in expectation [18]. For instance, this growth rate is insufficient for applications that exhibit a power law distribution [6]; in such cases a generalization of the HDP known as the hierarchical Pitman-Yor process [21] is often used instead. Existing inference algorithms for HDPs (e.g., Gibbs sampling [18], variational inference [19, 24, 23, 4, 17]) are also fairly complex. As a result, practitioners often stick with LDA and estimate the number of topics by repeatedly evaluating different numbers of topics by cross-validation; however, this is an expensive procedure.

We propose two new models that extend LDA in a simple and intuitive fashion by directly expressing a distribution over the number of topics under the assumption that an upper bound on the number of topics is available. When the amount of data is finite, this assumption is perfectly fine since there cannot be more topics than the amount of data. Otherwise, domain experts can often define a suitable range for the number of topics and if they plan to inspect the resulting topics, they cannot inspect an unbounded number of topics.
We also propose a novel Bayesian moment matching algorithm to compute a posterior distribution over the model parameters and the number of topics. Bayesian learning naturally lends itself to online learning for streaming data since the posterior is updated sequentially after each data point and there is no need to go over the data more than once. The main issue is that the posterior becomes intractable. We approximate the posterior after each observed word by a tractable distribution that matches some moments of the exact posterior (hence the name Bayesian Moment Matching). The approach compares favorably to online HDP on several topic modeling tasks.

2 Related work

Setting the number of topics to use can be treated as a model selection problem. One solution is to train a topic model multiple times, each time with a different number of topics, and choose the number of topics that minimizes some cost function on a held-out test set. More recently, non-parametric Bayesian methods have been used to bypass the model selection problem. The Hierarchical Dirichlet Process (HDP) [18] is the natural extension of LDA in this direction. With HDP, the number of topics is learned from data as part of the inference procedure.

Gibbs sampling [7, 15] and Variational Bayes [3, 20] are by far the most popular inference techniques for LDA. They have been extended to HDP [18, 19, 17]. With the rise of streaming data, online variants of Variational Bayes have also been developed for LDA [8] and HDP [24, 23, 4]. The first online variational technique [24] used a truncation that effectively bounds the number of topics, while subsequent techniques [23, 4] avoid any fixed truncation to fully exploit the non-parametric nature of HDP. These online variational techniques perform stochastic gradient ascent on mini-batches, which reduces their data efficiency but improves computational efficiency.

We propose two new models that are simpler than HDP and express a distribution directly on the number of topics. We extend online Bayesian moment matching (originally designed for LDA with a fixed number of topics [14]) to learn the number of topics. This technique avoids mini-batches. It approximates Bayesian learning by Assumed Density Filtering [13], which can be thought of as a single forward iteration of Expectation Propagation [12]. Note that Bayesian moment matching is different from frequentist moment matching techniques such as spectral learning [1, 2, 9, 11]. In BMM, we compute a posterior over the parameters of the model and approximate the posterior with a simpler distribution that matches some moments of the exact posterior. In spectral learning, moments of the empirical distribution of the data are used to find parameters that yield the same moments in the model. This is usually achieved by a spectral (or tensor) decomposition of the empirical moments, hence the name spectral learning. Although both BMM and spectral learning use the method of moments, they match different moments in different distributions, resulting in completely different algorithms. While stochastic gradient descent can be used to compute tensor decompositions in an online fashion [5, 10], no online variant of spectral learning has been developed to infer the number of topics in LDA.

3 Models

We investigate the problem of online clustering of grouped discrete observations. Using terminology from text processing, we will call each observation a word and each group a document.
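Concretely, a corpus in this setting is just a stream of (document ID, word ID) pairs; the toy example below is our own illustration, with the formal notation introduced next.

# Vocabulary of V = 5 distinct words (indices 0..4), D = 2 documents.
vocab = ["learning", "topic", "model", "data", "inference"]
docs = [["learning", "topic", "model"], ["data", "inference", "topic"]]

# Flatten into the stream of document IDs and word IDs.
d_ids = [d for d, doc in enumerate(docs) for _ in doc]
w_ids = [vocab.index(w) for doc in docs for w in doc]
# d_ids == [0, 0, 0, 1, 1, 1]; w_ids == [0, 1, 2, 3, 4, 1]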
The observed data set is then a corpus of $N$ words, $\{w_n\}_{n=1}^{N}$, along with the IDs, $\{d_n\}_{n=1}^{N}$, of the documents to which these words belong. We will let $D$ denote the number of documents and $V$ the number of distinct words in the vocabulary.

Figure 1 shows the generative models we are considering. The basic model is LDA, in which the number of topics $T$ is fixed. We propose two extensions to the basic model where the parameter $T$ is unknown and inferred from data, with the assumption that $T$ ranges from 1 to $K$. Each $\vec{\theta}$ specifies the topic distribution of a document, while each $\vec{\phi}$ specifies the word distribution of a topic. In the rest of the paper, we will use $\theta$ to denote the collection of all $\vec{\theta}$'s and $\phi$ the collection of all $\vec{\phi}$'s in the model.

Figure 1: Graphical representations of the basic model with fixed number of topics (left), the degenerate Dirichlet model (middle), and the triangular Dirichlet model (right).

3.1 Degenerate Dirichlet model

The generative process of the degenerate Dirichlet model (DDM), as shown in the middle of Figure 1, works by first sampling the hyperparameters $\vec{\gamma}$, $\{\vec{\alpha}_d\}_{d=1}^{D}$, and $\{\vec{\beta}_t\}_{t=1}^{K}$. The parameters $T$, $\{\vec{\theta}_d\}_{d=1}^{D}$, and $\{\vec{\phi}_t\}_{t=1}^{K}$ are then sampled from the following conditional distributions:
$P(T \mid \vec{\gamma}) = \mathrm{Discrete}(T; \vec{\gamma})$
$P(\vec{\theta}_d \mid \vec{\alpha}_d, T) = \mathrm{Dir}(\vec{\theta}_d; \vec{\alpha}_d, T)$
$P(\vec{\phi}_t \mid \vec{\beta}_t) = \mathrm{Dir}(\vec{\phi}_t; \vec{\beta}_t)$
where $\mathrm{Dir}(\vec{\theta}_d; \vec{\alpha}_d, T)$ denotes a degenerate Dirichlet distribution $\mathrm{Dir}(\vec{\theta}_d; \vec{\alpha}_d')$ with
$\alpha'_{d,t} = \begin{cases} \alpha_{d,t} & \text{for } t \leq T \\ 0 & \text{for } t > T \end{cases}$
and $\mathrm{Discrete}(T; \vec{\gamma})$ is the general discrete distribution with probability $P(T = k) = \gamma_k$ for $k = 1, \ldots, K$.

Finally, the $N$ observations are generated by first sampling the topic indicators $t_n$ according to the distribution $P(t_n \mid d_n, \theta) = \theta_{d_n, t_n}$. Note that since $\vec{\theta}_{d_n}$ is sampled from a degenerate Dirichlet, we have $\theta_{d_n, t_n} = 0$ for $t_n > T$. Given $t_n$, the words are then sampled according to the categorical distribution $P(w_n \mid t_n, \phi) = \phi_{t_n, w_n}$.

3.2 Triangular Dirichlet model

The triangular Dirichlet model (TDM), shown on the right in Figure 1, works in a similar way except the document-topic distribution $\theta$ is represented by a three-dimensional array that is also indexed by the number of topics $T$ in addition to the document ID $d$ and the topic ID $t$. Given $T$ and $d$, the topic $t$ is drawn according to the probability $P(t \mid d, \theta, T) = \theta_{T,d,t}$ for $1 \leq t \leq T$. The array $\theta$ therefore has a triangular shape in the first and third dimensions. Again, we place a Dirichlet prior on each $\vec{\theta}_{k,d}$: $P(\vec{\theta}_{k,d} \mid \vec{\alpha}_{k,d}) = \mathrm{Dir}(\vec{\theta}_{k,d}; \vec{\alpha}_{k,d})$. In this case, however, the prior $\mathrm{Dir}(\vec{\theta}_{k,d}; \vec{\alpha}_{k,d})$ has no dependence on $T$.

4 Bayesian update by moment matching

Let $P_n(\theta, \phi, T)$ denote the joint posterior probability of $\theta$, $\phi$, and $T$ after seeing the first $n$ observations. Then
$P_n(\theta, \phi, T) = P(\theta, \phi, T \mid w_{1:n}) = \frac{1}{c_n} \sum_{t_n=1}^{K} P(t_n \mid \theta, T)\, P(w_n \mid \phi, t_n)\, P_{n-1}(\theta, \phi, T)$  (1)
where $c_n = P(w_n \mid w_{1:n-1})$. From (1) we can see that after seeing each new observation $w_n$, the number of terms in the posterior is increased by a factor of $K$, resulting in an exponential complexity for exact Bayesian update. Therefore, we will instead approximate $P_n$ by a different distribution, whose parameters will be estimated by moment matching. In the derivations that follow, the dependence on the document IDs $\{d_n\}_{n=1}^{N}$ and the hyperparameters $\vec{\alpha}$, $\vec{\beta}$, and $\vec{\gamma}$ is implicit and not shown.
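Before the derivations, a generative sketch of the DDM from Section 3.1 may help fix ideas. The concentration values 0.05 and 0.1 mirror the synthetic-data setup of Section 5.1; all sizes and names are otherwise our own illustrative choices.

import numpy as np

def sample_ddm_corpus(D=4, K=10, V=50, words_per_doc=20, seed=0):
    """Sample a toy corpus from the DDM: draw T, then degenerate-Dirichlet
    document-topic distributions (zero beyond T), then topic-word distributions."""
    rng = np.random.default_rng(seed)
    gamma = np.ones(K) / K                          # uniform prior over T = 1..K
    T = rng.choice(np.arange(1, K + 1), p=gamma)
    theta = np.zeros((D, K))
    theta[:, :T] = rng.dirichlet(np.ones(T) * 0.05, size=D)  # degenerate Dir
    phi = rng.dirichlet(np.ones(V) * 0.1, size=K)            # topic-word dists
    d_ids, w_ids = [], []
    for d in range(D):
        for _ in range(words_per_doc):
            t = rng.choice(K, p=theta[d])           # topic indicator t_n
            w = rng.choice(V, p=phi[t])             # word w_n ~ Categorical(phi_t)
            d_ids.append(d)
            w_ids.append(w)
    return T, d_ids, w_ids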
4.1 Approximating distribution

To make the inference tractable, we approximate $P_n$ using a factorized distribution: $P_n(\theta, \phi, T) = f_\theta(\theta)\, f_\phi(\phi)\, f_T(T)$. For TDM, we choose the factorized distribution to have the exact same form as the prior distribution, i.e.,
$f_\theta(\theta) = \prod_{k=1}^{K}\prod_{d=1}^{D} \mathrm{Dir}(\vec{\theta}_{k,d}; \vec{\alpha}_{k,d})$  (2)
$f_\phi(\phi) = \prod_{t=1}^{K} \mathrm{Dir}(\vec{\phi}_t; \vec{\beta}_t)$  (3)
$f_T(T) = \mathrm{Discrete}(T; \vec{\gamma})$  (4)
For DDM, we use the same $f_\phi$ and $f_T$, but rather than choosing $f_\theta$ as degenerate Dirichlets again, we instead approximate the posterior over $\theta$ using proper Dirichlet distributions to decouple $\theta$ from $T$:
$f_\theta(\theta) = \prod_{d=1}^{D} \mathrm{Dir}(\vec{\theta}_d; \vec{\alpha}_d)$  (5)

4.2 Moment matching

Let $x$ be a random variable with distribution $p(x)$. The $i$-th moment of $x$ about zero is defined as the expectation of $x^i$ over $p$, and we denote it by $M_{x^i}(p)$:
$M_{x^i}(p) = E_p\!\left[x^i\right]$  (6)
For a $K$-dimensional Dirichlet distribution $\mathrm{Dir}(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K)$, we can uniquely solve for the parameters $\alpha_1, \ldots, \alpha_K$ if we have $K - 1$ first moments, $M_{x_1}, \ldots, M_{x_{K-1}}$, and one second moment, $M_{x_1^2}$. Given the moments, we can determine the Dirichlet parameters as
$\alpha_k = M_{x_k}\, \frac{M_{x_1} - M_{x_1^2}}{M_{x_1^2} - M_{x_1}^2}$  (7)
for $k = 1, \ldots, K$. Therefore, we can compute the parameters for $f_\theta$ and $f_\phi$ using (7): for $\vec{\theta}_d$, replace $\alpha_k$ with $\alpha_{d,k}$ and $x_k$ with $\theta_{d,k}$; and for $\vec{\phi}_t$, replace $\alpha_k$ with $\beta_{t,k}$ and $x_k$ with $\phi_{t,k}$. The parameters for $\mathrm{Discrete}(T; \vec{\gamma})$ are estimated directly as
$\gamma_k = E[\delta_{T,k}]$  (8)
where $\delta$ denotes the Kronecker delta
$\delta_{i,j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$  (9)

4.3 Moment computation

From (7) and (8), we see that to approximate $P_n$ by moment matching, we need to compute the first and second moments of $\theta$ and $\phi$ as well as the expectation $E[\delta_{T,k}]$ with respect to $P_n$. They can be calculated using the Bayesian update equation (1). To keep the notation uncluttered, let $S_{\vec{x},:m}$ denote the sum of the first $m$ elements in a vector $\vec{x}$ and $S_{\vec{x}}$ the sum of all elements in $\vec{x}$. We can then compute the moments of DDM as follows:
$c_n = \sum_{T=1}^{K} \gamma_T \sum_{t_n=1}^{T} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:T}\, S_{\vec{\beta}_{t_n}}}$  (10)
$E_{P_n}[\delta_{T,k}] = \frac{\gamma_k}{c_n} \sum_{t_n=1}^{k} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:k}\, S_{\vec{\beta}_{t_n}}}$  (11)
$M_{\theta_{d,t}}(P_n) = \frac{1}{c_n} \sum_{T=t}^{K} \gamma_T \sum_{t_n=1}^{T} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:T}\, S_{\vec{\beta}_{t_n}}} \cdot \frac{\alpha_{d,t} + \delta_{d,d_n}\delta_{t,t_n}}{S_{\vec{\alpha}_d,:T} + \delta_{d,d_n}}$  (12)
$M_{\theta_{d,t}^2}(P_n) = \frac{1}{c_n} \sum_{T=t}^{K} \gamma_T \sum_{t_n=1}^{T} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:T}\, S_{\vec{\beta}_{t_n}}} \cdot \frac{(\alpha_{d,t} + \delta_{d,d_n}\delta_{t,t_n})(\alpha_{d,t} + 1 + \delta_{d,d_n}\delta_{t,t_n})}{(S_{\vec{\alpha}_d,:T} + \delta_{d,d_n})(S_{\vec{\alpha}_d,:T} + 1 + \delta_{d,d_n})}$  (13)
$M_{\phi_{t,w}}(P_n) = \frac{1}{c_n} \sum_{T=1}^{K} \gamma_T \sum_{t_n=1}^{T} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:T}\, S_{\vec{\beta}_{t_n}}} \cdot \frac{\beta_{t,w} + \delta_{t,t_n}\delta_{w,w_n}}{S_{\vec{\beta}_t} + \delta_{t,t_n}}$  (14)
$M_{\phi_{t,w}^2}(P_n) = \frac{1}{c_n} \sum_{T=1}^{K} \gamma_T \sum_{t_n=1}^{T} \frac{\alpha_{d_n,t_n}\, \beta_{t_n,w_n}}{S_{\vec{\alpha}_{d_n},:T}\, S_{\vec{\beta}_{t_n}}} \cdot \frac{(\beta_{t,w} + \delta_{t,t_n}\delta_{w,w_n})(\beta_{t,w} + 1 + \delta_{t,t_n}\delta_{w,w_n})}{(S_{\vec{\beta}_t} + \delta_{t,t_n})(S_{\vec{\beta}_t} + 1 + \delta_{t,t_n})}$  (15)

For TDM, the moments are computed similarly except that $T$ is used to index into $\theta$ rather than to take partial sums. The equations are included in the supplement.

4.4 Parameter update

For TDM, the approximating distribution for the posterior has the exact same form as the prior; therefore, the parameters we compute for $P_n$ in the $n$-th update can be used directly as the parameters for the prior in the $(n+1)$-th update. However, for DDM, the prior for $\theta$ consists of degenerate Dirichlet distributions conditionally dependent on $T$, whereas the approximating distribution for the posterior is a fully factorized distribution with proper Dirichlets. Therefore, we have to make a further approximation to match the parameters of the two distributions. When $P_n$ is being used as the prior in the $(n+1)$-th update, we use the same $\alpha$
that was obtained by moment matching during the $n$-th update, but it now has a different meaning. During the $n$-th update, $\alpha$ is computed as the parameters of proper Dirichlet distributions, but in the next update, it is used as the parameters of a weighted sum of degenerate Dirichlet distributions. As a result, the DDM has a natural bias towards a smaller number of topics.

4.5 Algorithm summary

In summary, starting from a prior distribution, the algorithm successively updates the posterior by first computing the exact moments according to the Bayesian update equation (1), and then updating the parameters by matching the moments with those of an approximating distribution. In the case of TDM, the approximating distribution has the same form as the prior, whereas a simplified distribution is used for DDM. Algorithm 1 summarizes the procedure for the two models.

Algorithm 1 Online Bayesian moment matching algorithm
1: Initialize $\alpha$, $\beta$, and $\vec{\gamma}$.
2: for $n = 1, \ldots, N$ do
3:   Read the $n$-th observation $(d_n, w_n)$.
4:   Compute moments according to (10)-(15) for DDM or equations in supplement for TDM.
5:   Update $\alpha$, $\beta$, and $\vec{\gamma}$ according to (7) and (8) with appropriate substitutions.
6: end for

5 Experiments

In this section, we discuss our experiments on a synthetic dataset and three real text corpora. The TDM and DDM implementations are available at https://github.com/whsu/bmm. For both models we initialized the hyperparameters to be $\alpha_{d,t} = 1$ and $\beta_{t,w} = \frac{1}{V}$ for all $d$, $t$, and $w$. The reason that $\beta_{t,w}$ was not initialized to 1 was to encourage the algorithm to find topics with more concentrated word distributions.

[Figure 2: two panels plotting predicted $T$ against actual $T$ for DDM and TDM.]
Figure 2: Number of topics discovered by the DDM and TDM on synthetic datasets using (a) uniform prior and (b) exponentially decreasing prior on $T$. The results are averaged over 100 randomly generated datasets for each actual $T$. Error bars show plus/minus one standard deviation. Gray line indicates the true number of topics that generated the datasets.

5.1 Synthetic data

We first ran some tests on synthetic data to see how well the models estimate the number of topics. For this experiment, the actual number of topics $T$ was varied from 1 to 10, and for each value of $T$, we generated 100 random datasets with $D = 100$, $V = 200$, and $N = 100{,}000$. Each random dataset was created by first sampling $\theta$ from $\mathrm{Dir}(\vec{\theta}_d \mid 0.05)$ and $\phi$ from $\mathrm{Dir}(\vec{\phi}_t \mid 0.1)$. The observations were then sampled from $\theta$ and $\phi$. We set $K = 20$ and used the uniform prior $P(T) = \frac{1}{K}$ for $T = 1, \ldots, K$.

The estimated number of topics is shown in Figure 2(a). Both models were able to discover more topics as the actual number of topics increases. They tend to overestimate the number of topics because the initial value $\beta_{t,w} = \frac{1}{V}$ encourages topics with a smaller number of words. However, in both models, the modeler has direct control over the number of topics. If there is reason to believe the data come from a smaller number of topics, the modeler can change the prior distribution on $T$ accordingly, as is typical in a Bayesian framework. For this example, we also tested an exponentially decreasing prior $P(T) \propto e^{-T}$ for $T = 1, \ldots, K$. The results are shown in Figure 2(b). In this case, TDM shows a slight decrease compared to the uniform prior, whereas DDM produces an estimate that is close to the true number of topics.
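Pulling Sections 4.2-4.5 together, the following is a minimal Python sketch of the moment-matching step (7) and of the outer loop of Algorithm 1. The names are ours, and moments_fn stands in for a hypothetical implementation of Eqs. (10)-(15); this is an illustrative skeleton under those assumptions, not the released bmm code:

import numpy as np

def dirichlet_from_moments(m1, m1_sq):
    # Eq. (7): recover Dirichlet parameters from the first moments
    # m1[k] = E[x_k] and the single second moment m1_sq = E[x_1^2].
    scale = (m1[0] - m1_sq) / (m1_sq - m1[0] ** 2)
    return m1 * scale

def online_bmm(corpus, alpha, beta, gamma, moments_fn):
    # Skeleton of Algorithm 1: one pass over (d_n, w_n) observations.
    for d, w in corpus:
        # Step 4: exact moments of the one-step posterior, Eqs. (10)-(15).
        m = moments_fn(d, w, alpha, beta, gamma)
        # Step 5: match moments of the factorized approximation, Eqs. (7)-(8).
        alpha[d] = dirichlet_from_moments(m["theta_m1"][d], m["theta_m1sq"][d])
        for t in range(beta.shape[0]):
            beta[t] = dirichlet_from_moments(m["phi_m1"][t], m["phi_m1sq"][t])
        gamma = m["e_delta"]        # gamma_k = E[delta_{T,k}], Eq. (8)
    return alpha, beta, gamma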
5.2 Text modeling

We compare the two proposed models by using them to model the distributions of three real text corpora containing Reuters news articles, NIPS conference proceedings, and Yelp reviews. We also include online HDP (oHDP) in the comparisons, as well as the basic moment matching (basic MM) algorithm with different values of $T$. For online HDP, we used the gensim 0.10.3 [16] implementation with the default parameters except for the top-level truncation, which we set equal to the maximum number of topics we used for DDM and TDM. Because DDM and TDM do not estimate a global alpha as oHDP does, for oHDP we include the results with both uniform alpha (oHDP unif) and alpha that is learned (oHDP alpha).

We followed a similar experimental setup as in [22, 4]. Each dataset was divided into a training set $D_{\mathrm{train}}$ and a test set $D_{\mathrm{test}}$ based on document IDs. The words in the test set were further split into two subsets $W_1$ and $W_2$, where $W_1$ contains the words in the first half of each document in the test set, and $W_2$ contains the second half. The evaluation metric used is the per-word log likelihood
$$\mathcal{L} = \frac{\log p(W_2 \mid W_1, D_{\mathrm{train}})}{|W_2|}$$
where $|W_2|$ denotes the total number of tokens in $W_2$. For each experiment we also report the number of topics inferred by DDM and TDM. We do not report this number for online HDP because it is not returned by the implementation.

[Figure 3: two panels, (a) per-word test log likelihood against $T$, and (b) number of topics found against number of observations.]
Figure 3: Text modeling on Reuters-21578: (a) Per-word test log likelihood and (b) Number of topics found as a function of number of observations.

5.2.1 Reuters-21578

The Reuters-21578 corpus contains 21,578 Reuters news articles from 1987. For this dataset, we divided the data into training and test sets according to the LEWISSPLIT attribute that is available as part of the distribution at http://www.daviddlewis.com/resources/testcollections/reuters21578/. The text was passed through a stemmer, and stopwords and words appearing in five or fewer documents were removed. This resulted in a total of 1,307,468 tokens and a vocabulary of 7,720 distinct words. We chose $K$ to be 100 for both models with the uniform prior $P(T) = \frac{1}{K}$.

Figure 3(a) shows the experimental results. DDM discovered 39 topics while TDM found 36, and both achieved per-word log likelihood similar to the best models with fixed $T$, showing that they were able to automatically determine the number of topics necessary to model the data. While both models found a similar number of topics in the end, they progressed to the final values in different ways. Fig. 3(b) shows the number of topics found by the two models as a function of the number of observations. DDM shows a logarithmically increasing trend as more words are observed, whereas TDM follows a more irregular progression.

5.2.2 NIPS

We also tested the two models on 2,742 articles from the NIPS conference for the years 1988-2004. We used the raw text versions available at http://cs.nyu.edu/~roweis/data.html (1988-1999) and http://ai.stanford.edu/~gal/data.html (2000-2004). The first set was used as the training set and the second as the test set. The corpus was again passed through a stemmer, and stopwords and words appearing no more than 50 times were removed. After preprocessing we are left with 2,207,106 total words and a vocabulary of 4,383 unique words. For this dataset we used $K = 400$ with the exponentially decreasing prior.
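For reference, the per-word heldout log likelihood above can be approximated by plugging posterior-mean parameters into $p(w \mid d) = \sum_t \theta_{d,t}\phi_{t,w}$. The sketch below is our own plug-in approximation, one common way to evaluate this metric, and not necessarily the exact procedure used in the paper:

import numpy as np

def per_word_loglik(w2, theta, phi):
    # w2:    list of (d, w) pairs from the second half of each test document
    # theta: (D, K) document-topic distributions estimated from W1
    # phi:   (K, V) topic-word distributions estimated from Dtrain
    ll = sum(np.log(theta[d] @ phi[:, w]) for d, w in w2)
    return ll / len(w2)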
DDM discovered 54 topics, and TDM found 89 topics. Figure 4(a) shows the per-word log likelihood on the test set. In this experiment, both DDM and TDM came close to the optimal likelihood compared to basic MM.

5.2.3 Yelp

In our third experiment, we tested the models on a subset of the Yelp Academic Dataset (http://www.yelp.com/dataset_challenge). We took the 129,524 reviews in the dataset that were given to businesses in the Food category. The reviews were randomly split so that 70% were used for training and 30% for testing. Similar preprocessing was performed: the corpus was passed through a stemmer, and stopwords and words appearing no more than 50 times were removed. After preprocessing the corpus contains a total of 5,317,041 words and a vocabulary of 5,640 distinct words.

[Figure 4: two panels plotting per-word test log likelihood against $T$.]
Figure 4: Per-word test log likelihood of (a) NIPS and (b) Yelp.

For this dataset, we tested with $K = 100$ using the exponentially decreasing prior on $T$. Figure 4(b) shows the per-word log likelihood on the test set. DDM found the optimal number of topics, and both models achieved close to the best likelihood on the test set compared to basic MM.

5.2.4 Comparison with online HDP

Because DDM and TDM do not estimate the global alpha, in the experiments we compute the test likelihood using a uniform alpha. If we also use a uniform alpha for online HDP, DDM and TDM achieve higher test likelihood. However, online HDP is able to learn the global alpha, which results in higher likelihood. This is a shortcoming of our models, and we are exploring ways to estimate the global alpha.

5.3 Additional experimental results

Additional experimental results may be found in the supplement, including running time of the experiments and samples of topics discovered in the Reuters and NIPS corpora, as well as experiments on using the models as dimensionality reduction preprocessors in text classification.

6 Conclusions

In this paper we proposed two topic models that can be used when the number of topics is not known. Unlike nonparametric Bayesian models, the proposed models provide explicit control over the prior for the number of topics. We then presented an online learning algorithm based on Bayesian moment matching, and experiments showed that reasonable topics could be recovered using the proposed models. Additional experiments on text classification and visual inspection of the inferred topics show that the clusters discovered were indeed semantically meaningful. One unsolved problem is that the proposed models do not estimate the global alpha, resulting in lower test likelihood compared to online HDP, which is able to estimate alpha. Developing a robust way to estimate alpha will be the next step to improve the models.

References

[1] Anima Anandkumar, Dean P Foster, Daniel Hsu, Sham Kakade, and Yi-Kai Liu. A spectral algorithm for latent Dirichlet allocation. In NIPS, pages 926-934, 2012.
[2] Sanjeev Arora, Rong Ge, and Ankur Moitra. Learning topic models -- going beyond SVD. In Foundations of Computer Science, pages 1-10. IEEE, 2012.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] Michael Bryant and Erik B Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes.
In NIPS, pages 2699-2707, 2012.
[5] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In Conference on Learning Theory, pages 797-842, 2015.
[6] Sharon Goldwater, Mark Johnson, and Thomas L Griffiths. Interpolating between types and tokens by estimating power-law generators. In NIPS, pages 459-466, 2005.
[7] Tom Griffiths. Gibbs sampling in the generative model of latent Dirichlet allocation. 2002.
[8] Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In NIPS, pages 856-864, 2010.
[9] Daniel Hsu and Sham M Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Conference on Innovations in Theoretical Computer Science, pages 11-20. ACM, 2013.
[10] Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, and Animashree Anandkumar. Online tensor methods for learning latent variable models. Journal of Machine Learning Research, 16:2797-2835, 2015.
[11] Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, and Animashree Anandkumar. Fast detection of overlapping communities via online tensor methods. arXiv preprint arXiv:1309.0787, 2013.
[12] Thomas Minka and John Lafferty. Expectation-propagation for the generative aspect model. In UAI, pages 352-359, 2002.
[13] Thomas P Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362-369, 2001.
[14] Farheen Omar. Online Bayesian Learning in Probabilistic Graphical Models using Moment Matching with Applications. PhD thesis, David R. Cheriton School of Computer Science, University of Waterloo, 2016.
[15] Ian Porteous, David Newman, Alexander Ihler, Arthur Asuncion, Padhraic Smyth, and Max Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In ACM SIGKDD, pages 569-577, 2008.
[16] R. Řehůřek and P. Sojka. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May 2010. ELRA.
[17] Issei Sato, Kenichi Kurihara, and Hiroshi Nakagawa. Practical collapsed variational Bayes inference for hierarchical Dirichlet process. In ACM SIGKDD, pages 105-113. ACM, 2012.
[18] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566-1581, 2006.
[19] Yee W Teh, Kenichi Kurihara, and Max Welling. Collapsed variational inference for HDP. In NIPS, pages 1481-1488, 2007.
[20] Yee W Teh, David Newman, and Max Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, pages 1353-1360, 2006.
[21] Yee Whye Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985-992. Association for Computational Linguistics, 2006.
[22] C. Wang, J. Paisley, and D. Blei. Online variational inference for the hierarchical Dirichlet process. In G. Gordon, D. Dunson, and M. Dudík, editors, AISTATS, volume 15. JMLR W&CP, 2011.
[23] Chong Wang and David M Blei. Truncation-free online variational inference for Bayesian nonparametric models. In NIPS, pages 413-421, 2012.
[24] Chong Wang, John W Paisley, and David M Blei. Online variational inference for the hierarchical Dirichlet process. In AISTATS, pages 752-760, 2011.
On Mixtures of Markov Chains

Rishi Gupta* Stanford University, Stanford, CA 94305 rishig@cs.stanford.edu
Ravi Kumar Google Research, Mountain View, CA 94043 ravi.k53@gmail.com
Sergei Vassilvitskii Google Research, New York, NY 10011 sergeiv@google.com

Abstract

We study the problem of reconstructing a mixture of Markov chains from the trajectories generated by random walks through the state space. Under mild non-degeneracy conditions, we show that we can uniquely reconstruct the underlying chains by only considering trajectories of length three, which represent triples of states. Our algorithm is spectral in nature, and is easy to implement.

1 Introduction

Markov chains are a simple and incredibly rich tool for modeling, and act as a backbone in numerous applications, from PageRank for web search to language modeling for machine translation. While the true nature of the underlying behavior is rarely Markovian [6], it is nevertheless often a good mathematical assumption. In this paper, we consider the case where we are given observations from a mixture of $L$ Markov chains, each on the same $n$ states, with $n \ge 2L$. Each observation is a series of states, and is generated as follows: a Markov chain and starting state are selected from a distribution $S$, and then the selected Markov chain is followed for some number of steps. The goal is to recover $S$ and the transition matrices of the $L$ Markov chains from the observations.

When all of the observations follow from a single Markov chain (namely, when $L = 1$), recovering the mixture parameters is easy. A simple calculation shows that the empirical starting distribution and the empirical transition probabilities form the maximum likelihood Markov chain. So we are largely interested in the case when $L > 1$.

As a motivating example, consider the usage of a standard maps app on a phone. There are a number of different reasons one might use the app: to search for a nearby business, to get directions from one point to another, or just to orient oneself. However, the users of the app never specify an explicit intent; rather they swipe, type, zoom, etc., until they are satisfied. Each one of the latent intents can be modeled by a Markov chain on a small state space of actions. If the assignment of each session to an intent were explicit, recovering these Markov chains would simply reduce to several instances of the $L = 1$ case. Here we are interested in the unsupervised setting of finding the underlying chains when this assignment is unknown. This allows for a better understanding of usage patterns. For example:

- Common uses for the app that the designers had not expected, or had not expected to be common. For instance, maybe a good fraction of users (or user sessions) simply use the app to check the traffic.
- Whether different types of users use the app differently. For instance, experienced users might use the app differently than first time users, either due to having different goals, or due to accomplishing the same tasks more efficiently.
- Undiscoverable flows, with users ignoring a simple, but hidden menu setting, and instead using a convoluted path to accomplish the same goal.

* Part of this work was done while the author was visiting Google Research.
The question of untangling mixture models has received a lot of attention in a variety of different situations, particularly in the case of learning mixtures of Gaussians; see for example the seminal work of [8], as well as later work by [5, 11, 15] and the references therein. This is, to the best of our knowledge, the first work that looks at unraveling mixtures of Markov chains.

There are two immediate approaches to solving this problem. The first is to use the Expectation-Maximization (EM) algorithm [9]. The EM algorithm starts by guessing an initial set of parameters for the mixture, and then performs local improvements that increase the likelihood of the proposed solution. The EM algorithm is a useful benchmark and will converge to some local optimum, but it may be slow to get there [12], and has no guarantees on the quality of the final solution.

The second approach is to model the problem as a Hidden Markov Model (HMM), and employ the machinery for learning HMMs, particularly the recent tensor decomposition methods [2, 3, 10]. As in our case, this machinery relies on having more observed states than hidden states. Unfortunately, directly modeling a Markov chain mixture as an HMM (or as a mixture of HMMs, as in [13]) requires $nL$ hidden states for $n$ observed states. Given that, one could try adapting the tensor decomposition arguments from [3] to our problem, which is done in Section 4.3 of [14]. However, as the authors note, this requires accurate estimates for the distribution of trajectories (or trails) of length five, whereas our results only require estimates for the distribution of trails of length three. This is a large difference in the amount of data one might need to collect, as one would expect to need $\Omega(n^t)$ samples to estimate the distribution of trails of length $t$.

An entirely different approach is to assume a Dirichlet prior on the mixture, and model the problem as learning a mixture of Dirichlet distributions [14]. Besides requiring the Dirichlet prior, this method also requires very long trails. Finally, we would like to note a connection to the generic identifiability results for HMMs and various mixture models in [1]. Their results are existential rather than algorithmic, but dimension three also plays a central role.

Our contributions. We propose and study the problem of reconstructing a mixture of Markov chains from a set of observations, or trajectories. Let a $t$-trail be a trajectory of length $t$: a starting state chosen according to $S$ along with $t - 1$ steps along the appropriate Markov chain.

(i) We identify a weak non-degeneracy condition on mixtures of Markov chains and show that under that non-degeneracy condition, 3-trails are sufficient for recovering the underlying mixture parameters. We prove that for random instances, the non-degeneracy condition holds with probability 1.
(ii) Under the non-degeneracy condition, we give an efficient algorithm for uniquely recovering the mixture parameters given the exact distribution of 3-trails.
(iii) We show that our algorithm outperforms the most natural EM algorithm for the problem in some regimes, despite EM being orders of magnitude slower.

Organization. In Section 2 we present the necessary background material that will be used in the rest of the paper. In Section 3 we state and motivate the non-degeneracy condition that is sufficient for unique reconstruction. Using this assumption, in Section 4 we present our four-step algorithm for reconstruction.
In Section 5 we present our experimental results on synthetic and real data. In Section 6 we show that random instances are non-degenerate with probability 1.

2 Preliminaries

Let $[n] = \{1, \ldots, n\}$ be a state space. We consider Markov chains defined on $[n]$. For a Markov chain given by its $n \times n$ transition matrix $M$, let $M(i, j)$ denote the probability of moving from state $i$ to state $j$. By definition, $M$ is a stochastic matrix, $M(i, j) \ge 0$ and $\sum_j M(i, j) = 1$. (In general we use $A(i, j)$ to denote the $(i, j)$th entry of a matrix $A$.) For a matrix $A$, let $A^\top$ denote its transpose. Every $n \times n$ matrix $A$ of rank $r$ admits a singular value decomposition (SVD) of the form $A = U\Sigma V^\top$ where $U$ and $V$ are $n \times r$ orthogonal matrices and $\Sigma$ is an $r \times r$ diagonal matrix with non-negative entries. For an $L \times n$ matrix $B$ of full rank, its right pseudoinverse $B^{-1}$ is an $n \times L$ matrix of full rank such that $BB^{-1} = I$; it is a standard fact that pseudoinverses exist and can be computed efficiently when $n \ge L$.

We now formally define a mixture of Markov chains $(\mathcal{M}, S)$. Let $L \ge 1$ be an integer. Let $\mathcal{M} = \{M^1, \ldots, M^L\}$ be $L$ transition matrices, all defined on $[n]$. Let $S = \{s^1, \ldots, s^L\}$ be a corresponding set of positive $n$-dimensional vectors of starting probabilities such that $\sum_{\ell,i} s^\ell_i = 1$. Given $\mathcal{M}$ and $S$, a $t$-trail is generated as follows: first pick the chain $\ell$ and the starting state $i$ with probability $s^\ell_i$, and then perform a random walk according to the transition matrix $M^\ell$, starting from $i$, for $t - 1$ steps. Throughout, we use $i, j, k$ to denote states in $[n]$ and $\ell$ to denote a particular chain. Let $\mathbf{1}_n$ be a column vector of $n$ 1's.

Definition 1 (Reconstructing a Mixture of Markov Chains). Given a (large enough) set of trails generated by a mixture of Markov chains and an $L > 1$, find the parameters $\mathcal{M}$ and $S$ of the mixture.

Note that the number of parameters is $O(n^2 \cdot L)$. In this paper, we focus on a seemingly restricted version of the reconstruction problem, where all of the given trails are of length three, i.e., every trail is of the form $i \to j \to k$ for some three states $i, j, k \in [n]$. Surprisingly, we show that 3-trails are sufficient for perfect reconstruction. By the definition of mixtures, the probability of generating a given 3-trail $i \to j \to k$ is
$$\sum_{\ell} s^\ell_i \cdot M^\ell(i, j) \cdot M^\ell(j, k), \quad (1)$$
which captures the stochastic process of choosing a particular chain $\ell$ using $S$ and taking two steps in $M^\ell$. Since we only observe the trails, the choice of the chain $\ell$ in the above process is latent.

For each $j \in [n]$, let $O_j$ be an $n \times n$ matrix such that $O_j(i, k)$ equals the value in (1). It is easy to see that using $O((n^3 \log n)/\epsilon^2)$ sample trails, every entry in $O_j$ for every $j$ is approximated to within an additive $\epsilon$. For the rest of the paper, we assume we know each $O_j(i, k)$ exactly, rather than an approximation of it from samples.

We now give a simple decomposition of $O_j$ in terms of the transition matrices in $\mathcal{M}$ and the starting probabilities in $S$. Let $P_j$ be the $L \times n$ matrix whose $(\ell, i)$th entry denotes the probability of using chain $\ell$, starting in state $i$, and transitioning to state $j$, i.e., $P_j(\ell, i) = s^\ell_i \cdot M^\ell(i, j)$. In a similar manner, let $Q_j$ be the $L \times n$ matrix whose $(\ell, k)$th entry denotes the probability of starting in state $j$, and transitioning to state $k$ under chain $\ell$, i.e., $Q_j(\ell, k) = s^\ell_j \cdot M^\ell(j, k)$. Finally, let $S_j = \mathrm{diag}(s^1_j, \ldots, s^L_j)$ be the $L \times L$ diagonal matrix of starting probabilities in state $j$. Then,
$$O_j = P_j^\top \cdot S_j^{-1} \cdot Q_j. \quad (2)$$
This decomposition will form the key to our analysis.
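To make the model and the decomposition (2) concrete, the following is a minimal NumPy sketch of trail generation and of building each $O_j$ from the mixture parameters; the names and array layouts are our own, not from the paper:

import numpy as np

def sample_trails(M, s, t, n_trails, rng=None):
    # M: (L, n, n) transition matrices M^l; s: (L, n) starting probabilities
    # s^l_i with s.sum() == 1. Returns an (n_trails, t) array of t-trails.
    rng = np.random.default_rng(rng)
    L, n, _ = M.shape
    trails = np.empty((n_trails, t), dtype=int)
    for r in range(n_trails):
        l, i = divmod(rng.choice(L * n, p=s.ravel()), n)   # pick (chain, start)
        trails[r, 0] = i
        for step in range(1, t):
            i = rng.choice(n, p=M[l, i])                   # one step of M^l
            trails[r, step] = i
    return trails

def trail3_matrices(M, s):
    # Build O_j of Eq. (2) for every middle state j.
    L, n, _ = M.shape
    O = np.zeros((n, n, n))
    for j in range(n):
        P_j = s * M[:, :, j]              # P_j(l, i) = s^l_i M^l(i, j)
        Q_j = s[:, [j]] * M[:, j, :]      # Q_j(l, k) = s^l_j M^l(j, k)
        O[j] = P_j.T @ np.diag(1.0 / s[:, j]) @ Q_j
    return O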
3 Conditions for unique reconstruction

Before we delve into the details of the algorithm, we first identify a condition on the mixture $(\mathcal{M}, S)$ such that there is a unique solution to the reconstruction problem when we consider trails of length three. (To appreciate the need for such a condition, consider a mixture where two of the matrices $M^\ell$ and $M^{\ell'}$ in $\mathcal{M}$ are identical. Then for a fixed vector $v$, any $s^\ell$ and $s^{\ell'}$ with $s^\ell + s^{\ell'} = v$ will give the same observations, regardless of the length of the trails.)

To motivate the condition we require, consider again the sets of $L \times n$ matrices $P = \{P_1, \ldots, P_n\}$ and $Q = \{Q_1, \ldots, Q_n\}$ as defined in (2). Together these matrices capture the $n^2 L - 1$ parameters of the problem, namely, $n - 1$ for each of the $n$ rows of each of the $L$ transition matrices $M^\ell$, and $nL - 1$ parameters defining $S$. However, together $P$ and $Q$ have $2n^2 L$ entries, implying algebraic dependencies between them.

Definition 2 (Shuffle pairs). Two ordered sets $X = \{X_1, \ldots, X_n\}$ and $Y = \{Y_1, \ldots, Y_n\}$ of $L \times n$ matrices are shuffle pairs if the $j$th column of $X_i$ is identical to the $i$th column of $Y_j$ for all $i, j \in [n]$.

Note that $P$ and $Q$ are shuffle pairs. We state an equivalent way of specifying this definition. Consider a $2nL \times n^2$ matrix $A(P, Q)$ that consists of a top and a bottom half. The top half is an $nL \times n^2$ block diagonal matrix with $P_i$ as the $i$th block. The bottom half is a concatenation of $n$ different $nL \times n$ block diagonal matrices; the $i$th block of the $j$th matrix is the $j$th column of $-Q_i$. A representation of $A$ is given in Figure 1. As intuition, note that in each column, the two blocks of $L$ entries are the same up to negation. Let $F$ be the $L \times 2nL$ matrix consisting of $2n$ $L \times L$ identity matrices in a row. It is straightforward to see that $P$ and $Q$ are shuffle pairs if and only if $F \cdot A(P, Q) = 0$. Let the co-kernel of a matrix $X$ be the vector space comprising the vectors $v$ for which $vX = 0$. We have the following definition.

Figure 1: $A(P, Q)$ for $L = 2$, $n = 4$. When $P$ and $Q$ are shuffle pairs, each column has two copies of the same $L$-dimensional vector (up to negation).

Informally, $\mathcal{M}$ is well-distributed if there are no non-trivial vectors $v$ for which $v \cdot A(P, Q) = 0$.

Definition 3 (Well-distributed). The set of matrices $\mathcal{M}$ is well-distributed if the co-kernel of $A(P, Q)$ has rank $L$. Equivalently, $\mathcal{M}$ is well-distributed if the co-kernel of $A(P, Q)$ is spanned by the rows of $F$.

Section 4 shows how to uniquely recover a mixture from the 3-trail probabilities $O_j$ when $\mathcal{M}$ is well-distributed and $S$ has only non-zero entries. Section 6 shows that nearly all $\mathcal{M}$ are well-distributed, or more formally, that the set of non-well-distributed $\mathcal{M}$ has (Lebesgue) measure 0.

4 Reconstruction algorithm

We present an algorithm to recover a mixture from its induced distribution on 3-trails. We assume for the rest of the section that $\mathcal{M}$ is well-distributed (see Definition 3) and $S$ has only non-zero entries, which also means $P_j$, $Q_j$, and $O_j$ have rank $L$ for each $j$. At a high level, the algorithm begins by performing an SVD of each $O_j$, thus recovering both $P_j$ and $Q_j$, as in (2), up to unknown rotation and scaling. The key to undoing the rotation will be the fact that the sets of matrices $P$ and $Q$ are shuffle pairs, and hence have algebraic dependencies. More specifically, our algorithm consists of four high-level steps. We first list the steps and provide an informal overview; later we will describe each step in full detail.
(i) Matrix decomposition: Using SVD, we compute a decomposition $O_j = U_j \Sigma_j V_j^\top$ and let $P_j' = U_j^\top$ and $Q_j' = \Sigma_j V_j^\top$, so that $O_j = P_j'^\top Q_j'$. These are the initial guesses at $(P_j, Q_j)$. We prove in Lemma 4 that there exist $L \times L$ matrices $Y_j$ and $Z_j$ so that $P_j = Y_j P_j'$ and $Q_j = Z_j Q_j'$ for each $j \in [n]$.
(ii) Co-kernel: Let $P' = \{P_1', \ldots, P_n'\}$ and $Q' = \{Q_1', \ldots, Q_n'\}$. We compute the co-kernel of the matrix $A(P', Q')$ as defined in Section 3, to obtain matrices $Y_j'$ and $Z_j'$. We prove that there is a single matrix $R$ for which $Y_j = RY_j'$ and $Z_j = RZ_j'$ for all $j$.
(iii) Diagonalization: Let $R'$ be the matrix of eigenvectors of $(Z_1'Y_1')^{-1}(Z_2'Y_2')$. We prove that there is a permutation matrix $\Pi$ and a diagonal matrix $D$ such that $R = D\Pi R'$.
(iv) Two-trail matching: Given $O_j$ it is easy to compute the probability distribution of the mixture over 2-trails. We use these to solve for $D$, and using $D$, compute $R$, $Y_j$, $P_j$, and $S_j$ for each $j$.

4.1 Matrix decomposition

From the definition, both $P_j'$ and $Q_j'$ are $L \times n$ matrices of full rank. The following lemma states that the SVD of the product of two matrices $A$ and $B$ returns the original matrices up to a change of basis.

Lemma 4. Let $A, B, C, D$ be $L \times n$ matrices of full rank, such that $A^\top B = C^\top D$. Then there is an $L \times L$ matrix $X$ of full rank such that $C = X^{-1} A$ and $D = XB$.

Proof. Note that $A^\top = A^\top BB^{-1} = C^\top DB^{-1} = C^\top W$ for $W = DB^{-1}$. Since $A$ has full rank, $W$ must as well. We then get $C^\top D = A^\top B = C^\top WB$, and since $C^\top$ has full column rank, $D = WB$. Setting $X = W$ completes the proof.

Since $O_j = P_j^\top (S_j^{-1} Q_j)$ and $O_j = P_j'^\top Q_j'$, Lemma 4 implies that there exists an $L \times L$ matrix $X_j$ of full rank such that $P_j = X_j^{-1} P_j'$ and $Q_j = S_j X_j Q_j'$. Let $Y_j = X_j^{-1}$, and let $Z_j = S_j X_j$. Note that both $Y_j$ and $Z_j$ have full rank, for each $j$. Once we have $Y_j$ and $Z_j$, we can easily compute both $P_j$ and $S_j$, so we have reduced our problem to finding $Y_j$ and $Z_j$.

4.2 Co-kernel

Since $(P, Q)$ is a shuffle pair, $((Y_j P_j')_{j \in [n]}, (Z_j Q_j')_{j \in [n]})$ is also a shuffle pair. We can write the latter fact as $B(Y, Z)\, A(P', Q') = 0$, where $B(Y, Z)$ is the $L \times 2nL$ matrix comprising $2n$ matrices concatenated together; first $Y_j$ for each $j$, and then $Z_j$ for each $j$. We know $A(P', Q')$ from the matrix decomposition step, and we are trying to find $B(Y, Z)$.

By well-distributedness, the co-kernel of $A(P, Q)$ has rank $L$. Let $\mathcal{D}$ be the $2nL \times 2nL$ block diagonal matrix with the diagonal entries $(Y_1^{-1}, \ldots, Y_n^{-1}, Z_1^{-1}, \ldots, Z_n^{-1})$. Then $A(P', Q') = \mathcal{D}\, A(P, Q)$. Since $\mathcal{D}$ has full rank, the co-kernel of $A(P', Q')$ has rank $L$ as well. We compute an arbitrary basis of the co-kernel of $A(P', Q')$,² and write it as an $L \times 2nL$ matrix as an initial guess $B(Y', Z')$ for $B(Y, Z)$. Since $B(Y, Z)$ lies in the co-kernel of $A(P', Q')$, and has exactly $L$ rows, there exists an $L \times L$ matrix $R$ such that $B(Y, Z) = R\, B(Y', Z')$, or equivalently, such that $Y_j = RY_j'$ and $Z_j = RZ_j'$ for every $j$. Since $Y_j$ and $Z_j$ have full rank, so does $R$. Now our problem is reduced to computing $R$.

4.3 Diagonalization

Recall from the matrix decomposition step that there exist matrices $X_j$ such that $Y_j = X_j^{-1}$ and $Z_j = S_j X_j$. Hence $Z_j' Y_j' = (R^{-1} Z_j)(Y_j R^{-1}) = R^{-1} S_j R^{-1}$. It seems difficult to compute $R$ directly from equations of the form $R^{-1} S_j R^{-1}$, but we can multiply any two of them together to get, e.g., $(Z_1'Y_1')^{-1}(Z_2'Y_2') = RS_1^{-1} S_2 R^{-1}$. Since $S_1^{-1} S_2$ is a diagonal matrix, we can diagonalize $RS_1^{-1} S_2 R^{-1}$ as a step towards computing $R$. Let $R'$ be the matrix of eigenvectors of $RS_1^{-1} S_2 R^{-1}$. Now, $R$ is determined up to a scaling and ordering of the eigenvectors. In other words, there is a permutation matrix $\Pi$ and diagonal matrix $D$ such that $R = D\Pi R'$.

4.4 Two-trail matching

First, $O_j \mathbf{1}_n = P_j^\top S_j^{-1} Q_j \mathbf{1}_n = P_j^\top \mathbf{1}_L$ for each $j$, since each row of $S_j^{-1} Q_j$ is simply the set of transition probabilities out of a particular Markov chain and state. Another way to see it is that both $O_j \mathbf{1}_n$ and $P_j^\top \mathbf{1}_L$ are vectors whose $i$th coordinate is the probability of the trail $i \to j$. From the first three steps of the algorithm, we also have $P_j = Y_j P_j' = RY_j' P_j' = D\Pi R' Y_j' P_j'$. Hence
$$\mathbf{1}_L^\top D\Pi = \mathbf{1}_L^\top P_1 (R'Y_1'P_1')^{-1} = (O_1 \mathbf{1}_n)^\top (R'Y_1'P_1')^{-1},$$
where the inverse is a pseudoinverse. We arbitrarily fix $\Pi$, from which we can compute $D$, $R$, $Y_j$, and finally $P_j$ for each $j$. From the diagonalization step (Section 4.3), we can also compute $S_j = R(Z_j'Y_j')R$ for each $j$. Note that the algorithm implicitly includes a proof of uniqueness, up to a setting of $\Pi$. Different orderings of $\Pi$ correspond to different orderings of the $M^\ell$ in $\mathcal{M}$.

² For instance, by taking the SVD of $A(P', Q')$, and looking at the singular vectors.

5 Experiments

We have presented an algorithm for reconstructing a mixture of Markov chains from the observations, assuming the observation matrices are known exactly. In this section we demonstrate that the algorithm is efficient, and performs well even when we use empirical observations. In addition, we also compare its performance against the most natural EM algorithm for the reconstruction problem.

Synthetic data. We begin by generating well-distributed instances $\mathcal{M}$ and $S$. Let $\mathcal{D}_n$ be the uniform distribution over the $n$-dimensional unit simplex, namely, the uniform distribution over vectors in $\mathbb{R}^n$ whose coordinates are non-negative and sum to 1. For a specific $n$ and $L$, we generate an instance $(\mathcal{M}, S)$ as follows. For each state $i$ and Markov chain $M^\ell$, the set of transition probabilities leaving $i$ is distributed as $\mathcal{D}_n$. We draw each $s^\ell$ from $\mathcal{D}_n$ as well, and then divide by $L$, so that the sum over all $s^\ell(i)$ is 1. In other words, each trail is equally likely to come from any of the $L$ Markov chains. This restriction has little effect on our algorithm, but is needed to make EM tractable. For each instance, we generate $T$ samples of 3-trails. The results that we report are the medians of 100 different runs.
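As a concrete illustration before turning to the metrics, step (i) of the algorithm reduces to a rank-$L$ truncated SVD of each $O_j$. The following is our own minimal NumPy sketch of that step, not the authors' implementation:

import numpy as np

def decompose_O(O, L):
    # Step (i): for each O_j, a rank-L SVD gives initial guesses (P'_j, Q'_j)
    # with O_j = P'_j^T Q'_j; Lemma 4 says these match (P_j, Q_j) up to an
    # unknown L x L change of basis.
    P0, Q0 = [], []
    for O_j in O:
        U, sv, Vt = np.linalg.svd(O_j)
        P0.append(U[:, :L].T)                 # P'_j = U_j^T
        Q0.append(sv[:L, None] * Vt[:L])      # Q'_j = Sigma_j V_j^T
    return P0, Q0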
Now, R is determined up to a scaling and ordering of the eigenvectors. In other words, there is a permutation matrix ? and diagonal matrix D such that R = D?R0 . 4.4 Two-trail matching First, Oj 1n = Pj Sj?1 Qj 1n = Pj 1L for each j, since each row of Sj?1 Qj is simply the set of transition probabilities out of a particular Markov chain and state. Another way to see it is that both Oj 1n and Pj 1L are vectors whose ith coordinate is the probability of the trail i ? j. From the first three steps of the algorithm, we also have Pj = Yj Pj0 = RYj0 Pj0 = D?R0 Yj0 Pj0 . Hence 1L D? = 1L P1 (R0 Y10 P10 )?1 = O1 1n (R0 Y10 P10 )?1 , where the inverse is a pseudoinverse. We arbitrarily fix ?, from which we can compute D, R, Yj , and finally Pj for each j. From the diagonalization step (Section 4.3), we can also compute Sj = R(Zj0 Yj0 )R for each j. Note that the algorithm implicitly includes a proof of uniqueness, up to a setting of ?. Different orderings of ? correspond to different orderings of M ` in M. 2 For instance, by taking the SVD of A(P 0 , Q0 ), and looking at the singular vectors. 5 5 Experiments We have presented an algorithm for reconstructing a mixture of Markov chains from the observations, assuming the observation matrices are known exactly. In this section we demonstrate that the algorithm is efficient, and performs well even when we use empirical observations. In addition, we also compare its performance against the most natural EM algorithm for the reconstruction problem. Synthetic data. We begin by generating well distributed instances M and S. Let Dn be the uniform distribution over the n-dimensional unit simplex, namely, the uniform distribution over vectors in Rn whose coordinates are non-negative and sum to 1. For a specific n and L, we generate an instance (M, S) as follows. For each state i and Markov chain M ` , the set of transition probabilities leaving i is distributed as Dn . We draw each s` from Dn as well, and then divide by L, so that the sum over all s` (i) is 1. In other words, each trail is equally likely to come from any of the L Markov chains. This restriction has little effect on our algorithm, but is needed to make EM tractable. For each instance, we generate T samples of 3-trails. The results that we report are the medians of 100 different runs. Metric for synthetic data. Our goal is exact recovery of the underlying instance M. Given two n ? n matrices A and B, the error is the P average total variation distance between the transition probabilities: error(A, B) = 1/(2n) ? i,j |A(i, j) ? B(i, j)|. Given a pair of instances M = {M 1 , . . . , M L } and N = {N 1 , . . . , N L } on the same state space [n], the recovery error is the minimum average error over all matchings of chains in N to M. Let ? be a permutation on [L], then: 1X recovery error(M, N ) = min error(M ` , N ?(`) ). ? L ` ` p Given all the pairwise errors error(M , N ), this minimum can be computed in time O(L3 ) by the Hungarian algorithm. Note that the recovery error ranges from 0 to 1. Real data. We use the last.fm 1K dataset3 , which contains the list of songs listened by heavy users of Last.Fm. We use the top 25 artist genres4 as the states of the Markov chain. We consider the ten heaviest users in the data set, and for each user, consider the first 3001 state transitions that change their state. We break each sequence into 3000 3-trails. Each user naturally defines a Markov chain on the genres, and the goal is to recover these individual chains from the observed mixture of 3-trails. 
Metric for real data. Given a 3-trail from one of the users, our goal is to predict which user the 3-trail came from. Specifically, given a 3-trail $t$ and a mixture of Markov chains $(\mathcal{M}, S)$, we assign $t$ to the Markov chain most likely to have generated it. A recovered mixture $(\mathcal{M}, S)$ thereby partitions the observed 3-trails into $L$ groups. The prediction error is the minimum, over all matchings between groups and users, of the fraction of trails that are matched to the wrong user. The prediction error ranges from 0 to $1 - 1/L$.

Handling approximations. Because the algorithm operates on real data, rather than perfect observation matrices, we make two minor modifications to make it more robust. First, in the diagonalization step (Section 4.3), we sum $(Z_i'Y_i')^{-1}(Z_{i+1}'Y_{i+1}')$ over all $i$ before diagonalizing to estimate $R'$, instead of just using $i = 1$. Second, due to noise, the matrices $\mathcal{M}$ that we recover at the end need not be stochastic. Following the work of [7] we normalize the values by first taking absolute values of all entries, and then normalizing so that each of the columns sums to 1.

Baseline. We turn to EM as a practical baseline for this reconstruction problem. In our implementation, we continue running EM until the log likelihood changes by less than $10^{-7}$ in each iteration; this corresponds to roughly 200-1000 iterations. Although EM continues to improve its solution past this point, even at the $10^{-7}$ cutoff, it is already 10-50x slower than the algorithm we propose.

5.1 Recovery and prediction error

³ http://mtg.upf.edu/static/datasets/last.fm/lastfm-dataset-1K.tar.gz
⁴ http://static.echonest.com/Lastfm-ArtistTags2007.tar.gz

[Figure 3: three panels of median recovery/prediction error, comparing the algorithm with EM.]
Figure 3: (a) Performance of EM and our algorithm vs number of samples (b) Performance of EM and our algorithm vs L (synthetic data) (c) Performance of EM and our algorithm (real data)

For the synthetic data, we fix $n = 6$ and $L = 3$, and for each of the 100 instances generate a progressively larger set of samples. Recall that the number of unknown parameters grows as $\Theta(n^2 L)$, so even this relatively simple setting corresponds to over 100 unknown parameters. Figure 3(a) shows the median recovery error of both approaches. It is clear that the proposed method significantly outperforms the EM approach, routinely achieving errors 10-90% lower. Furthermore, while we did not make significant attempts to speed up EM, it is already over 10x slower than our algorithm at $n = 6$ and $L = 3$, and becomes even slower as $n$ and $L$ grow.

Figure 2: Performance of the algorithm as a function of $n$ and $L$ for a fixed number of samples.

In Figure 3(b) we study the error as a function of $L$. Our approach is significantly faster, and easily outperforms EM at 100 iterations. Running EM for 1000 iterations results in prediction error on par with our algorithm, but takes orders of magnitude more time to complete.

For the real data, there are $n = 25$ states, and we tried $L = 4, \ldots, 10$ for the number of users. We run EM for 500 iterations and show the results in Figure 3(c). While our algorithm slightly underperforms EM, it is significantly faster in practice.
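For the second robustness fix, a minimal sketch of the normalization (our own code; we renormalize rows, matching the row-stochastic $M(i, j)$ convention of Section 2, even though the text above phrases it in terms of columns):

import numpy as np

def make_stochastic(M_hat):
    # Take absolute values, then rescale so each set of outgoing transition
    # probabilities sums to 1 again.
    A = np.abs(M_hat)
    return A / A.sum(axis=-1, keepdims=True)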
5.2 Dependence on n and L

To investigate the dependence of our approach on the size of the input, namely $n$ and $L$, we fix the number of samples to $10^8$ but vary both the number of states from 6 to 30, as well as the number of chains from 3 to 9. Recall that the number of parameters grows as $n^2 L$; therefore, the largest examples have almost 1000 parameters that we are trying to fit. We plot the results in Figure 2.

As expected, the error grows linearly with the number of chains. This is expected: since we are keeping the number of samples fixed, the relative error (from the true observations) grows as well. It is therefore remarkable that the error grows only linearly with $L$. We see more interesting behavior with respect to $n$. Recall that the proofs required $n \ge 2L$. Empirically we see that at $n = 2L$ the approach is relatively brittle, and errors are relatively high. However, as $n$ increases past that, we see the recovery error stabilize. Explaining this behavior formally is an interesting open question.

6 Analysis

We now show that nearly all $\mathcal{M}$ are well-distributed (see Definition 3), or more formally, that the set of non-well-distributed $\mathcal{M}$ has (Lebesgue) measure 0 for every $L > 1$ and $n \ge 2L$. We first introduce some notation. All arrays and indices are 1-indexed. In previous sections, we have interpreted $i, j, k$, and $\ell$ as states or as indices of a mixture; in this section we drop these interpretations and just use them as generic indices.

For vectors $v_1, \ldots, v_n \in \mathbb{R}^L$, let $v_{[n]}$ denote $(v_1, \ldots, v_n)$, and let $\mu(v_1, \ldots, v_n)$ denote the $v_i$'s concatenated together to form a vector in $\mathbb{R}^{nL}$. Let $v_i[j]$ denote the $j$th coordinate of vector $v_i$. We first show that there exists at least one well-distributed $P$ for each $n$ and $L$.

Lemma 5 (Existence of a well-distributed $P$). For every $n$ and $L$ with $n \ge 2L$, there exists a $P$ for which the co-kernel of $A(P, Q)$ has rank $L$.

Proof. It is sufficient to show it for $n = 2L$, since for larger $n$ we can pad with zeros. Also, recall that $F \cdot A(P, Q) = 0$ for any $P$, where $F$ is the $L \times 2nL$ matrix consisting of $2n$ identity matrices concatenated together. So the co-kernel of any $A(P, Q)$ has rank at least $L$, and we just need to show that there exists a $P$ where the co-kernel of $A(P, Q)$ has rank at most $L$.

Now, let $e_\ell$ be the $\ell$th basis vector in $\mathbb{R}^L$. Let $P^* = (P_1^*, \ldots, P_n^*)$, and let $p^*_{ij}$ denote the $j$th column of $P_i^*$. We set $p^*_{ij}$ to the $(i, j)$th entry of a $2L \times 2L$ grid of basis vectors given formally by
$$p^*_{ij} = \begin{cases} e_{j-i+1} & \text{if } i \le L \text{ or } j \le L \\ e_{j-i} & \text{if } i, j > L \end{cases}$$
where subscripts are taken mod $L$. Note that we can split the grid into four $L \times L$ blocks $\begin{pmatrix} E & E \\ E & E' \end{pmatrix}$ where $E'$ is a horizontal "rotation" of $E$.

Now, let $a_{[n]}, b_{[n]}$ be any vectors in $\mathbb{R}^L$ such that $v = \mu(a_1, \ldots, a_n, b_1, \ldots, b_n) \in \mathbb{R}^{2nL}$ is in the co-kernel of $A(P^*, Q^*)$. Recall this means $v \cdot A(P^*, Q^*) = 0$. Writing out the matrix $A$, it is not too hard to see that this holds if and only if $\langle a_i, p^*_{ij} \rangle = \langle b_j, p^*_{ij} \rangle$ for each $i$ and $j$. Consider the $i$ and $j$ where $p^*_{ij} = e_1$. For each $k \in [L]$, we have $a_k[1] = b_k[1]$ from the upper left quadrant, $a_k[1] = b_{L+k}[1]$ from the upper right quadrant, $a_{L+k}[1] = b_k[1]$ from the lower left quadrant, and $a_{L+k}[1] = b_{L+(k+1 \bmod L)}[1]$ from the lower right quadrant.
It is easy to see that these combine to imply that $a_i[1] = b_j[1]$ for all $i, j \in [n]$. A similar argument for each $l \in [L]$ shows that $a_i[l] = b_j[l]$ for all $i$, $j$, and $l$. Equivalently, $a_i = b_j$ for each $i$ and $j$, which means that $v$ lives in a subspace of dimension $L$, as desired.

We now bootstrap from our one example to show that almost all $P$ are well-distributed.

Theorem 6 (Almost all $P$ are well-distributed). The set of non-well-distributed $P$ has Lebesgue measure 0 for every $n$ and $L$ with $n \ge 2L$.

Proof. Let $A_0(P, Q)$ be all but the last $L$ rows of $A(P, Q)$. For any $P$, let $h(P) = \det(A_0(P, Q)\, A_0(P, Q)^\top)$. Note that $h(P)$ is non-zero if and only if $P$ is well-distributed. Let $P^*$ be the $P^*$ from Lemma 5. Since $A_0(P^*, Q^*)$ has full row rank, $h(P^*) \neq 0$. Since $h$ is a polynomial function of the entries of $P$, and $h$ is non-zero somewhere, $h$ is non-zero almost everywhere [4].

7 Conclusions

In this paper we considered the problem of reconstructing Markov chain mixtures from given observation trails. We showed that unique reconstruction is algorithmically possible under a mild technical condition on the "well-separatedness" of the chains. While our condition is sufficient, we conjecture it is also necessary; proving this is an interesting research direction. Extending our analysis to work for the noisy case is also a plausible research direction, though we believe the corresponding analysis could be quite challenging.

References

[1] E. S. Allman, C. Matias, and J. A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, pages 3099-3132, 2009.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. JMLR, 15(1):2773-2832, 2014.
[3] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, pages 33.1-33.34, 2012.
[4] R. Caron and T. Traynor. The zero set of a polynomial. WSMR Report 05-02, 2005.
[5] K. Chaudhuri and S. Rao. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9-20, 2008.
[6] F. Chierichetti, R. Kumar, P. Raghavan, and T. Sarlos. Are web users really Markovian? In WWW, pages 609-618, 2012.
[7] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Experiments with spectral learning of latent-variable PCFGs. In NAACL, pages 148-157, 2013.
[8] S. Dasgupta. Learning mixtures of Gaussians. In FOCS, pages 634-644, 1999.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.
[10] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. JCSS, 78(5):1460-1480, 2012.
[11] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. In FOCS, pages 93-102, 2010.
[12] R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review, 26:195-239, 1984.
[13] C. Subakan, J. Traa, and P. Smaragdis. Spectral learning of mixture of hidden Markov models. In NIPS, pages 2249-2257, 2014.
[14] Y. C. Subakan. Probabilistic time series classification. Master's thesis, Boğaziçi University, 2011.
[15] S. Vempala and G. Wang. A spectral algorithm for learning mixture models. JCSS, 68(4):841-860, 2004.
5,612
6,079
High Dimensional Structured Superposition Models

Arindam Banerjee, Dept of Computer Science & Engineering, University of Minnesota, Twin Cities, banerjee@cs.umn.edu
Qilong Gu, Dept of Computer Science & Engineering, University of Minnesota, Twin Cities, guxxx396@cs.umn.edu

Abstract

High dimensional superposition models characterize observations using parameters which can be written as a sum of multiple component parameters, each with its own structure, e.g., sum of low rank and sparse matrices, sum of sparse and rotated sparse vectors, etc. In this paper, we consider general superposition models which allow a sum of any number of component parameters, where each component structure can be characterized by any norm. We present a simple estimator for such models, give a geometric condition under which the components can be accurately estimated, characterize the sample complexity of the estimator, and give high probability non-asymptotic bounds on the componentwise estimation error. We use tools from empirical processes and generic chaining for the statistical analysis, and our results, which substantially generalize prior work on superposition models, are in terms of Gaussian widths of suitable sets.

1 Introduction

For high-dimensional structured estimation problems [3, 15], considerable advances have been made in accurately estimating a sparse or structured parameter θ ∈ R^p even when the sample size n is far smaller than the ambient dimensionality of θ, i.e., n ≪ p. Instead of a single structure, such as sparsity or low rank, recent years have seen interest in parameter estimation when the parameter θ is a superposition or sum of multiple different structures, i.e., θ = Σ_{i=1}^k θ_i, where θ₁ may be sparse, θ₂ may be low rank, and so on [1, 6, 7, 9, 11, 12, 13, 23, 24]. In this paper, we substantially generalize the non-asymptotic estimation error analysis for such superposition models such that (i) the parameter θ can be the superposition of any number of component parameters θ_i, and (ii) the structure in each θ_i can be captured by any suitable norm R_i(θ_i). We will analyze the following linear measurement based superposition model

  y = X Σ_{i=1}^k θ_i + ω,   (1)

where X ∈ R^{n×p} is a random sub-Gaussian design or compressive matrix, k is the number of components, θ_i is one component of the unknown parameters, y ∈ R^n is the response vector, and ω ∈ R^n is random noise independent of X. The structure in each component θ_i is captured by any suitable norm R_i(·), such that R_i(θ_i) has a small value, e.g., sparsity captured by ||θ_i||₁, low rank (for matrix θ_i) captured by the nuclear norm ||θ_i||_*, etc. Popular models such as Morphological Component Analysis (MCA) [10] and Robust PCA [6, 9] can be viewed as special cases of this framework (see Section D).

The superposition estimation problem can be posed as follows: given (y, X) generated following (1), estimate component parameters {θ̂_i} such that all the componentwise estimation errors Δ_i = θ̂_i − θ_i*, where θ_i* is the population mean, are small. Ideally, we want to obtain high-probability non-asymptotic bounds on the total componentwise error measured as Σ_{i=1}^k ||θ̂_i − θ_i*||₂, with the bound improving (getting smaller) with increase in the number n of samples.

We propose the following estimator for the superposition model in (1):

  min_{θ₁,…,θ_k}  || y − X Σ_{i=1}^k θ_i ||₂²   s.t.  R_i(θ_i) ≤ α_i,  i = 1, …, k,   (2)

where α_i are suitable constants.
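To make (1) and (2) concrete, here is a minimal Python sketch (ours, not the authors' implementation; their experiments in Section 7 use MATLAB) that generates data from a two-component superposition and solves (2) by projected gradient descent, assuming L1-ball constraints for both components as in the MCA example of Section 6. The dimensions, noise level, and iteration count are arbitrary assumptions.

import numpy as np

def project_l1(v, alpha):
    # Euclidean projection onto {x : ||x||_1 <= alpha} (Duchi et al., 2008).
    if np.abs(v).sum() <= alpha:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - alpha)[0][-1]
    tau = (css[rho] - alpha) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
n, p = 200, 500
Q = np.linalg.qr(rng.standard_normal((p, p)))[0]   # random orthogonal rotation

theta1 = np.zeros(p); theta1[0] = 1.0              # s1 = 1 sparse
theta2 = Q.T @ theta1                              # Q @ theta2 is s2 = 1 sparse
X = rng.standard_normal((n, p))                    # sub-Gaussian design
y = X @ (theta1 + theta2) + 0.1 * rng.standard_normal(n)   # model (1)

a1, a2 = 1.0, 1.0                                  # alpha_i = R_i(theta_i*)
t1 = np.zeros(p); t2 = np.zeros(p)
step = 1.0 / np.linalg.norm(X, 2) ** 2             # safe step for the quadratic
for _ in range(1000):
    g = X.T @ (X @ (t1 + t2) - y)                  # shared gradient of the loss
    t1 = project_l1(t1 - step * g, a1)             # constraint ||t1||_1 <= a1
    t2 = Q.T @ project_l1(Q @ (t2 - step * g), a2) # constraint ||Q t2||_1 <= a2

err = np.linalg.norm(t1 - theta1) + np.linalg.norm(t2 - theta2)
print(f"sum of componentwise errors: {err:.3f}")

Projection onto {θ : ||Qθ||₁ ≤ α} reuses the plain L1 projection because Q is orthogonal and hence preserves Euclidean distances; more iterations may be needed for tight accuracy.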
In this paper, we focus on the case where α_i = R_i(θ_i*); e.g., if θ_i* is s-sparse with ||θ_i*||₂ = 1 and R_i(·) = ||·||₁, then α_i = √s so that R_i(θ_i*) ≤ √s, noting that recent advances [16] can be used to extend our results to more general settings. The superposition estimator in (2) succeeds if a certain geometric condition, which we call structural coherence (SC), is satisfied by certain sets (cones) associated with the component norms R_i(·). Since the estimate θ̂_i = θ_i* + Δ_i is in the feasible set of the optimization problem (2), the error vector Δ_i satisfies the constraint R_i(θ_i* + Δ_i) ≤ α_i where α_i = R_i(θ_i*). The SC condition is a geometric relationship between the corresponding error cones C_i = cone{Δ_i : R_i(θ_i* + Δ_i) ≤ R_i(θ_i*)}. If SC is satisfied, then we can show that the sum of componentwise estimation errors can be bounded with high probability, and the bound takes the form:

  Σ_{i=1}^k ||θ̂_i − θ_i*||₂ ≤ c · ( max_i w(C_i ∩ B_p) + √(log k) ) / √n,   (3)

where n is the sample size, k is the number of components, and w(C_i ∩ B_p) is the Gaussian width [3, 8, 22] of the intersection of the error cone C_i with the unit Euclidean ball B_p ⊆ R^p. Interestingly, the estimation error decreases at the rate of 1/√n, similar to the case of single parameter estimators [15, 3], and depends only logarithmically on the number of components k. Further, while dependency of the error on the Gaussian width of the error cone has been established in recent results involving a single parameter [3, 22], the bound in (3) depends on the maximum of the Gaussian widths of the individual error cones, not their sum. The analysis thus gives a general way to construct estimators for superposition problems along with high-probability non-asymptotic upper bounds on the sum of componentwise errors. To show the generality of our work, we review and compare related work in Appendix B.

Notation: In this paper, we use ||·|| to denote a vector norm, and |||·||| to denote an operator norm. For example, ||·||₂ is the Euclidean norm for a vector or matrix, and |||·|||_* is the nuclear norm of a matrix. We denote cone{E} as the smallest closed cone that contains a given set E. We denote ⟨·,·⟩ as the inner product.

The rest of this paper is organized as follows: We start with a deterministic estimation error bound in Section 2, while laying down the key geometric and statistical quantities involved in the analysis. In Section 3, we discuss the geometry of the structural coherence (SC) condition, and in Section 4 we show that the geometric SC condition implies the statistical restricted eigenvalue (RE) condition. In Section 5, we develop the main error bound on the sum of componentwise errors, which holds with high probability for sub-Gaussian designs and noise. We apply our error bound to practical problems in Section 6, and present experimental results in Section 7. We conclude in Section 8. In the Appendix, we compare an estimator using the "infimal convolution" [18] of norms with our estimator (2) for the noiseless case, and provide some additional examples and experiments. The proofs of all technical results are also in the Appendix.

2 Error Structure and Recovery Guarantees

In this section, we start with some basic results and, under suitable assumptions, provide a deterministic bound for the componentwise estimation error in superposition models. Subsequently, we will show that the assumptions made here hold with high probability as long as a purely geometric, non-probabilistic condition characterized by structural coherence (SC) is satisfied.
Let {θ̂_i} be a solution to the superposition estimation problem in (2), and let {θ_i*} be the optimal (population) parameters involved in the true data generation process. Let Δ_i = θ̂_i − θ_i* be the error vector for component i of the superposition. Our goal is to provide a preliminary understanding of the structure of the error sets where the Δ_i live, identify conditions under which a bound on the total componentwise error Σ_{i=1}^k ||θ̂_i − θ_i*||₂ will hold, and provide a preliminary version of such a bound, which will be subsequently refined to the form in (3) in Section 5. Since θ̂_i = θ_i* + Δ_i lies in the feasible set of (2), as discussed in Section 1, the error vectors Δ_i will lie in the error sets E_i = {Δ_i ∈ R^p : R_i(θ_i* + Δ_i) ≤ R_i(θ_i*)} respectively. For the analysis, we will be focusing on the cone of such error sets, given by

  C_i = cone{Δ_i ∈ R^p : R_i(θ_i* + Δ_i) ≤ R_i(θ_i*)}.   (4)

Let θ* = Σ_{i=1}^k θ_i*, θ̂ = Σ_{i=1}^k θ̂_i, and Δ = Σ_{i=1}^k Δ_i, so that Δ = θ̂ − θ*. From the optimality of θ̂ as a solution to (2), we have

  ||y − X θ̂||₂² ≤ ||y − X θ*||₂²  ⟹  ||XΔ||₂² ≤ 2 ω^T XΔ,   (5)

using θ̂ = θ* + Δ and y = Xθ* + ω. In order to establish recovery guarantees, under suitable assumptions we construct a lower bound to ||XΔ||₂, the left hand side of (5). The lower bound is a generalized form of the restricted eigenvalue (RE) condition studied in the literature [4, 5, 17]. We also construct an upper bound to ω^T XΔ, the right hand side of (5), which needs to carefully analyze the noise-design (ND) interaction, i.e., between the noise ω and the design X.

We start by assuming that a generalized form of the RE condition is satisfied by the superposition of errors: there exists a constant κ > 0 such that for all Δ_i ∈ C_i, i = 1, 2, …, k:

  (RE)  (1/√n) || X Σ_{i=1}^k Δ_i ||₂ ≥ κ Σ_{i=1}^k ||Δ_i||₂.   (6)

The above RE condition considers the following set:

  H = { Σ_{i=1}^k Δ_i : Δ_i ∈ C_i, Σ_{i=1}^k ||Δ_i||₂ = 1 },   (7)

which involves all the k error cones, and the lower bound is over the sum of norms of the componentwise errors. If k = 1, the RE condition in (6) simplifies to the widely studied RE condition in the current literature on Lasso-type and Dantzig-type estimators [4, 17, 3] where only one error cone is involved. If we set all components but Δ_i to zero, then (6) becomes the RE condition only for component i. We also note that the general RE condition as explicitly stated in (6) has been implicitly defined and used in [1] and [24]. For subsequent analysis, we introduce the set H̄:

  H̄ = { Σ_{i=1}^k Δ_i : Δ_i ∈ C_i, Σ_{i=1}^k ||Δ_i||₂ ≤ 1 },   (8)

noting that H ⊆ H̄. The general RE condition in (6) depends on the random design matrix X, and is hence an inequality which will hold with a certain probability depending on X and the set H. For superposition problems, the probabilistic RE condition as in (6) is intimately related to the following deterministic structural coherence (SC) condition on the interaction of the different component cones C_i, without any explicit reference to the random design matrix X: there is a constant ρ > 0 such that for all Δ_i ∈ C_i, i = 1, …, k,

  (SC)  || Σ_{i=1}^k Δ_i ||₂ ≥ ρ Σ_{i=1}^k ||Δ_i||₂.   (9)

If k = 1, the SC condition is trivially satisfied with ρ = 1. Since most existing literature on high-dimensional structured models focuses on the k = 1 setting [4, 17, 3], there was no reason to study the SC condition carefully. For k > 1, the SC condition (9) implies a non-trivial relationship among the component cones. In particular, if the SC condition is true, then Σ_{i=1}^k Δ_i being zero implies that each component Δ_i must also be zero.
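The SC constant ρ in (9) can be probed empirically for concrete norms. For R(·) = ||·||₁ at the 1-sparse point e₁, the error cone (4) has the explicit form {Δ : ||Δ₋₁||₁ ≤ −Δ₁}, i.e., the descent directions of the L1 norm at e₁, which is easy to sample from. The sketch below (ours; the rotation Q and the dimensions are arbitrary choices) reports the smallest observed ratio ||Δ₁ + Δ₂||₂ / (||Δ₁||₂ + ||Δ₂||₂), which is an upper estimate of the best valid ρ:

import numpy as np

rng = np.random.default_rng(3)
p, trials = 100, 5000
Q = np.linalg.qr(rng.standard_normal((p, p)))[0]

def sample_descent_dir(rng, p):
    # Random unit direction in the l1 descent cone at e_1: ||d[1:]||_1 <= -d[0].
    d = np.zeros(p)
    d[0] = -1.0
    w = rng.standard_normal(p - 1)
    w *= rng.uniform() / np.abs(w).sum()   # ensures ||w||_1 <= 1
    d[1:] = w
    return d / np.linalg.norm(d)

rho_hat = np.inf
for _ in range(trials):
    d1 = sample_descent_dir(rng, p)
    d2 = Q.T @ sample_descent_dir(rng, p)  # descent cone of ||Q.||_1 at Q^T e_1
    a, b = rng.uniform(size=2)             # arbitrary nonnegative cone scalings
    rho_hat = min(rho_hat, np.linalg.norm(a * d1 + b * d2) / (a + b))
print("Monte Carlo upper estimate of rho in (9):", round(rho_hat, 3))

A value bounded away from zero across many samples is consistent with SC holding for this pair of cones, though a finite sample can never certify it.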
As presented in (9), the SC condition comes across as an algebraic condition. In Section 3, we present a geometric characterization of the SC condition [13], and illustrate that the condition is both necessary and sufficient for accurate recovery of each component. In Section 4, we show that for sub-Gaussian design matrices X, the SC condition in (9) in fact implies that the RE condition in (6) will hold with high probability, after the number of samples crosses a certain sample complexity, which depends on the Gaussian width of the component cones. For now, we assume the RE condition in (6) to hold, and proceed with the error bound analysis.

To establish a recovery guarantee, following (5), we need an upper bound on the interaction between the noise ω and the design X [3, 14]. In particular, we consider the noise-design (ND) interaction

  (ND)  s_n(η) = inf_{s>0} { s : sup_{u ∈ sH} (1/n) ω^T X u ≤ η s² },   (10)

Figure 1: Geometry of the SC condition when k = 2. The error sets E₁ and E₂ are respectively shown as blue and green squares, and the corresponding error cones are C₁ and C₂ respectively. −C₁ is the reflection of error cone C₁. If −C₁ and C₂ do not share a ray, i.e., the angle θ between the cones is larger than 0, then ρ₀ < 1, and the SC condition will hold.

where η > 0 is a constant, and sH is the scaled version of H where the scaling factor is s > 0. Here, s_n(η) denotes the minimal scaling needed on H such that one obtains a uniform bound over Δ ∈ sH of the form (1/n) ω^T XΔ ≤ η s_n²(η). Then, from the basic inequality in (5), with the bounds implied by the RE condition and the ND interaction, we have

  κ² ( Σ_{i=1}^k ||Δ_i||₂ )² ≤ (1/n) ||XΔ||₂² ≤ (2/n) ω^T XΔ ≤ 2η s_n(η) Σ_{i=1}^k ||Δ_i||₂,   (11)

which implies a bound on the componentwise error. The main deterministic bound below states the result formally:

Theorem 1 (Deterministic bound) Assume that the RE condition in (6) is satisfied in H with parameter κ. Then, if κ² > η, we have Σ_{i=1}^k ||Δ_i||₂ ≤ 2 s_n(η).

The above bound is deterministic and holds only when the RE condition in (6) is satisfied with a constant κ such that κ² > η. In the sequel, we first give a geometric characterization of the SC condition in Section 3, and show that the SC condition implies the RE condition with high probability in Section 4. Further, we give a high probability characterization of s_n(η) based on the noise ω and design X in terms of the Gaussian widths of the component cones, and also illustrate how one can choose η in Section 5. With these characterizations, we will obtain the desired componentwise error bound of the form (3).

3 Geometry of Structural Coherence

In this section, we give a geometric characterization of the structural coherence (SC) condition in (9). We start with the simplest case of two vectors x, y. If they are not reflections of each other, i.e., x ≠ −y, then the following relationship holds:

Proposition 2 If there exists a ρ < 1 such that −⟨x, y⟩ ≤ ρ ||x||₂ ||y||₂, then

  ||x + y||₂ ≥ √((1 − ρ)/2) (||x||₂ + ||y||₂).   (12)

Next, we generalize the condition of Proposition 2 to vectors in two different cones C₁ and C₂. Given the cones, define

  ρ₀ = sup_{x ∈ C₁ ∩ S^{p−1}, y ∈ C₂ ∩ S^{p−1}} −⟨x, y⟩.   (13)

By construction, −⟨x, y⟩ ≤ ρ₀ ||x||₂ ||y||₂ for all x ∈ C₁ and y ∈ C₂. If ρ₀ < 1, then (12) continues to hold for all x ∈ C₁ and y ∈ C₂ with constant √((1 − ρ₀)/2) > 0. Note that this corresponds to the SC condition with k = 2 and ρ = √((1 − ρ₀)/2).
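Proposition 2 is easy to sanity check numerically; the snippet below (our illustration, not from the paper) draws random pairs, takes ρ to be the tightest constant satisfying −⟨x, y⟩ ≤ ρ ||x||₂ ||y||₂ for each pair, and verifies (12):

import numpy as np

rng = np.random.default_rng(1)
p, worst = 50, np.inf
for _ in range(20000):
    x = rng.standard_normal(p)
    y = rng.standard_normal(p)
    # Tightest rho >= 0 with -<x, y> <= rho * ||x|| * ||y||.
    rho = max(-np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)), 0.0)
    lhs = np.linalg.norm(x + y)
    rhs = np.sqrt((1.0 - rho) / 2.0) * (np.linalg.norm(x) + np.linalg.norm(y))
    worst = min(worst, lhs - rhs)
print("min of ||x+y|| - sqrt((1-rho)/2)(||x||+||y||):", worst)
# Proposition 2 predicts this minimum is nonnegative over all trials.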
We can interpret this geometrically as follows: first reflect cone C₁ to get −C₁; then ρ₀ is the cosine of the minimum angle between −C₁ and C₂. If ρ₀ = 1, then −C₁ and C₂ share a ray, and structural coherence does not hold. Otherwise, ρ₀ < 1, implying −C₁ ∩ C₂ = {0}, i.e., the two cones intersect only at the origin, and structural coherence holds. For the general case involving k cones, denote

  ρ_i = sup_{u ∈ −C_i ∩ S^{p−1}, v ∈ Σ_{j≠i} C_j ∩ S^{p−1}} ⟨u, v⟩.   (14)

In recent work, [13] concluded that if ρ_i < 1 for each i = 1, …, k, then −C_i and Σ_{j≠i} C_j do not share a ray, and the original signal can be recovered in the noiseless case. We show that the condition above in fact implies ρ > 0 for the SC condition in (9), which is sufficient for accurate recovery even in the noisy case. In particular, with ρ̄ := max_i ρ_i, we have the following result:

Theorem 3 (Structural Coherence (SC) Condition) Let ρ̄ := max_i ρ_i with ρ_i as defined in (14). If ρ̄ < 1, then there exists a ρ > 0 such that for any Δ_i ∈ C_i, i = 1, …, k, the SC condition in (9) holds, i.e.,

  || Σ_{i=1}^k Δ_i ||₂ ≥ ρ Σ_{i=1}^k ||Δ_i||₂.   (15)

Thus, the SC condition is satisfied in the general case as long as the reflection −C_i of any cone C_i does not intersect, i.e., share a ray, with the Minkowski sum Σ_{j≠i} C_j of the other cones.

4 Restricted Eigenvalue Condition for Superposition Models

Assuming that the SC condition is satisfied by the error cones {C_i}, i = 1, …, k, in this section we show that the general RE condition in (6) will be satisfied with high probability when the number of samples n in the sub-Gaussian design matrix X ∈ R^{n×p} crosses the sample complexity n₀. We give a precise characterization of the sample complexity n₀ in terms of the Gaussian width of the set H. Our analysis is based on the results and techniques in [20, 14], and we note that [3] has related results using mildly different techniques.

We start with a restricted eigenvalue condition. For a random vector Z ∈ R^p, we define the marginal tail function for an arbitrary set E as

  Q_ξ(E; Z) = inf_{u ∈ E} P(|⟨Z, u⟩| ≥ ξ),   (16)

noting that it is deterministic given the set E ⊆ R^p. Let ε_i, i = 1, …, n, be independent Rademacher random variables, i.e., random variables with probability 1/2 of being either +1 or −1, and let X_i, i = 1, …, n, be independent copies of Z. We define the empirical width of E as

  W_n(E; Z) = sup_{u ∈ E} ⟨h, u⟩,  where h = (1/√n) Σ_{i=1}^n ε_i X_i.   (17)

With this notation, we recall the following result from [20]:

Lemma 1 Let X ∈ R^{n×p} be a random design matrix with each row an independent copy of the sub-Gaussian random vector Z. Then for any ξ, t > 0, we have

  inf_{u ∈ H} ||Xu||₂ ≥ ξ √n Q_{2ξ}(H; Z) − 2 W_n(H; Z) − ξt   (18)

with probability at least 1 − e^{−t²/2}.

In order to obtain the lower bound of κ in the RE condition (6), we need to lower bound Q_{2ξ}(H; Z) and upper bound W_n(H; Z). To lower bound Q_{2ξ}(H; Z), we consider the spherical cap

  A = ( Σ_{i=1}^k C_i ) ∩ S^{p−1}.   (19)

From [20, 14], one can obtain a lower bound to Q_ξ(A; Z) based on the Paley-Zygmund inequality. The Paley-Zygmund inequality lower bounds the tail distribution of a random variable by its second moment. Let u be an arbitrary vector; we use the following version of the inequality:

  P(|⟨Z, u⟩| ≥ 2ξ) ≥ [E|⟨Z, u⟩| − 2ξ]₊² / E|⟨Z, u⟩|².   (20)

In the current context, the following result is a direct consequence of the SC condition, which shows that Q_{2ρξ}(H; Z) is lower bounded by Q_{2ξ}(A; Z), which in turn is strictly bounded away from 0. The proof of Lemma 2 is given in Appendix H.1.
Lemma 2 Let the sets H and A be as defined in (7) and (19) respectively. If the SC condition in (9) holds, then the marginal tail functions of the two sets have the following relationship:

  Q_{ρξ}(H; Z) ≥ Q_ξ(A; Z).   (21)

Next we discuss how to upper bound the empirical width W_n(H; Z). Let the set E be arbitrary, and let g ~ N(0, I_p) be a standard Gaussian random vector in R^p. The Gaussian width [3] of E is defined as

  w(E) = E sup_{u ∈ E} ⟨g, u⟩.   (22)

The empirical width W_n(H; Z) can be seen as the supremum of a stochastic process. One way to upper bound the supremum of a stochastic process is by generic chaining [19, 3, 20], and by using generic chaining we can upper bound the stochastic process by a Gaussian process, which yields the Gaussian width. Since we can bound both Q_{2ξ}(H; Z) and W_n(H; Z), we arrive at the conclusion on the RE condition. Let X ∈ R^{n×p} be a random matrix where each row is an independent copy of the sub-Gaussian random vector Z ∈ R^p, where Z has sub-Gaussian norm |||Z|||_{ψ₂} ≤ κ_x [21]. Let α = inf_{u ∈ S^{p−1}} E[|⟨Z, u⟩|], so that α > 0 [14, 20]. We have the following lower bound for the RE condition. The proof of Theorem 4 is based on the proof of [20, Theorem 6.3], and we give it in Appendix H.2.

Theorem 4 (Restricted Eigenvalue Condition) Let X be a sub-Gaussian design matrix that satisfies the assumptions above. If the SC condition (9) holds with a ρ > 0, then with probability at least 1 − exp(−t²/2), we have

  inf_{u ∈ H} ||Xu||₂ ≥ c₁ α √n − c₂ w(H) − c₃ α t,   (23)

where c₁, c₂ and c₃ are positive constants determined by κ_x, ρ and α.

To get a κ > 0 in (6), one can simply choose t = (c₁ α √n − c₂ w(H)) / (2 c₃ α). Then, as long as n > c₄ w²(H)/α² for c₄ = c₂²/c₁², we have

  κ = inf_{u ∈ H} (1/√n) ||Xu||₂ ≥ (1/2) ( c₁ α − c₂ w(H)/√n ) > 0

with high probability. From the discussion above, if the SC condition holds and the sample size n is large enough, then we can find a matrix X such that the RE condition holds. On the other hand, once there is a matrix X such that the RE condition holds, we can show that SC must also be true. Its proof is given in Appendix H.3.

Proposition 5 If X is a matrix such that the RE condition (6) holds for Δ_i ∈ C_i, then the SC condition (9) holds.

Proposition 5 demonstrates that the SC condition is a necessary condition for the possibility of RE. If the SC condition does not hold, then there is {Δ_i} such that Δ_i ≠ 0 for some i = 1, …, k, but ||Σ_{i=1}^k Δ_i||₂ = 0, which implies Σ_{i=1}^k Δ_i = 0. Then for every matrix X, we have X Σ_{i=1}^k Δ_i = 0, and the RE condition is not possible.

5 General Error Bound

Recall that the error bound in Theorem 1 is given in terms of the noise-design (ND) interaction

  s_n(η) = inf_{s>0} { s : sup_{u ∈ sH} (1/n) ω^T X u ≤ η s² }.   (24)

In this section, we give a characterization of the ND interaction, which yields the final bound on the componentwise error as long as n ≥ n₀, i.e., the sample complexity is satisfied. Let ω be a centered sub-Gaussian random vector with sub-Gaussian norm |||ω|||_{ψ₂} ≤ κ_ω. Let X be a row-wise i.i.d. sub-Gaussian random matrix where each row Z has sub-Gaussian norm |||Z|||_{ψ₂} ≤ κ_x. The ND interaction can be bounded by the following result, whose proof is given in Appendix I.1.

Lemma 3 Let the design X ∈ R^{n×p} be a row-wise i.i.d. sub-Gaussian random matrix, and let the noise ω ∈ R^n be a centered sub-Gaussian random vector. Then s_n(η) ≤ c w(H̄)/√n for some constant c > 0, with probability at least 1 − c₁ exp(−c₂ w²(H̄)) − c₃ exp(−c₄ n). The constant c depends on κ_x and κ_ω.
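Gaussian widths such as those in Lemma 3 are straightforward to estimate by Monte Carlo whenever the supremum in (22) has a closed form. For instance (our illustration, not from the paper), for the unit L1 ball the supremum is ||g||_∞, and the estimate matches the classical √(2 log p) growth:

import numpy as np

rng = np.random.default_rng(2)
p, trials = 10000, 2000
g = rng.standard_normal((trials, p))
# For the l1 ball, sup_{||u||_1 <= 1} <g, u> = ||g||_inf.
w_hat = np.abs(g).max(axis=1).mean()
print(f"estimated w(B_1^p) = {w_hat:.3f}, sqrt(2 log p) = {np.sqrt(2 * np.log(p)):.3f}")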
In Lemma 3 and Theorem 6 we need the Gaussian widths of H̄ and H, respectively. From the definitions, both H and H̄ are related to the union of different cones; therefore, bounding the width of H or H̄ directly may be difficult. We have the following bound on w(H) and w(H̄) in terms of the widths of the component spherical caps. The proof of Lemma 4 is given in Appendix I.2.

Lemma 4 (Gaussian width bound) Let H and H̄ be as defined in (7) and (8) respectively. Then we have w(H) = O( max_i w(C_i ∩ S^{p−1}) + √(log k) ) and w(H̄) = O( max_i w(C_i ∩ B_p) + √(log k) ).

By applying Lemma 4, we can derive the error bound using the Gaussian widths of the individual error cones. From our conclusion on the deterministic bound in Theorem 1, we can choose an appropriate η such that κ² > η. Then, by combining the results of Theorem 1, Theorem 4, Lemma 3 and Lemma 4, we have the final form of the bound, as originally discussed in (3):

Theorem 6 For the estimator (2), let C_i = cone{Δ : R_i(θ_i* + Δ) ≤ R_i(θ_i*)}, let the design X be a random matrix with each row an independent copy of a sub-Gaussian random vector Z, let the noise ω be a centered sub-Gaussian random vector, and let B_p ⊆ R^p be the centered unit Euclidean ball. If the sample size satisfies n > c ( max_i w²(C_i ∩ S^{p−1}) + log k ) / ρ², then with probability at least 1 − γ₁ k exp(−γ₂ max_i w²(C_i ∩ S^{p−1})) − γ₃ exp(−γ₄ n),

  Σ_{i=1}^k ||θ̂_i − θ_i*||₂ ≤ C ( max_i w(C_i ∩ B_p) + √(log k) ) / √n,   (25)

for constants c, C > 0 that depend on the sub-Gaussian norms |||Z|||_{ψ₂} and |||ω|||_{ψ₂}.

Thus, assuming the SC condition in (9) is satisfied, the sample complexity and error bound of the estimator depend on the largest Gaussian width, rather than the sum of Gaussian widths. The result can be viewed as a direct generalization of existing results for k = 1, when the SC condition is always satisfied, and the sample complexity and error are given by w²(C₁ ∩ S^{p−1}) and w(C₁ ∩ B_p) [3, 8].

6 Application of General Bound

In this section, we instantiate the general error bounds on Morphological Component Analysis (MCA), and on low-rank and sparse matrix decomposition. The comprehensive results are provided in Appendix D.

6.1 Morphological Component Analysis

In Morphological Component Analysis [10], we consider the following linear model

  y = X(θ₁* + θ₂*) + ω,   (26)

where the vector θ₁* is sparse and θ₂* is sparse under a rotation Q. Consider the following estimator:

  min_{θ₁,θ₂} ||y − X(θ₁ + θ₂)||₂²   s.t.  ||θ₁||₁ ≤ ||θ₁*||₁,  ||Qθ₂||₁ ≤ ||Qθ₂*||₁,   (27)

where the vector y ∈ R^n is the observation, the vectors θ₁, θ₂ ∈ R^p are the parameters we want to estimate, the matrix X ∈ R^{n×p} is a sub-Gaussian random design, and the matrix Q ∈ R^{p×p} is orthogonal. We assume θ₁* and Qθ₂* are s₁-sparse and s₂-sparse vectors respectively. The function ||Q·||₁ is still a norm. In general, we can derive the following error bound from Theorem 6:

  ||θ̂₁ − θ₁*||₂ + ||θ̂₂ − θ₂*||₂ = O( max{ √(s₁ log p / n), √(s₂ log p / n) } ).

6.2 Low-rank and Sparse Matrix Decomposition

To recover a sparse matrix and a low-rank matrix from their sum [6, 9], one can use the L1 norm to induce sparsity and the nuclear norm to induce low rank. These two kinds of norms ensure that the sparsity and the rank of the estimated matrices are small. Suppose we have a rank-r matrix L* and a sparse matrix S* with s nonzero entries, S*, L* ∈ R^{d₁×d₂}. Our observations Y come from the following problem:

  Y_i = ⟨X_i, L* + S*⟩ + E_i,  i = 1, …, n,

where each X_i ∈ R^{d₁×d₂} is a sub-Gaussian random design matrix and E_i is the noise. The estimator takes the form:

  min_{L,S} Σ_{i=1}^n (Y_i − ⟨X_i, L + S⟩)²   s.t.  |||L|||_* ≤ |||L*|||_*,  ||S||₁ ≤ ||S*||₁.   (28)
By using Theorem 6 and existing results on Gaussian widths, the error bound is given by

  ||L̂ − L*||₂ + ||Ŝ − S*||₂ = O( max{ √(s log(d₁ d₂) / n), √(r(d₁ + d₂ − r) / n) } ).

7 Experimental Results

In this section, we confirm the theoretical results in this paper with some simple experiments, under different settings. In our experiments we focus on MCA with k = 2. The design matrices X are generated from a Gaussian distribution such that every entry of X is distributed as N(0, 1). The noise ω is generated from a Gaussian distribution such that every entry of ω is distributed as N(0, 1). We implement our algorithm in MATLAB. We use synthetic data in all our experiments, and let the true signal be

  θ₁ = (1, …, 1, 0, …, 0) with s₁ leading ones,  Qθ₂ = (1, …, 1, 0, …, 0) with s₂ leading ones.

Figure 2: (a) Effect of the parameter ρ on the estimation error when the noise ω ≠ 0; the y-axis shows ||θ̂₁ − θ₁*||₂ + ||θ̂₂ − θ₂*||₂, and ρ is chosen as 0, 1/√2, or determined by a randomly sampled Q. (b) Effect of the dimension p on the fraction of successful recoveries in the noiseless case; p varies in {20, 40, 80, 160}.

We generate our data in different ways for our three experiments.

7.1 Recovery From Noisy Observations

In our first experiment, we test the impact of ρ on the estimation error. We choose three different matrices Q, and ρ is determined by the choice of Q. The first Q is given by random sampling: we sample a random orthogonal matrix Q such that Q_ij > 0, and ρ is lower bounded by (42). The second and third Q are given by the identity matrix I and its negative −I; therefore ρ = 1/√2 and ρ = 0 respectively. We choose dimension p = 1000, and let s₁ = s₂ = 1. The number of samples n varies between 1 and 1000. The observation y is given by y = X(θ₁* + θ₂*) + ω. In this experiment, given Q, for each n, we generate 100 pairs of X and ω. For each (X, ω) pair, we get a solution θ̂₁ and θ̂₂. We take the average over all ||θ̂₁ − θ₁*||₂ + ||θ̂₂ − θ₂*||₂. Figure 2(a) shows the plot of the number of samples versus the average error. From Figure 2(a), we can see that the error curve given by the random Q lies between the curves given by the two extreme cases, and a larger ρ gives a lower curve. In Appendix E, we provide an additional experiment using the k-support norm [2].

7.2 Recovery From Noiseless Observations

In our second experiment, we test how the dimension p affects the successful recovery of the true value. In this experiment, we choose different dimensions p, with p = 20, p = 40, p = 80, and p = 160. We let s₁ = s₂ = 1. To avoid the impact of ρ, for each sample size n, we sample 100 random orthogonal matrices Q. The observation y is given by y = X(θ₁* + θ₂*). For each solution θ̂₁ and θ̂₂ of (41), we calculate the proportion of Q such that ||θ̂₁ − θ₁*||₂ + ||θ̂₂ − θ₂*||₂ ≤ 10⁻⁴. We increase n from 1 to 40, and the plot we get is Figure 2(b). From Figure 2(b) we can see that the sample complexity required to recover θ₁* and θ₂* increases with the dimension p.

8 Conclusions

We present a simple estimator for general superposition models and give a purely geometric characterization, based on structural coherence, of when accurate estimation of each component is possible.
Further, we establish the sample complexity of the estimator and upper bounds on the componentwise estimation error, and show that both, interestingly, depend on the largest Gaussian width among the spherical caps induced by the error cones corresponding to the component norms. Going forward, it will be interesting to investigate specific component structures which satisfy structural coherence, and also to extend our results to allow more general measurement models.

Acknowledgements: The research was supported by NSF grants IIS-1563950, IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, NASA grant NNX12AQ39A, and gifts from Adobe, IBM, and Yahoo.

References

[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics, 40(2):1171-1197, 2012.
[2] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In Advances in Neural Information Processing Systems, 2012.
[3] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In Advances in Neural Information Processing Systems, 2014.
[4] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705-1732, 2009.
[5] P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Series in Statistics. Springer, 2011.
[6] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3):1-37, 2011.
[7] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 40(4):1935-1967, 2012.
[8] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12:805-849, 2012.
[9] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572-596, 2011.
[10] D. L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. IEEE Transactions on Information Theory, 47(7):2845-2862, 2001.
[11] R. Foygel and L. Mackey. Corrupted sensing: Novel guarantees for separating structured signals. IEEE Transactions on Information Theory, 60(2):1223-1247, 2014.
[12] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57(11):7221-7234, 2011.
[13] M. B. McCoy and J. A. Tropp. The achievable performance of convex demixing. arXiv, 2013.
[14] S. Mendelson. Learning without concentration. Journal of the ACM, 62(3):21:1-21:25, 2015.
[15] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538-557, 2012.
[16] S. Oymak, B. Recht, and M. Soltanolkotabi. Sharp time-data tradeoffs for linear inverse problems. arXiv, 2015.
[17] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, 2010.
[18] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[19] M. Talagrand. Upper and Lower Bounds for Stochastic Processes. Springer-Verlag Berlin Heidelberg, 2014.
[20] J. A. Tropp.
Convex recovery of a structured signal from independent random linear measurements. arXiv, 2014.
[21] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, pages 210-268. Cambridge University Press, 2012.
[22] R. Vershynin. Estimation in high dimensions: a geometric perspective. Sampling Theory, a Renaissance, pages 3-66, 2015.
[23] J. Wright, A. Ganesh, K. Min, and Y. Ma. Compressive principal component pursuit. In IEEE International Symposium on Information Theory, pages 1276-1280, 2012.
[24] E. Yang and P. Ravikumar. Dirty statistical models. In Advances in Neural Information Processing Systems, pages 1-9, 2012.
Summed Weight Neuron Perturbation: An O(N) Improvement over Weight Perturbation. Barry Flower and Marwan Jabri SEDAL Department of Electrical Engineering University of Sydney NSW 2006 Australia Abstract The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N3), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training ofVLSI implementations of ANN's. 1 INTRODUCTION Optimization of the weights of an ANN may be performed by, the application of a gradient descent teclmique. The gradient may be calculated directly as in Backpropagation, or it may be approximated by a Finite Difference Method which is what we concern ourselves with in this paper. These methods lend themselves to the task of training hardware implementations of ANNs where real estate is at a premium and synaptic density is of great importance. Neuron Perturbation (NP), as described by the Madaline Rule ill (MRllI) (Widrow and Lehr, 1990), is a teclmique that approximates the gradient of the Mean Square Error (MSE) with respect to the change at a given neuron by applying a small perturbation to the input of the neuron and measuring the change in the MSE. The weight dE I1w ij = -tl'-:l-'Xr onet.I (1) update is then calculated from the product of this gradient measure and the activation of 212 Summed Weight Neuron Perturbation: An (O)N Improvement over Weight Perturbation the neuron from which the weight is fed, as described by (1). Weight Perturbation (WP), as described by Jabri and Flower (Jabri and Flower, 1992) is a neural network training techniques based on gradient descent using a Finite Difference method to approximate the gradient. The gradient of the MSE with respect to a weight is approximated by applying a small pertubation to the weight and measuring the change in the MSE. This gradient is then used to calculated the weight update such that: aW r'J iJE = -11? :l-.. uw .. (2) I) The advantages of WP over NP are that it performs better when limited precision weights are used, as shown by Xie and Jabri (Xie and Jabri, 1992), and is optimal with respect to hardware requirements when used to train VLSI implementations of ANNs. However, WP has O(~) computational complexity whilst NP has O(N3) computational complexity. Summed Weight Neuron Perturbation (SWNP) is similar to NP in that it has a computational complexity of O(N3) but it has the added advantage that the activation of internal neurons does not need to be known. The cost of this reduced computational complexity is that SWNP needs to save the perturbation vector used. In the following sections a description of the SWNP algorithm is provided and, finally, some experimental results are presented. 2 THE SUMMED WEIGHT NEURON PERTURBATION ALGORITHM A subsection of a feedforward ANN containing N neurons is shown in Figure 1. on which nomenclature the following derivation is based. FIGURE 1: Description Of Indices Used To Describe The Neurons Weights And Perturbations In An ANN. In a feedforward network of size N neurons the activation of a given neuron is determined by: Xi (P) = Ii (net i (p? 
  x_i(p) = f_i(net_i(p)),  and  net_i(p) = Σ_l w_il x_l(p),   (3)

where f_i(·) is the ith neuron's transfer function, x_i(p) is the activation of the ith neuron for the pth pattern, and w_il is the weight connecting the lth neuron's output to the ith neuron's input. The error function (MSE) is defined as in (4), where T is the set of output neurons and d_k(p) is the expected value of the output on the kth neuron:

  E(p) = (1/2) Σ_{k∈T} (d_k(p) − x_k(p))².   (4)

The change in E(p) with respect to a given weight may then be expressed as (5):

  ∂E(p)/∂w_ij = (∂E(p)/∂net_i(p)) · x_j(p).   (5)

The first term on the right-hand side of (5) can be determined using a finite difference, which in this case is a forward difference, so that:

  ∂E(p)/∂net_i(p) = ΔE_{Γ_i}(p)/Γ_i + O(Γ_i),   (6)

where

  ΔE_{Γ_i}(p) = E_{Γ_i}(p) − E(p),   (7)

Γ_i is the perturbation applied to the ith neuron, E_{Γ_i}(p) is the error for the pth pattern with a perturbation applied to the ith neuron, and E(p) is the error for the pth pattern without a perturbation applied to any neurons. The error introduced by the approximation is represented by the last term on the right-hand side of (6).

The perturbation of one or more of the weights that are inputs to the qth neuron can be thought of as being equal to some perturbation applied directly to that neuron. Hence:

  Γ_q = Σ_l γ_ql x_l(p),   (8)

where γ_ql is the perturbation applied to weight w_ql. As will be shown, perturbing the qth neuron by perturbing all the weights feeding into it enables the sign of the gradient ∂E(p)/∂w_ij to be determined without performing the product on the right-hand side of (5). Furthermore, the activation of hidden neurons (i.e. x_j(p) in (5)) need not be known. The contribution of the perturbation of weight w_ij to the perturbation of the ith neuron is

  γ_ij x_j(p).   (9)

Let us take the degenerate case where there is only one weight for the ith neuron. Then the gradient of the MSE with respect to weight w_ij is:

  ∂E(p)/∂w_ij = ΔE_{Γ_i}(p) x_j(p) / (γ_ij x_j(p)) + O(Γ_i) = ΔE_{Γ_i}(p)/γ_ij + O(Γ_i),   (10)

noting that x_j(p) has been eliminated.
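Equations (6)-(10) are a plain forward difference at the net input of a neuron; the tiny check below (ours, using an assumed sigmoid neuron and target, not values from the paper) shows the O(Γ_i) approximation error shrinking linearly with the perturbation size:

import numpy as np

def E(net, d=0.7):
    # Squared error of a single sigmoid neuron with target d, cf. eq. (4).
    return 0.5 * (d - 1.0 / (1.0 + np.exp(-net))) ** 2

net = 0.3
exact = (E(net + 1e-8) - E(net)) / 1e-8     # near-exact dE/dnet
for gamma in (1e-1, 1e-2, 1e-3):
    fd = (E(net + gamma) - E(net)) / gamma  # forward difference, eq. (6)
    print(gamma, abs(fd - exact))           # error shrinks linearly with gamma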
Fortunately it can be shown that the size of the steps in the correct direction are greater than those in the incorrect direction. Let us take the case where a particular yf}.. is chosen such that 1.1fJ.. = v IJ... Now by substituting (8), (2) and (13) into (5) we get: (15) 215 216 Flower and Jabri ~ Yitxt (p) - x?J (16) -----'--- ~YitXt (p) Yij x?J rearranging to give, (17) ~YitXt (p) Yi/j which implies that the contribution to r. made by the pertUIbation y.. is of the same sign v I r I.. Let us designate this neuron pertUIbation as r.I (A) . Now we take the other possible as case where, JI ij * (18) V ij' assuming every other parameter is the same, and only the sign of y .. is changed. The IJ equality in (17) is now untrue and the contribution to r.I made by the perturbation yIJ.. is of the opposite sign as r I.. Let us designate this neuron perturbation as r I.(B) . From (8) we can determine that, (19) Equation (19) shows the relationship between the two possible states of the system where r. (A) represents the summed neuron perturbation for a selected weight perturbation y .. I that generates a step in the corrected direction and r i (B) v is similar but for a step in the incorrect direction. Clearly the correct step is always calculated from an approximated gradient that is larger than that for an incorrect step as the neuron perturbation is larger. The weight update rule then becomes: Mr. (p) ~Wij = -Tl. I (20) Yij The algorithm for SWNP is shown as pseudo code in Figure 2. 2.1 HARDWARE COMPATmILITY OF SWNP This optimisation technique is ideally suited to the training of hardware implementations of ANN's whether they consist of discrete components or are VLSI technology. The speed up over WP of 0 (N) achieved is at the cost of an 0 (N) storage requirement but this sto~ge can be achieved with a single bit per neuron. SWNP is the same order of complexity as NP but does not require access to the activation of internal neurons and therefore can treat a network as a "black box" into which an input vector and weight matrix is fed and an Summed Weight Neuron Perturbation: An (O)N Improvement over Weight Perturbation output vector is received. While (total error> error threshold) { For (all patterns in training set) { Select next pattern and training vector, Forward Prop.;Measure, (calculate) and save error; Accumulate total error; For (all non-input neurons) { For (all weights of current neuron) { } Apply & Save perturbation of random polarity; Forward Prop.;Measure, (calculate) and save &!rror; For (all weights of current neuron) { Restore value of weight; Calculate weight delta using saved perturbation value; } If (Online Mode) Update current weight; If (Online Mode) } Forward Prop.; Measure, (calculate) and save new error; If (Batch Mode) { For (all weights) Update current weight; } } } FIGURE 2: Algorithm in Pseudo Code for Summed Weight Neuron Perturbation. 3 TEST RESULTS USING SWNP The results for a series of tests are shown in the next three tables and are summarised in Figure 4. The headings are, N the number of neurons in the network, P the number of patterns in the training set, FF-SWNP the number of feedforward passes for the SWNP Algorithm, FF-WP the number of feedforward passes for the WP Algorithm, and RA the ratio between the number of feedforward passes for WP against SWNP. The feedforward passes are recorded to 1 significant figure. no The results for a series of simulations comparing the performance of SWNP against WP are shown in Table 1. 
The simulations utilised floating point synaptic and neuron precis1ons. The results for a series of simulations comparing the performance of SWNP against WP are shown in Table 2. The simulations utilised limited synaptic precision, (i.e. 6 bits) and floating point neuron precisions The results for a series of experiments comparing the performance of SWNP against WP are shown in Table 2. Note: the training algorithm are the variations of WP and SWNP that are combined with the Random Search Algorithm (RSA). The results reported are averaged over 10 trials. An example of the training error trajectories of WP and SWNP for the Monk 2 problem are shown in Figure 3. 217 218 Flower and Jabri Table 1: Performance Of SWMP Versus WP, Comparing Feedforward Operations To Convergence. (Simulations With Floating Point Precision) PROBLEM N P FF-SWNP FF-WP ERROR RATIO XOR 3 4 1.6xlQ3 1.9xlcP 0.0125 1. 22 4 Encoder 5 4 0.9xlcP 1.8xlcY 0.0125 1.84 8 Encoder 11 8 1.5xlOS 4.5xlOS 0.0125 2.88 ICEG 15 119 3.7xlOS 7.9xlrf' 0.0125 21.34 Table 2: Performance Of SWNP Versus WP, Comparing Feedforward Operations To Convergence. (Simulations With Limited Precision) PROBLEM N P SWNP WP ERROR RATIO Monk 1 4 129 1.OxlOS 1.9x106 0.001 19.38 MonIa 17 169 3.6xlOS 6.8x106 0.0005 18.71 Monk3 17 122 1.2xl06 7.1xl06 0.022 5.87 IECG 55 5 8 1.6xl04 7.2xl04 0.0001 4.2 Table 3: Performance Of SWNP Versus WP, Comparing Feedforward Operations To Convergence. (Hardware Implementation) PROBLEM N P SWNP WP ERROR RATIO ECG 55 ECG045 5 5 8 8 3.1xlQ3 1.1xl04 3.6xlQ3 2.Oxl04 0.00001 0.001 1.13 1.78 -- MONX 2 PROBLEM YBKa lIT' ..,CO lIWRP ",",co _co >:"'co :IOOCO 1I0CO ItDCO ,OOCO 1:"'CO 'OOCO lOCO ?lCO ooCO :lOCO 000 OCO 2).(1) ?),(1) ?).IX) tom 10000 12).00 leO) J!IIIOC2B FIGURE 3: Comparison ofWP and SWNP For Monk 2 Problem Summed Weight Neuron Perturbation: An (O)N Improvement over Weight Perturbation FIGURE 4: Comparison of the number of Feedforward passes performed to achieve convergence on a range of problems using SWNP and WP. 20 18 16 14 12 10 8 6 4 2 mwsm SWNP Feedforward Passes _ WP Feedforward Passes XORx~02 4ENCODERxI 8ENCODERxI ICEGxl MONK. 1xl MONK.2xl MONlOxl lCEG 55xl ECG 55xl ECG045xl 4 CONCLUSION The algorithm presented, SWNP, performs gradient descent on the weight space of an ANN, using a fInite difference to approximate the gradient. The method is novel in that it achieves 0 (N3 ) computational complexity similar to that of Node Perturbation but does not require access to the activity of hidden or internal neurons. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANN's. Results are presented that show the algorithm in operation on floating point simulations, limited precision simulations and an actual hardware implementation of an ANN. References labri, M. and Flower, B. (1992). Weight perturbation: An optimal architecture and learning technique for analog vlsi feedforward and recurrent multilayer networks. IEEE Transactions on Neural Networks, 3(1):154-157. Widrow, B. and Lehr, M. A. (1990). 30 years of adaptive neural networks: Perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9):1415-1442. Xie, Y. and lahri, M. (1992). Analysis of the effects of quantization in multilayer neural networks using a statistical model. IEEE Transactions on Neural Networks, 3(2):334-338. 219
Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation

Ilija Bogunovic¹, Jonathan Scarlett¹, Andreas Krause², Volkan Cevher¹
¹ Laboratory for Information and Inference Systems (LIONS), EPFL
² Learning and Adaptive Systems Group, ETH Zürich
{ilija.bogunovic,jonathan.scarlett,volkan.cevher}@epfl.ch, krausea@ethz.ch

Abstract

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TruVaR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.

1 Introduction

Bayesian optimization (BO) [1] provides a powerful framework for automating design problems, and finds applications in robotics, environmental monitoring, and automated machine learning, just to name a few. One seeks to find the maximum of an unknown reward function that is expensive to evaluate, based on a sequence of suitably-chosen points and noisy observations. Numerous BO algorithms have been presented previously; see Section 1.1 for an overview.

Level-set estimation (LSE) [2] is closely related to BO, with the added twist that instead of seeking a maximizer, one seeks to classify the domain into points that lie above or below a certain threshold. This is of considerable interest in applications such as environmental monitoring and sensor networks, allowing one to find all "sufficiently good" points rather than the best point alone.

While BO and LSE are closely related, they are typically studied in isolation. In this paper, we provide a unified treatment of the two via a new algorithm, Truncated Variance Reduction (TruVaR), which enjoys theoretical guarantees, good computational complexity, and the versatility to handle important settings such as pointwise costs, non-constant noise, and multi-task scenarios. The main result of this paper applies to the former two settings, and even in the fixed-noise and unit-cost case, we refine existing bounds via a significantly improved dependence on the noise level.

1.1 Previous Work

Three popular myopic techniques for Bayesian optimization are expected improvement (EI), probability of improvement (PI), and Gaussian process upper confidence bound (GP-UCB) [1, 3], each of which chooses the point maximizing an acquisition function depending directly on the current posterior mean and variance. In [4], the GP-UCB-PE algorithm was presented for BO, choosing the highest-variance point within a set of potential maximizers that is updated based on confidence bounds. Another relevant BO algorithm is BaMSOO [5], which also keeps track of potential maximizers, but instead chooses points based on a global optimization technique called simultaneous online optimization (SOO). An algorithm for level-set estimation with GPs is given in [2], which keeps track of a set of unclassified points.
These algorithms are computationally efficient and have various theoretical guarantees, but it is unclear how best to incorporate aspects such as pointwise costs and heteroscedastic noise [6]. The same is true for the straddle heuristic for LSE [7]. Entropy search (ES) [8] and its predictive version [9] choose points to reduce the uncertainty of the location of the maximum, doing so via a one-step lookahead of the posterior rather than only the current posterior. While this is more computationally expensive, it also permits versatility with respect to costs [6], heteroscedastic noise [10], and multi-task scenarios [6]. A recent approach called minimum regret search (MRS) [11] also performs a look-ahead, but instead chooses points to minimize the regret. To our knowledge, no theoretical guarantees have been provided for these.

The multi-armed bandit (MAB) [12] literature has developed alongside the BO literature, with the two often bearing similar concepts. The MAB literature is far too extensive to cover here, but we briefly mention some variants relevant to this paper. Extensive attention has been paid to the best-arm identification problem [13], and cost constraints have been incorporated in a variety of forms [14]. Moreover, the concept of "zooming in" to the optimal point has been explored [15]. In general, the assumptions and analysis techniques in the MAB and BO literature are quite different.

1.2 Contributions
We present a unified analysis of Bayesian optimization and level-set estimation via a new algorithm, Truncated Variance Reduction (TruVaR). The algorithm works by keeping track of a set of potential maximizers (BO) or unclassified points (LSE), selecting points that shrink the uncertainty within that set up to a truncation threshold, and updating the set using confidence bounds. Similarly to ES and MRS, the algorithm performs a one-step lookahead that is highly beneficial in terms of versatility. However, unlike these previous works, our lookahead avoids the computationally expensive task of averaging over the posterior distribution and the observations. Also in contrast with ES and MRS, we provide theoretical bounds for TruVaR characterizing the cost required to achieve a certain accuracy in finding a near-optimal point (BO) or in classifying each point in the domain (LSE). By applying this to the standard BO setting, we not only recover existing results [2, 4], but we also strengthen them via a significantly improved dependence on the noise level, with better asymptotics in the small noise limit. Moreover, we provide a novel result for a setting in which the algorithm can choose the noise level, each level coming with an associated cost. Finally, we compare our algorithm to previous works on several synthetic and real-world data sets, observing it to perform favorably in a variety of settings.

2 Problem Setup and Proposed Algorithm
Setup: We seek to sequentially optimize an unknown reward function $f(x)$ over a finite domain $D$.¹ At time $t$, we query a single point $x_t \in D$ and observe a noisy sample $y_t = f(x_t) + z_t$, where $z_t \sim \mathcal{N}(0, \sigma^2(x_t))$ for some known noise function $\sigma^2(\cdot): D \to \mathbb{R}_+$. Thus, in general, some points may be noisier than others, in which case we have heteroscedastic noise [10]. We associate with each point a cost according to some known cost function $c: D \to \mathbb{R}_+$. If both $\sigma^2(\cdot)$ and $c(\cdot)$ are set to be constant, then we recover the standard homoscedastic and unit-cost setting.
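To make the setup concrete, the following minimal sketch simulates this query model; the domain, toy objective, noise function and cost function here are placeholders of ours, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite domain with a known noise function sigma^2(x) and cost function c(x).
D = np.linspace(0.0, 1.0, 50)        # |D| = 50 points
f = np.sin(5.0 * D)                  # unknown reward f, used only to simulate observations
noise_var = 0.01 + 0.04 * D          # sigma^2(x): heteroscedastic, known to the algorithm
cost = 1.0 + 0.5 * D                 # c(x): pointwise query costs, known to the algorithm

def query(i):
    """Query the i-th domain point: y = f(x_i) + z with z ~ N(0, sigma^2(x_i)); also return its cost."""
    y = f[i] + rng.normal(scale=np.sqrt(noise_var[i]))
    return y, cost[i]

y, c = query(10)
```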
We model $f(x)$ as a Gaussian process (GP) [16] having mean zero and kernel function $k(x, x')$, normalized so that $k(x, x) = 1$ for all $x \in D$. The posterior distribution of $f$ given the points and observations up to time $t$ is again a GP, with the posterior mean and variance given by [10]
$$\mu_t(x) = k_t(x)^T \big(K_t + \Sigma_t\big)^{-1} y_t, \quad (1)$$
$$\sigma_t^2(x) = k(x, x) - k_t(x)^T \big(K_t + \Sigma_t\big)^{-1} k_t(x), \quad (2)$$
where $k_t(x) = \big[k(x_i, x)\big]_{i=1}^{t}$, $K_t = \big[k(x_t, x_{t'})\big]_{t,t'}$, and $\Sigma_t = \mathrm{diag}(\sigma^2(x_1), \ldots, \sigma^2(x_t))$. We also let $\sigma_{t-1|x}^2(\bar{x})$ denote the posterior variance of $\bar{x}$ upon observing $x$ along with $x_1, \ldots, x_{t-1}$.

¹ Extensions to continuous domains are discussed in the supplementary material.

[Figure 1: panels (a) t = 6, (b) t = 7, (c) t = 8, (d) t = 9; legend: confidence, selected point, target, max. lower bound, potential maximizers.]
Figure 1: An illustration of the TruVaR algorithm. In (a), (b), and (c), three points within the set of potential maximizers $M_t$ are selected in order to bring the confidence bounds to within the target range, and $M_t$ shrinks during this process. In (d), the target confidence width shrinks as a result of the last selected point bringing the confidence within $M_t$ to within the previous target.

We consider both Bayesian optimization, which consists of finding a point whose function value is as high as possible, and level-set estimation, which consists of classifying the domain into points that lie above or below a given threshold $h$. The precise performance criteria for these settings are given in Definition 3.1 below. Essentially, after spending a certain cost we report a point (BO) or a classification (LSE), but there is no preference on the values of $f(x_t)$ for the points $x_t$ chosen before coming to such a decision (in contrast with other notions such as cumulative regret).

TruVaR algorithm: Our algorithm is described in Algorithm 1, making use of the updates described in Algorithm 2. The algorithm keeps track of a sequence of unclassified points $M_t$, representing potential maximizers for BO or points close to $h$ for LSE. This set is updated based on confidence bounds depending on constants $\beta_{(i)}$. The algorithm proceeds in epochs, where in the $i$-th epoch it seeks to bring the confidence $\beta_{(i)}^{1/2}\sigma_t(x)$ of points within $M_t$ below a target value $\eta_{(i)}$. It does this by greedily minimizing the sum of truncated variances $\sum_{\bar{x} \in M_{t-1}} \max\big\{\beta_{(i)}\sigma_{t-1|x}^2(\bar{x}), \eta_{(i)}^2\big\}$ arising from choosing the point $x$, along with a normalization and division by $c(x)$ to favor low-cost points. The truncation by $\eta_{(i)}$ in this decision rule means that once the confidence of a point is below the current target value, there is no preference in making it any lower (until the target is decreased). Once the confidence of every point in $M_t$ is less than a factor $1 + \delta$ above the target value, the target confidence is reduced according to a multiplication by $r \in (0, 1)$. An illustration of the process is given in Figure 1, with details in the caption.

For level-set estimation, we also keep track of the sets $H_t$ and $L_t$, containing points believed to have function values above and below $h$, respectively. The constraint $x \in M_{t-1}$ in (5)–(7) ensures that $\{M_t\}$ is non-increasing with respect to inclusion, and $H_t$ and $L_t$ are non-decreasing.

Algorithm 1 Truncated Variance Reduction (TruVaR)
Input: Domain $D$, GP prior $(\mu_0, \sigma_0, k)$, confidence bound parameters $\delta > 0$, $r \in (0, 1)$, $\{\beta_{(i)}\}_{i \ge 1}$, $\eta_{(1)} > 0$, and for LSE, level-set threshold $h$
1: Initialize the epoch number $i = 1$ and potential maximizers $M_{(0)} = D$.
2: for $t = 1, 2, \ldots$ do
3: Choose
$$x_t = \arg\max_{x \in D} \frac{\sum_{\bar{x} \in M_{t-1}} \max\big\{\beta_{(i)}\sigma_{t-1}^2(\bar{x}), \eta_{(i)}^2\big\} - \sum_{\bar{x} \in M_{t-1}} \max\big\{\beta_{(i)}\sigma_{t-1|x}^2(\bar{x}), \eta_{(i)}^2\big\}}{c(x)}. \quad (3)$$
4: Observe the noisy function sample $y_t$, and update according to Algorithm 2 to obtain $M_t$, $\mu_t$, $\sigma_t$, $\ell_t$ and $u_t$, as well as $H_t$ and $L_t$ in the case of LSE.
5: while $\max_{x \in M_t} \beta_{(i)}^{1/2}\sigma_t(x) \le (1 + \delta)\eta_{(i)}$ do
6: Increment $i$, set $\eta_{(i)} = r \cdot \eta_{(i-1)}$.

The choices of $\beta_{(i)}$, $\delta$, and $r$ are discussed in Section 4. As with previous works, the kernel is assumed known in our theoretical results, whereas in practice it is typically learned from training data [3]. Characterizing the effect of model mismatch or online hyperparameter updates is beyond the scope of this paper, but is an interesting direction for future work.

Algorithm 2 Parameter Updates for TruVaR
Input: Selected points and observations $\{x_{t'}\}_{t'=1}^{t}$; $\{y_{t'}\}_{t'=1}^{t}$, previous sets $M_{t-1}$, $H_{t-1}$, $L_{t-1}$, parameter $\beta_{(i)}^{1/2}$, and for LSE, level-set threshold $h$.
1: Update $\mu_t$ and $\sigma_t$ according to (1)–(2), and form the upper and lower confidence bounds
$$u_t(x) = \mu_t(x) + \beta_{(i)}^{1/2}\sigma_t(x), \qquad \ell_t(x) = \mu_t(x) - \beta_{(i)}^{1/2}\sigma_t(x). \quad (4)$$
2: For BO, set
$$M_t = \Big\{x \in M_{t-1} : u_t(x) \ge \max_{\bar{x} \in M_{t-1}} \ell_t(\bar{x})\Big\}, \quad (5)$$
or for LSE, set
$$M_t = \big\{x \in M_{t-1} : u_t(x) \ge h \text{ and } \ell_t(x) \le h\big\}, \quad (6)$$
$$H_t = H_{t-1} \cup \big\{x \in M_{t-1} : \ell_t(x) > h\big\}, \qquad L_t = L_{t-1} \cup \big\{x \in M_{t-1} : u_t(x) < h\big\}. \quad (7)$$

Some variants of our algorithm and theory are discussed in the supplementary material due to lack of space, including pure variance reduction, non-Bayesian settings [3], continuous domains [3], the batch setting [4], and implicit thresholds for level-set estimation [2].

3 Theoretical Bounds
In order to state our results for BO and LSE in a unified fashion, we define a notion of $\epsilon$-accuracy for the two settings. That is, we define this term differently in the two scenarios, but then we provide theorems that simultaneously apply to both. All proofs are given in the supplementary material.

Definition 3.1. After time step $t$ of TruVaR, we use the following terminology:
• For BO, the set $M_t$ is $\epsilon$-accurate if it contains all true maxima $x^* \in \arg\max_x f(x)$, and all of its points satisfy $f(x^*) - f(x) \le \epsilon$.
• For LSE, the triplet $(M_t, H_t, L_t)$ is $\epsilon$-accurate if all points in $H_t$ satisfy $f(x) > h$, all points in $L_t$ satisfy $f(x) < h$, and all points in $M_t$ satisfy $|f(x) - h| \le \frac{\epsilon}{2}$.

In both cases, the cumulative cost after time $t$ is defined as $C_t = \sum_{t'=1}^{t} c(x_{t'})$. We use $\frac{\epsilon}{2}$ in the LSE setting instead of $\epsilon$ since this creates a region of size $\epsilon$ where the function value lies, which is consistent with the BO setting. Our performance criterion for level-set estimation is slightly different from that of [2], but the two are closely related.

3.1 General Result
Preliminary definitions: Suppose that the $\{\beta_{(i)}\}$ are chosen to ensure valid confidence bounds, i.e., $\ell_t(x) \le f(x) \le u_t(x)$ with high probability; see Theorem 3.1 and its proof below for such choices. In this case, we have after the $i$-th epoch that all points are either already discarded (BO) or classified (LSE), or are known up to the confidence level $(1+\delta)\eta_{(i)}$. For the points with such confidence, we have $u_t(x) - \ell_t(x) \le 2(1+\delta)\eta_{(i)}$, and hence
$$u_t(x) \le \ell_t(x) + 2(1+\delta)\eta_{(i)} \le f(x) + 2(1+\delta)\eta_{(i)}, \quad (8)$$
and similarly $\ell_t(x) \ge f(x) - 2(1+\delta)\eta_{(i)}$. This means that all points other than those within a gap of width $4(1+\delta)\eta_{(i)}$ must have been discarded or classified:
$$M_t \subseteq \big\{x : f(x) \ge f(x^*) - 4(1+\delta)\eta_{(i)}\big\} =: \overline{M}^{(i)} \quad \text{(BO)} \quad (9)$$
$$M_t \subseteq \big\{x : |f(x) - h| \le 2(1+\delta)\eta_{(i)}\big\} =: \overline{M}^{(i)} \quad \text{(LSE)} \quad (10)$$
Since no points are discarded or classified initially, we define $\overline{M}^{(0)} = D$. For a collection of points $S = (x'_1, \ldots$
$, x'_{|S|})$, possibly containing duplicates, we write the total cost as $c(S) = \sum_{i=1}^{|S|} c(x'_i)$. Moreover, we denote the posterior variance upon observing the points up to time $t-1$ and the additional points in $S$ by $\sigma_{t-1|S}(x)$. Therefore, $c(x) = c(\{x\})$ and $\sigma_{t-1|x}(x) = \sigma_{t-1|\{x\}}(x)$. The minimum cost (respectively, maximum cost) is denoted by $c_{\min} = \min_{x \in D} c(x)$ (respectively, $c_{\max} = \max_{x \in D} c(x)$). Finally, we introduce the quantity
$$C^*(\xi, M) = \min_{S} \Big\{c(S) : \max_{x \in M} \sigma_{0|S}(x) \le \xi\Big\}, \quad (11)$$
representing the minimum cost to achieve a posterior standard deviation of at most $\xi$ within $M$.

Main result: In all of our results, we make the following assumption.
Assumption 3.1. The kernel $k(x, x')$ is such that the variance reduction function
$$\psi_{t,x}(S) = \sigma_t^2(x) - \sigma_{t|S}^2(x) \quad (12)$$
is submodular [17] for any time $t$, and any selected points $(x_1, \ldots, x_t)$ and query point $x$.

This assumption has been used in several previous works based on Gaussian processes, and sufficient conditions for its validity can be found in [18, Sec. 8]. We now state the following general guarantee.

Theorem 3.1. Fix $\epsilon > 0$ and $\delta \in (0, 1)$, and suppose there exist values $\{C_{(i)}\}$ and $\{\beta_{(i)}\}$ such that
$$C_{(i)} \ge C^*\Big(\frac{\eta_{(i)}}{\beta_{(i)}^{1/2}}, \overline{M}^{(i-1)}\Big) \log\frac{|\overline{M}^{(i-1)}|\,\beta_{(i)}}{\eta_{(i)}^2} + c_{\max}, \quad (13)$$
and
$$\beta_{(i)} \ge 2\log\frac{|D|\,\overline{C}^2\,\pi^2}{6\,\delta\,c_{\min}^2} \quad \text{with} \quad \overline{C} = \sum_{i' \le i} C_{(i')}. \quad (14)$$
Then if TruVaR is run with these choices until the cumulative cost reaches
$$C_\epsilon = \sum_{i \,:\, 4(1+\delta)\eta_{(i-1)} > \epsilon} C_{(i)}, \quad (15)$$
then with probability at least $1 - \delta$, we have $\epsilon$-accuracy.

While this theorem is somewhat abstract, it captures the fact that the algorithm improves when points having a lower cost and/or lower noise are available, since both of these lead to a smaller value of $C^*(\xi, M)$; the former by directly incurring a smaller cost, and the latter by shrinking the variance more rapidly. Below, we apply this result to some important cases.

3.2 Results for Specific Settings
Homoscedastic and unit-cost setting: Define the maximum mutual information [3]
$$\gamma_T = \max_{x_1, \ldots, x_T} \frac{1}{2}\log\det\big(I_T + \sigma^{-2}K_T\big), \quad (16)$$
and consider the case that $\sigma^2(x) = \sigma^2$ and $c(x) = 1$. In the supplementary material, we provide a theorem with a condition for $\epsilon$-accuracy of the form $T \ge \frac{C_1\beta_T\gamma_T}{\epsilon^2} + 1$ with $C_1 = \frac{1}{\log(1+\sigma^{-2})}$, thus matching [2, 4] up to logarithmic factors. In the following, we present a refined version that has a significantly better dependence on the noise level, thus exemplifying that a more careful analysis of (13) can provide improvements over the standard bounding techniques.

Corollary 3.1. Fix $\epsilon > 0$ and $\delta \in (0, 1)$, define $\beta_T = 2\log\frac{|D|\pi^2 T^2}{6\delta}$, and set $\eta_{(1)} = 1$ and $r = \frac{1}{2}$. There exist choices of $\beta_{(i)}$ (not depending on the time horizon $T$) such that we have $\epsilon$-accuracy with probability at least $1 - \delta$ once the following condition holds:
$$T \ge \frac{96(1+\delta)^2}{\epsilon^2}\sigma^2\beta_T\gamma_T + \Big\lceil \log_2\frac{16(1+\delta)^2}{\epsilon^2} \Big\rceil \Big(\frac{6(1+\delta)^2}{\epsilon^2}C_1\gamma_T + \frac{32(1+\delta)^2}{\epsilon^2}\sigma^2\beta_T\Big), \quad (17)$$
where $C_1 = \frac{1}{\log(1+\sigma^{-2})}$. This condition is of the form $T \ge \frac{\sigma^2\beta_T\gamma_T}{\epsilon^2} + \frac{C_1\beta_T\gamma_T}{\epsilon^2} + 1$.

The choices $\eta_{(1)} = 1$ and $r = \frac{1}{2}$ are made for mathematical convenience, and a similar result follows for any other choices $\eta_{(1)} > 0$ and $r \in (0, 1)$, possibly with different constant factors. As $\sigma^2 \to \infty$ (i.e., high noise), both of the above-mentioned bounds have noise dependence $O^*(\sigma^2)$, since $\log(1 + \xi^{-1}) = O(\xi^{-1})$ as $\xi \to \infty$. On the other hand, as $\sigma^2 \to 0$ (i.e., low noise), $C_1$ is logarithmic, and Corollary 3.1 is significantly better provided that $\sigma \ll \epsilon$.
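To make the selection rule (3) concrete, here is a minimal sketch of one TruVaR selection on a finite domain. It is our illustration, not the authors' code: it assumes a precomputed kernel matrix over the domain, recomputes the posterior naively for every candidate, and exploits the fact that the GP posterior variance does not depend on the observed values:

```python
import numpy as np

def posterior_variance(K, obs, noise_var):
    """GP posterior variance at every domain point given a list of observed indices, cf. eq. (2)."""
    A = K[np.ix_(obs, obs)] + np.diag(noise_var[obs])
    A_inv = np.linalg.inv(A)
    kx = K[:, obs]                                   # k_t(x) for every x in the domain
    var = np.diag(K) - np.einsum('ij,jk,ik->i', kx, A_inv, kx)
    return np.maximum(var, 0.0)                      # clip tiny negative values

def truvar_select(K, obs, noise_var, cost, M, beta, eta):
    """One step of rule (3): maximal truncated-variance reduction within M per unit cost."""
    var = posterior_variance(K, obs, noise_var)
    before = np.maximum(beta * var[M], eta ** 2).sum()
    scores = np.full(K.shape[0], -np.inf)
    for x in range(K.shape[0]):
        # Posterior variance is independent of the observed values, so no y is needed here.
        var_x = posterior_variance(K, obs + [x], noise_var)
        scores[x] = (before - np.maximum(beta * var_x[M], eta ** 2).sum()) / cost[x]
    return int(np.argmax(scores))
```

In the full algorithm the chosen point is then queried, the confidence bounds (4) are recomputed, the set is shrunk via (5)–(7), and the target is multiplied by $r$ whenever the while-condition of Algorithm 1 holds.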
Choosing the noise and cost: Here we consider the setting in which there is a domain of points $D_0$ that the reward function depends on, and alongside each point we can choose a noise variance $\sigma^2(k)$ ($k = 1, \ldots, K$). Hence, $D = D_0 \times \{1, \ldots, K\}$. Lower noise variances incur a higher cost according to a cost function $c(k)$.

Corollary 3.2. For each $k = 1, \ldots, K$, let $T^*(k)$ denote the smallest value of $T$ such that (17) holds with $\sigma^2(k)$ in place of $\sigma^2$, and with $\beta_T = 2\log\frac{|D|\pi^2 T^2 c_{\max}^2}{6\delta c_{\min}^2}$. Then, under the preceding setting, there exist choices of $\beta_{(i)}$ (not depending on $T$) such that we have $\epsilon$-accuracy with probability at least $1 - \delta$ once the cumulative cost reaches $\min_k c(k)T^*(k)$.

This result roughly states that we obtain a bound as good as that obtained by sticking to any fixed choice of noise level. In other words, every choice of noise (and corresponding cost) corresponds to a different version of a BO or LSE algorithm (e.g., [2, 4]), and our algorithm has a similar performance guarantee to the best among all of those. This is potentially useful in avoiding the need for running an algorithm once per noise level and then choosing the best-performing one. Moreover, we found numerically that beyond matching the best fixed noise strategy, we can strictly improve over it by mixing the noise levels; see Section 4.

4 Experimental Results
We evaluate our algorithm in both the level-set estimation and Bayesian optimization settings.

Parameter choices: As with previous GP-based algorithms that use confidence bounds, our theoretical choice of $\beta_{(i)}$ in TruVaR is typically overly conservative. Therefore, instead of using (14) directly, we use a more aggressive variant with similar dependence on the domain size and time: $\beta_{(i)} = a\log(|D|t_{(i)}^2)$, where $t_{(i)}$ is the time at which the epoch starts, and $a$ is a constant. Instead of the choice $a = 2$ dictated by (14), we set $a = 0.5$ for BO to avoid over-exploration. We found exploration to be slightly more beneficial for LSE, and hence set $a = 1$ for this setting. We found TruVaR to be quite robust with respect to the choices of the remaining parameters, and simply set $\eta_{(1)} = 1$, $r = 0.1$, and $\delta = 0$ in all experiments; while our theory assumes $\delta > 0$, in practice there is negligible difference between choosing zero and a small positive value.

Level-set estimation: For the LSE experiments, we use a common classification rule in all algorithms, classifying the points according to the posterior mean as $\hat{H}_t = \{x : \mu_t(x) \ge h\}$ and $\hat{L}_t = \{x : \mu_t(x) < h\}$. The classification accuracy is measured by the $F_1$-score (i.e., the harmonic mean of precision and recall) with respect to the true super- and sub-level sets.

We compare TruVaR against the GP-based LSE algorithm [2], which we name via the authors' surnames as GCHK, as well as the state-of-the-art straddle (STR) heuristic [7] and the maximum variance rule (VAR) [2]. Descriptions can be found in the supplementary material. GCHK includes an exploration constant $\beta_t^{1/2}$, and we follow the recommendation in [2] of setting $\beta_t^{1/2} = 3$.

Lake data (unit cost): We begin with a data set from the domain of environmental monitoring of inland waters, consisting of 2024 in situ measurements of chlorophyll concentration within a vertical transect plane, collected by an autonomous surface vessel in Lake Zürich [19]. As in [2], our goal is to detect regions of high concentration. We evaluate each algorithm on a 50 ×
50 grid of points, with the corresponding values coming from the GP posterior that was derived using the original data (see Figure 2d). We use the Matérn-5/2 ARD kernel, setting its hyperparameters by maximizing the likelihood on the second (smaller) available dataset. The level-set threshold $h$ is set to 1.5.

In Figure 2a, we show the performance of the algorithms averaged over 100 different runs; here the randomness is only with respect to the starting point, as we are in the noiseless setting. We observe that in this unit-cost case, TruVaR performs similarly to GCHK and STR. All three methods outperform VAR, which is good for global exploration but less suited to level-set estimation.

[Figure 2: Experimental results for level-set estimation. Panels: (a) Lake data, unit-cost ($F_1$-score vs. time); (b) Lake data, varying cost ($F_1$-score vs. cost); (c) Synthetic data, varying noise ($F_1$-score vs. cost); (d) Inferred concentration function; (e) Points chosen by GCHK; (f) Points chosen by TruVaR.]

[Figure 3: Experimental results for Bayesian optimization. Panels: (a) Synthetic, median regret; (b) Synthetic, outlier-adjusted mean regret; (c) SVM data, validation error vs. time.]

Lake data (varying cost): Next, we modify the above setting by introducing pointwise costs that are a function of the previously sampled point $x'$, namely, $c_{x'}(x) = 0.25|x_1 - x'_1| + 4(|x_2| + 1)$, where $x_1$ is the vessel position and $x_2$ is the depth. Although we did not permit such a dependence on $x'$ in our original setup, the algorithm itself remains unchanged. Our choice of cost penalizes the distance traveled $|x_1 - x'_1|$, as well as the depth of the measurement $|x_2|$. Since incorporating costs into existing algorithms is non-trivial, we only compare against the original version of GCHK that ignores costs. In Figure 2b, we see that TruVaR significantly outperforms GCHK, achieving a higher $F_1$ score for a significantly smaller cost. The intuition behind this can be seen in Figures 2e and 2f, where we show the points sampled by TruVaR and GCHK in one experiment run, connecting all pairs of consecutive points. GCHK is designed to pick few points, but since it ignores costs, the distance traveled is large. In contrast, by incorporating costs, TruVaR tends to travel small distances, often even staying in the same $x_1$ location to take measurements at multiple depths $x_2$.

Synthetic data with multiple noise levels: In this experiment, we demonstrate Corollary 3.2 by considering the setting in which the algorithm can choose the sampling noise variance and incur the associated cost. We use a synthetic function sampled from a GP on a 50 × 50 grid with an isotropic squared exponential kernel having length scale $l = 0.1$ and unit variance, and set $h = 2.25$. We use three different noise levels, $\sigma^2 \in \{10^{-6}, 10^{-3}, 0.05\}$, with corresponding costs $\{15, 10, 2\}$. We run GCHK separately for each of the three noise levels, while running TruVaR as normal and allowing it to mix between the noise levels. The resulting $F_1$-scores are shown in Figure 2c.
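A minimal sketch of the common classification rule and the $F_1$-score evaluation used above (the function names are ours):

```python
import numpy as np

def classify(mu, h):
    """Common rule for all LSE algorithms: threshold the posterior mean at h."""
    return mu >= h, mu < h            # (H_hat, L_hat): estimated super- and sub-level sets

def f1_score(mu, f_true, h):
    """F1-score (harmonic mean of precision and recall) of the estimated super-level set."""
    pred, true = mu >= h, f_true >= h
    tp = np.sum(pred & true)
    if tp == 0:
        return 0.0
    precision, recall = tp / pred.sum(), tp / true.sum()
    return 2 * precision * recall / (precision + recall)
```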
The best-performing version of GCHK changes throughout the time horizon, while TruVaR is consistently better than all three. A discussion on how TruVaR mixes between the noise levels can be found in the supplementary material.

Bayesian optimization. We now provide the results of two experiments for the BO setting.

Synthetic data: We first conduct a similar experiment to that in [8, 11], generating 200 different test functions defined on $[0, 1]^2$. To generate a single test function, 200 points are chosen uniformly at random from $[0, 1]^2$, their function values are generated from a GP using an isotropic squared exponential kernel with length scale $l = 0.1$ and unit variance, and the resulting posterior mean forms the function on the whole domain $[0, 1]^2$. We subsequently assume that samples of this function are corrupted by Gaussian noise with $\sigma^2 = 10^{-6}$. The extension of TruVaR to continuous domains is straightforward, and is explained in the supplementary material.

For all algorithms considered, we evaluate the performance according to the regret of a single reported point, namely, the one having the highest posterior mean. We compare the performance of TruVaR against expected improvement (EI), GP-upper confidence bound (GP-UCB), entropy search (ES) and minimum regret search (MRS), whose acquisition functions are outlined in the supplementary material. We use publicly available code for ES and MRS [20]. The exploration parameter $\beta_t$ in GP-UCB is set according to the recommendation in [3] of dividing the theoretical value by five, and the parameters for ES and MRS are set according to the recommendations given in [11, Section 5.1].

Figure 3a plots the median of the regret, and Figure 3b plots the mean after removing outliers (i.e., the best and worst 5% of the runs). In the earlier rounds, ES and MRS provide the best performance, while TruVaR improves slowly due to exploration. However, the regret of TruVaR subsequently drops rapidly, giving the best performance in the later rounds after "zooming in" towards the maximum. GP-UCB generally performs well with the aggressive choice of $\beta_t$, despite previous works' experiments revealing it to perform poorly with the theoretical value.

Hyperparameter tuning data: In this experiment, we use the SVM on grid dataset, previously used in [21]. A 25 × 14 × 4 grid of hyperparameter configurations resulting in 1400 data points was pre-evaluated, forming the search space. The goal is to find a configuration with small validation error. We use a Matérn-5/2 ARD kernel, and re-learn its hyperparameters by maximizing the likelihood after sampling every 3 points. Since the hyperparameters are not fixed in advance, we replace $M_{t-1}$ by $D$ in (5) to avoid incorrectly ruling points out early on, allowing some removed points to be added again in later steps. Once the estimated hyperparameters stop varying significantly, the size of the set of potential maximizers decreases almost monotonically. Since we consider the noiseless setting here, we measure performance using the simple regret, i.e., the best point found so far. We again average over 100 random starting points, and plot the resulting validation error in Figure 3c. Even in this noiseless and unit-cost setting that EI and GP-UCB are suited to, we find that TruVaR performs slightly better, giving a better validation error with smaller error bars.

5 Conclusion
We highlight the following aspects in which TruVaR is versatile:
• Unified optimization and level-set estimation: These are typically treated separately, whereas TruVaR and its theoretical guarantees are essentially identical in both cases.
• Actions with costs: TruVaR naturally favors cost-effective points, as this is directly incorporated into the acquisition function.
• Heteroscedastic noise: TruVaR chooses points that effectively shrink the variance of other points, thus directly taking advantage of situations in which some points are noisier than others.
• Choosing the noise level: We provided novel theoretical guarantees for the case that the algorithm can choose both a point and a noise level, cf. Corollary 3.2.

Hence, TruVaR directly handles several important aspects that are non-trivial to incorporate into myopic algorithms. Moreover, compared to other BO algorithms that perform a lookahead (e.g., ES and MRS), TruVaR avoids the computationally expensive task of averaging over the posterior and/or measurements, and comes with rigorous theoretical guarantees.

Acknowledgment: This work was supported in part by the European Commission under Grant ERC Future Proof, SNF Sinergia project CRSII2-147633, SNF 200021-146750, and EPFL Fellows Horizon2020 grant 665667.

References
[1] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas, "Taking the human out of the loop: A review of Bayesian optimization," Proc. IEEE, vol. 104, no. 1, pp. 148–175, 2016.
[2] A. Gotovos, N. Casati, G. Hitz, and A. Krause, "Active learning for level set estimation," in Int. Joint Conf. Art. Intel., 2013.
[3] N. Srinivas, A. Krause, S. Kakade, and M. Seeger, "Information-theoretic regret bounds for Gaussian process optimization in the bandit setting," IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 3250–3265, May 2012.
[4] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis, Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2013, ch. Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration, pp. 225–240.
[5] Z. Wang, B. Shakibi, L. Jin, and N. de Freitas, "Bayesian multi-scale optimistic optimization," http://arxiv.org/abs/1402.7005.
[6] K. Swersky, J. Snoek, and R. P. Adams, "Multi-task Bayesian optimization," in Adv. Neur. Inf. Proc. Sys. (NIPS), 2013, pp. 2004–2012.
[7] B. Bryan and J. G. Schneider, "Actively learning level-sets of composite functions," in Int. Conf. Mach. Learn. (ICML), 2008.
[8] P. Hennig and C. J. Schuler, "Entropy search for information-efficient global optimization," J. Mach. Learn. Research, vol. 13, no. 1, pp. 1809–1837, 2012.
[9] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani, "Predictive entropy search for efficient global optimization of black-box functions," in Adv. Neur. Inf. Proc. Sys. (NIPS), 2014, pp. 918–926.
[10] P. W. Goldberg, C. K. Williams, and C. M. Bishop, "Regression with input-dependent noise: A Gaussian process treatment," Adv. Neur. Inf. Proc. Sys. (NIPS), vol. 10, pp. 493–499, 1997.
[11] J. H. Metzen, "Minimum regret search for single- and multi-task optimization," in Int. Conf. Mach. Learn. (ICML), 2016.
[12] S. Bubeck and N. Cesa-Bianchi, Regret Analysis of Stochastic and Nonstochastic Multi-Armed Bandit Problems, ser. Found. Trend. Mach. Learn. Now Publishers, 2012.
[13] K. Jamieson and R. Nowak, "Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting," in Ann. Conf. Inf. Sci. Sys. (CISS), 2014, pp. 1–6.
[14] O. Madani, D. J. Lizotte, and R. Greiner, "The budgeted multi-armed bandit problem,"
in Learning Theory. Springer, 2004, pp. 643–645.
[15] R. Kleinberg, A. Slivkins, and E. Upfal, "Multi-armed bandits in metric spaces," in Proc. ACM Symp. Theory Comp., 2008.
[16] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[17] A. Krause and D. Golovin, "Submodular function maximization," Tractability: Practical Approaches to Hard Problems, vol. 3, 2012.
[18] A. Das and D. Kempe, "Algorithms for subset selection in linear regression," in Proc. ACM Symp. Theory Comp. (STOC). ACM, 2008, pp. 45–54.
[19] G. Hitz, F. Pomerleau, M.-E. Garneau, E. Pradalier, T. Posch, J. Pernthaler, and R. Y. Siegwart, "Autonomous inland water monitoring: Design and application of a surface vessel," IEEE Robot. Autom. Magazine, vol. 19, no. 1, pp. 62–72, 2012.
[20] http://github.com/jmetzen/bayesian_optimization (accessed 19/05/2016).
[21] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," in Adv. Neur. Inf. Proc. Sys., 2012.
[22] K. Swersky, J. Snoek, and R. P. Adams, "Freeze-thaw Bayesian optimization," 2014, http://arxiv.org/abs/1406.3896.
[23] D. R. Jones, C. D. Perttunen, and B. E. Stuckman, "Lipschitzian optimization without the Lipschitz constant," J. Opt. Theory Apps., vol. 79, no. 1, pp. 157–181, 1993.
[24] A. Krause and C. Guestrin, "A note on the budgeted maximization of submodular functions," 2005, Technical Report.
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

Michaël Defferrard  Xavier Bresson  Pierre Vandergheynst
EPFL, Lausanne, Switzerland
{michael.defferrard,xavier.bresson,pierre.vandergheynst}@epfl.ch

Abstract
In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or word embeddings, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.

1 Introduction
Convolutional neural networks [19] offer an efficient architecture to extract highly meaningful statistical patterns in large-scale and high-dimensional datasets. The ability of CNNs to learn local stationary structures and compose them to form multi-scale hierarchical patterns has led to breakthroughs in image, video, and sound recognition tasks [18]. Precisely, CNNs extract the local stationarity property of the input data or signals by revealing local features that are shared across the data domain. These similar features are identified with localized convolutional filters or kernels, which are learned from the data. Convolutional filters are shift- or translation-invariant filters, meaning they are able to recognize identical features independently of their spatial locations. Localized kernels or compactly supported filters refer to filters that extract local features independently of the input data size, with a support size that can be much smaller than the input size.

User data on social networks, gene data on biological regulatory networks, log data on telecommunication networks, or text documents on word embeddings are important examples of data lying on irregular or non-Euclidean domains that can be structured with graphs, which are universal representations of heterogeneous pairwise relationships. Graphs can encode complex geometric structures and can be studied with strong mathematical tools such as spectral graph theory [6].

A generalization of CNNs to graphs is not straightforward as the convolution and pooling operators are only defined for regular grids. This makes this extension challenging, both theoretically and implementation-wise. The major bottleneck of generalizing CNNs to graphs, and one of the primary goals of this work, is the definition of localized graph filters which are efficient to evaluate and learn. Precisely, the main contributions of this work are summarized below.
1. Spectral formulation. A spectral graph theoretical formulation of CNNs on graphs built on established tools in graph signal processing (GSP) [31].
2. Strictly localized filters. Enhancing [4], the proposed spectral filters are provably strictly localized in a ball of radius K, i.e. K hops from the central vertex.
3. Low computational complexity. The evaluation complexity of our filters is linear w.r.t. the filter's support size K and the number of edges |E|.
Importantly, as most real-world graphs are highly sparse, we have $|E| \ll n^2$ and $|E| = kn$ for the widespread $k$-nearest neighbor (NN) graphs, leading to a linear complexity w.r.t. the input data size $n$. Moreover, this method avoids the Fourier basis altogether, thus the expensive eigenvalue decomposition (EVD) necessary to compute it as well as the need to store the basis, a matrix of size $n^2$. That is especially relevant when working with limited GPU memory. Besides the data, our method only requires to store the Laplacian, a sparse matrix of $|E|$ non-zero values.
4. Efficient pooling. We propose an efficient pooling strategy on graphs which, after a rearrangement of the vertices as a binary tree structure, is analogous to pooling of 1D signals.
5. Experimental results. We present multiple experiments that ultimately show that our formulation is (i) a useful model, (ii) computationally efficient and (iii) superior both in accuracy and complexity to the pioneer spectral graph CNN introduced in [4]. We also show that our graph formulation performs similarly to classical CNNs on MNIST and study the impact of various graph constructions on performance. The TensorFlow [1] code to reproduce our results and apply the model to other data is available as an open-source software.¹

[Figure 1: Architecture of a CNN on graphs and the four ingredients of a (graph) convolutional layer: input graph signals (e.g. bags of words) pass through convolutional layers (1. convolution via graph signal filtering, 2. non-linear activation, 3. sub-sampling via graph coarsening, 4. pooling) and fully connected layers to produce output signals (e.g. labels).]

2 Proposed Technique
Generalizing CNNs to graphs requires three fundamental steps: (i) the design of localized convolutional filters on graphs, (ii) a graph coarsening procedure that groups together similar vertices and (iii) a graph pooling operation that trades spatial resolution for higher filter resolution.

2.1 Learning Fast Localized Spectral Filters
There are two strategies to define convolutional filters; either from a spatial approach or from a spectral approach. By construction, spatial approaches provide filter localization via the finite size of the kernel. However, although graph convolution in the spatial domain is conceivable, it faces the challenge of matching local neighborhoods, as pointed out in [4]. Consequently, there is no unique mathematical definition of translation on graphs from a spatial perspective. On the other side, a spectral approach provides a well-defined localization operator on graphs via convolutions with a Kronecker delta implemented in the spectral domain [31]. The convolution theorem [22] defines convolutions as linear operators that diagonalize in the Fourier basis (represented by the eigenvectors of the Laplacian operator). However, a filter defined in the spectral domain is not naturally localized and translations are costly due to the $O(n^2)$ multiplication with the graph Fourier basis. Both limitations can however be overcome with a special choice of filter parametrization.

Graph Fourier Transform. We are interested in processing signals defined on undirected and connected graphs $G = (V, E, W)$, where $V$ is a finite set of $|V| = n$ vertices, $E$ is a set of edges and $W \in \mathbb{R}^{n \times n}$ is a weighted adjacency matrix encoding the connection weight between two vertices. A signal $x: V \to \mathbb{R}$ defined on the nodes of the graph may be regarded as a vector $x \in$
$\mathbb{R}^n$ where $x_i$ is the value of $x$ at the $i$-th node. An essential operator in spectral graph analysis is the graph Laplacian [6], whose combinatorial definition is $L = D - W \in \mathbb{R}^{n \times n}$ where $D \in \mathbb{R}^{n \times n}$ is the diagonal degree matrix with $D_{ii} = \sum_j W_{ij}$, and normalized definition is $L = I_n - D^{-1/2}WD^{-1/2}$ where $I_n$ is the identity matrix. As $L$ is a real symmetric positive semidefinite matrix, it has a complete set of orthonormal eigenvectors $\{u_l\}_{l=0}^{n-1} \subset \mathbb{R}^n$, known as the graph Fourier modes, and their associated ordered real nonnegative eigenvalues $\{\lambda_l\}_{l=0}^{n-1}$, identified as the frequencies of the graph. The Laplacian is indeed diagonalized by the Fourier basis $U = [u_0, \ldots, u_{n-1}] \in \mathbb{R}^{n \times n}$ such that $L = U\Lambda U^T$ where $\Lambda = \mathrm{diag}([\lambda_0, \ldots, \lambda_{n-1}]) \in \mathbb{R}^{n \times n}$. The graph Fourier transform of a signal $x \in \mathbb{R}^n$ is then defined as $\hat{x} = U^T x \in \mathbb{R}^n$, and its inverse as $x = U\hat{x}$ [31]. As on Euclidean spaces, that transform enables the formulation of fundamental operations such as filtering.

¹ https://github.com/mdeff/cnn_graph

Spectral filtering of graph signals. As we cannot express a meaningful translation operator in the vertex domain, the convolution operator on graph $*_G$ is defined in the Fourier domain such that $x *_G y = U\big((U^T x) \odot (U^T y)\big)$, where $\odot$ is the element-wise Hadamard product. It follows that a signal $x$ is filtered by $g_\theta$ as
$$y = g_\theta(L)x = g_\theta(U\Lambda U^T)x = U g_\theta(\Lambda) U^T x. \quad (1)$$
A non-parametric filter, i.e. a filter whose parameters are all free, would be defined as
$$g_\theta(\Lambda) = \mathrm{diag}(\theta), \quad (2)$$
where the parameter $\theta \in \mathbb{R}^n$ is a vector of Fourier coefficients.

Polynomial parametrization for localized filters. There are however two limitations with non-parametric filters: (i) they are not localized in space and (ii) their learning complexity is in $O(n)$, the dimensionality of the data. These issues can be overcome with the use of a polynomial filter
$$g_\theta(\Lambda) = \sum_{k=0}^{K-1} \theta_k \Lambda^k, \quad (3)$$
where the parameter $\theta \in \mathbb{R}^K$ is a vector of polynomial coefficients. The value at vertex $j$ of the filter $g_\theta$ centered at vertex $i$ is given by $(g_\theta(L)\delta_i)_j = (g_\theta(L))_{i,j} = \sum_k \theta_k (L^k)_{i,j}$, where the kernel is localized via a convolution with a Kronecker delta function $\delta_i \in \mathbb{R}^n$. By [12, Lemma 5.2], $d_G(i,j) > K$ implies $(L^K)_{i,j} = 0$, where $d_G$ is the shortest path distance, i.e. the minimum number of edges connecting two vertices on the graph. Consequently, spectral filters represented by $K$-th order polynomials of the Laplacian are exactly $K$-localized. Besides, their learning complexity is $O(K)$, the support size of the filter, and thus the same complexity as classical CNNs.

Recursive formulation for fast filtering. While we have shown how to learn localized filters with $K$ parameters, the cost to filter a signal $x$ as $y = Ug_\theta(\Lambda)U^T x$ is still high with $O(n^2)$ operations because of the multiplication with the Fourier basis $U$. A solution to this problem is to parametrize $g_\theta(L)$ as a polynomial function that can be computed recursively from $L$, as $K$ multiplications by a sparse $L$ cost $O(K|E|) \ll O(n^2)$. One such polynomial, traditionally used in GSP to approximate kernels (like wavelets), is the Chebyshev expansion [12]. Another option, the Lanczos algorithm [33], which constructs an orthonormal basis of the Krylov subspace $\mathcal{K}_K(L, x) = \mathrm{span}\{x, Lx, \ldots, L^{K-1}x\}$, seems attractive because of the coefficients' independence. It is however more convoluted and thus left as a future work. Recall that the Chebyshev polynomial $T_k(x)$ of order $k$ may be computed by the stable recurrence relation $T_k(x) = 2xT_{k-1}(x) - T_{k-2}(x)$ with $T_0 = 1$ and $T_1 = x$.
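As a concrete illustration of the operations above, here is a minimal sketch of ours (not the authors' released code) of non-parametric spectral filtering via an explicit eigendecomposition, i.e. equations (1)–(2); it is exactly the EVD-based, $O(n^2)$-per-filtering baseline that the recursive formulation avoids:

```python
import numpy as np

def normalized_laplacian(W):
    """L = I_n - D^{-1/2} W D^{-1/2} for a weighted adjacency matrix W (dense, for clarity)."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]

def spectral_filter(W, x, theta):
    """y = U diag(theta) U^T x: transform to the Fourier basis, filter, transform back.
    Requires a full EVD (O(n^3), once) plus two O(n^2) multiplications per signal."""
    lam, U = np.linalg.eigh(normalized_laplacian(W))  # frequencies and Fourier modes
    return U @ (theta * (U.T @ x))
```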
These polynomials form an orthogonal basis for $L^2([-1,1], dy/\sqrt{1-y^2})$, the Hilbert space of square integrable functions with respect to the measure $dy/\sqrt{1-y^2}$. A filter can thus be parametrized as the truncated expansion
$$g_\theta(\Lambda) = \sum_{k=0}^{K-1} \theta_k T_k(\tilde{\Lambda}), \quad (4)$$
of order $K - 1$, where the parameter $\theta \in \mathbb{R}^K$ is a vector of Chebyshev coefficients and $T_k(\tilde{\Lambda}) \in \mathbb{R}^{n \times n}$ is the Chebyshev polynomial of order $k$ evaluated at $\tilde{\Lambda} = 2\Lambda/\lambda_{max} - I_n$, a diagonal matrix of scaled eigenvalues that lie in $[-1, 1]$. The filtering operation can then be written as $y = g_\theta(L)x = \sum_{k=0}^{K-1} \theta_k T_k(\tilde{L})x$, where $T_k(\tilde{L}) \in \mathbb{R}^{n \times n}$ is the Chebyshev polynomial of order $k$ evaluated at the scaled Laplacian $\tilde{L} = 2L/\lambda_{max} - I_n$. Denoting $\bar{x}_k = T_k(\tilde{L})x \in \mathbb{R}^n$, we can use the recurrence relation to compute $\bar{x}_k = 2\tilde{L}\bar{x}_{k-1} - \bar{x}_{k-2}$ with $\bar{x}_0 = x$ and $\bar{x}_1 = \tilde{L}x$. The entire filtering operation $y = g_\theta(L)x = [\bar{x}_0, \ldots, \bar{x}_{K-1}]\theta$ then costs $O(K|E|)$ operations.

Learning filters. The $j$-th output feature map of the sample $s$ is given by
$$y_{s,j} = \sum_{i=1}^{F_{in}} g_{\theta_{i,j}}(L)\, x_{s,i} \in \mathbb{R}^n, \quad (5)$$
where the $x_{s,i}$ are the input feature maps and the $F_{in} \times F_{out}$ vectors of Chebyshev coefficients $\theta_{i,j} \in \mathbb{R}^K$ are the layer's trainable parameters. When training multiple convolutional layers with the backpropagation algorithm, one needs the two gradients
$$\frac{\partial E}{\partial \theta_{i,j}} = \sum_{s=1}^{S} [\bar{x}_{s,i,0}, \ldots, \bar{x}_{s,i,K-1}]^T \frac{\partial E}{\partial y_{s,j}} \quad \text{and} \quad \frac{\partial E}{\partial x_{s,i}} = \sum_{j=1}^{F_{out}} g_{\theta_{i,j}}(L) \frac{\partial E}{\partial y_{s,j}}, \quad (6)$$
where $E$ is the loss energy over a mini-batch of $S$ samples. Each of the above three computations boils down to $K$ sparse matrix-vector multiplications and one dense matrix-vector multiplication for a cost of $O(K|E|F_{in}F_{out}S)$ operations. These can be efficiently computed on parallel architectures by leveraging tensor operations. Eventually, $[\bar{x}_{s,i,0}, \ldots, \bar{x}_{s,i,K-1}]$ only needs to be computed once.

2.2 Graph Coarsening
The pooling operation requires meaningful neighborhoods on graphs, where similar vertices are clustered together. Doing this for multiple layers is equivalent to a multi-scale clustering of the graph that preserves local geometric structures. It is however known that graph clustering is NP-hard [5] and that approximations must be used. While there exist many clustering techniques, e.g. the popular spectral clustering [21], we are most interested in multilevel clustering algorithms where each level produces a coarser graph which corresponds to the data domain seen at a different resolution. Moreover, clustering techniques that reduce the size of the graph by a factor two at each level offer a precise control on the coarsening and pooling size. In this work, we make use of the coarsening phase of the Graclus multilevel clustering algorithm [9], which has been shown to be extremely efficient at clustering a large variety of graphs. Algebraic multigrid techniques on graphs [28] and the Kron reduction [32] are two methods worth exploring in future works.

Graclus [9], built on Metis [16], uses a greedy algorithm to compute successive coarser versions of a given graph and is able to minimize several popular spectral clustering objectives, from which we chose the normalized cut [30]. Graclus' greedy rule consists, at each coarsening level, in picking an unmarked vertex $i$ and matching it with one of its unmarked neighbors $j$ that maximizes the local normalized cut $W_{ij}(1/d_i + 1/d_j)$. The two matched vertices are then marked and the coarsened weights are set as the sum of their weights. The matching is repeated until all nodes have been explored.
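Before turning to pooling, here is a minimal sketch (our illustration, under the same notation) of the fast filtering above, i.e. $y = \sum_k \theta_k T_k(\tilde{L})x$ computed with the recurrence as $K$ sparse matrix-vector products; for the normalized Laplacian, $\lambda_{max} \le 2$ can serve as a cheap upper bound:

```python
import numpy as np
import scipy.sparse as sp

def chebyshev_filter(L, x, theta, lmax=2.0):
    """y = sum_k theta_k T_k(L_tilde) x, with L a sparse (normalized) Laplacian.
    Assumes K = len(theta) >= 2; each iteration is one sparse mat-vec, i.e. O(|E|)."""
    L_tilde = (2.0 / lmax) * L - sp.identity(L.shape[0], format='csr')
    x_prev, x_curr = x, L_tilde @ x                 # T_0(L~)x = x and T_1(L~)x = L~x
    y = theta[0] * x_prev + theta[1] * x_curr
    for k in range(2, len(theta)):
        x_next = 2.0 * (L_tilde @ x_curr) - x_prev  # stable Chebyshev recurrence
        y = y + theta[k] * x_next
        x_prev, x_curr = x_curr, x_next
    return y
```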
This is a very fast coarsening scheme which divides the number of nodes by approximately two (there may exist a few singletons, non-matched nodes) from one level to the next coarser level.

2.3 Fast Pooling of Graph Signals
Pooling operations are carried out many times and must be efficient. After coarsening, the vertices of the input graph and its coarsened versions are not arranged in any meaningful way. Hence, a direct application of the pooling operation would need a table to store all matched vertices. That would result in a memory inefficient, slow, and hardly parallelizable implementation. It is however possible to arrange the vertices such that a graph pooling operation becomes as efficient as a 1D pooling. We proceed in two steps: (i) create a balanced binary tree and (ii) rearrange the vertices.

After coarsening, each node has either two children, if it was matched at the finer level, or one, if it was not, i.e. the node was a singleton. From the coarsest to finest level, fake nodes, i.e. disconnected nodes, are added to pair with the singletons such that each node has two children. This structure is a balanced binary tree: (i) regular nodes (and singletons) either have two regular nodes as children (e.g. level 1 vertex 0 in Figure 2) or (ii) one singleton and a fake node as children (e.g. level 2 vertex 0), and (iii) fake nodes always have two fake nodes as children (e.g. level 1 vertex 1). Input signals are initialized with a neutral value at the fake nodes, e.g. 0 when using a ReLU activation with max pooling. Because these nodes are disconnected, filtering does not impact the initial neutral value. While those fake nodes do artificially increase the dimensionality and thus the computational cost, we found that, in practice, the number of singletons left by Graclus is quite low. Arbitrarily ordering the nodes at the coarsest level, then propagating this ordering to the finest levels, i.e. node k has nodes 2k and 2k + 1 as children, produces a regular ordering in the finest level. Regular in the sense that adjacent nodes are hierarchically merged at coarser levels. Pooling such a rearranged graph signal is
3 3.1 Related Works Graph Signal Processing The emerging field of GSP aims at bridging the gap between signal processing and spectral graph theory [6, 3, 21], a blend between graph theory and harmonic analysis. A goal is to generalize fundamental analysis operations for signals from regular grids to irregular structures embodied by graphs. We refer the reader to [31] for an introduction of the field. Standard operations on grids such as convolution, translation, filtering, dilatation, modulation or downsampling do not extend directly to graphs and thus require new mathematical definitions while keeping the original intuitive concepts. In this context, the authors of [12, 8, 10] revisited the construction of wavelet operators on graphs and techniques to perform mutli-scale pyramid transforms on graphs were proposed in [32, 27]. The works of [34, 25, 26] redefined uncertainty principles on graphs and showed that while intuitive concepts may be lost, enhanced localization principles can be derived. 3.2 CNNs on Non-Euclidean Domains The Graph Neural Network framework [29], simplified in [20], was designed to embed each node in an Euclidean space with a RNN and use those embeddings as features for classification or regression of nodes or graphs. By setting their transition function f as a simple diffusion instead of a neural net with a recursive relation, their state vector becomes s = f (x) = W x. Their point-wise output function g? can further be set as x ? = g? (s, x) = ?(s ? Dx) + x = ?Lx + x instead of another neural net. The Chebyshev polynomials of degree K can then be obtained with a K-layer GNN, to be followed by a non-linear layer and a graph pooling operation. Our model can thus be interpreted as multiple layers of diffusions and node-local operations. The works of [11, 7] introduced the concept of constructing a local receptive field to reduce the number of learned parameters. The idea is to group together features based upon a measure of similarity such as to select a limited number of connections between two successive layers. While this model reduces the number of parameters by exploiting the locality assumption, it did not attempt to exploit any stationarity property, i.e. no weight-sharing strategy. The authors of [4] used this idea for their spatial formulation of graph CNNs. They use a weighted graph to define the local neighborhood and compute a multiscale clustering of the graph for the pooling operation. Inducing weight sharing in a spatial construction is however challenging, as it requires to select and order the neighborhoods when a problem-specific ordering (spatial, temporal, or otherwise) is missing. A spatial generalization of CNNs to 3D-meshes, a class of smooth low-dimensional non-Euclidean spaces, was proposed in [23]. The authors used geodesic polar coordinates to define the convolu5 Model Architecture Classical CNN Proposed graph CNN C32-P4-C64-P4-FC512 GC32-P4-GC64-P4-FC512 Accuracy 99.33 99.14 Table 1: Classification accuracies of the proposed graph CNN and a classical CNN on MNIST. tion on mesh patches, and formulated a deep learning architecture which allows comparison across different manifolds. They obtained state-of-the-art results for 3D shape recognition. The first spectral formulation of a graph CNN, proposed in [4], defines a filter as g? (?) = B?, (7) n?K K where B ? R is the cubic B-spline basis and the parameter ? ? R is a vector of control points. 
They later proposed a strategy to learn the graph structure from the data and applied the model to image recognition, text categorization and bioinformatics [13]. This approach, however, does not scale up due to the necessary multiplications by the graph Fourier basis U. Despite the cost of computing this matrix, which requires an EVD of the graph Laplacian, the dominant cost is the need to multiply the data by this matrix twice (forward and inverse Fourier transforms), at a cost of $O(n^2)$ operations per forward and backward pass, a computational bottleneck already identified by the authors. Besides, as they rely on smoothness in the Fourier domain, via the spline parametrization, to bring localization in the vertex domain, their model does not provide precise control over the local support of their kernels, which is essential to learn localized filters. Our technique builds on this work; we have shown how to overcome these limitations and to go beyond them.

4 Numerical Experiments

In the sequel, we refer to the non-parametric and non-localized filters (2) as Non-Param, to the filters (7) proposed in [4] as Spline, and to the proposed filters (4) as Chebyshev. We always use the Graclus coarsening algorithm introduced in Section 2.2 rather than the simple agglomerative method of [4]. Our motivation is to compare the learned filters, not the coarsening algorithms. We use the following notation when describing network architectures: FCk denotes a fully connected layer with k hidden units, Pk denotes a (graph or classical) pooling layer of size and stride k, and GCk and Ck denote a graph convolutional layer and a classical convolutional layer with k feature maps, respectively. All FCk, Ck and GCk layers are followed by a ReLU activation max(x, 0). The final layer is always a softmax regression and the loss energy E is the cross-entropy with an $\ell_2$ regularization on the weights of all FCk layers. Mini-batches are of size S = 100.

4.1 Revisiting Classical CNNs on MNIST

To validate our model, we applied it to the Euclidean case on the benchmark MNIST classification problem [19], a dataset of 70,000 digits represented on a 2D grid of size 28 × 28. For our graph model, we construct an 8-NN graph of the 2D grid, which produces a graph of $n = |V| = 976$ nodes ($28^2 = 784$ pixels and 192 fake nodes as explained in Section 2.3) and $|E| = 3198$ edges. Following standard practice, the weights of a k-NN similarity graph (between features) are computed as
$$W_{ij} = \exp\left(-\frac{\|z_i - z_j\|_2^2}{\sigma^2}\right), \qquad (8)$$
where $z_i$ is the 2D coordinate of pixel i. This is an important sanity check for our model, which must be able to extract features on any graph, including the regular 2D grid. Table 1 shows the ability of our model to achieve a performance very close to a classical CNN with the same architecture. The gap in performance may be explained by the isotropic nature of the spectral filters, i.e. the fact that edges in a general graph do not possess an orientation (like up, down, right and left for pixels on a 2D grid). Whether this is a limitation or an advantage depends on the problem and should be verified, as for any invariance. Moreover, rotational invariance has been sought: (i) many data augmentation schemes have used rotated versions of images and (ii) models have been developed to learn this invariance, like the Spatial Transformer Networks [14]. Other explanations are our limited experience with architecture design and the need to investigate better-suited optimization or initialization strategies.
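The k-NN construction of equation (8) is straightforward to implement; below is a brute-force sketch (ours; a spatial index would be needed at scale), applied to the 28 × 28 grid of Section 4.1.

import numpy as np
import scipy.sparse as sp

def knn_graph(z, k, sigma2):
    # z: one feature vector per node (2D pixel coordinates here,
    # word2vec embeddings for 20NEWS in Section 4.2).
    n = z.shape[0]
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # ||z_i - z_j||^2
    np.fill_diagonal(d2, np.inf)                         # no self-loops
    idx = np.argsort(d2, axis=1)[:, :k]                  # k nearest neighbours
    rows = np.repeat(np.arange(n), k)
    cols = idx.ravel()
    vals = np.exp(-d2[rows, cols] / sigma2)              # equation (8)
    W = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
    return ((W + W.T) / 2).tocsr()                       # symmetrize (averaging)

grid = np.stack(np.meshgrid(np.arange(28), np.arange(28)), -1).reshape(-1, 2)
W = knn_graph(grid.astype(float), k=8, sigma2=1.0)
print(W.shape, W.nnz)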
The LeNet-5-like network architecture and the following hyper-parameters are borrowed from the TensorFlow MNIST tutorial²: dropout probability of 0.5, regularization weight of $5 \times 10^{-4}$, initial learning rate of 0.03, learning rate decay of 0.95, momentum of 0.9. Filters are of size 5 × 5 and graph filters have the same support of K = 25. All models were trained for 20 epochs.

² https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros

Table 2: Accuracies of the proposed graph CNN and other methods on 20NEWS.
Model                     Accuracy
Linear SVM                65.90
Multinomial Naive Bayes   68.51
Softmax                   66.28
FC2500                    64.64
FC2500-FC500              65.76
GC32                      68.26

Figure 3: Time to process a mini-batch of S = 100 20NEWS documents w.r.t. the number of words n (runtime in ms, Chebyshev vs. Non-Param / Spline, for n up to 12,000).

Table 3: Classification accuracies for different types of spectral filters (K = 25).
Dataset  Architecture           Non-Param (2)  Spline (7) [4]  Chebyshev (4)
MNIST    GC10                   95.75          97.26           97.48
MNIST    GC32-P4-GC64-P4-FC512  96.28          97.15           99.14

Table 4: Time to process a mini-batch of S = 100 MNIST images.
Model               Architecture            CPU (ms)  GPU (ms)  Speedup
Classical CNN       C32-P4-C64-P4-FC512     210       31        6.77x
Proposed graph CNN  GC32-P4-GC64-P4-FC512   1600      200       8.00x

4.2 Text Categorization on 20NEWS

To demonstrate the versatility of our model to work with graphs generated from unstructured data, we applied our technique to the text categorization problem on the 20NEWS dataset, which consists of 18,846 text documents (11,314 for training and 7,532 for testing) associated with 20 classes [15]. We extracted the 10,000 most common words from the 93,953 unique words in this corpus. Each document x is represented using the bag-of-words model, normalized across words. To test our model, we constructed a 16-NN graph with (8), where $z_i$ is the word2vec embedding [24] of word i, which produced a graph of $n = |V| = 10{,}000$ nodes and $|E| = 132{,}834$ edges. All models were trained for 20 epochs by the Adam optimizer [17] with an initial learning rate of 0.001. The architecture is GC32 with support K = 5. Table 2 shows decent performance: while the proposed model does not outperform the multinomial naive Bayes classifier on this small dataset, it does beat fully connected networks, which require many more parameters.

4.3 Comparison between Spectral Filters and Computational Efficiency

Table 3 reports that the proposed parametrization (4) outperforms (7) from [4] as well as non-parametric filters (2), which are not localized and require O(n) parameters. Moreover, Figure 4 gives a sense of how the validation accuracy and the loss E converge w.r.t. the filter definitions. Figure 3 validates the low computational complexity of our model, which scales as O(n), while [4] scales as $O(n^2)$. The measured runtime is the total training time divided by the number of gradient steps. Table 4 shows a speedup similar to that of classical CNNs when moving to GPUs. This exemplifies the parallelization opportunity offered by our model, which relies solely on matrix multiplications. Those are efficiently implemented by cuBLAS, the linear algebra routines provided by NVIDIA.
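The O(n) cost discussed in Section 4.3 comes from evaluating the Chebyshev recurrence with sparse matrix–vector products instead of dense Fourier transforms. The sketch below is illustrative (it assumes the usual rescaling $\tilde{L} = 2L/\lambda_{max} - I$ so eigenvalues lie in [-1, 1]); it is not our TensorFlow implementation.

import numpy as np
import scipy.sparse as sp

def chebyshev_filter(L, x, theta, lmax):
    # y = sum_k theta[k] T_k(L_tilde) x, computed with the recurrence
    # T_k = 2 L_tilde T_{k-1} - T_{k-2}; cost is O(K |E|) sparse products.
    n = L.shape[0]
    L_tilde = (2.0 / lmax) * L - sp.identity(n, format='csr')
    Tx_prev, Tx = x, L_tilde @ x           # T_0(L~)x and T_1(L~)x
    y = theta[0] * Tx_prev + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2 * (L_tilde @ Tx) - Tx_prev
        y = y + theta[k] * Tx
    return y

# Example on a 4-node cycle graph (lambda_max = 4 for this Laplacian).
W = sp.csr_matrix(np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                            [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float))
L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()
print(chebyshev_filter(L, np.array([1., 0., 0., 0.]), np.array([.5, .3, .2]), 4.0))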
4.4 Influence of Graph Quality

For any graph CNN to be successful, the statistical assumptions of locality, stationarity, and compositionality regarding the data must be fulfilled on the graph where the data resides. Therefore, the learned filters' quality, and thus the classification performance, critically depends on the quality of the graph.

Figure 4: Plots of validation accuracy and training loss for the first 2000 iterations on MNIST, for the Chebyshev, Non-Param and Spline filters.

Table 5: Classification accuracies with different graph constructions on MNIST.
Architecture           8-NN on 2D Euclidean grid  random
GC32                   97.40                      96.88
GC32-P4-GC64-P4-FC512  99.14                      95.39

Table 6: Classification accuracies of GC32 with different graph constructions on 20NEWS.
bag-of-words  word2vec (pre-learned)  word2vec (learned)  word2vec (approximate)  random
67.50         66.98                   68.26               67.86                   67.75

For data lying on a Euclidean space, the experiments in Section 4.1 show that a simple k-NN graph of the grid is good enough to recover almost exactly the performance of standard CNNs. We also noticed that the value of k does not have a strong influence on the results. We can witness the importance of a graph satisfying the data assumptions by comparing its performance with a random graph. Table 5 reports a large drop of accuracy when using a random graph, that is, when the data structure is lost and the convolutional layers are no longer useful for extracting meaningful features. While images can be structured by a grid graph, a feature graph has to be built for text documents represented as bags of words. We investigate here three ways to represent a word z: the simplest option is to represent each word as its corresponding column in the bag-of-words matrix; another approach is to learn an embedding for each word with word2vec [24]; a third is to use the pre-learned embeddings provided by the authors. For larger datasets, an approximate nearest neighbors algorithm may be required, which is the reason we tried LSH Forest [2] on the learned word2vec embeddings. Table 6 reports classification results which highlight the importance of a well-constructed graph.

5 Conclusion and Future Work

In this paper, we have introduced the mathematical and computational foundations of an efficient generalization of CNNs to graphs using tools from GSP. Experiments have shown the ability of the model to extract local and stationary features through graph convolutional layers. Compared with the first work on spectral graph CNNs introduced in [4], our model provides strict control over the local support of filters, is computationally more efficient by avoiding an explicit use of the graph Fourier basis, and experimentally shows a better test accuracy. Besides, we addressed the three concerns raised by [13]: (i) we introduced a model whose computational complexity is linear with the dimensionality of the data, (ii) we confirmed that the quality of the input graph is of paramount importance, (iii) we showed that the statistical assumptions of local stationarity and compositionality made by the model are verified for text documents as long as the graph is well constructed. Future work will investigate two directions. On the one hand, we will enhance the proposed framework with newly developed tools in GSP. On the other hand, we will explore applications of this generic model to important fields where the data naturally lies on graphs, which may then incorporate external information about the structure of the data rather than relying on artificially created graphs, whose quality may vary, as seen in the experiments.
Another natural direction for future work, pioneered in [13], would be to alternate the learning of the CNN parameters and of the graph.

References

[1] Martín Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2016.
[2] M. Bawa, T. Condie, and P. Ganesan. LSH Forest: Self-Tuning Indexes for Similarity Search. In International Conference on World Wide Web, pages 651–660, 2005.
[3] M. Belkin and P. Niyogi. Towards a Theoretical Foundation for Laplacian-based Manifold Methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008.
[4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral Networks and Deep Locally Connected Networks on Graphs. arXiv:1312.6203, 2013.
[5] T.N. Bui and C. Jones. Finding Good Approximate Vertex and Edge Partitions is NP-hard. Information Processing Letters, 42(3):153–159, 1992.
[6] F. R. K. Chung. Spectral Graph Theory, volume 92. American Mathematical Society, 1997.
[7] A. Coates and A.Y. Ng. Selecting Receptive Fields in Deep Networks. In Neural Information Processing Systems (NIPS), pages 2528–2536, 2011.
[8] R.R. Coifman and S. Lafon. Diffusion Maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[9] I. Dhillon, Y. Guan, and B. Kulis. Weighted Graph Cuts Without Eigenvectors: A Multilevel Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 29(11):1944–1957, 2007.
[10] M. Gavish, B. Nadler, and R. Coifman. Multiscale Wavelets on Trees, Graphs and High Dimensional Data: Theory and Applications to Semi Supervised Learning. In International Conference on Machine Learning (ICML), pages 367–374, 2010.
[11] K. Gregor and Y. LeCun. Emergence of Complex-like Cells in a Temporal Product Network with Local Receptive Fields. arXiv:1006.0448, 2010.
[12] D. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on Graphs via Spectral Graph Theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
[13] M. Henaff, J. Bruna, and Y. LeCun. Deep Convolutional Networks on Graph-Structured Data. arXiv:1506.05163, 2015.
[14] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial Transformer Networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
[15] T. Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. Carnegie Mellon University, Computer Science Technical Report, CMU-CS-96-118, 1996.
[16] G. Karypis and V. Kumar. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal on Scientific Computing (SISC), 20(1):359–392, 1998.
[17] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
[18] Y. LeCun, Y. Bengio, and G. Hinton. Deep Learning. Nature, 521(7553):436–444, 2015.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated Graph Sequence Neural Networks. arXiv:1511.05493, 2015.
[21] U. Von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4):395–416, 2007.
[22] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
[23] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic Convolutional Neural Networks on Riemannian Manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37–45, 2015.
[24] T. Mikolov, K. Chen, G. Corrado, and J. Dean.
Efficient Estimation of Word Representations in Vector Space. In International Conference on Learning Representations, 2013.
[25] B. Pasdeloup, R. Alami, V. Gripon, and M. Rabbat. Toward an Uncertainty Principle for Weighted Graphs. In Signal Processing Conference (EUSIPCO), pages 1496–1500, 2015.
[26] N. Perraudin, B. Ricaud, D. Shuman, and P. Vandergheynst. Global and Local Uncertainty Principles for Signals on Graphs. arXiv:1603.03030, 2016.
[27] I. Ram, M. Elad, and I. Cohen. Generalized Tree-based Wavelet Transform. IEEE Transactions on Signal Processing, 59(9):4199–4209, 2011.
[28] D. Ron, I. Safro, and A. Brandt. Relaxation-based Coarsening and Multiscale Graph Organization. SIAM Journal on Multiscale Modeling and Simulation, 9:407–423, 2011.
[29] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[30] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(8):888–905, 2000.
[31] D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and other Irregular Domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
[32] D.I. Shuman, M.J. Faraji, and P. Vandergheynst. A Multiscale Pyramid Transform for Graph Signals. IEEE Transactions on Signal Processing, 64(8):2119–2134, 2016.
[33] A. Susnjara, N. Perraudin, D. Kressner, and P. Vandergheynst. Accelerated Filtering on Graphs using Lanczos Method. arXiv:1509.04537, 2015.
[34] M. Tsitsvero and S. Barbarossa. On the Degrees of Freedom of Signals on Graphs. In Signal Processing Conference (EUSIPCO), pages 1506–1510, 2015.
Sampling for Bayesian Program Learning

Kevin Ellis, Brain and Cognitive Sciences, MIT, ellisk@mit.edu
Armando Solar-Lezama, CSAIL, MIT, asolar@csail.mit.edu
Joshua B. Tenenbaum, Brain and Cognitive Sciences, MIT, jbt@mit.edu

Abstract

Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and have empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.

1 Introduction

Learning programs from examples is a central problem in artificial intelligence, and many recent approaches draw on techniques from machine learning. Connectionist approaches, like the Neural Turing Machine [1, 2], and symbolic approaches, like Hierarchical Bayesian Program Learning [3, 4, 5], couple a probabilistic learning framework with either gradient- or sampling-based search procedures. In this work, we consider the problem of Bayesian inference over program spaces. We combine solver-based program synthesis [6] and sampling via random projections [7], showing how to sample from posterior distributions over programs where the samples come from a distribution provably arbitrarily close to the true posterior. The new approach is implemented in a system called PROGRAMSAMPLE and evaluated on a set of program induction problems that include list and string manipulation routines.

1.1 Motivation and problem statement

Consider the problem of learning string edit programs, a well-studied domain for programming by example. Often end users provide these examples and are unwilling to give more than one instance, which leaves the target program highly ambiguous. We model this ambiguity by sampling string edit programs, allowing us to learn from very few examples (Figure 1) and offer different plausible solutions. Our sampler also incorporates a description-length prior to bias us towards simpler programs.

Figure 1: Learning string manipulation programs by example (top input/output pair). Our system receives data like that shown above and then sampled the programs shown below.
Input: "1/21/2001"  Output: "01"
substr(pos("0",-1),-1)   "last 0 til end"
const("01")              "output 01"
substr(-2,-1)            "take last two"

Another program learning domain comes from computer-aided programming, where the goal is to synthesize algorithms from either examples or formal specifications. This problem can be ill posed because many programs may satisfy the specification or examples. When this ambiguity arises, PROGRAMSAMPLE proposes multiple implementations with a bias towards shorter or simpler ones. The samples can also be used to efficiently approximate the posterior predictive distribution, effectively integrating out the program. We show PROGRAMSAMPLE learning routines for counting and recursively sorting/reversing lists while modeling the uncertainty over the correct algorithm. Because any model can be represented as a (probabilistic or deterministic) program, we need to carefully delimit the scope of this work.
The programs we learn are a subset of those handled by constraint-based program synthesis tools. This means that the program is finite (bounded size, bounded runtime, bounded memory consumption), can be modeled in a constraint solver (like a SAT or SMT solver), and that the program's high-level structure is already given as a sketch [6], which can take the form of a recursive grammar over expressions. The sketch defines the search space and imparts prior knowledge. For example, we use one sketch when learning string edit programs and a different sketch when learning recursive list manipulation programs. More formally, our sketch specifies a finite set of programs, S, as well as a measure of the programs' description length, which we write as |x| for x ∈ S. This defines a prior ($\propto 2^{-|x|}$). For each program learning problem, we have a specification (such as consistency with input/output examples) and want to sample from S conditioned upon the specification holding, giving the posterior over programs $\propto 2^{-|x|} \cdot \mathbb{1}[\text{specification holds for } x]$. Throughout the rest of the paper, we write p(·) to mean this posterior distribution, and write X to mean the set of all programs in S consistent with the specification. So the problem is to sample from $p(x) = \frac{2^{-|x|}}{Z}$, where $Z = \sum_{x \in X} 2^{-|x|}$. We can invoke a solver, which enumerates members of X, possibly subject to extra constraints, but without any guarantees on the order of enumeration. Throughout this work we use a SAT solver, and encode x ∈ X in the values of n Boolean decision variables. With a slight abuse of notation we will use x to refer to both a member of X and an assignment to those n decision variables. An assignment to the j-th variable we write as $x_j$ for $1 \le j \le n$. Section 1.2 briskly summarizes the constraint-solving program synthesis approach.

1.2 Program synthesis by constraint solving

The constraint-solving approach to program synthesis, pioneered in [6, 8], synthesizes programs by (1) modeling the space of programs as assignments to Boolean decision variables in a constraint satisfaction problem; (2) adding constraints to enforce consistency with a specification; (3) asking the solver to find any solution to the constraints; and (4) reinterpreting that solution as a program. Figure 2 illustrates this approach for the toy problem of synthesizing programs in a language consisting of single-bit operators. Each program has one input (i in Figure 2) which it transforms using nand gates. The grammar in Figure 2a is the sketch. If we inline the grammar, we can diagram the space of all programs as an AND/OR graph (Figure 2b), where the $x_j$ are Boolean decision variables that control the program's structure. For each of the input/output examples (Figure 2d) we have constraints that model the program execution (Figure 2c) and enforce the desired output ($P_1$ taking value 1). After solving for a satisfying assignment to the $x_j$'s, we can read these off as a program (Figure 2e). In this work we measure the description length of a program x as the number of bits required to specify its structure (so |x| is a natural number).¹ PROGRAMSAMPLE further constrains unused bits to take a canonical form, such as all being zero. This causes the mapping between programs x ∈ X and variable assignments $\{x_j\}_{j=1}^{n}$ to be one-to-one.
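As a toy illustration of this posterior (the program names and bit lengths below are made up; the real system represents X implicitly inside a SAT solver rather than as an explicit list):

from fractions import Fraction
import random

programs = [("substr(-2,-1)", 14, True),
            ("const('01')", 18, True),
            ("substr(pos('0',-1),-1)", 22, True),
            ("const('xyz')", 18, False)]  # inconsistent with the specification

consistent = [(name, bits) for name, bits, ok in programs if ok]   # the set X
Z = sum(Fraction(1, 2 ** bits) for _, bits in consistent)          # partition function
posterior = {name: Fraction(1, 2 ** bits) / Z for name, bits in consistent}
print(posterior)  # shorter consistent programs get exponentially more mass

# Drawing one exact sample is easy here only because X is tiny and explicit.
draw = random.choices([n for n, _ in consistent],
                      weights=[float(p) for p in posterior.values()])[0]
print(draw)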
1.3 Algorithmic contribution

In the past decade, different groups of researchers have concurrently developed solver-based techniques for (1) sampling of combinatorial spaces [9, 7, 10, 11] and (2) program synthesis [6, 8]. This work merges these two lines of research to attack the problem of program learning in a probabilistic setting. We use program synthesis tools to convert a program learning problem into a SAT formula. Then, rather than search for one program (formula solution), we augment the formula with random constraints that cause it to (approximately) sample the space of programs, effectively "upgrading" our SAT solver from a program synthesizer to a program sampler. The groundbreaking algorithms in [9] gave the first scheme (XORSample) for sampling discrete spaces by adding random constraints to a constraint satisfaction problem. While one could use a tool like Sketch to reduce a program learning problem to SAT and then use an algorithm like XORSample, PAWS, or WeightGen [9, 7, 10] to sample programs from a description-length prior, doing so can be surprisingly inefficient². The efficiency of these sampling algorithms depends critically on a quantity called the distribution's tilt, introduced in [10] as $\max_x p(x) / \min_x p(x)$. When there are a few very likely (short) programs and many extremely unlikely (long) programs, the posterior over programs becomes extremely tilted. Recent work has relied on upper bounding the tilt, often to around 20 [10]. For program sampling problems, we usually face very high tilt, upwards of $2^{50}$. Our main algorithmic contribution is a new approach that extends these techniques to distributions with high tilt, such as those encountered in program induction.

¹ This is equivalent to the assumption that x is drawn from a probabilistic grammar specified by the sketch.

Figure 2: Synthesizing a program via sketching and constraint solving. (a) Sketch: Program ::= i | nand(Program,Program). (b) The program space, diagrammed as an AND/OR graph over program nodes $P_1, P_2, P_3, \ldots$ controlled by Boolean decision variables $x_1, x_2, x_3, \ldots$. (c) Constraints for the SAT solver modeling the program execution, e.g. $x_1 \Rightarrow (P_1 \Leftrightarrow i)$, together with the specification ($i = 0$, $P_1 = 1$). (d) Specification: Program(i = 0) = 1. (e) A constraint solution: $x_1 = 0$, $x_2 = 1$, $x_3 = 1$, i.e. Program = nand(i, i); |x| = 3 bits. Typewriter font refers to pieces of programs or sketches, while math font refers to pieces of a constraint satisfaction problem. The variable i is the program input.

2 The sampling algorithm

Given the distribution p(·) on the program space X, it is always possible to define a higher-dimensional space E (an embedding) and a mapping F : E → X such that sampling uniformly from E and applying F will give us approximately p-distributed samples [7]. But when the tilt of p(·) becomes large, we found that such an approach is no longer practical.³ Our approach instead is to define an F′ : E → X such that uniform samples on E map to a distribution q(·) that is guaranteed to have low tilt, but whose KL divergence from p(·) is low. The discrepancy between the distributions p(·) and q(·) can be corrected through rejection sampling. Sampling uniformly from E is itself not trivial, but a variety of techniques exist to approximate uniform sampling by adding random XOR constraints (random projections mod 2) to the set E, which is extensively studied in [9, 12, 10, 13, 11]. These techniques introduce approximation error that can be made arbitrarily small at the expense of lower efficiency. Figure 3 illustrates this process.
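The XORSample idea can be prototyped in a few lines; in the brute-force sketch below, plain enumeration stands in for the SAT solver and the toy solution set is made up.

import itertools, random

n, K = 6, 3
X = [a for a in itertools.product([0, 1], repeat=n) if sum(a) <= 2]  # toy solutions

h = [[random.randint(0, 1) for _ in range(n)] for _ in range(K)]  # random subsets
b = [random.randint(0, 1) for _ in range(K)]                      # target parities

def survives(a):
    # a survives iff it satisfies all K parity (XOR) constraints
    return all(sum(hi * ai for hi, ai in zip(row, a)) % 2 == bi
               for row, bi in zip(h, b))

survivors = [a for a in X if survives(a)]  # each solution survives w.p. 2^-K
if survivors:
    print(random.choice(survivors))        # approximately uniform over X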
2.1 Getting high-quality samples

Low-tilt approximation. We introduce a parameter d into the sampling algorithm that parameterizes q(·). The parameter d acts as a threshold, or cut-off, for the description length of a program; the distribution q(·) acts as though any program with description length exceeding d can be encoded using d bits. Concretely,

$$q(x) \propto \begin{cases} 2^{-|x|} & \text{if } |x| \le d \\ 2^{-d} & \text{otherwise.} \end{cases} \qquad (1)$$

If we could sample exactly from q(·), we could reject a sample x with probability 1 − A(x), where

$$A(x) = \begin{cases} 1 & \text{if } |x| \le d \\ 2^{-|x|+d} & \text{otherwise,} \end{cases} \qquad (2)$$

and get exact samples from p(·), where the acceptance rate would approach 1 exponentially quickly in d. We have the following result; see the supplement for proofs.

Proposition 1. Let x ∈ X be a sample from q(·). The probability of accepting x is at least $\frac{1}{1 + |X| \, 2^{|x_\star| - d}}$, where $x_\star = \arg\min_x |x|$.

The distribution q(·) is useful because we can guarantee that it has tilt bounded by $2^{d - |x_\star|}$. Introducing the proposal q(·) effectively reifies the tilt, making it a parameter of the sampling algorithm rather than of the distribution over programs. We now show how to approximately sample from q(·) using a variant of the Embed and Project framework [7].

² In many cases, slower than rejection sampling or enumerating all of the programs.
³ [10] take a qualitatively different approach from [7], not based on an embedding, but which still becomes prohibitively expensive in the high-tilt regime.
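The tilt correction itself is a one-line rejection test; a minimal sketch, with arbitrary example numbers:

import random

def accept(bits, d):
    # A(x) = 1 if |x| <= d, else 2^(d - |x|), as in equation (2)
    return bits <= d or random.random() < 2.0 ** (d - bits)

# With d = 20, a 25-bit program survives rejection w.p. 2^-5 = 1/32.
trials = 100000
print(sum(accept(25, 20) for _ in range(trials)) / trials)  # approx 0.031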
The embedding. The idea is to define a new set of programs, which we call E, such that short programs are included in the set much more often than long programs. Each program x will be represented in E by an amount proportional to $2^{-\min(|x|, d)}$, thus proportional to q(x), such that sampling elements uniformly from E samples according to q(·).

Figure 3: PROGRAMSAMPLE twice distorts the posterior distribution p(·). First, it introduces a parameter d that bounds the tilt; we correct for this by accepting samples w.p. A(x). Second, it samples from q(·) by drawing instead from r(·), where KL(q‖r) can be made arbitrarily small by appropriately setting another parameter, K. The distribution of samples is A(x)r(x).

We embed X within the larger set E by introducing d auxiliary variables, written $(y_1, \ldots, y_d)$, such that every element of E is a tuple of an element $x = (x_1, \ldots, x_n)$ and an assignment to $y = (y_1, \ldots, y_d)$:

$$E = \{(x, y) : x \in X, \ \bigwedge_{1 \le j \le d} |x| \ge j \Rightarrow y_j = 1\}. \qquad (3)$$

Suppose we sample (x, y) uniformly from E. Then the probability of getting a particular x ∈ X is proportional to $|\{(x', y) \in E : x' = x\}| = |\{y : |x| \ge j \Rightarrow y_j = 1\}| = 2^{\max(0, d - |x|)}$, which is proportional to q(x). Notice that |E| grows exponentially with d, and thus with the tilt of q(·). This is the crux of the inefficiency of sampling from high-tilt distributions in these frameworks: the auxiliary variables combine with the random constraints to entangle otherwise independent Boolean decision variables, while also increasing the number of variables and clauses.

The random projections. We could sample exactly from E by invoking the solver |E| + 1 times to get every element of E, but in general E will have $O(|X| \, 2^d)$ elements, which could be very large. Instead, we ask the solver for all the elements of E consistent with K random constraints such that (1) few elements of E are likely to satisfy ("survive") the constraints, and (2) any element of E is approximately equally likely to satisfy the constraints. We can then sample a survivor uniformly to get an approximate sample from E, an idea introduced in the XORSample′ algorithm [9]. Although simple compared to recent approaches [10, 14, 15], it suffices for our theoretical and empirical results. Our random constraints take the form of XOR, or parity, constraints, which are random projections mod 2. Each constraint fixes the parity of a random subset of the SAT variables to either 1 or 0; thus any assignment survives a constraint with probability 1/2. A useful feature of random parity constraints is that whether an assignment to the SAT variables survives is independent of whether another, different assignment survives, which has been exploited to create a variety of approximate sampling algorithms [9, 12, 10, 13, 11]. The K constraints are of the form $h \binom{x}{y} = b$, where h is a $K \times (d + n)$ binary matrix and b is a K-dimensional binary vector. If no solutions satisfy the K constraints, then the sampling attempt is rejected. These samples are close to uniform in the following sense:

Proposition 2. The probability of sampling (x, y) is at least $\frac{1}{|E|} \cdot \frac{1}{1 + 2^K/|E|}$, and the probability of getting any sample at all is at least $1 - 2^K/|E|$.

So we get approximate samples from E as long as $|E| \, 2^{-K}$ is not small. In reference to Figure 3, we call the distribution of these samples $r(x) = \sum_y r(x, y)$.
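The counting argument behind the embedding (3) can be checked by brute force; d and the program lengths below are arbitrary.

import itertools

d = 4
for length in [1, 2, 4, 7]:  # candidate |x| values
    # y in {0,1}^d with y_j = 1 whenever |x| >= j, as in equation (3)
    ys = [y for y in itertools.product([0, 1], repeat=d)
          if all(y[j - 1] == 1 for j in range(1, d + 1) if length >= j)]
    assert len(ys) == 2 ** max(0, d - length)  # 2^max(0, d-|x|) completions
    print(length, len(ys))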
This implicitly defined a description-length prior, where |x| is the number of bits required to specify x in the SAT encoding. We used CryptoMiniSAT [17], which can efficiently handle parity constraints. 3.1 Learning Text Edit Scripts We applied our program sampling algorithm to a suite of programming by demonstration problems within a text editing domain. Here, the challenge is to learn a small text editing program from very few examples and apply that program to held out inputs. This problem is timely, given the widespread use of the FlashFill program synthesis tool, which now ships by default in Microsoft Excel [18] and can learn sophisticated edit operations in real time from examples. We modeled a subset of 5 the FlashFill language; our goal here is not to compete with FlashFill, which is cleverly engineered for its specific domain, but to study the behavior of our more general-purpose program learner in a real-world task. To impart domain knowledge, we used a sketch equivalent to Figure 4. Because FlashFill?s training set is not yet public, we drew text editing problems from [19] and adapted them to our subset of FlashFill, giving 19 problems, each with 5 training examples. The supplement contains these text edit problems. Program ::= Term | Program + Term Term ::= String | substr(Pos,Pos) Pos ::= Number | pos(String,String,Number) Number ::= 0 | 1 | 2 | ... | -1 | -2 | ... String ::= Character | Character + String Character ::= a | b | c | ... We are interested both in the ability of the learner to generalize and in P RO GRAM S AMPLE ?s ability to generate samples quickly. Table 1 shows the average time per sampling attempt using P RO - Figure 4: The sketch (program space) for learning text GRAM S AMPLE , which is on the order of edit scripts a minute. These text edit problems come from distributions with extremely high tilt: often the smallest program is only tens of bits long, but the program space contains (implausible) solutions with over 100 bits. By putting d to |x? | ? n we eliminate the tilt correction and recover a variant of the approaches in [7]. This baseline does not produce any samples for any of our text edit problems in under an hour.4 Other baselines also failed to produce samples in a reasonable amount of time (see supplement). For example, pure rejection sampling (drawing from the prior) is also infeasible, with consistent programs having prior probability ? 2?50 in some cases. The learner generalizes to unseen examples, as Figure 5 shows. We evaluated the performance of the learner on held out test examples while varying training set size, and compare with baselines that either (1) enumerate programs in the arbitrary order provided by the underlying solver, or (2) takes the most likely program under p(x) (MDL learner). The posterior is sharply peaked, with most samples being from the MAP solution, and so our learner does about as well as the MDL learner. However, sampling offers an (approximate) predictive posterior over predictions on the held out examples; in a real world scenario, one would offer the top C predictions to the user and let them choose, much like how spelling correction works. This procedure allows us to offer the correct predictions more often than the MDL learner (Figure 6), because we correctly handle ambiguous problems like in Figure 1. We see this as a primary strength of the sampling approach to Bayesian program learning: when learning from one or a few examples, a point estimate of the posterior can often miss the mark. 
Figure 5: Generalization when learning text edit operations by example. Results averaged across 19 problems. Solid: 100 samples from P ROGRAM S AMPLE . Dashed: enumerating 100 programs. Dotted: MDL learner. Test cases past 1 (respectively 2,3) examples are held out when trained on 1 (respectively 2,3) examples. Figure 6: Comparing the MDL learner (dashed black line) to program sampling when doing one-shot learning. We count a problem as ?solved? if the correct joint prediction to the test cases is in the top C most frequent samples. 4 Approximate model counting of E was also intractable in this regime, so we used the lower bound |E| ? 2d?|x? | + |X| ? 1 6 Table 1: Average solver time to generate a sample measured in seconds. See Figure 9 and 5 for training set sizes. n ? 180, 65 for text edit, list manipulation domains, respectively. w/o tilt correction, sampling text edit & count takes > 1 hour. text edit sort reverse count 3.2 Large set 49?3 1549?155 326?42 ?1 Medium set 21 ?1 905 ?58 141 ?18 ?1 Small set 84 ?3 463 ?65 39 ?3 ?1 Figure 7: Sampling frequency vs. ground truth probability on a counting task with ? = 3 and ? = 4. Learning list manipulation algorithms One goal of program synthesis is computer-aided programming [6], which is the automatic generation of executable code from either declarative specifications or examples of desired behavior. Systems with this goal have been successfully applied to, for example, synthesizing intricate bitvector routines from specifications [18]. However, when learning from examples, there is often uncertainty over the correct program. While past approaches have handled this uncertainty within an optimization framework (see [20, 21, 16]), we show that P ROGRAM S AMPLE can sample algorithms. We take as our goal to learn recursive routines for sorting, reversing, and counting list elements from input/output examples, particularly in the ambiguous, unconstrained regime of few examples. We used a sketch with a set of basis primitives capable of representing a range of list manipulation routines equivalent to Figure 8. Program ::= (if Bool List (append RecursiveList RecursiveList RecursiveList)) Bool ::= (<= Int) | (>= Int) Int ::= 0 | (1+ Int) | (1- Int) | (length List) | (head List) List ::= nil | (filter Bool List) | X | (tail List) | (list Int) RecursiveList ::= List | (recurse List) A description-length prior that penalizes longer programs allowed learning of recursive list manipulation routines (from production Program) and a non-recursive count routine (from production Int); see Figure 9, which shows average accuracy on held out test data when trained on Figure 8: The sketch (program space) for learning variable numbers of short randomly generated list manipulation routines; X is program input lists. With the large training set (5?11 examples) P ROGRAM S AMPLE recovers a correct implementation, and with less data it recovers a distribution over programs that functions as a probabilistic algorithm despite being composed of only deterministic programs. For some of these tasks the number of consistent programs is small enough that we can enumerate all of them, allowing us to compare our sampler with ground-truth probabilities. Figure 7 shows this comparison for a counting problem with 80 consistent programs, showing empirically that the tilt correction and random constraints do not significantly perturb the distribution. Table 1 shows the average solver time per sample. 
Generating recursive routines like sorting and reversing is much more costly than generating the nonrecursive counting routine. The constraint-based approach propositionalizes higher-order constructs like recursion, and Figure 9: Learning to manipulate lists. Trained on so reasoning about them is much more costly. lists of length ? 3; tested on lists of length ? 14. Yet counting problems are highly tilted due to count?s short implementation, which makes them intractable without our tilt correction. 7 4 4.1 Discussion Related work There is a vast literature on program learning in the AI and machine learning communities. Many employ a (possibly stochastic) heuristic search over structures using genetic programming [22] or MCMC [23]. These approaches often find good programs and can discover more high-level structure than our approach. However, they are prone to getting trapped in local minima and, when used as a sampler, lack theoretical guarantees. Other work has addressed learning priors over programs in a multitask setting [4, 5]. We see our work as particularly complementary to these methods: while they focus on learning the structure of the hypothesis space, we focus on efficiently sampling an already given hypothesis space (the sketch). Several recent proposals for recurrent deep networks can learn algorithms [2, 1]. We see our system working in a different regime, where we want to quickly learn an algorithm from a small number of examples or an ambiguous specification. The program synthesis community has several recently proposed learners that work in an optimization framework [20, 21, 16]. By computing a posterior over programs, we can more effectively represent uncertainty, particularly in the small data limit, but at the cost of more computation. P ROGRAM S AMPLE borrows heavily from a line of work started in [9, 13] on sampling of combinatorial spaces using random XOR constraints. An exciting new approach is to use sparse XOR constraints [14, 15] , which might sample more efficiently from our embedding of the program space. 4.2 Limitations of the approach Constraint-based synthesis methods tend to excel in domains where the program structure is restricted by a sketch [6] and where much of the program?s description length can be easily computed from the program text. For example, P ROGRAM S AMPLE can synthesize text editing programs that are almost 60 bits long in a couple seconds, but spends 10 minutes synthesizing a recursive sorting routine that is shorter but where the program structure is less restricted. Constraint-based methods also require the entire problem to be represented symbolically, so they have trouble when the function to be synthesized involves difficult to analyze building blocks such as numerical routines. For such problems, stochastic search methods [23, 22] can be more effective because they only need to run the functions under consideration. Finally, past work shows empirically that these methods scale poorly with data set size, although this can be mitigated by considering data incrementally [21, 20]. The requirement of producing representative samples imposes additional overhead on our approach, so scalability can more limited than for standard symbolic techniques on some problems. For example, our method requires 1 MAP inference query, and 2 queries to an approximate model counter. These serve to ?calibrate? the sampler, and its cost can be amortized because they only has to be invoked once in order to generate an arbitrary number of iid samples. 
Approximate model counters like MBound [13] have complexity comparable with that of generating a sample, but the complexity can depend on the number of solutions. Thus, for good performance, P ROGRAM S AMPLE requires that there not be too many programs consistent with the data?the largest spaces considered in our experiments had ? 107 programs. This limitation, together with the general performance characteristics of symbolic techniques, means that the approach will work best for ?needle in a haystack? problems, where the space of possible programs is large but restricted in its structure, and where only a small fraction of the programs satisfy the constraints. 4.3 Future work This work could naturally extend to other domains that involve inducing latent symbolic structure from small amounts of data, such as semantic parsing to logical forms [24], synthesizing motor programs [3], or learning relational theories [25]. These applications have some component of transfer learning, and building efficient program learners that can transfer inductive biases across tasks is a prime target for future research. Acknowledgments We are grateful for feedback from Adam Smith, Kuldeep Meel, and our anonymous reviewers. Work supported by NSF-1161775 and AFOSR award FA9550-16-1-0012. 8 References [1] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv:1410.5401, 2014. [2] Scott Reed and Nando de Freitas. Neural programmer-interpreters. CoRR, abs/1511.06279, 2015. [3] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332?1338, 2015. [4] Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical bayesian approach. In Johannes F?rnkranz and Thorsten Joachims, editors, ICML, pages 639?646. Omnipress, 2010. [5] Aditya Menon, Omer Tamuz, Sumit Gulwani, Butler Lampson, and Adam Kalai. A machine learning framework for programming by example. In ICML, pages 187?195, 2013. [6] Armando Solar Lezama. Program Synthesis By Sketching. PhD thesis, EECS Department, University of California, Berkeley, Dec 2008. [7] Stefano Ermon, Carla P Gomes, Ashish Sabharwal, and Bart Selman. Embed and project: Discrete sampling with universal hashing. In Advances in Neural Information Processing Systems, pages 2085?2093, 2013. [8] Susmit Jha, Sumit Gulwani, Sanjit A Seshia, and Ashish Tiwari. Oracle-guided component-based program synthesis. In ICSE, volume 1, pages 215?224. IEEE, 2010. [9] Carla P Gomes, Ashish Sabharwal, and Bart Selman. Near-uniform sampling of combinatorial spaces using xor constraints. In Advances In Neural Information Processing Systems, pages 481?488, 2006. [10] Supratik Chakraborty, Daniel Fremont, Kuldeep Meel, Sanjit Seshia, and Moshe Vardi. Distribution-aware sampling and weighted model counting for sat. In AAAI Conference on Artificial Intelligence, 2014. [11] Supratik Chakraborty, Kuldeep S Meel, and Moshe Y Vardi. A scalable and nearly uniform generator of sat witnesses. In International Conference on Computer Aided Verification, pages 608?623. Springer, 2013. [12] Leslie G Valiant and Vijay V Vazirani. Np is as easy as detecting unique solutions. In Proceedings of the seventeenth annual ACM symposium on Theory of computing, pages 458?463. ACM, 1985. [13] Carla P Gomes, Ashish Sabharwal, and Bart Selman. Model counting: A new strategy for obtaining good bounds. In AAAI Conference on Artificial Intelligence, 2006. 
[14] Stefano Ermon, Carla Gomes, Ashish Sabharwal, and Bart Selman. Low-density parity constraints for hashing-based discrete integration. In ICML, pages 271–279, 2014.
[15] Dimitris Achlioptas and Pei Jiang. Stochastic integration via error-correcting codes. In UAI, 2015.
[16] Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. Automated feedback generation for introductory programming assignments. In ACM SIGPLAN Notices, volume 48, pages 15–26. ACM, 2013.
[17] CryptoMiniSat. http://www.msoos.org/documentation/cryptominisat/.
[18] Sumit Gulwani. Automating string processing in spreadsheets using input-output examples. In ACM SIGPLAN Notices, volume 46, pages 317–330. ACM, 2011.
[19] Dianhuan Lin, Eyal Dechter, Kevin Ellis, Joshua B. Tenenbaum, and Stephen Muggleton. Bias reformulation for one-shot function induction. In ECAI 2014, pages 525–530, 2014.
[20] Veselin Raychev, Pavol Bielik, Martin Vechev, and Andreas Krause. Learning programs from noisy data. In POPL, pages 761–774. ACM, 2016.
[21] Kevin Ellis, Armando Solar-Lezama, and Josh Tenenbaum. Unsupervised learning by program synthesis. In Advances in Neural Information Processing Systems, pages 973–981, 2015.
[22] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. Complex Adaptive Systems. MIT Press, 1993.
[23] Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. In ACM SIGARCH Computer Architecture News, volume 41, pages 305–316. ACM, 2013.
[24] P. Liang, M. I. Jordan, and D. Klein. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599, 2011.
[25] Yarden Katz, Noah D. Goodman, Kristian Kersting, Charles Kemp, and Joshua B. Tenenbaum. Modeling semantic cognition as logical dimensionality reduction. In CogSci, pages 71–76, 2008.
Poisson–Gamma Dynamical Systems

Aaron Schein
College of Information and Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
aschein@cs.umass.edu

Mingyuan Zhou
McCombs School of Business
The University of Texas at Austin
Austin, TX 78712
mingyuan.zhou@mccombs.utexas.edu

Hanna Wallach
Microsoft Research New York
641 Avenue of the Americas
New York, NY 10011
hanna@dirichlet.net

Abstract

We introduce a new dynamical system for sequentially observed multivariate count data. This model is based on the gamma–Poisson construction, a natural choice for count data, and relies on a novel Bayesian nonparametric prior that ties and shrinks the model parameters, thus avoiding overfitting. We present an efficient MCMC inference algorithm that advances recent work on augmentation schemes for inference in negative binomial models. Finally, we demonstrate the model's inductive bias using a variety of real-world data sets, showing that it exhibits superior predictive performance over other models and infers highly interpretable latent structure.

1 Introduction

Sequentially observed count vectors y^{(1)}, …, y^{(T)} are the main object of study in many real-world applications, including text analysis, social network analysis, and recommender systems. Count data pose unique statistical and computational challenges when they are high-dimensional, sparse, and overdispersed, as is often the case in real-world applications. For example, when tracking counts of user interactions in a social network, only a tiny fraction of possible edges are ever active, exhibiting bursty periods of activity when they are. Models of such data should exploit this sparsity in order to scale to high dimensions and be robust to overdispersed temporal patterns.

In addition to these characteristics, sequentially observed multivariate count data often exhibit complex dependencies within and across time steps. For example, scientific papers about one topic may encourage researchers to write papers about another related topic in the following year. Models of such data should therefore capture the topic structure of individual documents as well as the excitatory relationships between topics.

The linear dynamical system (LDS) is a widely used model for sequentially observed data, with many well-developed inference techniques based on the Kalman filter [1, 2]. The LDS assumes that each sequentially observed V-dimensional vector r^{(t)} is real valued and Gaussian distributed: r^{(t)} ∼ N(Φ θ^{(t)}, Σ), where θ^{(t)} ∈ R^K is a latent state, with K components, that is linked to the observed space via Φ ∈ R^{V×K}. The LDS derives its expressive power from the way it assumes that the latent states evolve: θ^{(t)} ∼ N(Π θ^{(t−1)}, Δ), where Π ∈ R^{K×K} is a transition matrix that captures between-component dependencies across time steps. Although the LDS can be linked to non-real observations via the extended Kalman filter [3], it cannot efficiently model real-world count data because inference is O((K + V)^3) and thus scales poorly with the dimensionality of the data [2].

Many previous approaches to modeling sequentially observed count data rely on the generalized linear modeling framework [4] to link the observations to a latent Gaussian space, e.g., via the Poisson–lognormal link [5].
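To make the LDS generative assumptions above concrete, here is a minimal forward-simulation sketch in Python/NumPy. The function name and the standard-normal initial state are our own illustrative assumptions; the matrices Φ, Π, Σ, Δ follow the notation in the text.

```python
import numpy as np

def simulate_lds(Phi, Pi, Sigma, Delta, T, rng=None):
    """Forward-simulate the LDS described above:
    r^(t) ~ N(Phi theta^(t), Sigma),  theta^(t) ~ N(Pi theta^(t-1), Delta).
    The initial state is standard normal (an assumption; the text does
    not specify a prior for theta^(1))."""
    rng = np.random.default_rng() if rng is None else rng
    V, K = Phi.shape
    theta = rng.standard_normal(K)
    states, obs = [], []
    for _ in range(T):
        theta = rng.multivariate_normal(Pi @ theta, Delta)  # latent transition
        states.append(theta)
        obs.append(rng.multivariate_normal(Phi @ theta, Sigma))  # emission
    return np.array(states), np.array(obs)
```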
Researchers have used this construction to factorize sequentially observed count matrices under a Poisson likelihood, while modeling the temporal structure using well-studied Gaussian techniques [6, 7]. Most of these previous approaches assume a simple Gaussian state-space model, i.e., θ^{(t)} ∼ N(θ^{(t−1)}, Δ), that lacks the expressive transition structure of the LDS; one notable exception is the Poisson linear dynamical system [8]. In practice, these approaches exhibit prohibitive computational complexity in high dimensions, and the Gaussian assumption may fail to accommodate the burstiness often inherent to real-world count data [9].

We present the Poisson–gamma dynamical system (PGDS), a new dynamical system, based on the gamma–Poisson construction, that supports the expressive transition structure of the LDS. This model naturally handles overdispersed data. We introduce a new Bayesian nonparametric prior to automatically infer the model's rank. We develop an elegant and efficient algorithm for inferring the parameters of the transition structure that advances recent work on augmentation schemes for inference in negative binomial models [10] and scales with the number of non-zero counts, thus exploiting the sparsity inherent to real-world count data. We examine the way in which the dynamical gamma–Poisson construction propagates information and derive the model's steady state, which involves the Lambert W function [11]. Finally, we use the PGDS to analyze a diverse range of real-world data sets, showing that it exhibits excellent predictive performance on smoothing and forecasting tasks and infers interpretable latent structure, an example of which is depicted in figure 1.

[Figure 1: The time-step factors for three components inferred by the PGDS from a corpus of NIPS papers. Each component is associated with a feature factor for each word type in the corpus; we list the words with the largest factors. The inferred structure tells a familiar story about the rise and fall of certain subfields of machine learning.]

2 Poisson–Gamma Dynamical Systems

We can represent a data set of V-dimensional sequentially observed count vectors y^{(1)}, …, y^{(T)} as a V × T count matrix Y. The PGDS models a single count y_v^{(t)} ∈ {0, 1, …} in this matrix as follows:

y_v^{(t)} ∼ Pois(δ^{(t)} Σ_{k=1}^K φ_{vk} θ_k^{(t)}) and θ_k^{(t)} ∼ Gam(τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(t−1)}, τ_0),   (1)

where the latent factors φ_{vk} and θ_k^{(t)} are both positive, and represent the strength of feature v in component k and the strength of component k at time step t, respectively. The scaling factor δ^{(t)} captures the scale of the counts at time step t, and therefore obviates the need to rescale the data as a preprocessing step. We refer to the PGDS as stationary if δ^{(t)} = δ for t = 1, …, T. We can view the feature factors as a V × K matrix Φ and the time-step factors as a T × K matrix Θ. Because we can also collectively view the scaling factors and time-step factors as a T × K matrix Ψ, where element ψ_{tk} = δ^{(t)} θ_k^{(t)}, the PGDS is a form of Poisson matrix factorization: Y ∼ Pois(Φ Ψ^T) [12, 13, 14, 15].

The PGDS is characterized by its expressive transition structure, which assumes that each time-step factor θ_k^{(t)} is drawn from a gamma distribution, whose shape parameter is a linear combination of the K factors at the previous time step.
The latent transition weights π_{11}, …, π_{k1 k2}, …, π_{KK}, which we can view as a K × K transition matrix Π, capture the excitatory relationships between components. The vector θ^{(t)} = (θ_1^{(t)}, …, θ_K^{(t)}) has an expected value of E[θ^{(t)} | θ^{(t−1)}, Π] = Π θ^{(t−1)} and is therefore analogous to a latent state in the LDS. The concentration parameter τ_0 determines the variance of θ^{(t)}, specifically Var(θ^{(t)} | θ^{(t−1)}, Π) = (Π θ^{(t−1)}) τ_0^{−1}, without affecting its expected value.

To model the strength of each component, we introduce K component weights ν = (ν_1, …, ν_K) and place a shrinkage prior over them. We assume that the time-step factors and transition weights for component k are tied to its component weight ν_k. Specifically, we define the following structure:

θ_k^{(1)} ∼ Gam(τ_0 ν_k, τ_0) and π_k ∼ Dir(ν_1 ν_k, …, ξ ν_k, …, ν_K ν_k) and ν_k ∼ Gam(γ_0 / K, β),   (2)

where π_k = (π_{1k}, …, π_{Kk}) is the kth column of Π. Because Σ_{k1=1}^K π_{k1 k} = 1, we can interpret π_{k1 k} as the probability of transitioning from component k to component k1. (We note that interpreting Π as a stochastic transition matrix relates the PGDS to the discrete hidden Markov model.) For a fixed value of γ_0, increasing K will encourage many of the component weights to be small. A small value of ν_k will shrink θ_k^{(1)}, as well as the transition weights in the kth row of Π. Small values of the transition weights in the kth row of Π therefore prevent component k from being excited by the other components and by itself. Specifically, because the shape parameter for the gamma prior over θ_k^{(t)} involves a linear combination of θ^{(t−1)} and the transition weights in the kth row of Π, small transition weights will result in a small shape parameter, shrinking θ_k^{(t)}. Thus, the component weights play a critical role in the PGDS by enabling it to automatically turn off any unneeded capacity and avoid overfitting. Finally, we place Dirichlet priors over the feature factors and draw the other parameters from a noninformative gamma prior: φ_k = (φ_{1k}, …, φ_{Vk}) ∼ Dir(η_0, …, η_0) and δ^{(t)}, β, ξ ∼ Gam(ε_0, ε_0). The PGDS therefore has four positive hyperparameters to be set by the user: τ_0, γ_0, η_0, and ε_0.

Bayesian nonparametric interpretation: As K → ∞, the component weights and their corresponding feature factor vectors constitute a draw G = Σ_{k=1}^∞ ν_k 1_{φ_k} from a gamma process GamP(G_0, β), where β is a scale parameter and G_0 is a finite and continuous base measure over a complete separable metric space Ω [16]. Models based on the gamma process have an inherent shrinkage mechanism because the number of atoms with weights greater than λ > 0 follows a Poisson distribution with a finite mean, specifically Pois(γ_0 ∫_λ^∞ ν^{−1} exp(−ν/β) dν), where γ_0 = G_0(Ω) is the total mass under the base measure. This interpretation enables us to view the priors over Θ and Π as novel stochastic processes, which we call the column-normalized relational gamma process and the recurrent gamma process, respectively. We provide the definitions of these processes in the supplementary material.

Non-count observations: The PGDS can also model non-count data by linking the observed vectors to latent counts. A binary observation b_v^{(t)} can be linked to a latent Poisson count y_v^{(t)} via the Bernoulli–Poisson distribution: b_v^{(t)} = 1(y_v^{(t)} ≥ 1) and y_v^{(t)} ∼ Pois(δ^{(t)} Σ_{k=1}^K φ_{vk} θ_k^{(t)}) [17]. Similarly, a real-valued observation r_v^{(t)} can be linked to a latent Poisson count y_v^{(t)} via the Poisson randomized gamma distribution [18].
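The generative process in equations 1 and 2 is short enough to simulate directly. The following sketch is illustrative only: the function name is ours, the stationary variant (δ^{(t)} = δ) is assumed, and the hyperparameter defaults mirror the experimental settings reported in Section 4.

```python
import numpy as np

def sample_pgds(V, K, T, tau0=1.0, gamma0=50.0, eta0=0.1, eps0=0.1, rng=None):
    """A minimal sketch of the PGDS generative process (equations 1-2),
    stationary variant. Gam(a, b) in the text has rate b, so NumPy's
    scale parameter is 1/b."""
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.gamma(eps0, 1.0 / eps0)
    xi = rng.gamma(eps0, 1.0 / eps0)
    nu = rng.gamma(gamma0 / K, 1.0 / beta, size=K)       # component weights
    Phi = rng.dirichlet(np.full(V, eta0), size=K).T      # V x K feature factors
    Pi = np.empty((K, K))
    for k in range(K):                                   # k-th column of Pi
        alpha = nu * nu[k]
        alpha[k] = xi * nu[k]                            # xi replaces nu_k in slot k
        Pi[:, k] = rng.dirichlet(alpha)
    delta = rng.gamma(eps0, 1.0 / eps0)                  # stationary scale
    theta = rng.gamma(tau0 * nu, 1.0 / tau0)             # theta^(1)
    Y = np.empty((V, T), dtype=int)
    for t in range(T):
        if t > 0:
            theta = rng.gamma(tau0 * (Pi @ theta), 1.0 / tau0)
        Y[:, t] = rng.poisson(delta * (Phi @ theta))
    return Y, Phi, Pi, nu
```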
Finally, Basbug and Engelhardt [19] recently showed that many types of non-count matrices can be linked to a latent count matrix via the compound Poisson distribution [20].

3 MCMC Inference

MCMC inference for the PGDS consists of drawing samples of the model parameters from their joint posterior distribution given an observed count matrix Y and the model hyperparameters τ_0, γ_0, η_0, ε_0. In this section, we present a Gibbs sampling algorithm for drawing these samples. At a high level, our approach is similar to that used to develop Gibbs sampling algorithms for several other related models [10, 21, 22, 17]; however, we extend this approach to handle the unique properties of the PGDS. The main technical challenge is sampling Θ from its conditional posterior, which does not have a closed form. We address this challenge by introducing a set of auxiliary variables. Under this augmented version of the model, marginalizing over Θ becomes tractable and its conditional posterior has a closed form. Moreover, by introducing these auxiliary variables and marginalizing over Θ, we obtain an alternative model specification that we can subsequently exploit to obtain closed-form conditional posteriors for Π, ν, and ξ.

We marginalize over Θ by performing a "backward filtering" pass, starting with θ^{(T)}. We repeatedly exploit the following three definitions in order to do this.

Definition 1: If y_· = Σ_{n=1}^N y_n, where y_n ∼ Pois(λ_n) are independent Poisson-distributed random variables, then (y_1, …, y_N) ∼ Mult(y_·, (λ_1 / Σ_n λ_n, …, λ_N / Σ_n λ_n)) and y_· ∼ Pois(Σ_{n=1}^N λ_n) [23, 24].

Definition 2: If y ∼ Pois(c λ), where c is a constant, and λ ∼ Gam(a, b), then y ∼ NB(a, c / (b + c)) is a negative binomial-distributed random variable. We can equivalently parameterize it as y ∼ NB(a, g(ζ)), where g(z) = 1 − exp(−z) is the Bernoulli–Poisson link [17] and ζ = ln(1 + c/b).

Definition 3: If y ∼ NB(a, g(ζ)) and l ∼ CRT(y, a) is a Chinese restaurant table-distributed random variable, then y and l are equivalently jointly distributed as y ∼ SumLog(l, g(ζ)) and l ∼ Pois(a ζ) [21]. The sum logarithmic distribution is further defined as the sum of l independent and identically logarithmic-distributed random variables, i.e., y = Σ_{i=1}^l x_i and x_i ∼ Log(g(ζ)).
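Definition 3 is easy to exercise in code. A standard recipe for drawing from the Chinese restaurant table distribution CRT(y, a) represents l as a sum of y independent Bernoulli variables with success probabilities a / (a + i − 1) for i = 1, …, y; the helper below (name ours) uses this recipe and is reused in later sketches.

```python
import numpy as np

def sample_crt(y, a, rng=None):
    """Draw l ~ CRT(y, a), the Chinese restaurant table distribution of
    definition 3: a sum of y independent Bernoulli draws with success
    probabilities a / (a + i - 1). Returns 0 when y = 0."""
    rng = np.random.default_rng() if rng is None else rng
    i = np.arange(1, y + 1)
    return int((rng.random(y) < a / (a + i - 1.0)).sum())

# e.g. sample_crt(10, 2.5) -> an integer between 1 and 10
```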
Marginalizing over Θ: We first note that we can re-express the Poisson likelihood in equation 1 in terms of latent subcounts [13]: y_v^{(t)} = y_{v·}^{(t)} = Σ_{k=1}^K y_{vk}^{(t)} and y_{vk}^{(t)} ∼ Pois(δ^{(t)} φ_{vk} θ_k^{(t)}). We then define y_{·k}^{(t)} = Σ_{v=1}^V y_{vk}^{(t)}. Via definition 1, we obtain y_{·k}^{(t)} ∼ Pois(δ^{(t)} θ_k^{(t)}) because Σ_{v=1}^V φ_{vk} = 1.

We start with θ_k^{(T)} because none of the other time-step factors depend on it in their priors. Via definition 2, we can immediately marginalize over θ_k^{(T)} to obtain the following equation:

y_{·k}^{(T)} ∼ NB(τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(T−1)}, g(ζ^{(T)})), where ζ^{(T)} = ln(1 + δ^{(T)} / τ_0).   (3)

Next, we marginalize over θ_k^{(T−1)}. To do this, we introduce an auxiliary variable: l_k^{(T)} ∼ CRT(y_{·k}^{(T)}, τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(T−1)}). We can then re-express the joint distribution over y_{·k}^{(T)} and l_k^{(T)} as

y_{·k}^{(T)} ∼ SumLog(l_k^{(T)}, g(ζ^{(T)})) and l_k^{(T)} ∼ Pois(ζ^{(T)} τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(T−1)}).   (4)

We are still unable to marginalize over θ_k^{(T−1)} because it appears in a sum in the parameter of the Poisson distribution over l_k^{(T)}; however, via definition 1, we can re-express this distribution as

l_k^{(T)} = l_{k·}^{(T)} = Σ_{k2=1}^K l_{k k2}^{(T)} and l_{k k2}^{(T)} ∼ Pois(ζ^{(T)} τ_0 π_{k k2} θ_{k2}^{(T−1)}).   (5)

We then define l_{·k}^{(T)} = Σ_{k1=1}^K l_{k1 k}^{(T)}. Again via definition 1, we can express the distribution over l_{·k}^{(T)} as l_{·k}^{(T)} ∼ Pois(ζ^{(T)} τ_0 θ_k^{(T−1)}). We note that this expression does not depend on the transition weights because Σ_{k1=1}^K π_{k1 k} = 1. We also note that definition 1 implies that (l_{1k}^{(T)}, …, l_{Kk}^{(T)}) ∼ Mult(l_{·k}^{(T)}, (π_{1k}, …, π_{Kk})). Next, we introduce m_k^{(T−1)} = y_{·k}^{(T−1)} + l_{·k}^{(T)}, which summarizes all of the information about the data at time steps T − 1 and T via y_{·k}^{(T−1)} and l_{·k}^{(T)}, respectively. Because y_{·k}^{(T−1)} and l_{·k}^{(T)} are both Poisson distributed, we can use definition 1 to obtain

m_k^{(T−1)} ∼ Pois(θ_k^{(T−1)} (δ^{(T−1)} + ζ^{(T)} τ_0)).   (6)

Combining this likelihood with the gamma prior in equation 1, we can marginalize over θ_k^{(T−1)}:

m_k^{(T−1)} ∼ NB(τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(T−2)}, g(ζ^{(T−1)})), where ζ^{(T−1)} = ln(1 + δ^{(T−1)} / τ_0 + ζ^{(T)}).   (7)

We then introduce l_k^{(T−1)} ∼ CRT(m_k^{(T−1)}, τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(T−2)}) and re-express the joint distribution over m_k^{(T−1)} and l_k^{(T−1)} as the product of a Poisson and a sum logarithmic distribution, similar to equation 4. This then allows us to marginalize over θ_k^{(T−2)} to obtain a negative binomial distribution. We can repeat the same process all the way back to t = 1, where marginalizing over θ_k^{(1)} yields m_k^{(1)} ∼ NB(τ_0 ν_k, g(ζ^{(1)})). We note that just as m_k^{(t)} summarizes all of the information about the data at time steps t, …, T, ζ^{(t)} = ln(1 + δ^{(t)} / τ_0 + ζ^{(t+1)}) summarizes all of the information about δ^{(t)}, …, δ^{(T)}.

As we mentioned previously, introducing these auxiliary variables and marginalizing over Θ also enables us to define an alternative model specification that we can exploit to obtain closed-form conditional posteriors for Π, ν, and ξ. We provide part of its generative process in figure 2. We define m_k^{(T)} = y_{·k}^{(T)} + l_{·k}^{(T+1)}, where l_{·k}^{(T+1)} = 0 and ζ^{(T+1)} = 0, so that we can present the alternative model specification concisely.

[Figure 2: Part of the alternative model specification:
l_{k·}^{(1)} ∼ Pois(ζ^{(1)} τ_0 ν_k)
(l_{1k}^{(t)}, …, l_{Kk}^{(t)}) ∼ Mult(l_{·k}^{(t)}, (π_{1k}, …, π_{Kk})) for t > 1
l_{k·}^{(t)} = Σ_{k2=1}^K l_{k k2}^{(t)} for t > 1
m_k^{(t)} ∼ SumLog(l_{k·}^{(t)}, g(ζ^{(t)}))
(y_{·k}^{(t)}, l_{·k}^{(t+1)}) ∼ Bin(m_k^{(t)}, (δ^{(t)} / (δ^{(t)} + ζ^{(t+1)} τ_0), ζ^{(t+1)} τ_0 / (δ^{(t)} + ζ^{(t+1)} τ_0)))
(y_{1k}^{(t)}, …, y_{Vk}^{(t)}) ∼ Mult(y_{·k}^{(t)}, (φ_{1k}, …, φ_{Vk}))]

Steady state: We draw particular attention to the backward pass ζ^{(t)} = ln(1 + δ^{(t)} / τ_0 + ζ^{(t+1)}) that propagates information about δ^{(t)}, …, δ^{(T)} as we marginalize over Θ. In the case of the stationary PGDS, i.e., δ^{(t)} = δ, the backward pass has a fixed point that we define in the following proposition.

Proposition 1: The backward pass has a fixed point of ζ* = −W_{−1}(−exp(−1 − δ/τ_0)) − 1 − δ/τ_0. The function W_{−1}(·) is the lower real branch of the Lambert W function [11]. We prove this proposition in the supplementary material. During inference, we perform the O(T) backward pass repeatedly. The existence of a fixed point means that we can assume the stationary PGDS is in its steady state and replace the backward pass with an O(1) computation of the fixed point ζ* (several software packages contain fast implementations of the Lambert W function). To make this assumption, we must also assume that l_{·k}^{(T+1)} ∼ Pois(ζ* τ_0 θ_k^{(T)}) instead of l_{·k}^{(T+1)} = 0. We note that an analogous steady-state approximation exists for the LDS and is routinely exploited to reduce computation [25].
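Proposition 1 can be evaluated directly with SciPy, whose lambertw exposes the lower real branch via k=-1. A minimal sketch, with a numerical sanity check that the returned value is indeed a fixed point of the backward pass:

```python
import numpy as np
from scipy.special import lambertw

def zeta_fixed_point(delta, tau0):
    """Steady-state value of the backward pass for the stationary PGDS:
    the fixed point of zeta = ln(1 + delta/tau0 + zeta), computed via the
    lower real branch W_{-1} of the Lambert W function (proposition 1)."""
    a = 1.0 + delta / tau0
    zeta = float(-lambertw(-np.exp(-a), k=-1).real - a)
    assert np.isclose(zeta, np.log(1.0 + delta / tau0 + zeta))  # sanity check
    return zeta

# e.g. zeta_fixed_point(1.0, 1.0) -> approximately 1.146
```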
Gibbs sampling algorithm: Given Y and the hyperparameters, Gibbs sampling involves resampling each auxiliary variable or model parameter from its conditional posterior. Our algorithm involves a "backward filtering" pass and a "forward sampling" pass, which together form a "backward filtering, forward sampling" algorithm. We use − \ Θ^{(≥t)} to denote everything excluding θ^{(t)}, …, θ^{(T)}.

Sampling the auxiliary variables: This step is the "backward filtering" pass. For the stationary PGDS in its steady state, we first compute ζ* and draw (l_{·k}^{(T+1)} | −) ∼ Pois(ζ* τ_0 θ_k^{(T)}). For the other variants of the model, we set l_{·k}^{(T+1)} = ζ^{(T+1)} = 0. Then, working backward from t = T, …, 2, we draw

(l_{k·}^{(t)} | − \ Θ^{(≥t)}) ∼ CRT(y_{·k}^{(t)} + l_{·k}^{(t+1)}, τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(t−1)}) and   (8)

(l_{k1}^{(t)}, …, l_{kK}^{(t)} | − \ Θ^{(≥t)}) ∼ Mult(l_{k·}^{(t)}, (π_{k1} θ_1^{(t−1)} / Σ_{k2=1}^K π_{k k2} θ_{k2}^{(t−1)}, …, π_{kK} θ_K^{(t−1)} / Σ_{k2=1}^K π_{k k2} θ_{k2}^{(t−1)})).   (9)

After using equations 8 and 9 for all k = 1, …, K, we then set l_{·k}^{(t)} = Σ_{k1=1}^K l_{k1 k}^{(t)}. For the non-steady-state variants, we also set ζ^{(t)} = ln(1 + δ^{(t)} / τ_0 + ζ^{(t+1)}); for the steady-state variant, we set ζ^{(t)} = ζ*.

Sampling Θ: We sample Θ from its conditional posterior by performing a "forward sampling" pass, starting with θ^{(1)}. Conditioned on the values of l_{·k}^{(2)}, …, l_{·k}^{(T+1)} and ζ^{(2)}, …, ζ^{(T+1)} obtained via the "backward filtering" pass, we sample forward from t = 1, …, T, using the following equations:

(θ_k^{(1)} | − \ Θ) ∼ Gam(y_{·k}^{(1)} + l_{·k}^{(2)} + τ_0 ν_k, τ_0 + δ^{(1)} + ζ^{(2)} τ_0) and   (10)

(θ_k^{(t)} | − \ Θ^{(≥t)}) ∼ Gam(y_{·k}^{(t)} + l_{·k}^{(t+1)} + τ_0 Σ_{k2=1}^K π_{k k2} θ_{k2}^{(t−1)}, τ_0 + δ^{(t)} + ζ^{(t+1)} τ_0).   (11)

Sampling Π: The alternative model specification, with Θ marginalized out, assumes that (l_{1k}^{(t)}, …, l_{Kk}^{(t)}) ∼ Mult(l_{·k}^{(t)}, (π_{1k}, …, π_{Kk})). Therefore, via Dirichlet–multinomial conjugacy,

(π_k | − \ Θ) ∼ Dir(ν_1 ν_k + Σ_{t=1}^T l_{1k}^{(t)}, …, ξ ν_k + Σ_{t=1}^T l_{kk}^{(t)}, …, ν_K ν_k + Σ_{t=1}^T l_{Kk}^{(t)}).   (12)

Sampling ν and ξ: We use the alternative model specification to obtain closed-form conditional posteriors for ν_k and ξ. First, we marginalize over π_k to obtain a Dirichlet–multinomial distribution. When augmented with a beta-distributed auxiliary variable, the Dirichlet–multinomial distribution is proportional to the negative binomial distribution [26]. We draw such an auxiliary variable, which we use, along with negative binomial augmentation schemes, to derive closed-form conditional posteriors for ν_k and ξ. We provide these posteriors, along with their derivations, in the supplementary material. We also provide the conditional posteriors for the remaining model parameters, Φ, δ^{(1)}, …, δ^{(T)}, and β, which we obtain via Dirichlet–multinomial, gamma–Poisson, and gamma–gamma conjugacy.
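Putting equations 8 and 9 together, one sweep of the "backward filtering" pass can be sketched as follows. This is illustrative only: it reuses the sample_crt helper from the earlier sketch, stores y_{·k}^{(t)} as a K × T array, and assumes the steady-state stationary variant whenever zeta_star is supplied (the non-steady variants would additionally update ζ^{(t)} at each step).

```python
import numpy as np

def backward_filter(Y_dot, Theta, Pi, tau0, zeta_star=None, rng=None):
    """One 'backward filtering' sweep (equations 8-9). Y_dot[k, t-1] holds
    y_{.k}^{(t)}; Theta[k, t-1] holds theta_k^{(t)}. Returns an array whose
    column t-1 holds l_{.k}^{(t)}, as needed by equations 10-11."""
    rng = np.random.default_rng() if rng is None else rng
    K, T = Y_dot.shape
    L_dot = np.zeros((K, T + 1), dtype=int)
    if zeta_star is not None:                       # steady-state variant
        L_dot[:, T] = rng.poisson(zeta_star * tau0 * Theta[:, T - 1])
    for t in range(T, 1, -1):                       # t = T, ..., 2
        rates = Pi @ Theta[:, t - 2]                # sum_k2 pi_{k k2} theta_{k2}^{(t-1)}
        for k in range(K):
            l_k = sample_crt(Y_dot[k, t - 1] + L_dot[k, t], tau0 * rates[k])
            split = rng.multinomial(l_k, Pi[k] * Theta[:, t - 2] / rates[k])
            L_dot[:, t - 1] += split                # accumulates l_{.k}^{(t)} over k
    return L_dot
```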
4 Experiments

In this section, we compare the predictive performance of the PGDS to that of the LDS and that of gamma process dynamic Poisson factor analysis (GP-DPFA) [22]. GP-DPFA models a single count in Y as y_v^{(t)} ∼ Pois(Σ_{k=1}^K λ_k φ_{vk} θ_k^{(t)}), where each component's time-step factors evolve as a simple gamma Markov chain, independently of those belonging to the other components: θ_k^{(t)} ∼ Gam(θ_k^{(t−1)}, c). We consider the stationary variants of all three models; we used the pykalman Python library for the LDS and implemented GP-DPFA ourselves. We used five data sets, and tested each model on two time-series prediction tasks: smoothing, i.e., predicting y_v^{(t)} given y_v^{(1)}, …, y_v^{(t−1)}, y_v^{(t+1)}, …, y_v^{(T)}, and forecasting, i.e., predicting y_v^{(T+s)} given y_v^{(1)}, …, y_v^{(T)} for some s ∈ {1, 2, …} [27]. We provide brief descriptions of the data sets below before reporting results.

Global Database of Events, Language, and Tone (GDELT): GDELT is an international relations data set consisting of country-to-country interaction events of the form "country i took action a toward country j at time t," extracted from news corpora. We created five count matrices, one for each year from 2001 through 2005. We treated directed pairs of countries i→j as features and counted the number of events for each pair during each day. We discarded all pairs with fewer than twenty-five total events, leaving T = 365, around V ≈ 9,000, and three to six million events for each matrix.

Integrated Crisis Early Warning System (ICEWS): ICEWS is another international relations event data set extracted from news corpora. It is more highly curated than GDELT and contains fewer events. We therefore treated undirected pairs of countries i↔j as features. We created three count matrices, one for 2001–2003, one for 2004–2006, and one for 2007–2009. We counted the number of events for each pair during each three-day time step, and again discarded all pairs with fewer than twenty-five total events, leaving T = 365, around V ≈ 3,000, and 1.3 to 1.5 million events for each matrix.

State-of-the-Union transcripts (SOTU): The SOTU corpus contains the text of the annual SOTU speech transcripts from 1790 through 2014. We created a single count matrix with one column per year. After discarding stopwords, we were left with T = 225, V = 7,518, and 656,949 tokens.

DBLP conference abstracts (DBLP): DBLP is a database of computer science research papers. We used the subset of this corpus that Acharya et al. used to evaluate GP-DPFA [22]. This subset corresponds to a count matrix with T = 14 columns, V = 1,771 unique word types, and 13,431 tokens.

NIPS corpus (NIPS): The NIPS corpus contains the text of every NIPS conference paper from 1987 to 2003. We created a single count matrix with one column per year. We treated unique word types as features and discarded all stopwords, leaving T = 17, V = 9,836, and 3.1 million tokens.

[Figure 3: y_v^{(t)} over time for the top four features in the NIPS (left) and ICEWS (right) data sets.]

Experimental design: For each matrix, we created four masks indicating some randomly selected subset of columns to treat as held-out data. For the event count matrices, we held out six (non-contiguous) time steps between t = 2 and t = T − 3 to test the models' smoothing performance, as well as the last two time steps to test their forecasting performance. The other matrices have fewer time steps. For the SOTU matrix, we therefore held out five time steps between t = 2 and t = T − 2, as well as t = T. For the NIPS and DBLP matrices, which contain substantially fewer time steps than the SOTU matrix, we held out three time steps between t = 2 and t = T − 2, as well as t = T.

For each matrix, mask, and model combination, we ran inference four times. (For the PGDS and GP-DPFA we used K = 100. For the PGDS, we set τ_0 = 1, γ_0 = 50, η_0 = ε_0 = 0.1. We set the hyperparameters of GP-DPFA to the values used by Acharya et al. [22]. For the LDS, we used the default hyperparameters for pykalman, and report results for the best-performing value of K ∈ {5, 10, 25, 50}.) For the PGDS and GP-DPFA, we performed 6,000 Gibbs sampling iterations, imputing the missing counts from the "smoothing" columns at the same time as sampling the model parameters.
We then discarded the first 4,000 samples and retained every hundredth sample thereafter. We used each of these samples to predict the missing counts from the "forecasting" columns. We then averaged the predictions over the samples. For the LDS, we ran EM to learn the model parameters. Then, given these parameter values, we used the Kalman filter and smoother [1] to predict the held-out data. In practice, for all five data sets, V was too large for us to run inference for the LDS, which is O((K + V)^3) [2], using all V features. We therefore report results from two independent sets of experiments: one comparing all three models using only the top V = 1,000 features for each data set, and one comparing the PGDS to just GP-DPFA using all the features. The first set of experiments is generous to the LDS because the Poisson distribution is well approximated by the Gaussian distribution when its mean is large.

Table 1: Results for the smoothing ("S") and forecasting ("F") tasks. For both error measures, lower values are better. We also report the number of time steps T and the average burstiness B̂ of each data set.

| Data set | T | B̂ | Task | PGDS MRE | GP-DPFA MRE | LDS MRE | PGDS MAE | GP-DPFA MAE | LDS MAE |
|---|---|---|---|---|---|---|---|---|---|
| GDELT | 365 | 1.27 | S | 2.335 ±0.19 | 2.951 ±0.32 | 3.493 ±0.53 | 9.366 ±2.19 | 9.278 ±2.01 | 10.098 ±2.39 |
| | | | F | 2.173 ±0.41 | 2.207 ±0.42 | 2.397 ±0.29 | 7.002 ±1.43 | 7.095 ±1.67 | 7.047 ±1.25 |
| ICEWS | 365 | 1.10 | S | 0.808 ±0.11 | 0.877 ±0.12 | 1.023 ±0.15 | 2.867 ±0.56 | 2.872 ±0.56 | 3.104 ±0.60 |
| | | | F | 0.743 ±0.17 | 0.792 ±0.17 | 0.937 ±0.31 | 1.788 ±0.47 | 1.894 ±0.50 | 1.973 ±0.62 |
| SOTU | 225 | 1.45 | S | 0.233 ±0.01 | 0.238 ±0.01 | 0.260 ±0.01 | 0.408 ±0.01 | 0.414 ±0.01 | 0.448 ±0.00 |
| | | | F | 0.171 ±0.00 | 0.173 ±0.00 | 0.225 ±0.01 | 0.323 ±0.00 | 0.314 ±0.00 | 0.370 ±0.00 |
| DBLP | 14 | 1.64 | S | 0.417 ±0.03 | 0.422 ±0.05 | 0.405 ±0.05 | 0.771 ±0.03 | 0.782 ±0.06 | 0.831 ±0.01 |
| | | | F | 0.322 ±0.00 | 0.323 ±0.00 | 0.369 ±0.06 | 0.747 ±0.01 | 0.715 ±0.00 | 0.943 ±0.07 |
| NIPS | 17 | 0.33 | S | 0.415 ±0.07 | 0.392 ±0.07 | 1.609 ±0.43 | 29.940 ±2.95 | 28.138 ±3.08 | 108.378 ±15.44 |
| | | | F | 0.343 ±0.01 | 0.312 ±0.00 | 0.642 ±0.14 | 62.839 ±0.37 | 52.963 ±0.52 | 95.495 ±10.52 |

Results: We used two error measures, mean relative error (MRE) and mean absolute error (MAE), to compute the models' smoothing and forecasting scores for each matrix and mask combination. We then averaged these scores over the masks. For the data sets with multiple matrices, we also averaged the scores over the matrices. The two error measures differ as follows: MRE accommodates the scale of the data, while MAE does not. This is because relative error, which we define as |y_v^{(t)} − ŷ_v^{(t)}| / (1 + y_v^{(t)}), where y_v^{(t)} is the true count and ŷ_v^{(t)} is the prediction, divides the absolute error by the true count and thus penalizes overpredictions more harshly than underpredictions. MRE is therefore an especially natural choice for data sets that are bursty, i.e., data sets that exhibit short periods of activity that far exceed their mean. Models that are robust to these kinds of overdispersed temporal patterns are less likely to make overpredictions following a burst, and are therefore rewarded accordingly by MRE. In table 1, we report the MRE and MAE scores for the experiments using the top V = 1,000 features.
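The two error measures are simple to compute; a minimal sketch of both (function names ours):

```python
import numpy as np

def mre(y_true, y_pred):
    """Mean relative error: mean of |y - yhat| / (1 + y)."""
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred) / (1.0 + y_true)))

def mae(y_true, y_pred):
    """Mean absolute error: mean of |y - yhat|."""
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))
```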
We also report the average burstiness of each data set. We define the burstiness of feature v in matrix Y to be B̂_v = (1/ȳ_v) · (1/(T−1)) Σ_{t=1}^{T−1} |y_v^{(t+1)} − y_v^{(t)}|, where ȳ_v = (1/T) Σ_{t=1}^T y_v^{(t)}. For each data set, we calculated the burstiness of each feature in each matrix, and then averaged these values to obtain an average burstiness score B̂. The PGDS outperformed the LDS and GP-DPFA on seven of the ten prediction tasks when we used MRE to measure the models' performance; when we used MAE, the PGDS outperformed the other models on five of the tasks. In the supplementary material, we also report the results for the experiments comparing the PGDS to GP-DPFA using all the features. The superiority of the PGDS over GP-DPFA is even more pronounced in these results. We hypothesize that the difference between these models is related to the burstiness of the data. For both error measures, the only data set for which GP-DPFA outperformed the PGDS on both tasks was the NIPS data set. This data set has a substantially lower average burstiness score than the other data sets. We provide visual evidence in figure 3, where we display y_v^{(t)} over time for the top four features in the NIPS and ICEWS data sets. For the former, the features evolve smoothly; for the latter, they exhibit bursts of activity.

Exploratory analysis: We also explored the latent structure inferred by the PGDS. Because its parameters are positive, they are easy to interpret. In figure 1, we depict three components inferred from the NIPS data set. By examining the time-step factors and feature factors for these components, we see that they capture the decline of research on neural networks between 1987 and 2003, as well as the rise of Bayesian methods in machine learning. These patterns match our prior knowledge.

In figure 4, we depict the three components with the largest component weights inferred by the PGDS from the 2003 GDELT matrix. The top component is in blue, the second is in green, and the third is in red. For each component, we also list the sixteen features (directed pairs of countries) with the largest feature factors. The top component (blue) is most active in March and April, 2003. Its features involve USA, Iraq (IRQ), Great Britain (GBR), Turkey (TUR), and Iran (IRN), among others. This component corresponds to the 2003 invasion of Iraq. The second component (green) exhibits a noticeable increase in activity immediately after April, 2003. Its top features involve Israel (ISR), Palestine (PSE), USA, and Afghanistan (AFG). The third component (red) exhibits a large burst of activity in August, 2003, but is otherwise inactive. Its top features involve North Korea (PRK), South Korea (KOR), Japan (JPN), China (CHN), Russia (RUS), and USA. This component corresponds to the six-party talks, a series of negotiations between these six countries for the purpose of dismantling North Korea's nuclear program. The first round of talks occurred during August 27–29, 2003.

[Figure 4: The time-step factors for the top three components inferred by the PGDS from the 2003 GDELT matrix. The top component is in blue, the second is in green, and the third is in red. For each component, we also list the features (directed pairs of countries) with the largest feature factors.]

In figure 5, we also show the component weights for the top ten components, along with the corresponding subset of the transition matrix Π.
There are two components with weights greater than one: the components that are depicted in blue and green in figure 4. The transition weights in the corresponding rows of Π are also large, meaning that other components are likely to transition to them. As we mentioned previously, the GDELT data set was extracted from news corpora. Therefore, patterns in the data primarily reflect patterns in media coverage of international affairs. We therefore interpret the latent structure inferred by the PGDS in the following way: in 2003, the media briefly covered various major events, including the six-party talks, before quickly returning to a backdrop of the ongoing Iraq war and Israeli–Palestinian relations. By inferring the kind of transition structure depicted in figure 5, the PGDS is able to model persistent, long-term temporal patterns while accommodating the burstiness often inherent to real-world count data. This ability is what enables the PGDS to achieve superior predictive performance over the LDS and GP-DPFA.

[Figure 5: The latent transition structure inferred by the PGDS from the 2003 GDELT matrix. Top: the component weights for the top ten components, in decreasing order from left to right; two of the weights are greater than one. Bottom: the transition weights in the corresponding subset of the transition matrix. This structure means that all components are likely to transition to the top two components.]

5 Summary

We introduced the Poisson–gamma dynamical system (PGDS), a new Bayesian nonparametric model for sequentially observed multivariate count data. This model supports the expressive transition structure of the linear dynamical system, and naturally handles overdispersed data. We presented a novel MCMC inference algorithm that remains efficient for high-dimensional data sets, advancing recent work on augmentation schemes for inference in negative binomial models. Finally, we used the PGDS to analyze five real-world data sets, demonstrating that it exhibits superior smoothing and forecasting performance over two baseline models and infers highly interpretable latent structure.

Acknowledgments

We thank David Belanger, Roy Adams, Kostis Gourgoulias, Ben Marlin, Dan Sheldon, and Tim Vieira for many helpful conversations. This work was supported in part by the UMass Amherst CIIR and in part by NSF grants SBE-0965436 and IIS-1320219. Any opinions, findings, conclusions, or recommendations are those of the authors and do not necessarily reflect those of the sponsors.

References
[1] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[2] Z. Ghahramani and S. T. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems, pages 431–437, 1998.
[3] S. S. Haykin. Kalman Filtering and Neural Networks. 2001.
[4] P. McCullagh and J. A. Nelder. Generalized Linear Models. 1989.
[5] M. G. Bulmer. On fitting the Poisson lognormal distribution to species-abundance data. Biometrics, pages 101–110, 1974.
[6] D. M. Blei and J. D. Lafferty. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113–120, 2006.
[7] L. Charlin, R. Ranganath, J. McInerney, and D. M. Blei. Dynamic Poisson factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 155–162, 2015.
[8] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Krishna, and M. Sahani.
Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems, pages 1350–1358, 2011.
[9] J. Kleinberg. Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery, 7(4):373–397, 2003.
[10] M. Zhou and L. Carin. Augment-and-conquer negative binomial processes. In Advances in Neural Information Processing Systems, pages 2546–2554, 2012.
[11] R. Corless, G. Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth. On the Lambert W function. Advances in Computational Mathematics, 5(1):329–359, 1996.
[12] J. Canny. GaP: A factor model for discrete data. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 122–129, 2004.
[13] A. T. Cemgil. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and Neuroscience, 2009.
[14] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, 2012.
[15] P. Gopalan, J. Hofman, and D. Blei. Scalable recommendation with Poisson factorization. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence, 2015.
[16] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
[17] M. Zhou. Infinite edge partition models for overlapping community detection and link prediction. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 1135–1143, 2015.
[18] M. Zhou, Y. Cong, and B. Chen. Augmentable gamma belief networks. Journal of Machine Learning Research, 17(163):1–44, 2016.
[19] M. E. Basbug and B. Engelhardt. Hierarchical compound Poisson factorization. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[20] R. M. Adelson. Compound Poisson distributions. OR, 17(1):73–75, 1966.
[21] M. Zhou and L. Carin. Negative binomial process count and mixture modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):307–320, 2015.
[22] A. Acharya, J. Ghosh, and M. Zhou. Nonparametric Bayesian factor analysis for dynamic count matrices. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[23] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1972.
[24] D. B. Dunson and A. H. Herring. Bayesian latent variable models for mixed discrete outcomes. Biostatistics, 6(1):11–25, 2005.
[25] W. J. Rugh. Linear System Theory. Pearson, 1995.
[26] M. Zhou. Nonparametric Bayesian negative binomial factor analysis. arXiv:1604.07464.
[27] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2012.
Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation

George Papamakarios
School of Informatics
University of Edinburgh
g.papamakarios@ed.ac.uk

Iain Murray
School of Informatics
University of Edinburgh
i.murray@ed.ac.uk

Abstract

Many statistical models can be simulated forwards but have intractable likelihoods. Approximate Bayesian Computation (ABC) methods are used to infer properties of these models from data. Traditionally these methods approximate the posterior over parameters by conditioning on data being inside an ε-ball around the observed data, which is only correct in the limit ε → 0. Monte Carlo methods can then draw samples from the approximate posterior to approximate predictions or error bars on parameters. These algorithms critically slow down as ε → 0, and in practice draw samples from a broader distribution than the posterior. We propose a new approach to likelihood-free inference based on Bayesian conditional density estimation. Preliminary inferences based on limited simulation data are used to guide later simulations. In some cases, learning an accurate parametric representation of the entire true posterior distribution requires fewer model simulations than Monte Carlo ABC methods need to produce a single sample from an approximate posterior.

1 Introduction

A simulator-based model is a data-generating process described by a computer program, usually with some free parameters we need to learn from data. Simulator-based modelling lends itself naturally to scientific domains such as evolutionary biology [1], ecology [24], disease epidemics [10], economics [8] and cosmology [23], where observations are best understood as products of underlying physical processes. Inference in these models amounts to discovering plausible parameter settings that could have generated our observed data. The application domains mentioned can require properly calibrated distributions that express uncertainty over plausible parameters, rather than just point estimates, in order to reach scientific conclusions or make decisions.

As an analytical expression for the likelihood of parameters given observations is typically not available for simulator-based models, conventional likelihood-based Bayesian inference is not applicable. An alternative family of algorithms for likelihood-free inference has been developed, referred to as Approximate Bayesian Computation (ABC). These algorithms simulate the model repeatedly and only accept parameter settings which generate synthetic data similar to the observed data, typically gathered in a real-world experiment. Rejection ABC [21], the most basic ABC algorithm, simulates the model for each setting of proposed parameters, and rejects parameters if the generated data is not within a certain distance from the observations. The accepted parameters form a set of independent samples from an approximate posterior. Markov Chain Monte Carlo ABC (MCMC-ABC) [13] is an improvement over rejection ABC which, instead of independently proposing parameters, explores the parameter space by perturbing the most recently accepted parameters. Sequential Monte Carlo ABC (SMC-ABC) [2, 5] uses importance sampling to simulate a sequence of slowly-changing distributions, the last of which is an approximation to the parameter posterior.

Conventional ABC algorithms such as the above suffer from three drawbacks.
First, they only represent the parameter posterior as a set of (possibly weighted or correlated) samples. A sample-based representation easily gives estimates and error bars of individual parameters, and model predictions. However these computations are noisy, and it is not obvious how to perform some other computations using samples, such as combining posteriors from two separate analyses. Second, the parameter samples do not come from the correct Bayesian posterior, but from an approximation based on assuming a pseudo-observation that the data is within an ε-ball centred on the data actually observed. Third, as the ε-tolerance is reduced, it can become impractical to simulate the model enough times to match the observed data even once. When simulations are expensive to perform, good quality inference becomes impractical.

We propose a parametric approach to likelihood-free inference, which unlike conventional ABC does not suffer from the above three issues. Instead of returning samples from an ε-approximation to the posterior, our approach learns a parametric approximation to the exact posterior, which can be made as accurate as required. Preliminary fits to the posterior are used to guide future simulations, which can reduce the number of simulations required to learn an accurate approximation by orders of magnitude. Our approach uses conditional density estimation with Bayesian neural networks, and draws upon advances in parametric density estimation, stochastic variational inference, and recognition networks, as discussed in the related work section.

2 Bayesian conditional density estimation for likelihood-free inference

2.1 Simulator-based models and ABC

Let θ be a vector of parameters controlling a simulator-based model, and let x be a data vector generated by the model. The model may be provided as a probabilistic program that can be easily simulated, and implicitly defines a likelihood p(x | θ), which we assume we cannot evaluate. Let p(θ) encode our prior beliefs about the parameters. Given an observation x_o, we are interested in the parameter posterior p(θ | x = x_o) ∝ p(x = x_o | θ) p(θ).

As the likelihood p(x = x_o | θ) is unavailable, conventional Bayesian inference cannot be carried out. The principle behind ABC is to approximate p(x = x_o | θ) by p(‖x − x_o‖ < ε | θ) for a sufficiently small value of ε, and then estimate the latter (e.g. by Monte Carlo) using simulations from the model. Hence, ABC approximates the posterior by p(θ | ‖x − x_o‖ < ε), which is typically broader and more uncertain. ABC can trade off computation for accuracy by decreasing ε, which improves the approximation to the posterior but requires more simulations from the model. However, the approximation becomes exact only when ε → 0, in which case simulations never match the observations, p(‖x − x_o‖ < ε | θ) → 0, and existing methods break down. In this paper, we refer to p(θ | x = x_o) as the exact posterior, as it corresponds to setting ε = 0 in p(θ | ‖x − x_o‖ < ε).

In most practical applications of ABC, x is taken to be a fixed-length vector of summary statistics that is calculated from data generated by the simulator, rather than the raw data itself. Extracting statistics is often necessary in practice, to reduce the dimensionality of the data and keep the acceptance probability p(‖x − x_o‖ < ε | θ) at practically acceptable levels. For the purposes of this paper, we will make no distinction between raw data and summary statistics, and we will regard the calculation of summary statistics as part of the data generating process.
2.2 Learning the posterior

Rather than using simulations from the model in order to estimate an approximate likelihood, p(‖x − x_o‖ < ε | θ), we will use the simulations to directly estimate p(θ | x = x_o). We will run simulations for parameters drawn from a distribution, p̃(θ), which we shall refer to as the proposal prior. The proposition below indicates how we can then form a consistent estimate of the exact posterior, using a flexible family of conditional densities, q_φ(θ | x), parameterized by a vector φ.

Proposition 1. We assume that each of a set of N pairs (θ_n, x_n) was independently generated by

θ_n ∼ p̃(θ) and x_n ∼ p(x | θ_n).  (1)

In the limit N → ∞, the probability of the parameter vectors ∏_n q_φ(θ_n | x_n) is maximized w.r.t. φ if and only if

q_φ(θ | x) ∝ (p̃(θ) / p(θ)) p(θ | x),  (2)

provided a setting of φ that makes q_φ(θ | x) proportional to (p̃(θ) / p(θ)) p(θ | x) exists.

Intuition: if we simulated enough parameters from the prior, the density estimator q_φ would learn a conditional of the joint prior model over parameters and data, which is the posterior p(θ | x). If we simulate parameters drawn from another distribution, we need to "importance reweight" the result. A more detailed proof can be found in Section A of the supplementary material.

The proposition above suggests the following procedure for learning the posterior: (a) propose a set of parameter vectors {θ_n} from the proposal prior; (b) for each θ_n run the simulator to obtain a corresponding data vector x_n; (c) train q_φ with maximum likelihood on {θ_n, x_n}; and (d) estimate the posterior by

p̂(θ | x = x_o) ∝ (p(θ) / p̃(θ)) q_φ(θ | x_o).  (3)

This procedure is summarized in Algorithm 2.

2.3 Choice of conditional density estimator and proposal prior

In choosing the types of density estimator q_φ(θ | x) and proposal prior p̃(θ), we need to meet the following criteria: (a) q_φ should be flexible enough to represent the posterior but easy to train with maximum likelihood; (b) p̃(θ) should be easy to evaluate and sample from; and (c) the right-hand side expression in Equation (3) should be easily evaluated and normalized.

We draw upon work on conditional neural density estimation and take q_φ to be a Mixture Density Network (MDN) [3] with fully parameterized covariance matrices. That is, q_φ takes the form of a mixture of K Gaussian components, q_φ(θ | x) = Σ_k α_k N(θ | m_k, S_k), whose mixing coefficients {α_k}, means {m_k} and covariance matrices {S_k} are computed by a feedforward neural network parameterized by φ, taking x as input. Such an architecture is capable of representing any conditional distribution arbitrarily accurately (provided the number of components K and number of hidden units in the neural network are sufficiently large) while remaining trainable by backpropagation. The parameterization of the MDN is detailed in Section B of the supplementary material.

We take the proposal prior to be a single Gaussian, p̃(θ) = N(θ | m_0, S_0), with mean m_0 and full covariance matrix S_0. Assuming the prior p(θ) is a simple distribution (uniform or Gaussian, as is typically the case in practice), this choice allows us to calculate p̂(θ | x = x_o) in Equation (3) analytically. That is, p̂(θ | x = x_o) will be a mixture of K Gaussians, whose parameters will be a function of {α_k, m_k, S_k} evaluated at x_o (as detailed in Section C of the supplementary material).

2.4 Learning the proposal prior

Simple rejection ABC is inefficient because the posterior p(θ | x = x_o) is typically much narrower than the prior p(θ).
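As a rough illustration of the estimator described above, here is a minimal MDN sketch in PyTorch; it uses diagonal rather than the paper's fully parameterized covariances, and the layer sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Conditional density estimator q_phi(theta | x): a feedforward net
    maps x to the parameters of a K-component Gaussian mixture over theta.
    Diagonal covariances are used here for brevity."""
    def __init__(self, x_dim, theta_dim, n_hidden=20, K=2):
        super().__init__()
        self.K, self.theta_dim = K, theta_dim
        self.body = nn.Sequential(nn.Linear(x_dim, n_hidden), nn.Tanh())
        self.logits = nn.Linear(n_hidden, K)                # -> mixing coefficients
        self.means = nn.Linear(n_hidden, K * theta_dim)     # -> component means
        self.log_stds = nn.Linear(n_hidden, K * theta_dim)  # -> component scales

    def log_prob(self, theta, x):
        h = self.body(x)
        log_alpha = torch.log_softmax(self.logits(h), dim=-1)      # (batch, K)
        m = self.means(h).view(-1, self.K, self.theta_dim)
        s = self.log_stds(h).view(-1, self.K, self.theta_dim).exp()
        comps = torch.distributions.Normal(m, s)
        log_comp = comps.log_prob(theta.unsqueeze(1)).sum(-1)      # (batch, K)
        return torch.logsumexp(log_alpha + log_comp, dim=-1)       # log q_phi

# Step (c) of the procedure is then plain maximum likelihood:
#   loss = -mdn.log_prob(theta_batch, x_batch).mean(); loss.backward()
```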
A parameter vector θ sampled from p(θ) will rarely be plausible under p(θ | x = x_o) and will most likely be rejected. Practical ABC algorithms attempt to reduce the number of rejections by modifying the way they propose parameters; for instance, MCMC-ABC and SMC-ABC propose new parameters by perturbing parameters they already consider plausible, in the hope that nearby parameters remain plausible.

In our framework, the key to efficient use of simulations lies in the choice of proposal prior. If we take p̃(θ) to be the actual prior, then q_φ(θ | x) will learn the posterior for all x, as can be seen from Equation (2). Such a strategy however is grossly inefficient if we are only interested in the posterior for x = x_o. Conversely, if p̃(θ) closely matches p(θ | x = x_o), then most simulations will produce samples that are highly informative in learning q_φ(θ | x) for x = x_o. In other words, if we already knew the true posterior, we could use it to construct an efficient proposal prior for learning it.

We exploit this idea to set up a fixed-point system. Our strategy becomes to learn an efficient proposal prior that closely approximates the posterior as follows: (a) initially take p̃(θ) to be the prior p(θ); (b) propose N samples {θ_n} from p̃(θ) and corresponding samples {x_n} from the simulator, and train q_φ(θ | x) on them; (c) approximate the posterior using Equation (3) and set p̃(θ) to it; (d) repeat until p̃(θ) has converged. This procedure is summarized in Algorithm 1.

Algorithm 1: Training of proposal prior
  initialize q_φ(θ | x) with one component
  p̃(θ) ← p(θ)
  repeat
    for n = 1..N do
      sample θ_n ∼ p̃(θ)
      sample x_n ∼ p(x | θ_n)
    end
    retrain q_φ(θ | x) on {θ_n, x_n}
    p̃(θ) ← p̂(θ | x = x_o) ∝ (p(θ) / p̃(θ)) q_φ(θ | x_o)
  until p̃(θ) has converged

Algorithm 2: Training of posterior
  initialize q_φ(θ | x) with K components
    // if q_φ is available from Algorithm 1, initialize
    // by replicating its one component K times
  for n = 1..N do
    sample θ_n ∼ p̃(θ)
    sample x_n ∼ p(x | θ_n)
  end
  train q_φ(θ | x) on {θ_n, x_n}
  p̂(θ | x = x_o) ∝ (p(θ) / p̃(θ)) q_φ(θ | x_o)

In the procedure above, as long as q_φ(θ | x) has only one Gaussian component (K = 1), p̃(θ) remains a single Gaussian throughout. Moreover, in each iteration we initialize q_φ with the density estimator learnt in the iteration before, thus we keep training q_φ throughout. This initialization allows us to use a small sample size N in each iteration, thus making efficient use of simulations.

As we shall demonstrate in Section 3, the procedure above learns Gaussian approximations to the true posterior fast: in our experiments typically 4–6 iterations of 200–500 samples each were sufficient. This Gaussian approximation can be used as a rough but cheap approximation to the true posterior, or it can serve as a good proposal prior in Algorithm 2 for efficiently fine-tuning a non-Gaussian multi-component posterior. If the second strategy is adopted, then we can reuse the single-component neural density estimator learnt in Algorithm 1 to initialize q_φ in Algorithm 2. The weights in the final layer of the MDN are replicated K times, with small random perturbations to break symmetry.

2.5 Use of Bayesian neural density estimators

To make Algorithm 1 as efficient as possible, the number of simulations per iteration N should be kept small, while at the same time it should provide a sufficient training signal for q_φ.
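A compact sketch of the Algorithm 1 loop, assuming helper routines that are placeholders here: `fit_mdn` retrains the one-component MDN warm-started from the previous fit, and `correct_posterior` applies the analytic Equation (3) step to produce the next Gaussian proposal:

```python
import numpy as np

def train_proposal_prior(prior, simulate, fit_mdn, correct_posterior,
                         x_obs, n_rounds=6, n_per_round=300):
    """Fixed-point loop of Algorithm 1: alternate between simulating
    under the current proposal prior and refitting the density
    estimator, then set the proposal to the corrected posterior."""
    proposal, q_phi = prior, None            # (a) start from the actual prior
    for _ in range(n_rounds):                # "repeat ... until converged"
        thetas = np.stack([proposal.sample() for _ in range(n_per_round)])
        xs = np.stack([simulate(t) for t in thetas])          # (b) simulate
        q_phi = fit_mdn(q_phi, thetas, xs)                    # warm-started refit
        proposal = correct_posterior(q_phi, x_obs, prior, proposal)  # (c) Eq. (3)
    return proposal, q_phi                   # Gaussian proposal + estimator
```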
With a conventional MDN, if N is made too small, there is a danger of overfitting, especially in early iterations, leading to over-confident proposal priors and an unstable procedure. Early stopping could be used to avoid overfitting; however a significant fraction of the N samples would have to be used as a validation set, leading to inefficient use of simulations.

As a better alternative, we developed a Bayesian version of the MDN using Stochastic Variational Inference (SVI) for neural networks [12]. We shall refer to this Bayesian version of the MDN as MDN-SVI. An MDN-SVI has two sets of adjustable parameters of the same size, the means φ_m and the log variances φ_s. The means correspond to the parameters φ of a conventional MDN. During training, Gaussian noise of variance exp φ_s is added to the means independently for each training example (θ_n, x_n). The Bayesian interpretation of this procedure is that it optimizes a variational Gaussian posterior with a diagonal covariance matrix over parameters φ. At prediction time, the noise is switched off and the MDN-SVI behaves like a conventional MDN with φ = φ_m. Section D of the supplementary material details the implementation and training of MDN-SVI.

We found that using an MDN-SVI instead of an MDN improves the robustness and efficiency of Algorithm 1 because (a) MDN-SVI is resistant to overfitting, allowing us to use a smaller number of simulations N; (b) no validation set is needed, so all samples can be used for training; and (c) since overfitting is not an issue, no careful tuning of training time is necessary.

3 Experiments

We showcase three versions of our approach: (a) learning the posterior with Algorithm 2 where q_φ is a conventional MDN and the proposal prior p̃(θ) is taken to be the actual prior p(θ), which we refer to as MDN with prior; (b) training a proposal prior with Algorithm 1 where q_φ is an MDN-SVI, which we refer to as proposal prior; and (c) learning the posterior with Algorithm 2 where q_φ is an MDN-SVI and the proposal prior p̃(θ) is taken to be the one learnt in (b), which we refer to as MDN with proposal. All MDNs were trained using Adam [11] with its default parameters.

We compare to three ABC baselines: (a) rejection ABC [21], where parameters are proposed from the prior and are accepted if ‖x − x_o‖ < ε; (b) MCMC-ABC [13] with a spherical Gaussian proposal, whose variance we manually tuned separately in each case for best performance; and (c) SMC-ABC [2], where the sequence of ε's was exponentially decayed, with a decay rate manually tuned separately in each case for best performance. MCMC-ABC was given the unrealistic advantage of being initialized with a sample from rejection ABC, removing the need for an otherwise necessary burn-in period. Code for reproducing the experiments is provided in the supplementary material and at https://github.com/gpapamak/epsilon_free_inference.

[Figure 1: Results on mixture of two Gaussians. Left: approximate posteriors learnt by each strategy for x_o = 0. Middle: full conditional density q_φ(θ | x) learnt by the MDN trained with prior. Right: full conditional density q_φ(θ | x) learnt by the MDN-SVI trained with proposal prior. Vertical dashed lines show the location of the observation x_o = 0.]
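The noise injection at the heart of MDN-SVI can be sketched in a few lines of PyTorch; for brevity the noise here is drawn once per call rather than independently per training example, as the comment notes:

```python
import torch

def svi_linear(h, phi_m, phi_s, training=True):
    """MDN-SVI weight handling for one linear layer: phi_m holds the
    weight means, phi_s the log variances. During training, Gaussian
    noise with variance exp(phi_s) is added to the means (here once per
    call; the paper draws it independently per training example). At
    prediction time the noise is off and the layer uses phi_m directly."""
    w = phi_m
    if training:
        w = w + torch.exp(0.5 * phi_s) * torch.randn_like(phi_m)
    return h @ w.t()
```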
3.1 Mixture of two Gaussians

The first experiment is a toy problem where the goal is to infer the common mean θ of a mixture of two 1D Gaussians, given a single datapoint x_o. The setup is

p(θ) = U(θ | θ_α, θ_β) and p(x | θ) = α N(x | θ, σ_1²) + (1 − α) N(x | θ, σ_2²),  (4)

where θ_α = −10, θ_β = 10, α = 0.5, σ_1 = 1, σ_2 = 0.1 and x_o = 0. The posterior can be calculated analytically, and is proportional to an equal mixture of two Gaussians centred at x_o with variances σ_1² and σ_2², restricted to [θ_α, θ_β]. This problem is often used in the SMC-ABC literature to illustrate the difficulty of MCMC-ABC in representing long tails. Here we use it to demonstrate the correctness of our approach and its ability to accurately represent non-Gaussian long-tailed posteriors.

Figure 1 shows the results of neural density estimation using each strategy. All MDNs have one hidden layer with 20 tanh units and 2 Gaussian components, except for the proposal prior MDN which has a single component. Both MDN with prior and MDN with proposal learn good parametric approximations to the true posterior, and the proposal prior is a good Gaussian approximation to it. We used 10K simulations to train the MDN with prior, whereas the proposal prior took 4 iterations of 200 simulations each to train, and the MDN with proposal took 1000 simulations on top of the previous 800. The MDN with prior learns the posterior distributions for a large range of possible observations x (middle plot of Figure 1), whereas the MDN with proposal gives accurate posterior probabilities only near the value actually observed (right plot of Figure 1).

3.2 Bayesian linear regression

In Bayesian linear regression, the goal is to infer the parameters θ of a linear map from noisy observations of outputs at known inputs. The setup is

p(θ) = N(θ | m, S) and p(x | θ) = ∏_i N(x_i | θᵀu_i, σ²),  (5)

where we took m = 0, S = I, σ = 0.1, randomly generated inputs {u_i} from a standard Gaussian, and randomly generated observations x_o from the model. In our setup, θ and x have 6 and 10 dimensions respectively. The posterior is analytically tractable, and is a single Gaussian.

All MDNs have one hidden layer of 50 tanh units and one Gaussian component. ABC methods were run for a sequence of decreasing ε's, up to their failing points. To measure the approximation quality to the posterior, we analytically calculated the KL divergence from the true posterior to the learnt posterior (which for ABC was taken to be a Gaussian fit to the set of returned posterior samples). The left of Figure 2 shows the approximation quality vs ε; MDN methods are shown as horizontal lines.

[Figure 2: Results on Bayesian linear regression. Left: KL divergence from true posterior to approximation vs ε; lower is better. Middle: number of simulations vs KL divergence; lower left is better. Note that number of simulations is total for MDNs, and per effective sample for ABC. Right: posterior marginals for θ_1 as computed by each method. ABC posteriors (represented as histograms) correspond to the setting of ε that minimizes the KL in the left plot.]
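The toy simulator of Equation (4) is short enough to transcribe directly into NumPy; the seed and function names are of course incidental:

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA, SIG1, SIG2 = 0.5, 1.0, 0.1        # alpha, sigma_1, sigma_2 from Eq. (4)
THETA_A, THETA_B = -10.0, 10.0           # uniform prior bounds

def sample_prior():
    return rng.uniform(THETA_A, THETA_B)

def simulate(theta):
    """One draw from p(x | theta): pick a mixture component, then sample
    a Gaussian centred at the shared mean theta."""
    sigma = SIG1 if rng.random() < ALPHA else SIG2
    return theta + sigma * rng.normal()
```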
As ε is decreased, ABC methods sample from an increasingly better approximation to the true posterior, however they eventually reach their failing point, or take prohibitively long. The best approximations are achieved by MDN with proposal and a very long run of SMC-ABC. The middle of Figure 2 shows the increase in number of simulations needed to improve approximation quality (as ε decreases). We quote the total number of simulations for MDN training, and the number of simulations per effective sample for ABC. Section E of the supplementary material describes how the number of effective samples is calculated. The number of simulations per effective sample should be multiplied by the number of effective samples needed in practice. Moreover, SMC-ABC will not work well with only one particle, so a cost many times the quoted one will always be needed. Here, MDNs make more efficient use of simulations than Monte Carlo ABC methods. Sequentially fitting a proposal prior was more than ten times cheaper than training with prior samples, and more accurate.

3.3 Lotka–Volterra predator-prey population model

The Lotka–Volterra model is a stochastic Markov jump process that describes the continuous time evolution of a population of predators interacting with a population of prey. There are four possible reactions: (a) a predator being born, (b) a predator dying, (c) a prey being born, and (d) a prey being eaten by a predator. Positive parameters θ = (θ_1, θ_2, θ_3, θ_4) control the rate of each reaction. Given a set of statistics x_o calculated from an observed population time series, the objective is to infer θ. We used a flat prior over log θ, and calculated a set of 9 statistics x. The full setup is detailed in Section F of the supplementary material.

The Lotka–Volterra model is commonly used in the ABC literature as a realistic model which can be simulated, but whose likelihood is intractable. One of the properties of Lotka–Volterra is that typical nature-like observations only occur for very specific parameter settings, resulting in narrow, Gaussian-like posteriors that are hard to recover.

The MDN trained with prior has two hidden layers of 50 tanh units each, whereas the MDN-SVI used to train the proposal prior and the MDN-SVI trained with proposal have one hidden layer of 50 tanh units. All three have one Gaussian component. We found that using more than one component made no difference to the results; in all cases the MDNs chose to use only one component and switch the rest off, which is consistent with our observation about the near-Gaussianity of the posterior.

We measure how well each method retrieves the true parameter values that were used to generate x_o by calculating their log probability under each learnt posterior; for ABC a Gaussian fit to the posterior samples was used. The left panel of Figure 3 shows how this log probability varies with ε, demonstrating the superiority of MDN methods over ABC. In the middle panel we can see that MDN training with proposal makes efficient use of simulations compared to training with prior and ABC; note that for ABC the number of simulations is only for one effective sample. In the right panel, we can see that the estimates returned by MDN methods are more confident around the true parameters compared to ABC, because the MDNs learn the exact posterior rather than an inflated version of it like ABC does (plots for the other three parameters look similar).
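A plausible Gillespie-style simulator for this jump process is sketched below; the initial populations, time horizon, and exact rate parameterization are assumptions, since the paper defers those details to Section F of its supplement:

```python
import numpy as np

def simulate_lotka_volterra(theta, x0=50, y0=100, t_end=30.0, rng=None):
    """Gillespie simulation of the Lotka-Volterra jump process with
    X predators and Y prey. Returns event times and states, from which
    summary statistics would then be computed."""
    rng = rng or np.random.default_rng()
    t1, t2, t3, t4 = theta
    X, Y, t = x0, y0, 0.0
    times, states = [0.0], [(X, Y)]
    while t < t_end:
        rates = np.array([t1 * X * Y,    # (a) predator born
                          t2 * X,        # (b) predator dies
                          t3 * Y,        # (c) prey born
                          t4 * X * Y])   # (d) prey eaten by a predator
        total = rates.sum()
        if total == 0:                   # both populations extinct
            break
        t += rng.exponential(1.0 / total)        # time to next reaction
        event = rng.choice(4, p=rates / total)   # which reaction fires
        dX, dY = [(1, 0), (-1, 0), (0, 1), (0, -1)][event]
        X, Y = X + dX, Y + dY
        times.append(t)
        states.append((X, Y))
    return np.array(times), np.array(states)
```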
We found that when training an MDN with a well-tuned proposal that focuses on the plausible region, an MDN with fewer parameters is needed compared to training with the prior. This is because the MDN trained with proposal needs to learn only the local relationship between x and θ near x_o, as opposed to in the entire domain of the prior. Hence, not only are savings achieved in number of simulations, but also training the MDN itself becomes more efficient.

[Figure 3: Results on Lotka–Volterra. Left: negative log probability of true parameters vs ε; lower is better. Middle: number of simulations vs negative log probability; lower left is better. Note that number of simulations is total for MDNs, but per effective sample for ABC. Right: estimates of log θ_1 with 2 standard deviations. ABC estimates used many more simulations with the smallest feasible ε.]

3.4 M/G/1 queue model

The M/G/1 queue model describes the processing of a queue of continuously arriving jobs by a single server. In this model, the time the server takes to process each job is independently and uniformly distributed in the interval [θ_1, θ_2]. The time interval between arrival of two consecutive jobs is independently and exponentially distributed with rate θ_3. The server observes only the time intervals between departure of two consecutive jobs. Given a set of equally-spaced percentiles x_o of inter-departure times, the task is to infer parameters θ = (θ_1, θ_2, θ_3). This model is easy to simulate but its likelihood is intractable, and it has often been used as an ABC benchmark [4, 16]. Unlike Lotka–Volterra, data x is weakly informative about θ, and hence the posterior over θ tends to be broad and non-Gaussian. In our setup, we placed flat independent priors over θ_1, θ_2 − θ_1 and θ_3, and we took x to be 5 equally spaced percentiles, as detailed in Section G of the supplementary material.

The MDN trained with prior has two hidden layers of 50 tanh units each, whereas the MDN-SVI used to train the proposal prior and the one trained with proposal have one hidden layer of 50 tanh units. As observed in the Lotka–Volterra demo, less capacity is required when training with proposal, as the relationship to be learned is local and hence simpler, which saves compute time and gives a more accurate final posterior. All MDNs have 8 Gaussian components (except the MDN-SVI used to train the proposal prior, which always has one), which, after experimentation, we determined are enough for the MDNs to represent the non-Gaussian nature of the posterior.

Figure 4 reports the log probability of the true parameters under each posterior learnt (for ABC, the log probability was calculated by fitting a mixture of 8 Gaussians to posterior samples using Expectation-Maximization) and the number of simulations needed to achieve it. As before, MDN methods are more confident compared to ABC around the true parameters, which is due to ABC computing a broader posterior than the true one.
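A minimal M/G/1 simulator consistent with this description; the specific percentiles and job count are assumptions (the paper's Section G has the authoritative setup):

```python
import numpy as np

def simulate_mg1(theta, n_jobs=50, rng=None):
    """M/G/1 queue: service times ~ U[t1, t2], inter-arrival times
    ~ Exponential(rate=t3). A job starts service once it has arrived
    and the server is free; only inter-departure times are observed,
    summarized here by five equally spaced percentiles."""
    rng = rng or np.random.default_rng()
    t1, t2, t3 = theta
    arrivals = np.cumsum(rng.exponential(1.0 / t3, n_jobs))
    services = rng.uniform(t1, t2, n_jobs)
    departures = np.empty(n_jobs)
    last = 0.0
    for i in range(n_jobs):
        last = max(arrivals[i], last) + services[i]   # wait for job and server
        departures[i] = last
    inter_dep = np.diff(departures, prepend=0.0)
    return np.percentile(inter_dep, [0, 25, 50, 75, 100])
```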
MDN methods make more efficient use of simulations, since they use all of them for training and, unlike ABC, do not throw a proportion of them away.

[Figure 4: Results on M/G/1. Left: negative log probability of true parameters vs ε; lower is better. Middle: number of simulations vs negative log probability; lower left is better. Note that number of simulations is total for MDNs, and per effective sample for ABC. Right: estimates of θ_2 with 2 standard deviations; ABC estimates correspond to the lowest setting of ε used.]

4 Related work

Regression adjustment. An early parametric approach to ABC is regression adjustment, where a parametric regressor is trained on simulation data in order to learn a mapping from x to θ. The learnt mapping is then used to correct for using a large ε, by adjusting the location of posterior samples gathered by e.g. rejection ABC. Beaumont et al. [1] used linear regressors, and later Blum and François [4] used neural networks with one hidden layer that separately predicted the mean and variance of θ. Both can be viewed as rudimentary density estimators and as such they are a predecessor to our work. However, they were not flexible enough to accurately estimate the posterior, and they were only used within some other ABC method to allow for a larger ε. In our work, we make conditional density estimation flexible enough to approximate the posterior accurately.

Synthetic likelihood. Another parametric approach is synthetic likelihood, where parametric models are used to estimate the likelihood p(x | θ). Wood [24] used a single Gaussian, and later Fan et al. [7] used a mixture Gaussian model. Both of them learnt a separate density model of x for each θ by repeatedly simulating the model for fixed θ. More recently, Meeds and Welling [14] used a Gaussian process model to interpolate Gaussian likelihood approximations between different θ's. Compared to learning the posterior, synthetic likelihood has the advantage of not depending on the choice of proposal prior. Its main disadvantage is the need for further approximate inference on top of it in order to obtain the posterior. In our work we directly learn the posterior, eliminating the need for further inference, and we address the problem of correcting for the proposal prior.

Efficient Monte Carlo ABC. Recent work on ABC has focused on reducing the simulation cost of sample-based ABC methods. Hamiltonian ABC [15] improves upon MCMC-ABC by using stochastically estimated gradients in order to explore the parameter space more efficiently. Optimization Monte Carlo ABC [16] explicitly optimizes the location of ABC samples, which greatly reduces rejection rate. Bayesian optimization ABC [10] models p(‖x − x_o‖ | θ) as a Gaussian process and then uses Bayesian optimization to guide simulations towards the region of small distances ‖x − x_o‖. In our work we show how a significant reduction in simulation cost can also be achieved with parametric methods, which target the posterior directly.

Recognition networks.
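A sketch of the Beaumont-style linear adjustment just described, with `thetas` and `xs` the accepted ABC samples and their summary statistics (function and variable names are illustrative):

```python
import numpy as np

def linear_regression_adjustment(thetas, xs, x_obs):
    """Fit a linear map from statistics to parameters on accepted ABC
    samples (rows of thetas/xs), then shift each sample as if its
    statistics had equalled the observed x_obs."""
    X = np.column_stack([np.ones(len(xs)), xs])        # design matrix + intercept
    beta, *_ = np.linalg.lstsq(X, thetas, rcond=None)  # least-squares coefficients
    return thetas - (xs - x_obs) @ beta[1:]            # adjusted posterior samples
```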
Our use of neural density estimators for learning posteriors is reminiscent of recognition networks in machine learning. A recognition network is a neural network that is trained to invert a generative model. The Helmholtz machine [6], the variational auto-encoder [12] and stochastic backpropagation [22] are examples where a recognition network is trained jointly with the generative network it is designed to invert. Feedforward neural networks have been used to invert black-box generative models [18] and binary-valued Bayesian networks [17], and convolutional neural networks have been used to invert a physics engine [25]. Our work illustrates the potential of recognition networks in the field of likelihood-free inference, where the generative model is fixed, and inference of its parameters is the goal.

Learning proposals. Neural density estimators have been employed in learning proposal distributions for importance sampling [20] and Sequential Monte Carlo [9, 19]. Although not our focus here, our fit to the posterior could also be used within Monte Carlo inference methods. In this work we see how far we can get purely by fitting a series of conditional density estimators.

5 Conclusions

Bayesian conditional density estimation improves likelihood-free inference in three main ways: (a) it represents the posterior parametrically, as opposed to as a set of samples, allowing for probabilistic evaluations later on in the pipeline; (b) it targets the exact posterior, rather than an ε-approximation of it; and (c) it makes efficient use of simulations by not rejecting samples, by interpolating between samples, and by gradually focusing on the plausible parameter region. Our belief is that neural density estimation is a tool with great potential in likelihood-free inference, and our hope is that this work helps in establishing its usefulness in the field.

Acknowledgments

We thank Amos Storkey for useful comments. George Papamakarios is supported by the Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1) and the University of Edinburgh, and by Microsoft Research through its PhD Scholarship Programme.

References

[1] M. A. Beaumont, W. Zhang, and D. J. Balding. Approximate Bayesian Computation in population genetics. Genetics, 162:2025–2035, Dec. 2002.
[2] M. A. Beaumont, J.-M. Cornuet, J.-M. Marin, and C. P. Robert. Adaptive Approximate Bayesian Computation. Biometrika, 96(4):983–990, 2009.
[3] C. M. Bishop. Mixture density networks. Technical Report NCRG/94/004, Aston University, 1994.
[4] M. G. B. Blum and O. François. Non-linear regression models for Approximate Bayesian Computation. Statistics and Computing, 20(1):63–73, 2010.
[5] F. V. Bonassi and M. West. Sequential Monte Carlo with adaptive weights for Approximate Bayesian Computation. Bayesian Analysis, 10(1):171–187, Mar. 2015.
[6] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural Computation, 7:889–904, 1995.
[7] Y. Fan, D. J. Nott, and S. A. Sisson. Approximate Bayesian Computation via regression density estimation. Stat, 2(1):34–48, 2013.
[8] C. Gouriéroux, A. Monfort, and E. Renault. Indirect inference. Journal of Applied Econometrics, 8(S1):S85–S118, 1993.
[9] S. Gu, Z. Ghahramani, and R. E. Turner. Neural adaptive Sequential Monte Carlo. Advances in Neural Information Processing Systems 28, pages 2629–2637, 2015.
[10] M. U. Gutmann and J. Corander. Bayesian optimization for likelihood-free inference of simulator-based statistical models.
arXiv e-prints, abs/1501.03291v3, 2015.
[11] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations, 2014.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2013.
[13] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, Dec. 2003.
[14] E. Meeds and M. Welling. GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 30, 2014.
[15] E. Meeds, R. Leenders, and M. Welling. Hamiltonian ABC. Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence, pages 582–591, 2015.
[16] T. Meeds and M. Welling. Optimization Monte Carlo: Efficient and embarrassingly parallel likelihood-free inference. Advances in Neural Information Processing Systems 28, pages 2071–2079, 2015.
[17] Q. Morris. Recognition networks for approximate inference in BN20 networks. Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 370–377, 2001.
[18] V. Nair, J. Susskind, and G. E. Hinton. Analysis-by-synthesis by learning to invert generative black boxes. Proceedings of the 18th International Conference on Artificial Neural Networks, 5163:971–981, 2008.
[19] B. Paige and F. Wood. Inference networks for Sequential Monte Carlo in graphical models. Proceedings of the 33rd International Conference on Machine Learning, 2016.
[20] G. Papamakarios and I. Murray. Distilling intractable generative models. Probabilistic Integration Workshop at Neural Information Processing Systems, 2015.
[21] J. K. Pritchard, M. T. Seielstad, A. Perez-Lezaun, and M. W. Feldman. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791–1798, 1999.
[22] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. Proceedings of the 31st International Conference on Machine Learning, pages 1278–1286, 2014.
[23] C. M. Schafer and P. E. Freeman. Likelihood-free inference in cosmology: Potential for the estimation of luminosity functions. Statistical Challenges in Modern Astronomy V, pages 3–19, 2012.
[24] S. N. Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102–1104, 2010.
[25] J. Wu, I. Yildirim, J. J. Lim, B. Freeman, and J. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. Advances in Neural Information Processing Systems 28, pages 127–135, 2015.
Bi-Objective Online Matching and Submodular Allocations

Hossein Esfandiari, University of Maryland, College Park, MD 20740, hossein@cs.umd.edu
Nitish Korula, Google Research, New York, NY 10011, nitish@google.com
Vahab Mirrokni, Google Research, New York, NY 10011, mirrokni@google.com

Abstract

Online allocation problems have been widely studied due to their numerous practical applications (particularly to Internet advertising), as well as considerable theoretical interest. The main challenge in such problems is making assignment decisions in the face of uncertainty about future input; effective algorithms need to predict which constraints are most likely to bind, and learn the balance between short-term gain and the value of long-term resource availability. In many important applications, the algorithm designer is faced with multiple objectives to optimize. In particular, in online advertising it is fairly common to optimize multiple metrics, such as clicks, conversions, and impressions, as well as other metrics which may be largely uncorrelated such as "share of voice" and "buyer surplus". While there has been considerable work on multi-objective offline optimization (when the entire input is known in advance), very little is known about the online case, particularly in the case of adversarial input. In this paper, we give the first results for bi-objective online submodular optimization, providing almost matching upper and lower bounds for allocating items to agents with two submodular value functions. We also study practically relevant special cases of this problem related to Internet advertising, and obtain improved results. All our algorithms are nearly best possible, as well as being efficient and easy to implement in practice.

1 Introduction

As a central optimization problem with a wide variety of applications, online resource allocation problems have attracted a large body of research in networking, distributed computing, and electronic commerce. Here, items arrive one at a time (i.e. online), and when each item arrives, the algorithm must irrevocably assign it to an agent; each agent has a limited resource budget / capacity for items assigned to him. A big challenge in developing good algorithms for these problems is to predict future binding constraints or learn future capacity availability, and allocate items one by one to agents who are unlikely to hit their capacity in the future. Various stochastic and adversarial models have been proposed to study such online allocation problems, and many techniques have been developed for these problems.

For stochastic input, a natural approach is to build a predicted instance (for instance, via sampling, or using historical data), and some of these techniques solve a dual linear program to learn dual variables that are used by the online algorithm moving forward [6, 10, 2, 23, 16, 18]. However, stochastic approaches may provide poor results on some input (for example, when there are unexpected spikes in supply / demand), and hence such problems have been extensively studied in adversarial models as well. Here, the algorithm typically maintains a careful balance between greedily exploiting the current item by assigning it to agents with high value for it, and assigning the item to a lower-value agent for whom the value is further from the distribution of "typical" items they have received.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Again, primal-dual techniques have been applied to learn the dual variables used by the algorithm in an online manner [17, 3, 9].

A central practical application of such online algorithms is the online allocation of impressions or page-views to ads on the Internet [9, 2, 23, 5, 7]. Such problems are present both in the context of sponsored search advertising, where advertisers have global budget constraints [17, 6, 3], and in display advertising, where each ad campaign has a desired goal or a delivery constraint [9, 10, 2, 23, 5, 7]. Many of these online optimization techniques apply to general optimization problems including the online submodular welfare maximization problem (SWM) [20, 13].

For many real-world optimization problems, the goal is to optimize multiple objective functions [14, 1]. For instance, in Internet advertising, such objectives might include revenue, clicks, or conversions. A variety of techniques have been developed for multi-objective optimization problems; however, in most cases, these techniques are only applicable for offline multi-objective optimization problems [21, 26], and they do not apply to online settings, especially for online competitive algorithms that work against an adversarial input [17, 9] or in the presence of traffic spikes [18, 8] or hard-to-predict traffic patterns [5, 4, 22].

Our contributions. Motivated by the above applications and the increasing need to satisfy multiple objectives, we study a wide class of multi-objective online optimization problems, and present both hardness results and (almost tight) bi-objective approximation algorithms for them. In particular, we study resource allocation problems in which a sequence of items (also referred to as impressions) i from an unknown set I arrive one by one, and we have to allocate each item to one agent (for example, one advertiser) a in a given set of agents A. Each agent a has two monotone submodular set functions f_a, g_a : 2^I → ℝ associated with it. Let S_a be the set of items assigned to bin a as a result of online allocation decisions. The goal of the online allocation algorithm is to maximize two social welfare functions based on the f_a's and g_a's, i.e., Σ_{a∈A} f_a(S_a) and Σ_{a∈A} g_a(S_a).

We first present almost tight online approximation algorithms for the general online bi-objective submodular welfare maximization problem (see Theorems 2.3 and 2.5, and Fig. 1). We show that a simple random selection rule along with the greedy algorithm (when each item arrives, randomly pick one objective to greedily optimize) results in almost optimal algorithms. Our allocation rule is thus both very fast to run and trivially easy to implement. The main technical result of this part is the hardness result showing that the achieved approximation factor is almost tight unless P = NP. Furthermore, we consider special cases of this problem motivated by online ad allocation. In particular, for the special cases of online budgeted allocation and online weighted matching, motivated by sponsored search and display advertising (respectively), we present improved primal-dual-based algorithms along with improved hardness results for these problems (see, for example, the tight Theorem 3.1).

Related Work. It is known that the greedy algorithm leads to a 1/2-approximation for the submodular social welfare maximization problem (SWM) [11], and this problem admits a (1 − 1/e)-approximation in the offline setting [24], which is tight [19].
However, for the online setting, the problem does not admit a better than 1/2-approximation algorithm unless P = NP [12]. Bi-objective online allocation problems have been studied in two previous papers [14, 1]. The first paper [14] presents an online bi-objective algorithm for the problem of maximizing a general weight function and the cardinality function, and the second paper [1] presents results for the combined budgeted allocation and cardinality constraints. Our results in this paper improve and generalize those results for more general settings. Submodular partitioning problems have also been studied based on mixed robust/average-case objectives [25].

Our work is related to online ad allocation problems, including the Display Ads Allocation (DA) problem [9, 10, 2, 23], and the Budgeted Allocation (AdWords) problem [17, 6]. In both of these problems, the publisher must assign online impressions to an inventory of ads, optimizing efficiency or revenue of the allocation while respecting pre-specified contracts. The Display Ad (DA) problem is the online matching problem described above with a single weight objective [9, 7]. In the Budgeted Allocation problem, the publisher allocates impressions resulting from search queries. Advertiser a has a budget B(a) on the total spend, instead of a bound n(a) on the number of impressions. Assigning impression i to advertiser a consumes w_{ia} units of a's budget instead of 1 of the n(a) slots, as in the DA problem. For both of these problems, (1 − 1/e)-approximation algorithms have been designed under the assumption of large capacities [17, 3, 9]. None of the above papers for adversarial models studies multiple objectives at the same time.

2 Bi-Objective Online Submodular Welfare Maximization

2.1 Model and Overview

For any allocation S, let S_a denote the set of items assigned to agent a ∈ A by this allocation. In the classic Submodular Welfare Maximization problem (SWM), for which there is a single monotone submodular objective, each agent a ∈ A is associated with a submodular function f_a defined on the set of items I. The welfare of allocation S is defined as Σ_a f_a(S_a), and the goal of SWM is to maximize this welfare. In the classic SWM, the natural greedy algorithm is to assign each item (when it arrives) to the agent whose gain increases the most. This greedy algorithm (note that it is an online algorithm) is (1/2 + 1/n)-competitive, and this is the best possible [15].

In this section, we consider the extension of online SWM to two monotone submodular functions. Formally, each agent a ∈ A is associated with two submodular functions f_a and g_a defined on I. The goal is to find an allocation S that does well on both objectives Σ_a f_a(S_a) and Σ_a g_a(S_a). We measure the performance of the algorithm by comparison to the offline optimum for each objective: let S*f = arg max_{allocations S} Σ_a f_a(S_a) and S*g = arg max_{allocations S} Σ_a g_a(S_a). An algorithm A is (α, β)-competitive if, for every input, it produces an allocation S such that Σ_a f_a(S_a) ≥ α Σ_a f_a(S*f_a) and Σ_a g_a(S_a) ≥ β Σ_a g_a(S*g_a). A (1, 1)-competitive algorithm would be one that finds an allocation which is simultaneously optimal in both objectives, but since the objectives are distinct, no single allocation may maximize both, even ignoring computational difficulties or lack of knowledge of the future.
One could attempt to maximize a linear combination of the two submodular objectives, but since the linear combination is itself submodular, this is no harder than the classic online SWM. Instead, we provide algorithms with the stronger guarantee that they are simultaneously competitive with the optimal solution for each objective separately. Further, our algorithms are parametrized, so the user can balance the importance of the two objectives.

Similar to previous approaches for bi-objective online allocation problems [14], we run two simultaneous greedy algorithms, each based on one of the objective functions. Upon arrival of each online item, with probability p we pass the item to the greedy algorithm based on the objective function f, and with probability 1 − p we pass the item to the greedy algorithm based on g.

First, as a warmup, we provide a charging argument to show that the greedy algorithm for (single-objective) SWM is 1/2-competitive. This charging argument is similar to the usual primal-dual analysis for allocation problems. However, since the objective functions are not linear, it may not be possible to interpret the proof using a primal-dual technique. Later, we modify our charging argument and show that if we run the greedy algorithm for SWM but only consider items for allocation with probability p, the competitive ratio is p/(1 + p). (Note that a naive analysis would yield a competitive ratio of p/2, since we lose a factor of p in the sampling and a factor of 1/2 due to the greedy algorithm.) Since our algorithm for bi-objective online SWM passes items to the "first" greedy algorithm with probability p and passes items to the second greedy algorithm with probability 1 − p, the modified charging argument immediately implies that our algorithm is (p/(1 + p), (1 − p)/(2 − p))-competitive, as we state in Theorem 2.3 below. Also, using a factor-revealing framework, assuming NP ≠ RP, we provide an almost tight hardness result, which holds even if the objective functions have the simpler "coverage" structure. Both our competitive ratio and the associated hardness result are presented in Figure 1.

2.2 Algorithm for Bi-Objective online SWM

We define some notation and ideas that we use to bound the competitive ratio of our algorithm. Let Gr be the greedy algorithm and let Opt be a fixed optimum allocation. For an agent j, and an algorithm Alg, let Alg_j be the set of online items allocated to the agent j by Alg; Opt_j denotes the set of online items allocated to j in Opt. Trivially, for any two agents j and k, we have Alg_j ∩ Alg_k = ∅. For each online item i we define a variable α_i, and for each agent j we define a variable β_j. In order to bound the competitive ratio of the algorithm Alg by c, it suffices to set the values of the α_i's and β_j's such that 1) the value of Alg is at least c (Σ_{i=1}^n α_i + Σ_{j=1}^m β_j), and 2) the value of Opt is at most Σ_{i=1}^n α_i + Σ_{j=1}^m β_j.

[Figure 1: The lower (blue) curve is the competitive ratio of our algorithm, and the red curve is the upper bound on the competitive ratio of any algorithm.]

Theorem 2.1. (Warmup) The greedy algorithm is 0.5-competitive for online SWM.

Proof. For each online item i, let α_i be the marginal gain by Gr from allocating item i upon its arrival. It is easy to see that Σ_{i=1}^n α_i is equal to the value of Gr. For each agent j, let β_j be the total value of the allocation to j at the end of the algorithm. By definition, we know that Σ_{j=1}^m β_j is equal to the value of Gr. Thus, the value of Gr is clearly 0.5 (Σ_{i=1}^n α_i + Σ_{j=1}^m β_j).
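The randomized bi-objective greedy rule described above is short enough to sketch directly; `f(a, S)` and `g(a, S)` are assumed oracles returning agent a's submodular value for a set S of items:

```python
import random

def bi_objective_greedy(items, agents, f, g, p):
    """On each arrival, serve objective f with probability p and g
    otherwise, assigning the item to the agent with the largest
    marginal gain under the chosen objective (the rule of Theorem 2.3)."""
    S = {a: set() for a in agents}
    for i in items:
        h = f if random.random() < p else g             # pick an objective
        gain = lambda a: h(a, S[a] | {i}) - h(a, S[a])  # marginal gain of i at a
        best = max(agents, key=gain)
        S[best].add(i)                                  # greedy assignment
    return S
```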
Recall that f_j(·) denotes the valuation function of agent j. Below, we show that f_j(Opt_j) is upper-bounded by β_j + Σ_{i∈Opt_j} α_i. Note that for distinct agents j and k, Opt_j and Opt_k are disjoint. Thus, by summing over all agents, we can upper-bound the value of Opt by Σ_{i=1}^n α_i + Σ_{j=1}^m β_j. This means that the competitive ratio of Gr is 0.5.

Now, we just need to show that for any agent j we have f_j(Opt_j) ≤ β_j + Σ_{i∈Opt_j} α_i. Note that for any item i ∈ Opt_j, the value of α_i is at least the marginal gain that would have been obtained from assigning i to j when it arrives. Applying submodularity of f_j, we have α_i ≥ f_j(Gr_j ∪ {i}) − f_j(Gr_j). Moreover, by definition we have β_j = f_j(Gr_j). Thus, we have:

β_j + Σ_{i∈Opt_j} α_i ≥ f_j(Gr_j) + Σ_{i∈Opt_j} (f_j(Gr_j ∪ {i}) − f_j(Gr_j)) ≥ f_j(Gr_j) + (f_j(Gr_j ∪ Opt_j) − f_j(Gr_j)) = f_j(Gr_j ∪ Opt_j) ≥ f_j(Opt_j),

where the second inequality follows by submodularity, and the last inequality by monotonicity. This completes the proof.

Lemma 2.2. Let Gr_p be an algorithm that with probability p passes each online item to Gr for allocation, and leaves it unmatched otherwise. Gr_p is p/(1 + p)-competitive for online SWM.

Proof. The proof here is fairly similar to Theorem 2.1. For each online item i, set α_i to be the marginal gain that would have been achieved from allocating item i upon its arrival (assuming i is passed to Gr), given the current allocation of items. Note that α_i is a random variable (depending on the outcome of previous decisions to pass items to Gr or not), but it is independent of the coin toss that determines whether it is passed to Gr, and so the expected marginal gain of allocating item i (given all previous allocations) is p E[α_i]. Thus, by linearity of expectation, the expected value of Gr_p is p E[Σ_{i=1}^n α_i]. On the other hand, for each agent j, set β_j to be the value of the actual allocations to j at the end of the algorithm. Again, we have Σ_{j=1}^m β_j equal to the value of Gr_p. Combining these two, we conclude that the expected value of Gr_p is equal to (1 / (1 + 1/p)) (E[Σ_{i=1}^n α_i] + E[Σ_{j=1}^m β_j]).

As before, we show that f_j(Opt_j) is upper-bounded by β_j + Σ_{i∈Opt_j} α_i. Therefore, we can conclude that the competitive ratio of Gr_p is 1/(1 + 1/p) = p/(1 + p).

It remains only to show that for any agent j, we have f_j(Opt_j) ≤ β_j + Σ_{i∈Opt_j} α_i. This is exactly the same as our proof for Theorem 2.1: by submodularity of f_j we have α_i ≥ f_j(Gr_p(j) ∪ {i}) − f_j(Gr_p(j)), and by definition we have β_j = f_j(Gr_p(j)). We provide the complete proof in the full version.

The main theorem of this section follows immediately.

Theorem 2.3. For any 0 < p < 1, there is a (p/(1 + p), (1 − p)/(2 − p))-competitive algorithm for bi-objective online SWM.

2.3 Hardness of Bi-Objective online SWM

We now prove that Theorem 2.3 is almost tight, by describing a hard instance for bi-objective online SWM. To describe this instance, we define notions of super nodes and super edges, which capture the hardness of maximizing a submodular function even in the offline setting. Using the properties of super nodes and edges, we construct and analyze a hard example for bi-objective online SWM. Our construction generalizes that of Kapralov et al. [12], who prove the upper bound corresponding to the two points (0.5, 0) and (0, 0.5) in the curve shown in Figure 1. They use the following result: for any fixed c_0 and ε_0, it is NP-hard to distinguish between the following two cases for offline SWM with n agents and m = kn items. This holds even for submodular functions with "coverage" valuations.
2.3 Hardness of Bi-Objective Online SWM

We now prove that Theorem 2.3 is almost tight, by describing a hard instance for bi-objective online SWM. To describe this instance, we define notions of super nodes and super edges, which capture the hardness of maximizing a submodular function even in the offline setting. Using the properties of super nodes and edges, we construct and analyze a hard example for bi-objective online SWM. Our construction generalizes that of Kapralov et al. [12], who prove the upper bound corresponding to the two points (0.5, 0) and (0, 0.5) on the curve shown in Figure 1. They use the following result: for any fixed $c_0$ and $\varepsilon_0$, it is NP-hard to distinguish between the following two cases for offline SWM with $n$ agents and $m = kn$ items. This holds even for submodular functions with "coverage" valuations.

• There is an allocation with value $n$.
• For any $l \leq c_0$, no allocation allocates $kl$ items and gets a value more than $1 - e^{-l} + \varepsilon_0$.

Intuitively, in the former case, we can assign $k$ items to each agent and obtain value 1 per agent. In the latter case, even if we assign $2k$ items (however they are split across agents), we can obtain total value at most 0.865. It also follows that there exist "hard" instances such that there is an optimal solution of value $n$, but for any $l < 1$, any assignment of $ml$ edges obtains value at most $(1 - e^{-l} + \varepsilon_0)n$.

We now define a super edge to be a hard instance of offline SWM as defined above. We refer to the set of agents in a super edge as the agent super node, and the set of items in the super edge as the item super node. If two super edges share a super node, it means that they share the agents / items corresponding to that super node in the same order. If (in expectation) we allocate $ml$ items of a super edge, we say the load of that super edge is $l$. Similarly, if (in expectation) we allocate $ml$ items to an agent super node, we say the load of that super node is $l$. Using the definition of super edge and super node, the hardness result of Kapralov et al. [12] gives us the following lemma:

Lemma 2.4. Assume $\mathrm{RP} \neq \mathrm{NP}$ and let $\varepsilon$ be an arbitrarily small constant. If the (expected) load of a randomized polynomial algorithm on an agent super node is $l$, the expected welfare of all agents is at most $(1 - e^{-l} + \varepsilon)n$.

Now with Lemma 2.4 in hand, we are ready to present an upper bound for bi-objective online SWM.

Theorem 2.5. Assume $\mathrm{RP} \neq \mathrm{NP}$. The competitive ratio $(\alpha, \beta)$ of any algorithm for bi-objective online SWM is upper bounded by the red curve in Figure 1. More precisely (assuming w.l.o.g. that $\varepsilon^2/6 \leq \beta \leq \alpha$), for any $\gamma \in [0, 1]$, there is no algorithm with $\alpha > \frac{0.5 + \gamma}{(1+\gamma)^2}$ and $\beta > \gamma \alpha$.

3 Bi-Objective Online Weighted Matching

In this section, we consider two special cases of bi-objective online SWM, each of which generalizes the (single-objective) online weighted matching problem (with free disposal). Here, each item $i$ has two weights $w^f_{ij}$ and $w^g_{ij}$ for agent $j$, and each agent $j$ has (large) capacity $C_j$. The weights of item $i$ are revealed when it arrives, and the algorithm must allocate it to some agent immediately.

In the first model, after the algorithm terminates and each agent $j$ has received items $S_j$, it chooses a subset $S_j' \subseteq S_j$ of at most $C_j$ items. The total value in the first objective is then $\sum_j \sum_{i \in S_j'} w^f_{ij}$, and in the second objective $\sum_j \sum_{i \in S_j'} w^g_{ij}$. Intuitively, each agent must pick a subset of its items, and it gets paid its (additive) value for these items. In the (single-objective) case where each agent can only be allocated $C_j$ items, this is the online weighted b-matching problem, where vertices are arriving online and we have edge weights in the bipartite (item, agent) graph. This problem is completely intractable in the online setting, while the free disposal variant [9], in which additional items can be assigned but at most $C_j$ items count towards the objective, is of theoretical and practical interest. The single-objective version is handled by the exponential weight algorithm, shown in Figure 2.

Exponential Weight Algorithm. Set $\beta_j$ to 0 for each agent $j$. Upon arrival of each item $i$:
1. If there is an agent $j$ with $w_{ij} - \beta_j > 0$:
   (a) Let $j$ be the agent that maximizes $w_{ij} - \beta_j$.
   (b) Assign $i$ to $j$, and set $\alpha_i$ to $w_{ij} - \beta_j$.
   (c) Let $w_1, w_2, \ldots, w_{C_j}$ be the weights of the $C_j$ highest weight items matched to $j$, in non-increasing order.
   (d) Set $\beta_j$ to $\sum_{k=1}^{C_j} \frac{w_k \left(1 + 1/C_j\right)^{k-1}}{C_j \left( \left(1 + 1/C_j\right)^{C_j} - 1 \right)}$.
2. Else: Leave $i$ unassigned.

Figure 2: Exponential weight algorithm for online matching with free disposal.
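A minimal Python sketch of the algorithm in Figure 2, assuming weights are given as nested dictionaries; recomputing $\beta_j$ from scratch on every arrival follows the figure literally and is chosen for clarity rather than efficiency. All names are ours.

```python
def exponential_weight(items, weights, capacities):
    """Online matching with free disposal (Figure 2). weights[i][j] is
    w_ij and capacities[j] is C_j. Each agent keeps every item ever
    assigned to it; only the C_j heaviest determine its discount beta_j
    (and, at the end, its payoff)."""
    assigned = {j: [] for j in capacities}   # weights of items held by j
    beta = {j: 0.0 for j in capacities}      # discounted value beta_j
    matching = {}                            # item -> agent
    for i in items:
        # Step 1: find the agent maximizing w_ij - beta_j, if positive.
        j_best, margin = None, 0.0
        for j in capacities:
            m = weights[i].get(j, 0.0) - beta[j]
            if m > margin:
                j_best, margin = j, m
        if j_best is None:
            continue                         # leave i unassigned
        matching[i] = j_best
        assigned[j_best].append(weights[i][j_best])
        # Steps (c)-(d): recompute beta_j from the C_j heaviest items,
        # taken in non-increasing order (index k below is 0-based, so
        # (1 + 1/C)^k matches the figure's (1 + 1/C)^{k-1}).
        C = capacities[j_best]
        top = sorted(assigned[j_best], reverse=True)[:C]
        r = 1.0 + 1.0 / C
        beta[j_best] = sum(w * r ** k for k, w in enumerate(top)) \
            / (C * (r ** C - 1.0))
    return matching
```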
In the second model, after the algorithm terminates and agent $j$ has received items $S_j$, it chooses two (not necessarily disjoint) subsets $S_j^{\prime f}$ and $S_j^{\prime g}$; items in $S_j^{\prime f}$ are counted towards the first objective, and those in $S_j^{\prime g}$ are counted towards the second objective.

Theorem 3.1. For any $(\alpha, \beta)$ such that $\alpha + \beta \leq 1 - \frac{1}{e}$, there is an $(\alpha, \beta)$-competitive algorithm for the first model of bi-objective online weighted matching. For any constant $\varepsilon > 0$, there is no such algorithm when $\alpha + \beta > 1 - \frac{1}{e} + \varepsilon$.

To obtain the positive result, with probability $p$ run the exponential weight algorithm (see Figure 2) for the first objective (for all items), and with probability $1-p$ run the exponential weight algorithm for the second objective for all items; this combination is $\left( p\left(1 - \frac{1}{e}\right), (1-p)\left(1 - \frac{1}{e}\right) \right)$-competitive. We defer the proof of this and the matching hardness results to the full version.
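The positive result thus amounts to a one-coin wrapper around the sketch above, reusing the exponential_weight function; again, the names are illustrative.

```python
import random

def bi_objective_matching_first_model(items, w_f, w_g, capacities, p, seed=0):
    """Theorem 3.1's positive result: a single coin flip selects which
    objective's weights drive the exponential weight algorithm on the
    entire stream; in expectation the outcome is
    (p(1 - 1/e), (1 - p)(1 - 1/e))-competitive."""
    rng = random.Random(seed)
    weights = w_f if rng.random() < p else w_g
    return exponential_weight(items, weights, capacities)
```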
Having given matching upper and lower bounds for the first model, we now consider the second model, where if we assign a set $S_j$ of items / edges to an agent $j$, we can select two subsets $S_j^{\prime f}, S_j^{\prime g} \subseteq S_j$ and use them for the first and second objective functions respectively.

Theorem 3.2. There is a $\left( p\left(1 - e^{-1/p}\right), (1-p)\left(1 - e^{-1/(1-p)}\right) \right)$-competitive algorithm for the bi-objective online weighted matching problem in the second model as $\min_j \{C_j\}$ tends to infinity.

Theorem 3.3. The competitive ratio of any algorithm for bi-objective online weighted matching in the second model is upper bounded by the curve in Figure 3.

Figure 3: The blue curve is the competitive ratio of our algorithm in the second model, while the red line and the green curves are the upper bounds on the competitive ratio of any algorithm.

4 Bi-Objective Online Budgeted Allocation

In this section, we consider the bi-objective online allocation problem where one of the objectives is a budgeted allocation problem and the other objective function is weighted matching. Here, each item $i$ has a weight $w_{ij}$ and a bid $b_{ij}$ for agent $j$. Each agent $j$ has a capacity $C_j$ and a budget $B_j$. If an agent is allocated items $S_j$, for the first objective (weighted matching) it chooses a subset $S_j'$ of at most $C_j$ items; its score is $\sum_{i \in S_j'} w_{ij}$. For the second objective, its score is $\min\left\{ \sum_{i \in S_j} b_{ij},\, B_j \right\}$. Note that in the second objective, the agent does not need to choose a subset; it obtains the sum of the bids of all items assigned to it, capped at its budget $B_j$. Clearly, if we set all bids $b_{ij}$ to 1, the goal of the budgeted allocation part will be maximizing the cardinality. Thus, this is a clear generalization of the bi-objective online allocation to maximize weight and cardinality, and the same hardness results hold here.

As is standard, throughout this section we assume that the bid of each agent for each item is vanishingly small compared to the budget of each bidder. Interestingly, again here we provide a $\left( p\left(1 - e^{-1/p}\right), (1-p)\left(1 - e^{-1/(1-p)}\right) \right)$-competitive algorithm, which is almost tight. At the end, as a corollary of our results, we provide a $\left( p\left(1 - e^{-1/p}\right), (1-p)\left(1 - e^{-1/(1-p)}\right) \right)$-competitive algorithm for the case that both objectives are budgeted allocation problems (with separate budgets).

Our algorithm here is roughly the same as for two weight objectives. For each item, with probability $1-p$, we pass it to the Exponential Weight algorithm for matching, and allocate it based on its weight. With the remaining probability $p$, we allocate the item based on its bids and count it towards the budgeted allocation objective. However, the algorithm we use for budgeted allocation is slightly different: we virtually run the Balance algorithm of Mehta et al. [17] for budgeted allocation (Fig. 4), as though we were assigning all items (not just those passed to this algorithm), but with each item's bids scaled down by a factor of $p$. For the $p$ fraction of items to be assigned by the budgeted allocation algorithm, we assign them according to the recommendation of the virtual Balance algorithm.

Theorem 3.2 from the previous section shows that our algorithm is $(1-p)\left(1 - e^{-1/(1-p)}\right)$-competitive against the optimum weighted matching objective. Thus, in the rest of this section, we only need to show that this algorithm is $p\left(1 - e^{-1/p}\right)$-competitive against the optimum budgeted allocation solution. First, using a primal-dual approach, we show that the outcome of the virtual Balance algorithm (that runs on a $p$ fraction of the value of each item) is $p\left(1 - e^{-1/p}\right)$-competitive against the optimum with the actual weights. Then, using the Hoeffding inequality, we show that the expected value of our allocation for the budgeted allocation objective is fairly close to the virtual algorithm's value, i.e., the difference between the competitive ratio of our allocation and that of the virtual allocation is $o(1)$.

Lemma 4.1. When $\max_{i,j} b_{ij} / B_j \to 0$, the total allocation of the virtual Balance algorithm that runs on a $p$ fraction of the value of each bid is at least $p\left(1 - e^{-1/p}\right)$ times that of the optimum with the actual values.

The proof of this lemma is similar to the analysis of Buchbinder et al. [3] for the basic budgeted allocation problem. We provide this proof in the full version.

Lemma 4.2. For any constant $p$, assuming $\max_{i,j} b_{ij} / B_j \to 0$, the budgeted allocation value of our algorithm tends to the value of Balance with a $p$ fraction of each bid, with high probability.

In the virtual Balance algorithm, we allocate a $p$ fraction of each item, while in our real algorithm, we allocate every item according to the virtual Balance algorithm with probability $p$. Since each item's bids are small compared to the budgets, the lemma follows from a straightforward concentration argument. We present the complete proof in the full version. The following lemma is an immediate result of combining Lemma 4.1 and Lemma 4.2.

Virtual Balance algorithm on a $p$ fraction of values. Set $\rho_j$ and $y_j$ to 0 for each agent $j$. Upon arrival of each item $i$:
1. If $i$ has a neighbor $j$ with $b_{ij}(1 - \rho_j) > 0$:
   (a) Let $j$ be the agent that maximizes $b_{ij}(1 - \rho_j)$.
   (b) Assign $i$ to $j$, i.e., set $x_{ij}$ to 1.
   (c) Set $\alpha_i$ to $b_{ij}(1 - \rho_j)$.
   (d) Increase $y_j$ by $\frac{b_{ij}}{B_j}$.
   (e) Increase $\rho_j$ by $\frac{e^{y_j - 1/p}}{1 - e^{-1/p}} \cdot \frac{b_{ij}}{B_j}$.
2. Else: Leave $i$ unassigned.

Figure 4: Maintaining a solution to the primal and dual LPs.

Lemma 4.3. For any constant $p$, assuming $\max_{i,j} b_{ij} / B_j \to 0$, our algorithm is $p\left(1 - e^{-1/p}\right)$-competitive against the optimum budgeted allocation solution.

Lemma 4.3 immediately gives us the following theorem.

Theorem 4.4. For any constant $p$, assuming $\max_{i,j} b_{ij} / B_j \to 0$, there is a $\left( p\left(1 - e^{-1/p}\right), (1-p)\left(1 - e^{-1/(1-p)}\right) \right)$-competitive algorithm for the bi-objective online allocation with two budgeted allocation objectives.
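A minimal Python sketch of this scheme follows. We read Figure 4 as operating on bids that have already been scaled by $p$, so the scaling is made explicit below; all names are ours, and this is an illustration rather than the paper's implementation.

```python
import math
import random

def virtual_balance(items, bids, budgets, p, seed=0):
    """Virtual Balance run of Figure 4. Every item is processed
    virtually with its bid scaled by p; the real algorithm follows the
    virtual recommendation for the ~p fraction of items that the coin
    routes to the budgeted allocation objective. bids[i][j] is b_ij and
    budgets[j] is B_j."""
    rng = random.Random(seed)
    y = {j: 0.0 for j in budgets}      # virtual spent fraction of B_j
    rho = {j: 0.0 for j in budgets}    # dual-like discount on agent j
    spent = {j: 0.0 for j in budgets}  # real (capped) budget consumption
    for i in items:
        j_best, score = None, 0.0
        for j in budgets:
            s = bids[i].get(j, 0.0) * (1.0 - rho[j])
            if s > score:
                j_best, score = j, s
        if j_best is None:
            continue                   # leave i unassigned
        b = p * bids[i][j_best]        # bid scaled down by a factor of p
        y[j_best] += b / budgets[j_best]
        rho[j_best] += (math.exp(y[j_best] - 1.0 / p)
                        / (1.0 - math.exp(-1.0 / p))) * b / budgets[j_best]
        if rng.random() < p:           # real assignment with probability p
            spent[j_best] = min(budgets[j_best],
                                spent[j_best] + bids[i][j_best])
    return spent
```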
Moreover, if we pass each item to the exponential weight algorithm with probability $p$, the expected size of the output matching is at least $p\left(1 - e^{-1/p}\right)$ times that of the optimum [14]. Together with Lemma 4.3, this gives us the following theorem.

Theorem 4.5. For any constant $p$, assuming $\max_{i,j} b_{ij} / B_j \to 0$, there is a $\left( p\left(1 - e^{-1/p}\right), (1-p)\left(1 - e^{-1/(1-p)}\right) \right)$-competitive algorithm for the bi-objective online allocation with a budgeted allocation objective and a weighted matching objective.

5 Conclusions

In this paper, we gave the first algorithms for several bi-objective online allocation problems. Though these are nearly tight, it would be interesting to consider other models for bi-objective online allocation, special cases where one may be able to go beyond our hardness results, and other objectives such as fairness to agents.

References

[1] Gagan Aggarwal, Yang Cai, Aranyak Mehta, and George Pierrakos. Biobjective online bipartite matching. In WINE, pages 218-231, 2014.
[2] Shipra Agrawal, Zizhuo Wang, and Yinyu Ye. A dynamic near-optimal algorithm for online linear programming. Computing Research Repository, 2009.
[3] N. Buchbinder, Kamal Jain, and J. Naor. Online primal-dual algorithms for maximizing ad-auctions revenue. In ESA, pages 253-264. Springer, 2007.
[4] Dragos Florin Ciocan and Vivek F. Farias. Model predictive control for dynamic resource allocation. Math. Oper. Res., 37(3):501-525, 2012.
[5] N. R. Devanur, K. Jain, B. Sivan, and C. A. Wilkens. Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In EC, pages 29-38. ACM, 2011.
[6] Nikhil Devanur and Thomas Hayes. The adwords problem: Online keyword matching with budgeted bidders under random permutations. In EC, pages 71-78, 2009.
[7] Nikhil R. Devanur, Zhiyi Huang, Nitish Korula, Vahab S. Mirrokni, and Qiqi Yan. Whole-page optimization and submodular welfare maximization with online bidders. In EC, pages 305-322, 2013.
[8] Hossein Esfandiari, Nitish Korula, and Vahab Mirrokni. Online allocation with traffic spikes: Mixing stochastic and adversarial inputs. In EC. ACM, 2015.
[9] J. Feldman, N. Korula, V. Mirrokni, S. Muthukrishnan, and M. Pal. Online ad assignment with free disposal. In WINE, 2009.
[10] Jon Feldman, Monika Henzinger, Nitish Korula, Vahab S. Mirrokni, and Cliff Stein. Online stochastic packing applied to display ad allocation. In ESA, pages 182-194. Springer, 2010.
[11] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions. II. Math. Programming Stud., 8:73-87, 1978. Polyhedral combinatorics.
[12] Michael Kapralov, Ian Post, and Jan Vondrák. Online submodular welfare maximization: Greedy is optimal. In SODA, pages 1216-1225, 2013.
[13] Nitish Korula, Vahab Mirrokni, and Morteza Zadimoghaddam. Online submodular welfare maximization: Greedy beats 1/2 in random order. In STOC, pages 889-898. ACM, 2015.
[14] Nitish Korula, Vahab S. Mirrokni, and Morteza Zadimoghaddam. Bicriteria online matching: Maximizing weight and cardinality. In WINE, pages 305-318, 2013.
[15] Lehman, Lehman, and N. Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behaviour, pages 270-296, 2006.
[16] Mohammad Mahdian and Qiqi Yan. Online bipartite matching with random arrivals: A strongly factor revealing LP approach. In STOC, pages 597-606, 2011.
[17] Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani. Adwords and generalized online matching. J. ACM, 54(5):22, 2007.
[18] Vahab S. Mirrokni, Shayan Oveis Gharan, and Morteza Zadimoghaddam. Simultaneous approximations of stochastic and adversarial budgeted allocation problems. In SODA, pages 1690-1701, 2012.
[19] Vahab S. Mirrokni, Michael Schapira, and Jan Vondrák. Tight information-theoretic lower bounds for welfare maximization in combinatorial auctions. In EC, pages 70-77, 2008.
[20] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. I. Math. Programming, 14(3):265-294, 1978.
[21] Christos H. Papadimitriou and Mihalis Yannakakis. On the approximability of trade-offs and optimal access of web sources. In FOCS, pages 86-92. IEEE, 2000.
[22] Bo Tan and R. Srikant. Online advertisement, optimization and stochastic networks. In CDC-ECC, pages 4504-4509. IEEE, 2011.
[23] Erik Vee, Sergei Vassilvitskii, and Jayavel Shanmugasundaram. Optimal online assignment with forecasts. In EC, pages 109-118, 2010.
[24] Jan Vondrák. Optimal approximation for the Submodular Welfare Problem in the value oracle model. In STOC, pages 67-74, 2008.
[25] Kai Wei, Rishabh K. Iyer, Shengjie Wang, Wenruo Bai, and Jeff A. Bilmes. Mixed robust/average submodular partitioning: Fast algorithms, guarantees, and applications. In Advances in Neural Information Processing Systems, pages 2233-2241, 2015.
[26] Mihalis Yannakakis. Approximation of multiobjective optimization problems. In WADS, page 1, 2001.
Learning HMMs with Nonparametric Emissions via Spectral Decompositions of Continuous Matrices

Kirthevasan Kandasamy*, Carnegie Mellon University, Pittsburgh, PA 15213, kandasamy@cs.cmu.edu
Maruan Al-Shedivat*, Carnegie Mellon University, Pittsburgh, PA 15213, alshedivat@cs.cmu.edu
Eric P. Xing, Carnegie Mellon University, Pittsburgh, PA 15213, epxing@cs.cmu.edu

*Joint lead authors.

Abstract

Recently, there has been a surge of interest in using spectral methods for estimating latent variable models. However, it is usually assumed that the distribution of the observations conditioned on the latent variables is either discrete or belongs to a parametric family. In this paper, we study the estimation of an m-state hidden Markov model (HMM) with only smoothness assumptions, such as Hölderian conditions, on the emission densities. By leveraging some recent advances in continuous linear algebra and numerical analysis, we develop a computationally efficient spectral algorithm for learning nonparametric HMMs. Our technique is based on computing an SVD on nonparametric estimates of density functions by viewing them as continuous matrices. We derive sample complexity bounds via concentration results for nonparametric density estimation and novel perturbation theory results for continuous matrices. We implement our method using Chebyshev polynomial approximations. Our method is competitive with other baselines on synthetic and real problems and is also very computationally efficient.

1 Introduction

Hidden Markov models (HMMs) [1] are one of the most popular statistical models for analyzing time series data in various application domains such as speech recognition, medicine, and meteorology. In an HMM, a discrete hidden state undergoes Markovian transitions from one of $m$ possible states to another at each time step. If the hidden state at time $t$ is $h_t$, we observe a random variable $x_t \in \mathcal{X}$ drawn from an emission distribution, $O_j = \mathbb{P}(x_t \mid h_t = j)$. In its most basic form, $\mathcal{X}$ is a discrete set and the $O_j$ are discrete distributions. When dealing with continuous observations, it is conventional to assume that the emissions $O_j$ belong to a parametric class of distributions, such as Gaussian.

Recently, spectral methods for estimating parametric latent variable models have gained immense popularity as a viable alternative to the Expectation Maximisation (EM) procedure [2-4]. At a high level, these methods estimate higher order moments from the data and recover the parameters via a series of matrix operations such as singular value decompositions, matrix multiplications and pseudo-inverses of the moments. In the case of discrete HMMs [2], these moments correspond exactly to the joint probabilities of the observations in the sequence.

Assuming parametric forms for the emission densities is often too restrictive, since real world distributions can be arbitrary. Parametric models may introduce incongruous biases that cannot be reduced even with large datasets. To address this problem, we study nonparametric HMMs, only assuming some mild smoothness conditions on the emission densities. We design a spectral algorithm for this setting. Our methods leverage some recent advances in continuous linear algebra [5, 6] which view two-dimensional functions as continuous analogues of matrices. Chebyshev polynomial approximations enable efficient computation of algebraic operations on these continuous objects [7, 8]. Using these ideas, we extend existing spectral methods for discrete HMMs to the continuous nonparametric setting. Our main contributions are:
1. We derive a spectral learning algorithm for HMMs with nonparametric emission densities. While the algorithm is similar to previous spectral methods for estimating models with a finite number of parameters, many of the ideas used to generalise it to the nonparametric setting are novel and, to the best of our knowledge, have not been used before in the machine learning literature.
2. We establish sample complexity bounds for our method. For this, we derive concentration results for nonparametric density estimation and novel perturbation theory results for the aforementioned continuous matrices. The perturbation results are new and might be of independent interest.
3. We implement our algorithm by approximating the density estimates via Chebyshev polynomials, which enables efficient computation of many of the continuous matrix operations. Our method outperforms natural competitors in this setting on synthetic and real data and is computationally more efficient than most of them. Our Matlab code is available at github.com/alshedivat/nphmm.

While we focus on HMMs in this exposition, we believe that the ideas presented in this paper can be easily generalised to estimating other latent variable models and predictive state representations [9] with nonparametric observations using approaches developed by Anandkumar et al. [3].

Related Work: Parametric HMMs are usually estimated using the maximum likelihood principle via EM techniques [10] such as the Baum-Welch procedure [11]. However, EM is a local search technique, and optimization of the likelihood may be difficult. Hence, recent work on spectral methods has gained appeal. Our work builds on Hsu et al. [2] who showed that discrete HMMs can be learned efficiently, under certain conditions. The key idea is that any HMM can be completely characterised in terms of quantities that depend entirely on the observations, called the observable representation, which can be estimated from data. Siddiqi et al. [4] show that the same algorithm works under slightly more general assumptions. Anandkumar et al. [3] proposed a spectral algorithm for estimating more general latent variable models with parametric observations via a moment matching technique. That said, we are aware of little work on estimating latent variable models, including HMMs, when the observations are nonparametric. A commonly used heuristic is the nonparametric EM [12], which lacks theoretical underpinnings. This should not be surprising, because EM is degenerate for most nonparametric problems as a maximum likelihood procedure [13]. Only recently, De Castro et al. [14] have provided a minimax-type result for the nonparametric setting. In their work, Siddiqi et al. [4] proposed a heuristic based on kernel smoothing to modify the discrete algorithm for continuous observations. Further, their procedure cannot be used to recover the joint or conditional probabilities of a sequence, which would be needed to compute probabilities of events and other inference tasks. Song et al. [15, 16] developed an RKHS-based procedure for estimating the Hilbert space embedding of an HMM. While they provide theoretical guarantees, their bounds are in terms of the RKHS distance between the true and estimated embeddings. This metric depends on the choice of the kernel, and it is not clear how it translates to a suitable distance measure on the observation space such as an L1 or L2 distance.
While their method can be used for prediction and pairwise testing, it cannot recover the joint and conditional densities. On the contrary, our model provides guarantees in terms of the more interpretable total variation distance and is able to recover the joint and conditional probabilities.

2 A Pint-sized Review of Continuous Linear Algebra

We begin with a pint-sized review of continuous linear algebra, which treats functions as continuous analogues of matrices. Appendix A contains a quart-sized review. Both sections are based on [5, 6]. While these objects can be viewed as operators on Hilbert spaces, which have been studied extensively over the years, the above line of work simplified and specialised the ideas to functions.

A matrix $F \in \mathbb{R}^{m \times n}$ is an $m \times n$ array of numbers where $F(i, j)$ denotes the entry in row $i$, column $j$; $m$ or $n$ could be (countably) infinite. A column qmatrix (quasi-matrix) $Q \in \mathbb{R}^{[a,b] \times m}$ is a collection of $m$ functions defined on $[a, b]$, where the row index is continuous and the column index is discrete. Writing $Q = [q_1, \ldots, q_m]$, where $q_j : [a, b] \to \mathbb{R}$ is the $j$th function, $Q(y, j) = q_j(y)$ denotes the value of the $j$th function at $y \in [a, b]$. $Q^\top \in \mathbb{R}^{m \times [a,b]}$ denotes a row qmatrix with $Q^\top(j, y) = Q(y, j)$. A cmatrix (continuous-matrix) $C \in \mathbb{R}^{[a,b] \times [c,d]}$ is a two-dimensional function where both row and column indices are continuous, and $C(y, x)$ is the value of the function at $(y, x) \in [a, b] \times [c, d]$. $C^\top \in \mathbb{R}^{[c,d] \times [a,b]}$ denotes its transpose with $C^\top(x, y) = C(y, x)$. Qmatrices and cmatrices permit all matrix multiplications with suitably defined inner products. For example, if $R \in \mathbb{R}^{[c,d] \times m}$ and $C \in \mathbb{R}^{[a,b] \times [c,d]}$, then $CR = T \in \mathbb{R}^{[a,b] \times m}$ where $T(y, j) = \int_c^d C(y, s) R(s, j)\, ds$.

A cmatrix has a singular value decomposition (SVD). If $C \in \mathbb{R}^{[a,b] \times [c,d]}$, it decomposes as an infinite sum, $C(y, x) = \sum_{j=1}^{\infty} \sigma_j u_j(y) v_j(x)$, that converges in $L^2$. Here $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$ are the singular values of $C$. $\{u_j\}_{j \geq 1}$ and $\{v_j\}_{j \geq 1}$ are functions that form orthonormal bases for $L^2([a,b])$ and $L^2([c,d])$, respectively. We can write the SVD as $C = U \Sigma V^\top$ by writing the singular vectors as infinite qmatrices $U = [u_1, u_2, \ldots]$, $V = [v_1, v_2, \ldots]$, and $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots)$. If only $m < \infty$ of the singular values are nonzero, we say that $C$ is of rank $m$. The SVD of a qmatrix $Q \in \mathbb{R}^{[a,b] \times m}$ is $Q = U \Sigma V^\top$, where $U \in \mathbb{R}^{[a,b] \times m}$ and $V \in \mathbb{R}^{m \times m}$ have orthonormal columns and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m)$ with $\sigma_1 \geq \cdots \geq \sigma_m \geq 0$. The rank of a column qmatrix is the number $m$ of linearly independent columns (i.e. functions) and is equal to the number of nonzero singular values. Finally, as for finite matrices, the pseudo-inverse of the cmatrix $C$ is $C^\dagger = V \Sigma^{-1} U^\top$ with $\Sigma^{-1} = \mathrm{diag}(1/\sigma_1, 1/\sigma_2, \ldots)$. The pseudo-inverse of a qmatrix is defined similarly.
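These operations can be mimicked numerically by sampling functions on a quadrature grid, which may help build intuition for the continuous objects. The sketch below uses a simple trapezoidal rule in NumPy rather than the Chebyshev technology our implementation relies on (Section 4.3); the grid size and example functions are arbitrary choices of ours.

```python
import numpy as np

# Discretise [a, b] into n quadrature nodes with trapezoidal weights.
a, b, n = 0.0, 1.0, 200
x = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5

# A qmatrix Q in R^{[a,b] x 3}: its columns are functions sampled on the grid.
Q = np.stack([np.sin(np.pi * x), np.cos(np.pi * x), x ** 2], axis=1)

# A cmatrix C in R^{[a,b] x [a,b]} sampled on the grid.
C = np.exp(-(x[:, None] - x[None, :]) ** 2)

# The product CQ, with (CQ)(y, j) = \int C(y, s) Q(s, j) ds, becomes an
# ordinary matrix product once the quadrature weights are inserted.
CQ = C @ (w[:, None] * Q)

# A continuous SVD: factor W^{1/2} C W^{1/2} so the singular vectors are
# orthonormal in L^2 rather than in R^n.
s = np.sqrt(w)
U, sig, Vt = np.linalg.svd(s[:, None] * C * s[None, :])
u = U / s[:, None]               # samples of the left singular functions
print(np.sum(w * u[:, 0] ** 2))  # <u_1, u_1>_{L^2} is ~1
```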
3 Nonparametric HMMs and the Observable Representation

Notation: Throughout this manuscript, we use $\mathbb{P}$ to denote probabilities of events while $p$ denotes probability density functions (pdfs). An HMM characterises a probability distribution over a sequence of hidden states $\{h_t\}_{t \geq 0}$ and observations $\{x_t\}_{t \geq 0}$. At a given time step, the HMM can be in one of $m$ hidden states, i.e. $h_t \in [m] = \{1, \ldots, m\}$, and the observation lies in some bounded continuous domain $\mathcal{X}$. Without loss of generality, we take $\mathcal{X} = [0, 1]$ (we discuss the case of higher dimensions in Section 7). The nonparametric HMM is completely characterised by the initial state distribution $\pi \in \mathbb{R}^m$, the state transition matrix $T \in \mathbb{R}^{m \times m}$, and the emission densities $O_j : \mathcal{X} \to \mathbb{R}$, $j \in [m]$. Here $\pi_i = \mathbb{P}(h_1 = i)$ is the probability that the HMM is in state $i$ at the first time step. The element $T(i, j) = \mathbb{P}(h_{t+1} = i \mid h_t = j)$ of $T$ gives the probability that a hidden state transitions from state $j$ to state $i$. The emission function $O_j : \mathcal{X} \to \mathbb{R}_+$ describes the pdf of the observation conditioned on the hidden state $j$, i.e. $O_j(s) = p(x_t = s \mid h_t = j)$. Note that we have $O_j(x) > 0$ for all $x$ and $\int O_j = 1$ for all $j \in [m]$. In this exposition, we denote the emission densities by the qmatrix $O = [O_1, \ldots, O_m] \in \mathbb{R}_+^{[0,1] \times m}$. In addition, let $\widetilde{O}(x) = \mathrm{diag}(O_1(x), \ldots, O_m(x))$ and $A(x) = T \widetilde{O}(x)$. Let $x_{1:t} = \{x_1, \ldots, x_t\}$ be an ordered sequence and let $x_{t:1} = \{x_t, \ldots, x_1\}$ denote its reverse. For brevity, we overload the notation for $A$ for sequences and write $A(x_{t:1}) = A(x_t) A(x_{t-1}) \cdots A(x_1)$. It is well known [2, 17] that the joint probability density of the sequence $x_{1:t}$ can be computed via $p(x_{1:t}) = \mathbf{1}_m^\top A(x_{t:1}) \pi$.

Key structural assumption: Previous work on estimating HMMs with continuous observations typically assumed that the emissions $O_j$ take a parametric form, e.g. Gaussian. Unlike them, we only make mild nonparametric smoothness assumptions on $O_j$. As we will see, to estimate the HMM well in this problem we will need to estimate entire pdfs well. For this reason, the nonparametric setting is significantly more difficult than its parametric counterpart, as the latter requires estimating only a finite number of parameters. When compared to the previous literature, this is the crucial distinction and the main challenge in this work.

Observable Representation: The observable representation is a description of an HMM in terms of quantities that depend on the observations [17]. This representation is useful for two reasons: (i) it depends only on the observations and can be directly estimated from the data; (ii) it can be used to compute joint and conditional probabilities of sequences even without the knowledge of $T$ and $O$, and therefore can be used for inference and prediction. First, we define the joint densities $P_1, P_{21}, P_{321}$:
$$P_1(t) = p(x_1 = t), \qquad P_{21}(s, t) = p(x_2 = s, x_1 = t), \qquad P_{321}(r, s, t) = p(x_3 = r, x_2 = s, x_1 = t),$$
where $x_i$, $i = 1, 2, 3$, denotes the observation at time $i$. Denote $P_{3x1}(r, t) = P_{321}(r, x, t)$ for all $x$. We will find it useful to view both $P_{21}, P_{3x1} \in \mathbb{R}^{[0,1] \times [0,1]}$ as cmatrices. We will also need an additional qmatrix $U \in \mathbb{R}^{[0,1] \times m}$ such that $U^\top O \in \mathbb{R}^{m \times m}$ is invertible. Given one such $U$, the observable representation of an HMM is described by the parameters $b_1, b_\infty \in \mathbb{R}^m$ and $B : [0,1] \to \mathbb{R}^{m \times m}$,
$$b_1 = U^\top P_1, \qquad b_\infty = (P_{21}^\top U)^\dagger P_1, \qquad B(x) = (U^\top P_{3x1})(U^\top P_{21})^\dagger. \qquad (1)$$
As before, for a sequence $x_{t:1} = \{x_t, \ldots, x_1\}$, we define $B(x_{t:1}) = B(x_t) B(x_{t-1}) \cdots B(x_1)$. The following lemma shows that the first $m$ left singular vectors of $P_{21}$ are a natural choice for $U$.

Lemma 1. Let $\pi > 0$, let $T$ and $O$ be of rank $m$, and let $U$ be the qmatrix composed of the first $m$ left singular vectors of $P_{21}$. Then $U^\top O$ is invertible.

To compute the joint and conditional probabilities using the observable representation, we maintain an internal state, $b_t$, which is updated as we see more observations. The internal state at time $t$ is
$$b_t = \frac{B(x_{t-1:1})\, b_1}{b_\infty^\top B(x_{t-1:1})\, b_1}. \qquad (2)$$
This definition of $b_t$ is consistent with $b_1$.
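For intuition, the identity $p(x_{1:t}) = \mathbf{1}_m^\top A(x_{t:1}) \pi$ is a one-line recursion when the true parameters are known; a minimal Python sketch (names ours) follows. The point of the observable representation is that Lemma 2 below computes the same quantity without ever knowing $T$, $O$ or $\pi$.

```python
import numpy as np

def joint_density_true(x_seq, pi, T, emissions):
    """p(x_{1:t}) = 1_m^T A(x_t) ... A(x_1) pi with A(x) = T diag(O(x));
    `emissions` is a list of the m emission pdfs O_j. This uses the true
    HMM parameters, unlike the observable representation."""
    v = np.asarray(pi, dtype=float)
    for x in x_seq:                       # apply A(x_1) first
        Ox = np.array([O(x) for O in emissions])
        v = T @ (Ox * v)                  # A(x) v = T diag(O(x)) v
    return float(v.sum())                 # 1_m^T v
```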
The following lemma establishes the relationship between the observable representation and the internal states on the one hand, and the HMM parameters and probabilities on the other.

Lemma 2 (Properties of the Observable Representation). Let $\mathrm{rank}(T) = \mathrm{rank}(O) = m$ and let $U^\top O$ be invertible. Let $p(x_{1:t})$ denote the joint density of a sequence $x_{1:t}$ and $p(x_{t+1:t+t'} \mid x_{1:t})$ denote the conditional density of $x_{t+1:t+t'}$ given $x_{1:t}$ in a sequence $x_{1:t+t'}$. Then the following are true.
1. $b_1 = U^\top O \pi$.
2. $b_\infty^\top = \mathbf{1}_m^\top (U^\top O)^{-1}$.
3. $B(x) = (U^\top O) A(x) (U^\top O)^{-1}$ for all $x \in [0, 1]$.
4. $b_{t+1} = B(x_t) b_t / (b_\infty^\top B(x_t) b_t)$.
5. $p(x_{1:t}) = b_\infty^\top B(x_{t:1}) b_1$.
6. $p(x_{t+t':t+1} \mid x_{1:t}) = b_\infty^\top B(x_{t+t':t+1}) b_t$.

The last two claims of Lemma 2 show that we can use the observable representation for computing the joint and conditional densities. The proofs of Lemmas 1 and 2 are similar to the discrete case and mimic Lemmas 2, 3 & 4 of Hsu et al. [2].

4 Spectral Learning of HMMs with Nonparametric Emissions

The high level idea of our algorithm, NP-HMM-SPEC, is as follows. First we obtain density estimates for $P_1, P_{21}, P_{321}$, which are then used to recover the observable representation $b_1, b_\infty, B$ by plugging in the expressions in (1). Lemma 2 then gives us a way to estimate the joint and conditional probability densities. For now, we will assume that we have $N$ i.i.d. sequences of triples $\{X^{(j)}\}_{j=1}^N$, where $X^{(j)} = (X_1^{(j)}, X_2^{(j)}, X_3^{(j)})$ are the observations at the first three time steps. We describe learning from longer sequences in Section 4.3.

4.1 Kernel Density Estimation

The first step is the estimation of the joint probabilities, which requires a nonparametric density estimate. While there are several techniques [18], we use kernel density estimation (KDE) since it is easy to analyse and works well in practice. The KDEs for $P_1$, $P_{21}$, and $P_{321}$ take the form:
$$\hat{P}_1(t) = \frac{1}{N} \sum_{j=1}^{N} \frac{1}{h_1} K\!\left( \frac{t - X_1^{(j)}}{h_1} \right), \qquad \hat{P}_{21}(s, t) = \frac{1}{N} \sum_{j=1}^{N} \frac{1}{h_{21}^2} K\!\left( \frac{s - X_2^{(j)}}{h_{21}} \right) K\!\left( \frac{t - X_1^{(j)}}{h_{21}} \right),$$
$$\hat{P}_{321}(r, s, t) = \frac{1}{N} \sum_{j=1}^{N} \frac{1}{h_{321}^3} K\!\left( \frac{r - X_3^{(j)}}{h_{321}} \right) K\!\left( \frac{s - X_2^{(j)}}{h_{321}} \right) K\!\left( \frac{t - X_1^{(j)}}{h_{321}} \right). \qquad (3)$$
Here $K : [0, 1] \to \mathbb{R}$ is a symmetric function called a smoothing kernel and satisfies (at the very least) $\int_0^1 K(s)\, ds = 1$ and $\int_0^1 s K(s)\, ds = 0$. The parameters $h_1, h_{21}, h_{321}$ are the bandwidths, and are typically decreasing with $N$. In practice they are usually chosen via cross-validation.

4.2 The Spectral Algorithm

Algorithm 1: NP-HMM-SPEC
Input: Data $\{X^{(j)} = (X_1^{(j)}, X_2^{(j)}, X_3^{(j)})\}_{j=1}^N$, number of states $m$.
• Obtain estimates $\hat{P}_1, \hat{P}_{21}, \hat{P}_{321}$ of $P_1, P_{21}, P_{321}$ via kernel density estimation (3).
• Compute the cmatrix SVD of $\hat{P}_{21}$. Let $\hat{U} \in \mathbb{R}^{[0,1] \times m}$ be the first $m$ left singular vectors of $\hat{P}_{21}$.
• Compute the parameters of the observable representation (note that $\hat{B}$ is an $\mathbb{R}^{m \times m}$-valued function):
$$\hat{b}_1 = \hat{U}^\top \hat{P}_1, \qquad \hat{b}_\infty = (\hat{P}_{21}^\top \hat{U})^\dagger \hat{P}_1, \qquad \hat{B}(x) = (\hat{U}^\top \hat{P}_{3x1})(\hat{U}^\top \hat{P}_{21})^\dagger.$$

The algorithm, given above in Algorithm 1, follows the roadmap set out at the beginning of this section. While the last two steps are similar to the discrete HMM algorithm of Hsu et al. [2], the SVD, pseudo-inverses and multiplications are with q/c-matrices. Once we have the estimates $\hat{b}_1$, $\hat{b}_\infty$ and $\hat{B}(x)$, the joint and predictive (conditional) densities can be estimated via (see Lemma 2):
$$\hat{p}(x_{1:t}) = \hat{b}_\infty^\top \hat{B}(x_{t:1}) \hat{b}_1, \qquad \hat{p}(x_{t+t':t+1} \mid x_{1:t}) = \hat{b}_\infty^\top \hat{B}(x_{t+t':t+1}) \hat{b}_t. \qquad (4)$$
Here $\hat{b}_t$ is the estimated internal state obtained by plugging $\hat{b}_1$, $\hat{b}_\infty$, $\hat{B}$ into (2). Theoretically, these estimates can be negative, in which case they can be truncated to 0 without affecting the theoretical results in Section 5. However, in our experiments these estimates were never negative.
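A grid-based Python sketch of Algorithm 1 and of the estimates in (4): the continuous matrices are sampled on a uniform grid so that integrals become dx-weighted sums, a single Gaussian-kernel bandwidth replaces the three cross-validated ones, and the uniform-grid quadrature stands in for the Chebyshev machinery of Section 4.3. Everything here (function names, grid size, kernel choice) is an illustrative assumption, not our actual implementation.

```python
import numpy as np

def np_hmm_spec(X, m, h, n=100):
    """Grid-based sketch of NP-HMM-SPEC for i.i.d. triples X of shape
    (N, 3). Returns (b1, binf, B) where B[k] is the m x m matrix B(x)
    at the k-th grid point."""
    t = np.linspace(0.0, 1.0, n); dx = t[1] - t[0]
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian
    # G[c][k, j] = K((t_k - X_c^{(j)}) / h) / h for c = 1, 2, 3.
    G = [K((t[:, None] - X[:, c][None, :]) / h) / h for c in range(3)]
    P1 = G[0].mean(axis=1)                            # \hat P_1(t)
    P21 = G[1] @ G[0].T / X.shape[0]                  # \hat P_21(s, t)
    P321 = np.einsum('rN,sN,tN->rst', G[2], G[1], G[0]) / X.shape[0]
    # Left singular functions of the cmatrix P21, orthonormal in L^2.
    U = np.linalg.svd(P21)[0][:, :m] / np.sqrt(dx)
    b1 = dx * U.T @ P1                                # U^T P_1
    binf = np.linalg.pinv(dx * P21.T @ U) @ P1        # (P_21^T U)^+ P_1
    A = dx * U.T @ P21                                # U^T P_21, m x n
    Apinv = np.linalg.pinv(A)
    B = np.stack([(dx * U.T @ P321[:, k, :]) @ Apinv for k in range(n)])
    return b1, binf, B

def joint_density_hat(seq, b1, binf, B, n=100):
    """Evaluate (4): p_hat(x_{1:t}) = binf^T B(x_t)...B(x_1) b1, snapping
    each observation to its nearest grid point."""
    v = b1.copy()
    for x in seq:                                     # apply B(x_1) first
        v = B[int(round(x * (n - 1)))] @ v
    return float(binf @ v)
```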
4.3 Implementation Details

C/Q-matrix operations using Chebyshev polynomials: While our algorithm and analysis are conceptually well founded, the important practical challenge lies in the efficient computation of the many aforementioned operations on c/q-matrices. Fortunately, some very recent advances in the numerical analysis literature, specifically on computing with Chebyshev polynomials, have rendered the above algorithm practical [6, Ch. 3-4]. Due to space constraints, we provide only a summary. Chebyshev polynomials are a family of orthogonal polynomials on compact intervals, known to be excellent approximators of one-dimensional functions [19, 20]. A recent line of work [5, 8] has extended the Chebyshev technology to two-dimensional functions, enabling the mentioned operations and factorisations such as QR, LU and SVD [6, Sections 4.6-4.8] of continuous matrices to be carried out efficiently. The density estimates $\hat{P}_1, \hat{P}_{21}, \hat{P}_{321}$ are approximated by Chebyshev polynomials to within machine precision. Our implementation makes use of the Chebfun library [7], which provides an efficient implementation of the operations on continuous and quasi matrices.

Computation time: Representing the KDE estimates $\hat{P}_1, \hat{P}_{21}, \hat{P}_{321}$ using Chebfun was roughly linear in $N$ and is the brunt of the computational effort. The bandwidths for the three KDE estimates are chosen via cross-validation, which takes $O(N^2)$ effort. However, in practice the cost was dominated by the Chebyshev polynomial approximation. In our experiments we found that NP-HMM-SPEC runs in linear time in practice and was more efficient than most alternatives.

Training with longer sequences: When training with longer sequences, we can use a sliding window of length 3 across the sequence to create the triples of observations needed for the algorithm. That is, given $N$ samples each of length $\ell^{(j)}$, $j = 1, \ldots, N$, we create an augmented dataset of triples $\{\{(X_t^{(j)}, X_{t+1}^{(j)}, X_{t+2}^{(j)})\}_{t=1}^{\ell^{(j)}-2}\}_{j=1}^N$ and run NP-HMM-SPEC with the augmented data, as sketched below. As with conventional EM procedures, this requires the additional assumption that the initial state is the stationary distribution of the transition matrix $T$.
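A sketch of the sliding-window construction just described (names ours):

```python
import numpy as np

def make_triples(sequences):
    """Slide a length-3 window over each training sequence to build the
    augmented dataset of triples used by NP-HMM-SPEC."""
    triples = [np.stack([s[t:t + 3] for t in range(len(s) - 2)])
               for s in sequences if len(s) >= 3]
    return np.concatenate(triples, axis=0)   # shape (sum_j (l_j - 2), 3)
```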
5 Analysis

We now state our assumptions and main theoretical results. Following [2, 4, 15], we assume that i.i.d. sequences of triples are used for training. With longer sequences, the analysis need only be modified to account for the mixing of the latent state Markov chain, which is inessential for the main intuitions. We begin with the following regularity condition on the HMM.

Assumption 3. $\pi > 0$ element-wise. $T \in \mathbb{R}^{m \times m}$ and $O \in \mathbb{R}^{[0,1] \times m}$ are of rank $m$.

The rank condition on $O$ means that the emission pdfs are linearly independent. If either $T$ or $O$ is rank deficient, then the learner may confuse state outputs, which makes learning difficult. (Siddiqi et al. [4] show that the discrete spectral algorithm works under a slightly more general setting. Similar results hold for the nonparametric case too, but we restrict ourselves to the full rank setting for simplicity.) Next, while we make no parametric assumptions on the emissions, some smoothness conditions are used to make density estimation tractable. We use the Hölder class, $\mathcal{H}_1(\beta, L)$, which is standard in the nonparametrics literature. For $\beta = 1$, this assumption reduces to $L$-Lipschitz continuity.

Assumption 4. All emission densities belong to the Hölder class $\mathcal{H}_1(\beta, L)$. That is, they satisfy, for all $\alpha \leq \lfloor \beta \rfloor$, $j \in [m]$, and $s, t \in [0, 1]$,
$$\left| \frac{d^\alpha O_j(s)}{ds^\alpha} - \frac{d^\alpha O_j(t)}{dt^\alpha} \right| \;\leq\; L\, |s - t|^{\beta - \alpha}.$$
Here $\lfloor \beta \rfloor$ is the largest integer strictly less than $\beta$.

Under the above assumptions, we bound the total variation distance between the true and the estimated densities of a sequence $x_{1:t}$. Let $\kappa(O) = \sigma_1(O) / \sigma_m(O)$ denote the condition number of the observation qmatrix. The following theorem states our main result.

Theorem 5. Pick any sufficiently small $\varepsilon > 0$ and a failure probability $\delta \in (0, 1)$. Let $t \geq 1$. Assume that the HMM satisfies Assumptions 3 and 4 and that the number of samples $N$ satisfies
$$\frac{N}{\log(N)^{1 + \frac{3}{2\beta}}} \;\geq\; C\, m\, \kappa(O)^{2 + \frac{3}{\beta}}\, \frac{t^{2 + \frac{3}{\beta}}}{\sigma_m(P_{21})^{4 + \frac{4}{\beta}}\, \varepsilon^{2 + \frac{3}{\beta}}}\, \log\frac{1}{\delta}.$$
Then, with probability at least $1 - \delta$, the estimated joint density for a $t$-length sequence satisfies $\int |p(x_{1:t}) - \hat{p}(x_{1:t})|\, dx_{1:t} \leq \varepsilon$. Here $C$ is a constant depending on $\beta$ and $L$, and $\hat{p}$ is from (4).

Synopsis: Observe that the sample complexity depends critically on the conditioning of $O$ and $P_{21}$. The closer they are to being singular, the more samples are needed to distinguish different states and learn the HMM. It is instructive to compare the result above with the discrete case result of Hsu et al. [2], whose sample complexity bound is $N \gtrsim \frac{m\, \kappa(O)^2\, t^2}{\sigma_m(P_{21})^4\, \varepsilon^2} \log\frac{1}{\delta}$ (Hsu et al. [2] provide a more refined bound, but we use this form to simplify the comparison). Our bound is different in two regards. First, the exponents are worsened by additional $\beta^{-1}$ terms. This characterizes the difficulty of the problem in the nonparametric setting. While we do not have any lower bounds, given the current understanding of the difficulty of various nonparametric tasks [21-23], we think our bound might be unimprovable. As the smoothness of the densities increases, $\beta \to \infty$, we approach the parametric sample complexity. The second difference is the additional $\log(N)$ term on the left hand side. This is due to the fact that we want the KDE to concentrate around its expectation in $L^2$ over $[0, 1]$, instead of just point-wise. It is not clear to us whether the log can be avoided.

To prove Theorem 5, we first derive some perturbation theory results for c/q-matrices; we need them to bound the deviation of the singular values and vectors when we use $\hat{P}_{21}$ instead of $P_{21}$. Some of these perturbation theory results for continuous linear algebra are new and might be of independent interest. Next, we establish a concentration result for the kernel density estimator.

5.1 Some Perturbation Theory Results for C/Q-matrices

The first result is an analog of Weyl's theorem, which bounds the difference in the singular values in terms of the operator norm of the perturbation. Weyl's theorem has been studied for general operators [24] and cmatrices [6]. We have given one version in Lemma 21 of Appendix B. In addition to this, we will also need to bound the difference in the singular vectors and the pseudo-inverses of the truth and the estimate. To our knowledge, these results are not yet known. To that end, we establish the following results. Here $\sigma_k(A)$ denotes the $k$th singular value of a c/q-matrix $A$.

Lemma 6 (Simplified Wedin's Sine Theorem for Cmatrices). Let $A, \tilde{A}, E \in \mathbb{R}^{[0,1] \times [0,1]}$ where $\tilde{A} = A + E$ and $\mathrm{rank}(A) = m$. Let $U, \tilde{U} \in \mathbb{R}^{[a,b] \times m}$ be the first $m$ left singular vectors of $A$ and $\tilde{A}$, respectively. Then, for all $x \in \mathbb{R}^m$, $\|\tilde{U}^\top U x\|_2 \geq \|x\|_2 \sqrt{1 - 2\|E\|_{L^2}^2 / \sigma_m(A)^2}$.

Lemma 7 (Pseudo-inverse Theorem for Qmatrices). Let $A, \tilde{A}, E \in \mathbb{R}^{[a,b] \times m}$ with $\tilde{A} = A + E$. Then $\sigma_1(\tilde{A}^\dagger - A^\dagger) \leq 3 \max\{\sigma_1(A^\dagger)^2, \sigma_1(\tilde{A}^\dagger)^2\}\, \sigma_1(E)$.

5.2 Concentration Bound for the Kernel Density Estimator

Next, we bound the error for kernel density estimation. To obtain the best rates under Hölderian assumptions on $O$, the kernels used in the KDE need to be of order $\beta$. An order-$\beta$ kernel satisfies
$$\int_0^1 K(s)\, ds = 1, \qquad \int_0^1 s^\alpha K(s)\, ds = 0 \;\text{ for all } \alpha \leq \lfloor \beta \rfloor, \qquad \int_0^1 s^\beta K(s)\, ds \leq 1. \qquad (5)$$
Such kernels can be constructed using Legendre polynomials [18]. Given $N$ i.i.d. samples from a $d$-dimensional density $f$, where $d \in \{1, 2, 3\}$ and $f \in \{P_1, P_{21}, P_{321}\}$, for appropriate choices of the bandwidths $h_1, h_{21}, h_{321}$, the KDE $\hat{f} \in \{\hat{P}_1, \hat{P}_{21}, \hat{P}_{321}\}$ concentrates around $f$. Informally, we show
$$\mathbb{P}\!\left( \|\hat{f} - f\|_{L^2} > \varepsilon \right) \;\lesssim\; \exp\!\left( -\log(N)^{\frac{d}{2\beta + d}}\, N^{\frac{2\beta}{2\beta + d}}\, \varepsilon^2 \right) \qquad (6)$$
for all sufficiently small $\varepsilon$ and $N / \log N \gtrsim \varepsilon^{-(2\beta + d)/\beta}$. Here $\lesssim, \gtrsim$ denote inequalities ignoring constants. See Appendix C for a formal statement.
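A sketch of the Legendre construction mentioned above, following the classical recipe of building an order-$\ell$ kernel from the orthonormal Legendre basis on $[-1, 1]$; shifting to $[0, 1]$ as in (5) is a linear change of variables, and the function names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel(order):
    """K(u) = sum_{l <= order} phi_l(0) phi_l(u) on [-1, 1], where phi_l
    is the orthonormal Legendre basis; this yields a kernel whose moments
    of order 1..order all vanish."""
    def phi(l, u):
        c = np.zeros(l + 1); c[l] = 1.0
        return np.sqrt((2 * l + 1) / 2.0) * legendre.legval(u, c)
    def K(u):
        u = np.asarray(u, dtype=float)
        val = sum(phi(l, 0.0) * phi(l, u) for l in range(order + 1))
        return np.where(np.abs(u) <= 1.0, val, 0.0)
    return K

# Numerical check of the moment conditions for a second-order kernel.
K = legendre_kernel(2)
u = np.linspace(-1.0, 1.0, 20001)
for k in range(3):
    print(k, np.trapz(u ** k * K(u), u))   # ~1 for k = 0, ~0 for k = 1, 2
```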
To obtain the best rates under H?lderian assumptions on O, the kernels used in KDE need to be of order . A order kernel satisfies, Z 1 Z 1 Z 1 K(s)ds = 1, s? K(s)ds = 0, for all ? ? b c, s K(s)ds ? 1. (5) 0 0 0 Such kernels can be constructed using Legendre polynomials [18]. Given N i.i.d samples from a d dimensional density f , where d 2 {1, 2, 3} and f 2 {P1 , P21 , P321 }, for appropriate choices of the bandwidths h1 , h21 , h321 , the KDE f? 2 {Pb1 , Pb21 , Pb321 } concentrates around f . Informally, we show ? ? ? ? 2 d P kf? f kL2 > " . exp log(N ) 2 +d N 2 +d "2 . (6) 4 Hsu et al. [2] provide a more refined bound but we use this form to simplify the comparison. 6 0.25 0.1 0.05 0.39 0.388 0.386 104 3 105 10 10 103 10 2 101 5 0.25 0.1 True MG-HMM NP-HMM-BIN NP-HMM-EM NP-HMM-HSE NP-HMM-SPEC 0.37 Prediction absolute error MG-HMM NP-HMM-BIN NP-HMM-EM NP-HMM-SPEC 0.366 MG-HMM NP-HMM-EM NP-HMM-HSE NP-HMM-SPEC 104 0.362 104 10 3 102 MG-HMM NP-HMM-EM NP-HMM-HSE NP-HMM-SPEC 0.358 0.05 5 10 50 Number of training sequences # 103 100 5 10 50 Number of training sequences # 103 105 Number of training sequences Number of training sequences Number of training sequences 0.5 4 10 Training time, sec 103 4 100 103 0.384 0.025 Predictive L1 error 10 True MG-HMM NP-HMM-BIN NP-HMM-EM NP-HMM-HSE NP-HMM-SPEC Training time, sec MG-HMM NP-HMM-BIN NP-HMM-EM NP-HMM-SPEC Prediction absolute error Predictive L1 error 0.5 100 101 5 10 50 100 Number of training sequences Figure 1: The upper and lower panels correspond to m = 4 m = 8 respectively. All figures are in log-log scale and the x-axis is the number of triples used for training. Left: L1 error between true conditional density p(x6 |x1:5 ), and the estimate for each method. Middle: The absolute error between the true observation and a one-step-ahead prediction. The error of the true model is denoted by a black dashed line. Right: Training time. d for all sufficiently small " and N/ log N & " 2+ . Here ., & denote inequalities ignoring constants. See Appendix C for a formal statement. Note that when the observations are either discrete or parametric, it is possible to estimate the distribution using O(1/"2 ) samples to achieve " error in a suitable metric, say, using the maximum likelihood estimate. However, the nonparametric setting is inherently more difficult and therefore the rate of convergence is slower. This slow convergence is also observed in similar concentration bounds for the KDE [25, 26]. A note on the Proofs: For Lemmas 6, 7 we follow the matrix proof in Stewart and Sun [27] and derive several intermediate results for c/q-matrices in the process. The main challenge is that several properties for matrices, e.g. the CS and Schur decompositions, are not known for c/q-matrices. In addition, dealing with various notions of convergences with these infinite objects can be finicky. The main challenge with the KDE concentration result is that we want an L2 bound ? so usual techniques (such as McDiarmid?s [13, 18]) do not apply. We use a technical lemma from Gin? and Guillou [26] which allows us to bound the L2 error in terms of the VC characteristics of the class of functions induced by an i.i.d sum of the kernel. The proof of theorem 5 just mimics the discrete case analysis of Hsu et al. [2]. While, some care is needed (e.g. kxkL2 ? kxkL1 does not hold for functional norms) the key ideas carry through once we apply Lemmas 21, 6, 7 and (6). A more refined bound on N that is tighter in polylog(N ) terms is possible ? 
see Corollary 25 and equation 13 in the appendix. 6 Experiments We compare NP-HMM-SPEC to the following. MG-HMM: An HMM trained using EM with the emissions modeled as a mixture of Gaussians. We tried 2, 4 and 8 mixtures and report the best result. NP-HMM-BIN: A naive baseline where we bin the space into n intervals and use the discrete spectral algorithm [2] with n states. We tried several values for n and report the best. NP-HMM-EM: The Nonparametric EM heuristic of [12]. NP-HMM-HSE: The Hilbert space embedding method of [15]. Synthetic Datasets: We first performed a series of experiments on synthetic data where the true distribution is known. The goal is to evaluate the estimated models against the true model. We generated triples from two HMMs with m = 4 and m = 8 states and nonparametric emissions. The details of the set up are given in Appendix E. Figure 1 presents the results. First we compare the methods on estimating the one step ahead conditional density p(x6 |x1:5 ). We report the L1 error between the true and estimated models. In Figure 2 we visualise the estimated one step ahead conditional densities. NP-HMM-SPEC outperforms all methods on this metric. Next, we compare the methods on the prediction performance. That is, we sample sequences of length 6 and test how well a learned model can predict x6 conditioned on x1:5 . When comparing on squared error, the best predictor is the mean of the distribution. For all methods we use the mean of pb(x6 |x1:5 ) except 7 0.4 0.2 0 Predictive density 0.6 1 Truth NP-HMM-HKZ-BIN 0.8 0.6 0.4 0.2 0 -1 -0.5 0 0.5 1 1 Truth NP-HMM-EM 0.8 Predictive density 1 Truth MG-HMM Predictive density Predictive density 1 0.8 0.6 0.4 0.2 0 -1 -0.5 X 0 0.5 1 Truth NP-HMM-SPEC 0.8 0.6 0.4 0.2 0 -1 X -0.5 0 0.5 1 X -1 -0.5 0 0.5 1 X Figure 2: True and estimated one step ahead densities p(x4 |x1:3 ) for each model. Here m = 4 and N = 104 . Dataset Internet Traffic Laser Gen Patient Sleep MG-HMM NP-HMM-BIN NP-HMM-HSE NP-HMM-SPEC 0.143 ? 0.001 0.33 ? 0.018 0.330 ? 0.002 0.188 ? 0.004 0.31 ? 0.017 0.38 ? 0.011 0.0282 ? 0.0003 0.19 ? 0.012 0.197 ? 0.001 0.016 ? 0.0002 0.15 ? 0.018 0.225 ? 0.001 Table 1: The mean prediction error and the standard error on the 3 real datasets. for NP-HMM-HSE for which we used the mode since the mean cannot be computed. No method can do better than the true model (shown via the dotted line) in expectation. NP-HMM-SPEC achieves the performance of the true model with large datasets. Finally, we compare the training times of all methods. NP-HMM-SPEC is orders of magnitude faster than NP-HMM-HSE and NP-HMM-EM. Note that the error of MG-HMM?a parametric model?stops decreasing even with large data. This is due to the bias introduced by the parametric assumption. We do not train NP-HMM-EM for longer sequences because it is too slow. A limitation of the NP-HMM-HSE method is that it cannot recover conditional probabilities, so we exclude it from that experiment. We could not include the method of [4] in our comparisons since their code was not available and their method is not straightforward to implement. Further, their method cannot compute joint/predictive probabilities. Real Datasets: We compare all the above methods (except NP-HMM-EM which was too slow) on prediction error on 3 real datasets: internet traffic [28], laser generation [29] and sleep data [30]. The details on these datasets are in Appendix E. For all methods we used the mode of the conditional distribution p(xt+1 |x1:t ) as the prediction as it performed better. 
For NP-HMM-SPEC, NP-HMM-HSE and NP-HMM-BIN, we follow the procedure outlined in Section 4.3 to create triples and train with the triples. In Table 1 we report the mean prediction error and the standard error. NP-HMM-HSE and NP-HMM-SPEC perform better than the other two methods. However, NP-HMM-SPEC was faster to train (and has other attractive properties) when compared to NP-HMM-HSE.

7 Conclusion

We proposed and studied a method for estimating the observable representation of a hidden Markov model whose emission probabilities are smooth nonparametric densities. We derive a bound on the sample complexity of our method. While our algorithm is similar to existing methods for discrete models, many of the ideas that generalise it to the nonparametric setting are new. In comparison to other methods, the proposed approach has some desirable characteristics: we can recover the joint/conditional densities, our theoretical results are in terms of more interpretable metrics, and the method outperforms baselines and is orders of magnitude faster to train.

This exposition focused on one-dimensional observations. The multidimensional case is handled by extending the above ideas and technology to multivariate functions. Our algorithm and the analysis carry through to the d-dimensional setting, mutatis mutandis. The concern, however, is more practical. While we have the technology to perform various c/q-matrix operations for d = 1 using Chebyshev polynomials, this is not yet the case for d > 1. Developing efficient procedures for these operations in high-dimensional settings is a challenge for the numerical analysis community and is beyond the scope of this paper. That said, some recent advances in this direction are promising [8, 31]. While our method has focused on HMMs, the ideas in this paper apply to a much broader class of problems. Recent advances in spectral methods for estimating parametric predictive state representations [32], mixture models [3] and other latent variable models [33] can be generalised to the nonparametric setting using our ideas. Going forward, we wish to focus on such models.

Acknowledgements: The authors would like to thank Alex Townsend, Arthur Gretton, Ahmed Hefny, Yaoliang Yu, and Renato Negrinho for the helpful discussions. This work was supported by NIH R01GM114311 and AFRL/DARPA FA87501220324.

References

[1] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. In Proceedings of the IEEE, 1989.
[2] Daniel J. Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. In COLT, 2009.
[3] Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. A method of moments for mixture models and hidden Markov models. arXiv preprint arXiv:1203.0683, 2012.
[4] Sajid M. Siddiqi, Byron Boots, and Geoffrey J. Gordon. Reduced-rank hidden Markov models. In AISTATS, 2010.
[5] Alex Townsend and Lloyd N. Trefethen. Continuous analogues of matrix factorizations. In Proc. R. Soc. A, 2015.
[6] Alex Townsend. Computing with Functions in Two Dimensions. PhD thesis, University of Oxford, 2014.
[7] Tobin A. Driscoll, Nicholas Hale, and Lloyd N. Trefethen. Chebfun guide. Pafnuty Publications, 2014.
[8] Alex Townsend and Lloyd N. Trefethen. An extension of Chebfun to two dimensions. SIAM J. Scientific Computing, 2013.
[9] Michael L. Littman, Richard S. Sutton, and Satinder P. Singh. Predictive representations of state. In NIPS, volume 14, pages 1555-1561, 2001.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 1977.
[11] Lloyd R. Welch. Hidden Markov models and the Baum-Welch algorithm. IEEE Information Theory Society Newsletter, 2003.
[12] Tatiana Benaglia, Didier Chauveau, and David R. Hunter. An EM-like algorithm for semi- and nonparametric estimation in multivariate mixtures. Journal of Computational and Graphical Statistics, 18(2):505-526, 2009.
[13] Larry Wasserman. All of Nonparametric Statistics. Springer-Verlag NY, 2006.
[14] Yohann De Castro, Élisabeth Gassiat, and Claire Lacour. Minimax adaptive estimation of nonparametric hidden Markov models. arXiv preprint arXiv:1501.04787, 2015.
[15] Le Song, Byron Boots, Sajid M. Siddiqi, Geoffrey J. Gordon, and Alex Smola. Hilbert space embeddings of hidden Markov models. In ICML, 2010.
[16] Le Song, Animashree Anandkumar, Bo Dai, and Bo Xie. Nonparametric estimation of multi-view latent variable models. Pages 640-648, 2014.
[17] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 2000.
[18] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2008.
[19] L. Fox and I. B. Parker. Chebyshev Polynomials in Numerical Analysis. Oxford U.P., 1968.
[20] Lloyd N. Trefethen. Approximation Theory and Approximation Practice. Society for Industrial and Applied Mathematics, 2012.
[21] Lucien Birgé and Pascal Massart. Estimation of integral functionals of a density. Ann. of Stat., 1995.
[22] James Robins, Lingling Li, Eric Tchetgen, and Aad W. van der Vaart. Quadratic semiparametric von Mises calculus. Metrika, 69(2-3):227-247, 2009.
[23] Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabás Póczos, Larry Wasserman, and James Robins. Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, 2015.
[24] Woo Young Lee. Weyl's theorem for operator matrices. Integral Equations and Operator Theory, 1998.
[25] Han Liu, Min Xu, Haijie Gu, Anupam Gupta, John D. Lafferty, and Larry A. Wasserman. Forest density estimation. Journal of Machine Learning Research, 12:907-951, 2011.
[26] Evarist Giné and Armelle Guillou. Rates of strong uniform consistency for multivariate kernel density estimators. In Annales de l'IHP Probabilités et Statistiques, 2002.
[27] G. W. Stewart and Ji-guang Sun. Matrix Perturbation Theory. Academic Press, 1990.
[28] Vern Paxson and Sally Floyd. Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on Networking, 1995.
[29] U. Hübner, N. B. Abraham, and C. O. Weiss. Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH3 laser. Physical Review A, 1989.
[30] Santa Fe Time Series Competition. http://www-psych.stanford.edu/~andreas/Time-Series/SantaFe.html.
[31] B. Hashemi and L. N. Trefethen. Chebfun to three dimensions. In preparation, 2016.
[32] Satinder Singh, Michael R. James, and Matthew R. Rudary. Predictive state representations: A new theory for modeling dynamical systems. In UAI, 2004.
[33] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. JMLR, 2014.
Dimension-Free Iteration Complexity of Finite Sum Optimization Problems

Yossi Arjevani, Weizmann Institute of Science, Rehovot 7610001, Israel. yossi.arjevani@weizmann.ac.il
Ohad Shamir, Weizmann Institute of Science, Rehovot 7610001, Israel. ohad.shamir@weizmann.ac.il

Abstract

Many canonical machine learning problems boil down to a convex optimization problem with a finite sum structure. However, whereas much progress has been made in developing faster algorithms for this setting, the inherent limitations of these problems are not satisfactorily addressed by existing lower bounds. Indeed, current bounds focus on first-order optimization algorithms, and only apply in the often unrealistic regime where the number of iterations is less than $O(d/n)$ (where $d$ is the dimension and $n$ is the number of samples). In this work, we extend the framework of Arjevani et al. [3, 5] to provide new lower bounds, which are dimension-free, and go beyond the assumptions of current bounds, thereby covering standard finite sum optimization methods, e.g., SAG, SAGA, SVRG, SDCA without duality, as well as stochastic coordinate-descent methods, such as SDCA and accelerated proximal SDCA.

1 Introduction

Many machine learning tasks reduce to Finite Sum Minimization (FSM) problems of the form
$$\min_{w\in\mathbb{R}^d} F(w) := \frac{1}{n}\sum_{i=1}^{n} f_i(w), \qquad (1)$$
where the $f_i$ are $L$-smooth and $\mu$-strongly convex. In recent years, a major breakthrough was made when a linear convergence rate was established for this setting (SAG [16] and SDCA [18]), and since then many methods have been developed to achieve better convergence rates. However, whereas a large body of literature is devoted to upper bounds, the optimal convergence rate with respect to the problem parameters is not quite settled. Let us discuss existing lower bounds for this setting, along with their shortcomings, in detail.

One approach to obtain lower bounds for this setting is to consider the average of carefully handcrafted functions defined on $n$ disjoint sets of variables. This approach was taken by Agarwal and Bottou [1], who derived a lower bound for FSM under the first-order oracle model (see Nemirovsky and Yudin [12]). In this model, optimization algorithms are assumed to access a given function by issuing queries to an external first-order oracle procedure. Upon receiving a query point in the problem domain, the oracle reports the corresponding function value and gradient. The construction used by Agarwal and Bottou consisted of $n$ different quadratic functions which are adversarially determined based on the first-order queries being issued during the optimization process. The resulting bound in this case does not apply to stochastic algorithms, rendering it invalid for current state-of-the-art methods. Another instantiation of this approach was made by Lan [10], who considered $n$ disjoint copies of a quadratic function proposed by Nesterov in [13, Section 2.1.2]. This technique is based on the assumption that any iterate generated by the optimization algorithm lies in the span of previously acquired gradients. This assumption is rather permissive and is satisfied by many first-order algorithms, e.g., SAG and SAGA [6]. However, the lower bound stated in the paper faces limitations in a few respects. First, the validity of the derived bound is restricted to $d/n$ iterations. In many datasets, even if $d$ and $n$ are very large, $d/n$ is quite small.
Accordingly, the admissible regime of the lower bound is often not very interesting. Secondly, it is not clear how the proposed construction can be expressed as a Regularized Loss Minimization (RLM) problem with linear predictors (see Section 4). This suggests that methods specialized in dual RLM problems, such as SDCA and accelerated proximal SDCA [19], cannot be addressed by this bound. Thirdly, at least the formal theorem requires assumptions (such as querying in the span of previous gradients, or sampling from a fixed distribution over the individual functions) which are not met by some state-of-the-art methods, such as coordinate-descent methods, SVRG [9] and without-replacement sampling algorithms [15].

Another relevant approach in this setting is to model the functional form of the update rules. This approach was taken by Arjevani et al. [3], where new iterates are assumed to be generated by a recurrent application of some fixed linear transformation. Although this method applies to SDCA and produces a tight lower bound of $\tilde\Omega\big((n + 1/\lambda)\ln(1/\epsilon)\big)$, its scope is rather limited. In recent work, Arjevani and Shamir [5] considerably generalized parts of this framework by introducing the class of first-order oblivious optimization algorithms, whose step sizes are scheduled regardless of the function under consideration, and deriving tight lower bounds for general smooth convex minimization problems (note that obliviousness rules out, e.g., quasi-Newton methods, where gradients obtained at each iteration are multiplied by matrices which strictly depend on the function at hand; see Definition 2 below).

In this work, building upon the framework of oblivious algorithms, we take a somewhat more abstract point of view which allows us to easily incorporate coordinate-descent methods, as well as stochastic algorithms. Our framework subsumes the vast majority of optimization methods for machine learning problems; in particular, it applies to SDCA, accelerated proximal SDCA, SDCA without duality [17], SAG, SAGA, SVRG and acceleration schemes [7, 11], as well as to a large number of methods for smooth convex optimization (i.e., FSM with $n = 1$), e.g., (stochastic) gradient descent (GD), accelerated gradient descent (AGD, [13]), the heavy-ball method (HB, [14]) and stochastic coordinate descent. Under this structural assumption, we derive lower bounds for FSM (1), according to which the iteration complexity, i.e., the number of iterations required to obtain an $\epsilon$-optimal solution in terms of function value, is at least^1
$$\tilde\Omega\Big(n + \sqrt{n(\kappa - 1)}\,\ln(1/\epsilon)\Big), \qquad (2)$$
where $\kappa$ denotes the condition number of $F(w)$ (that is, the smoothness parameter over the strong convexity parameter). To the best of our knowledge, this is the first tight lower bound to address all the algorithms mentioned above. Moreover, our bound is dimension-free and thus applies to settings in machine learning which are not covered in the current literature (e.g., when $n$ is $\Omega(d)$). We also derive a dimension-free, nearly-optimal lower bound for smooth convex optimization of
$$\tilde\Omega\Big(\big(L(\alpha - 2)/\epsilon\big)^{1/\alpha}\Big), \quad \text{for any } \alpha \in (2, 4),$$
which holds for any oblivious stochastic first-order algorithm. It should be noted that our lower bounds remain valid under any source of randomness which may be introduced into the optimization process (by the oracle or by the optimization algorithm). In particular, our bounds hold in cases where the variance of the iterates produced by the algorithm converges to zero, a highly desirable property of optimization algorithms in this setting.
Two implications can be readily derived from this lower bound. First, obliviousness forms a real barrier for optimization algorithms: whereas non-oblivious algorithms may achieve a super-linear convergence rate at later stages of the optimization process (e.g., quasi-Newton methods), or practically zero error after $\Theta(d)$ iterations (e.g., the center-of-gravity method, MCG), oblivious algorithms are bound to linear convergence indefinitely, as demonstrated by Figure 1. We believe this indicates that major progress can be made in solving machine learning problems by employing non-oblivious methods in settings where $d \ll n$. It should be further noted that another major advantage of non-oblivious algorithms is their ability to obtain optimal convergence rates without an explicit specification of the problem parameters (e.g., [5, Section 4.1]).

^1 Following standard conventions, here tilde notation hides logarithmic factors in the parameters of a given class of optimization problems, e.g., the smoothness parameter and the number of components.

Figure 1: Comparison of first-order methods (GD, AGD, HB, L-BFGS and the lower bound; error versus number of iterations, log scale) based on the function used by Nesterov in [13, Section 2.1.2] over $\mathbb{R}^{500}$. Whereas L-BFGS (with a memory size of 100) achieves a super-linear convergence rate after $\Theta(d)$ iterations, the convergence rate of GD, AGD and HB remains linear, as predicted by our bound.

Secondly, many practitioners have noticed that oftentimes sampling the individual functions without replacement at each iteration performs better than sampling with replacement (e.g., [18, 15]; see also [8, 20]). The fact that our lower bound holds regardless of how the individual functions are sampled, and is attained using with-replacement sampling (e.g., accelerated proximal SDCA), implies that, in terms of iteration complexity, one should expect to gain no more than log factors in the problem parameters when using one method over the other (it is noteworthy that when comparing with- and without-replacement sampling, apart from iteration complexity, other computational resources, such as limited communication in distributed settings [4], may significantly affect the overall runtime).

2 Framework

2.1 Motivation

Due to difficulties which arise when studying the complexity of general optimization problems under discrete computational models, it is common to analyze the computational hardness of optimization algorithms by modeling the way a given algorithm interacts with the problem instances (without limiting its computational resources). In the seminal work of Nemirovsky and Yudin [12], it is shown that algorithms which access the function at hand exclusively by querying a first-order oracle require at least
$$\begin{cases} \Omega\big(\min\{d,\ \sqrt{\kappa}\,\ln(1/\epsilon)\}\big), & \mu > 0,\\[2pt] \Omega\big(\min\{d\ln(1/\epsilon),\ \sqrt{L/\epsilon}\}\big), & \mu = 0, \end{cases} \qquad (3)$$
oracle calls to obtain an $\epsilon$-optimal solution, where $L$ and $\mu$ are the smoothness and strong convexity parameters, respectively (note that, here and throughout this section, we refer to FSM problems with $n = 1$). This lower bound is tight, and its dimension-free part is attained by Nesterov's well-known accelerated gradient descent, and by MCG otherwise. The fact that this approach is based on information considerations alone is very appealing and renders it valid for any first-order algorithm.
However, discarding the resources needed for executing a given algorithm, in particular the per-iteration cost (in time and space), the complexity boundaries drawn by this approach are too crude from a computational point of view. Indeed, the per-iteration cost of MCG, the only method known with oracle complexity of $O(d\ln(1/\epsilon))$, is excessively high, rendering it prohibitive for high-dimensional problems. We are thus led to the question: how well can a given optimization algorithm perform, assuming that its per-iteration cost is constrained?

Arjevani et al. [3, 5] adopted a more structural approach, where instead of modeling how information regarding the function at hand is being collected, one models the update rules according to which iterates are being generated. Concretely, they proposed the framework of p-CLI optimization algorithms where, roughly speaking, new iterates are assumed to form linear combinations of the previous $p$ iterates and gradients, and the coefficients of these linear combinations are assumed to be either stationary (i.e., remain fixed throughout the optimization process) or oblivious. Based on this structural assumption, they showed that the iteration complexity of minimizing smooth and strongly convex functions is $\tilde\Omega(\sqrt{\kappa}\,\ln(1/\epsilon))$. The fact that this lower bound is stronger than (3), in the sense that it does not depend on the dimension, confirms that controlling the functional form of the update rules allows one to derive tighter lower bounds. The framework of p-CLIs forms the nucleus of our formulation below.

2.2 Definitions

When considering lower bounds, one must be very precise as to the scope of optimization algorithms to which they apply. Below, we give formal definitions for oblivious stochastic CLI optimization algorithms and their iteration complexity (which serves as a crude proxy for their computational complexity).

Definition 1 (Class of Optimization Problems). A class of optimization problems is an ordered triple $(\mathcal{F}, \mathcal{I}, O_f)$, where $\mathcal{F}$ is a family of functions defined over some domain designated by $\mathrm{dom}\,\mathcal{F}$, $\mathcal{I}$ is the side-information given prior to the optimization process, and $O_f$ is a suitable oracle which, upon receiving $x \in \mathrm{dom}\,\mathcal{F}$ and $\theta$ in the parameter set $\Theta$, returns $O_f(x, \theta) \in \mathrm{dom}(\mathcal{F})$ for a given $f \in \mathcal{F}$ (we shall omit the subscript in $O_f$ when $f$ is clear from the context).

For example, in FSM, $\mathcal{F}$ contains functions as defined in (1), and the side-information contains the smoothness parameter $L$, the strong convexity parameter $\mu$ and the number of components $n$ (although it carries a crucial effect on the iteration complexity, e.g., [5], in this work we shall ignore the side-information and assume that all the parameters of the class are given). We shall assume that both first-order and coordinate-descent oracles (see (10) and (11) below) are allowed to be used during the optimization process. Formally, this is done by introducing an additional parameter which indicates which oracle is being addressed. This added degree of freedom does not violate our lower bounds.

We now turn to rigorously define CLI optimization algorithms. Note that, compared with the definition of first-order p-CLIs provided in [5], here, in order to handle coordinate-descent and first-order oracles in a unified manner, we base our formulation on general oracle procedures.

Definition 2 (CLI). An optimization algorithm is called a Canonical Linear Iterative (CLI) optimization algorithm over a class of optimization problems $(\mathcal{F}, \mathcal{I}, O_f)$ if, given an instance $f \in \mathcal{F}$ and initialization points $\{w_i^{(0)}\}_{i\in\mathcal{J}} \subseteq$
$\mathrm{dom}(\mathcal{F})$, where $\mathcal{J}$ is some index set, it operates by iteratively generating points such that, for any $i \in \mathcal{J}$,
$$w_i^{(k+1)} \in \sum_{j\in\mathcal{J}} O_f\big(w_j^{(k)};\ \theta_{ij}^{(k)}\big), \qquad k = 0, 1, \dots \qquad (4)$$
holds, where the $\theta_{ij}^{(k)} \in \Theta$ are parameters chosen, stochastically or deterministically, by the algorithm, possibly depending on the side-information. If the parameters do not depend on previously acquired oracle answers, we say that the given algorithm is oblivious. Lastly, algorithms with $|\mathcal{J}| \le p$, for some $p \in \mathbb{N}$, are denoted p-CLI.

Note that assigning different weights to different terms in (4) can be done through $\theta_{ij}^{(k)} \in \Theta$ (e.g., oracle (10) below). This allows a succinct definition of obliviousness. Lastly, we define iteration complexity.

Definition 3 (Iteration Complexity). The iteration complexity of a given CLI w.r.t. a given problem class $(\mathcal{F}, \mathcal{I}, O_f)$ is defined to be the minimal number of iterations $K$ such that
$$\mathbb{E}\big[f(w_1^{(k)})\big] - \min_{w\in\mathrm{dom}\,\mathcal{F}} f(w) < \epsilon, \qquad \forall f \in \mathcal{F},\ k \ge K,$$
where the expectation is taken over all the randomness introduced into the optimization process (choosing $w_1^{(k)}$ merely serves as a convention and is not necessary for our bounds to hold).

2.3 Proof Technique: Deriving Lower Bounds via Approximation Theory

Consider the following parametrized class of $L$-smooth and $\mu$-strongly convex optimization problems:
$$\min_{w\in\mathbb{R}} f_\eta(w) := \frac{\eta w^2}{2} - w, \qquad \eta \in [\mu, L]. \qquad (5)$$
Clearly, the minimizer of $f_\eta$ is $w^*(\eta) := 1/\eta$, with norm bounded by $1/\mu$. For simplicity, we will consider a special case, namely, vanilla gradient descent (GD) with step size $1/L$, which produces new iterates as follows:
$$w^{(k+1)}(\eta) = w^{(k)}(\eta) - \frac{1}{L} f'_\eta\big(w^{(k)}(\eta)\big) = \Big(1 - \frac{\eta}{L}\Big) w^{(k)}(\eta) + \frac{1}{L}.$$
Setting the initialization point to be $w^{(0)}(\eta) = 0$, we derive an explicit expression for $w^{(k)}(\eta)$:
$$w^{(k)}(\eta) = \frac{1}{L}\sum_{i=0}^{k-1} \binom{k}{i+1} (-1)^i (\eta/L)^i. \qquad (6)$$

Figure 2: The first four iterates of GD and AGD, which form polynomials in $\eta$, the parameter of problem (5), compared to $1/\eta$ over $[1, 4]$.

It turns out that each $w^{(k)}(\eta)$ forms a univariate polynomial in $\eta$ whose degree is at most $k$. Furthermore, since $f_\eta(w)$ is $L$-smooth and $\mu$-strongly convex for any $\eta \in [\mu, L]$, standard convergence analysis for GD (e.g., [13], Theorem 2.1.14) guarantees that $|w^{(k)}(\eta) - w^*(\eta)| \le \big(1 - \frac{2}{1+\kappa}\big)^{k/2} |w^*(\eta)|$, where $\kappa$ denotes the condition number. Substituting Equation (6) for $w^{(k)}(\eta)$ yields
$$\max_{\eta\in[\mu,L]} \left| \frac{1}{L}\sum_{i=0}^{k-1} \binom{k}{i+1} (-1)^i (\eta/L)^i \;-\; 1/\eta \right| \;\le\; \frac{1}{\mu}\Big(1 - \frac{2}{1+\kappa}\Big)^{k/2}.$$
Thus, we see that the faster the convergence rate of a given optimization algorithm, the better the induced sequence of polynomials $(w^{(k)}(\eta))_{k\ge 0}$ approximates $1/\eta$ w.r.t. the maximum norm $\|\cdot\|_{L^\infty([\mu,L])}$ over $[\mu, L]$. In Figure 2, we compare the first four polynomials induced by GD and AGD. Not surprisingly, the AGD polynomials approximate $1/\eta$ better than those of GD.

Now, one may ask: assuming that the iterates of a given optimization algorithm $\mathcal{A}$ for (5) can be expressed as polynomials $s_k(\eta)$ whose degree does not exceed the iteration number, just how fast can these iterates converge to the minimizer? Since the convergence rate is bounded from below by $\|s_k(\eta) - 1/\eta\|_{L^\infty([\mu,L])}$, we may address the following question instead:
$$\min_{s(\eta)\in\mathcal{P}_k} \|s(\eta) - 1/\eta\|_{L^\infty([\mu,L])}, \qquad (7)$$
where $\mathcal{P}_k$ denotes the set of univariate polynomials whose degree does not exceed $k$. Problem (7) and other related settings are main topics of study in approximation theory.
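As a small numerical illustration of this reduction (our own sketch, not from the paper), the closed form (6) can be evaluated directly, and its sup-norm distance to $1/\eta$ shrinks geometrically, exactly as the GD analysis predicts:

```python
import numpy as np
from math import comb

def gd_iterate_poly(k, eta, L):
    """Evaluate the GD iterate w^(k)(eta) from Eq. (6): a degree-(k-1) polynomial in eta."""
    return sum(comb(k, i + 1) * (-1) ** i * (eta / L) ** i for i in range(k)) / L

mu, L = 1.0, 4.0                      # condition number kappa = 4
eta = np.linspace(mu, L, 1000)        # grid over the parameter interval
for k in [1, 2, 4, 8, 16, 32]:
    w_k = gd_iterate_poly(k, eta, L)
    gap = np.max(np.abs(w_k - 1.0 / eta))   # sup-norm distance to w*(eta) = 1/eta
    print(f"k={k:2d}  max |w_k - 1/eta| = {gap:.3e}")
# The printed gaps decay geometrically, mirroring the L-infinity bound above.
```

This is precisely the quantity that the approximation-theoretic argument below bounds from below for any polynomial of matching degree.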
Accordingly, our technique for proving lower bounds makes extensive use of tools borrowed from this area. Specifically, in a paper from 1899 [21], Chebyshev showed that
$$\min_{s(\eta)\in\mathcal{P}_k} \left\| s(\eta) - \frac{1}{\eta - c} \right\|_{L^\infty([-1,1])} = \frac{\big(c - \sqrt{c^2 - 1}\big)^k}{\sqrt{c^2 - 1}}, \qquad c > 1, \qquad (8)$$
by which we derive the following theorem (see Appendix A.1 for a detailed proof).

Theorem 1. The number of iterations required by $\mathcal{A}$ to get an $\epsilon$-optimal solution is $\tilde\Omega(\sqrt{\kappa}\,\ln(1/\epsilon))$.

In the following sections, we apply oblivious CLIs to various parameterized optimization problems so that the resulting iterates are polynomials in the problem parameters, and we then apply arguments similar to the above. A similar reduction, from optimization problems to approximation problems, was used before in a few contexts to analyze the iteration complexity of deterministic CLIs (e.g., [5, Section 3]; see also the convergence analysis of Conjugate Gradient [14]). But what if we allow randomized algorithms? Should we expect the same iteration complexity? To answer this, we use Yao's minimax principle, according to which the performance of a given stochastic optimization algorithm on its worst-case input is bounded from below by the performance of the best deterministic algorithm w.r.t. distributions over the input space. Thus, following a similar reduction, one can show that the convergence rate of stochastic algorithms is bounded from below by
$$\frac{1}{L-\mu}\ \min_{s(\eta)\in\mathcal{P}_k} \int_\mu^L \big|s(\eta) - 1/\eta\big|\, d\eta. \qquad (9)$$
That is, a lower bound for the stochastic case can be attained by considering an approximation problem w.r.t. the weighted $L^1$-norm with the uniform distribution over $[\mu, L]$. Other approximation problems considered in this work involve the $L^2$-norm and different distributions. We provide a schematic description of our proof technique in Scheme 2.1.

Scheme 2.1: From optimization problems to approximation problems.
Given: a class of functions $\mathcal{F}$, a suitable oracle $O$, and a sequence of sets of functions $S_k$ over some parameter set $H$.
Choose: a subset of functions $\{f_\eta \in \mathcal{F} \mid \eta \in H\}$, such that $w^{(k)}(\eta) \in S_k$.
Compute: the minimizer $w^*(\eta)$ for any $f_\eta$.
Bound from below: the best approximation of $w^*(\eta)$ w.r.t. $S_k$ and a norm $\|\cdot\|$, i.e., $\min\{\|s(\eta) - w^*(\eta)\| \mid s(\eta) \in S_k\}$.

3 Lower Bound for Finite Sum Minimization Methods

Having described our analytic approach, we now turn to present some concrete applications, starting with iteration complexity lower bounds in the context of FSM problems (1). In what follows, we derive a lower bound on the iteration complexity of oblivious (possibly stochastic) CLI algorithms equipped with first-order and coordinate-descent oracles for FSM. Strictly speaking, we focus on optimization algorithms equipped with both a generalized first-order oracle,
$$O(w; A, B, c, j) = A\nabla f_j(w) + Bw + c, \qquad A, B \in \mathbb{R}^{d\times d},\ c \in \mathbb{R}^d,\ j \in [n], \qquad (10)$$
and a steepest coordinate-descent oracle,
$$O(w; i, j) = w + t^* e_i, \qquad t^* \in \operatorname*{argmin}_{t\in\mathbb{R}} f_j(w_1, \dots, w_{i-1}, w_i + t, w_{i+1}, \dots, w_d), \qquad j \in [n], \qquad (11)$$
where $e_i$ denotes the $i$-th unit vector. We remark that coordinate-descent steps w.r.t. partial gradients can be implemented using (10) by setting $A$ to be some principal minor of the unit matrix. It should be further noted that our results below hold for scenarios where the optimization algorithm is free to call a different oracle at different iterations.

First, we sketch the proof of the lower bound for deterministic oblivious CLIs. Following Scheme 2.1, we restrict our attention to a parameterized subset of problems. We assume $d > 1$ (clearly, in order to derive a lower bound for coordinate-descent algorithms we must assume $d > 1$; if only a first-order oracle is allowed, the same lower bound as in Theorem 2 can be derived for $d = 1$) and denote by
We assume2 d > 1 and denote by 2 Clearly, in order to derive a lower bound for coordinate-descent algorithms, we must assume d > 1. If only a first-order oracle is allowed, then the same lower bound as in Theorem 2 can be derived for d = 1. 6 HFSM the set of all (?1 , . . . , ?n ) ? Rn such that all the entries equal ?(L ? ?)/2, except for some j ? [n], for which ?j ? [?(L ? ?)/2, (L ? ?)/2]. Now, given ? := (?1 , . . . , ?n ) ? HFSM we define  n  1X 1 > F? (w) := w Q?i w ? q> w , where (12) n i=1 2 ? R? ? ? ? L+? ? ?i 2 2 L+? ? R? ? ? ? ? ? ? i ? ? ? 2 ? 2 ? ? ? ? Q?i := ? ? , q := ? 0 ? . ? . ? ? ? .. ? . ? ? ? . . ? 0 It is easy to verify that the minimizers of (12) are ? ?> R? R? , ?   , 0, . . . , 0? . w? (?) = ? ?  Pn Pn L+? 1 1 ? ? 2 L+? + 2 + i=1 i i=1 i 2 n 2 n (13) We would like to show that the coordinates of the iterates of deterministic oblivious CLIs, which minimize F? using first-order and coordinate-descent oracles, form multivariate polynomials in ? of total degrees (the maximal sum of powers over all the terms) which does not exceed the iteration (k) number. Indeed, if the coordinates of wi (?) are multivariate polynomial in ? of total degree at most k, then the coordinates of the vectors returned by both oracles (k) (k) First-order oracle: O(wj ; A, B, c, j) = A(Q?j wi Coordinate-descent oracle: (k) O(wj ; i, j) (k) ? q) + Bwi + c, (14)  (k) = I ? (1/(Q?j )ii )ei (Q?j )i,? wi ? qi /(Q?j )ii ei , are multivariate polynomials of total degree of at most k + 1, as all the parameters (A, B, C, i and j) do not depend on ? (due to obliviousness) and the rest of the terms (Q?j , q, I, 1/(Q?j )ii , (Q?j )i,? , ei and qi ) are either linear in ?j or constants. Now, since the next iterates are generated simply by summing up all the oracle answers, they also form multivariate polynomials of total degree of at most (k) k + 1. Thus, denoting the first coordinate of w1 (?) by s(?) and using Inequality (8), we get the following bound R? (k) ?  max kw1 (?) ? w (?)k ? s(?) ? ?  (15) P n ??HFSM 1 + ? 2 L+? i i=1 2 n L? ([?,L]) ?q ? ?(1) ? q ??1 n +1?1 ??1 n +1+1 ?k/n ? , (16) where ?(1) designates a constant which does not depend on k (but may depend on the problem parameters). Lastly, this implies that for any deterministic oblivious CLI and any iteration number, there exists some ? ? HFSM such that the convergence rate of the algorithm, when applied on F? , is bounded from below by Inequality (16). We note that, as opposed to other related lower bounds, e.g., [10], our proof is non-constructive. As discussed in subsection 2.3, this type of analysis can be extended to stochastic algorithms by considering (15) w.r.t. other norms such as weighted L1 -norm. We now arrive at the following theorem whose proof, including the corresponding logarithmic factors and constants, can be found in Appendix A.2. Theorem 2. The iteration complexity of oblivious (possibly stochastic) CLIs for FSM (1) equipped with first-order (10) and coordinate-descent oracles (11), is bounded from below by p ? + n(? ? 1) ln(1/)). ?(n The lower bound stated in Theorem 2 is tight and is attained by, e.g., SAG combined with an acceleration scheme (e.g., [11]). 
Moreover, as mentioned earlier, our lower bound does not depend on the problem dimension (or, equivalently, holds for any number of iterations, regardless of $d$ and $n$), and covers coordinate-descent methods with a stochastic or deterministic coordinate schedule (in the special case $n = 1$, this gives a lower bound for minimizing smooth and strongly convex functions by performing steepest coordinate-descent steps). Also, our bound implies that using mini-batches for tackling FSM does not reduce the overall iteration complexity. Lastly, it is noteworthy that the $n$ term in the lower bound above holds for any algorithm accompanied by an incremental oracle, which grants access to at most one individual function at a time.

We also derive a nearly-optimal lower bound for smooth non-strongly convex functions in the more restricted setting of $n = 1$ and a first-order oracle. The parameterized subset of functions we use (see Scheme 2.1) is $g_\eta(x) := \frac{\eta}{2}\|x\|^2 - R\eta\, e_1^\top x$, $\eta \in (0, L]$. The corresponding minimizer (as a function of $\eta$) is $x^*(\eta) = R e_1$, and in this case we seek to approximate it w.r.t. the $L^2$-norm using degree-$k$ univariate polynomials whose constant term vanishes. The resulting bound is dimension-free and improves upon other bounds for this setting (e.g., [5]) in that it applies to deterministic as well as stochastic algorithms (see Appendix A.3 for the proof).

Theorem 3. The iteration complexity of any oblivious (possibly stochastic) CLI for $L$-smooth convex functions, equipped with a first-order oracle, is bounded from below by
$$\tilde\Omega\Big( \big( L(\alpha - 2)/\epsilon \big)^{1/\alpha} \Big), \qquad \alpha \in (2, 4).$$

4 Lower Bound for Dual Regularized Loss Minimization with Linear Predictors

The form of the functions (12) discussed in the previous section does not readily adapt to general RLM problems with linear predictors, i.e.,
$$\min_{w\in\mathbb{R}^d} P(w) := \frac{1}{n}\sum_{i=1}^n \phi_i(\langle x_i, w\rangle) + \frac{\lambda}{2}\|w\|^2, \qquad (17)$$
where the loss functions $\phi_i$ are $L$-smooth and convex, the samples $x_1, \dots, x_n$ are $d$-dimensional vectors in $\mathbb{R}^d$, and $\lambda$ is some positive constant. Thus, dual methods which exploit the added structure of this setting through the dual problem [18],
$$\min_{\alpha\in\mathbb{R}^n} D(\alpha) = \frac{1}{n}\sum_{i=1}^n \phi_i^*(-\alpha_i) + \frac{\lambda}{2}\left\| \frac{1}{\lambda n}\sum_{i=1}^n x_i \alpha_i \right\|^2, \qquad (18)$$
such as SDCA and accelerated proximal SDCA, are not covered by Theorem 2. Accordingly, in this section we address the iteration complexity of oblivious (possibly stochastic) CLI algorithms equipped with dual RLM oracles:
$$O(\alpha; t, j) = \alpha + t\,\nabla_j D(\alpha)\, e_j, \qquad t \in \mathbb{R},\ j \in [n], \qquad (19)$$
$$O(\alpha; j) = \alpha + t^* e_j, \qquad t^* \in \operatorname*{argmin}_{t\in\mathbb{R}} D(\alpha_1, \dots, \alpha_{j-1}, \alpha_j + t, \alpha_{j+1}, \dots, \alpha_n), \qquad j \in [n].$$
Following Scheme 2.1, we first describe the relevant parametrized subset of RLM problems. For the sake of simplicity, we assume that $n$ is even (the proof for odd $n$ holds mutatis mutandis). We denote by $H_{\mathrm{RLM}}$ the set of all $(\gamma_1, \dots, \gamma_{n/2}) \in \mathbb{R}^{n/2}$ such that all entries are $0$, except for some $j \in [n/2]$, for which $\gamma_j \in [-\pi/2,\ \pi/2]$. Now, given $\gamma \in H_{\mathrm{RLM}}$, we set $P_\gamma$ (defined in (17)) as follows:
$$\phi_i(w) = \frac{1}{2}(w + 1)^2, \qquad x_{\gamma,i} = \begin{cases} \cos(\gamma_{(i+1)/2})\, e_i + \sin(\gamma_{(i+1)/2})\, e_{i+1}, & i \text{ odd},\\ e_i, & \text{otherwise}. \end{cases}$$
We state below the corresponding lower bound, whose proof, including logarithmic factors and constants, can be found in Appendix A.4.

Theorem 4. The iteration complexity of oblivious (possibly stochastic) CLIs for RLM (17), equipped with the dual RLM oracles (19), is bounded from below by
$$\tilde\Omega\Big(n + \sqrt{nL/\lambda}\,\ln(1/\epsilon)\Big).$$

This bound is tight w.r.t. the class of oblivious CLIs and is attained by accelerated proximal SDCA. As mentioned earlier, a tighter lower bound of $\tilde\Omega\big((n + 1/\lambda)\ln(1/\epsilon)\big)$ is known for SDCA [3], suggesting that a tighter bound might hold for the more restricted set of stationary CLIs (for which the oracle parameters remain fixed throughout the optimization process).
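As a quick numerical sanity check of the hard construction of Section 3 (our own sketch, not part of the paper), one can assemble $F_\gamma$ explicitly and verify the closed-form minimizer (13) against a direct linear solve:

```python
import numpy as np

def build_instance(gammas, L, mu, R, d):
    """Assemble the Q_{gamma_i} blocks and q from Eq. (12); return the mean Q and q.
    With |gamma_i| <= (L - mu)/2, each Q_{gamma_i} has eigenvalues in [mu, L]."""
    n = len(gammas)
    q = np.zeros(d); q[0] = q[1] = R * mu
    Q_mean = np.zeros((d, d))
    for g in gammas:
        Q = mu * np.eye(d)
        Q[0, 0] = Q[1, 1] = (L + mu) / 2
        Q[0, 1] = Q[1, 0] = g
        Q_mean += Q / n
    return Q_mean, q

L, mu, R, d, n = 4.0, 1.0, 1.0, 5, 8
rng = np.random.default_rng(0)
gammas = rng.uniform(-(L - mu) / 2, (L - mu) / 2, size=n)
Q_mean, q = build_instance(gammas, L, mu, R, d)

w_star = np.linalg.solve(Q_mean, q)               # minimizer of (1/2) w'Qw - q'w
coord = R * mu / ((L + mu) / 2 + gammas.mean())   # closed form from Eq. (13)
print(w_star[:2], coord)                          # both leading coordinates match `coord`
```

The two leading coordinates returned by the solver agree with the closed form, and the remaining coordinates are zero, as Eq. (13) asserts.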
Adversarial Multiclass Classification: A Risk Minimization Perspective

Rizal Fathony, Anqi Liu, Kaiser Asif, Brian D. Ziebart
Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607
{rfatho2, aliu33, kasif2, bziebart}@uic.edu

Abstract

Recently proposed adversarial classification methods have shown promising results for cost-sensitive and multivariate losses. In contrast with empirical risk minimization (ERM) methods, which use convex surrogate losses to approximate the desired non-convex target loss function, adversarial methods minimize non-convex losses by treating the properties of the training data as being uncertain and worst-case within a minimax game. Despite this difference in formulation, we recast adversarial classification under zero-one loss as an ERM method with a novel prescribed loss function. We demonstrate a number of theoretical and practical advantages over the very closely related hinge loss ERM methods. This establishes adversarial classification under the zero-one loss as a method that fills the long-standing gap in multiclass hinge loss classification, simultaneously guaranteeing Fisher consistency and universal consistency, while also providing dual parameter sparsity and high accuracy predictions in practice.

1 Introduction

A common goal for standard classification problems in machine learning is to find a classifier that minimizes the zero-one loss. Since directly minimizing this loss over training data via empirical risk minimization (ERM) [1] is generally NP-hard [2], convex surrogate losses are employed to approximate the zero-one loss. For example, the logarithmic loss is minimized by the logistic regression classifier [3] and the hinge loss is minimized by the support vector machine (SVM) [4, 5]. Both are Fisher consistent [6, 7] and universally consistent [8, 9] for binary classification, meaning they minimize the zero-one loss and are Bayes-optimal classifiers when they learn from any true distribution of data using a rich feature representation. SVMs provide the additional advantage of dual parameter sparsity, so that when combined with kernel methods, extremely rich feature representations can be efficiently considered. Unfortunately, generalizing the hinge loss to classification tasks with more than two labels is challenging, and existing multiclass convex surrogates [10-12] tend to lose their consistency guarantees [13-15] or produce low accuracy predictions in practice [15].

Adversarial classification [16, 17] uses a different approach to tackle non-convex losses like the zero-one loss. Instead of approximating the desired loss function and evaluating over the training data, it adversarially approximates the available training data within a minimax game formulation with game payoffs defined by the desired (zero-one) loss function [18, 19]. This provides promising empirical results for cost-sensitive losses [16] and multivariate losses such as the F-measure and precision-at-k [17]. Conceptually, parameter optimization for the adversarial method forces the adversary to "behave like" certain properties of the training data sample, making labels easier to predict within the minimax prediction game. However, a key bottleneck for these methods has been their reliance on zero-sum game solvers for inference, which are computationally expensive relative to inference in other prediction methods, such as SVMs.
In this paper, we recast adversarial prediction from an empirical risk minimization perspective by analyzing the Nash equilibrium value of adversarial zero-one classification games to define a new multiclass loss.^1 This enables us to demonstrate that zero-one adversarial classification fills the long-standing gap in ERM-based multiclass classification by simultaneously: (1) guaranteeing Fisher consistency and universal consistency; (2) enabling computational efficiency via the kernel trick and dual parameter sparsity; and (3) providing competitive performance in practice. This reformulation also provides significant computational efficiency improvements compared to previous adversarial classification training methods [16].

^1 Farnia & Tse independently and concurrently discovered this same loss function [20]. They provide an analysis focused on generalization bounds and experiments for binary classification.

2 Background and Related Work

2.1 Multiclass SVM generalizations

The multiclass support vector machine (SVM) seeks class-based potentials $f_y(x_i)$ for each input vector $x \in \mathcal{X}$ and class $y \in \mathcal{Y}$ so that the discriminant function, $\hat y_f(x_i) = \operatorname{argmax}_y f_y(x_i)$, minimizes misclassification errors, $\mathrm{loss}_f(x_i, y_i) = I(y_i \ne \hat y_f(x_i))$. Unfortunately, empirical risk minimization (ERM), $\min_f \mathbb{E}_{\tilde P(x,y)}[\mathrm{loss}_f(X, Y)]$, for the zero-one loss is NP-hard once the set of potentials is (parametrically) restricted, e.g., to linear functions of input features [2]. Instead, a hinge loss approximation is employed by the SVM. In the binary setting, $y_i \in \{-1, +1\}$, where the potential of one class can be set to zero ($f_{-1} = 0$) with no loss of generality, the hinge loss is defined as $[1 - y_i f_{+1}(x_i)]_+$, with the compact definition $[g(\cdot)]_+ \triangleq \max(0, g(\cdot))$. The binary SVM, which is an empirical risk minimizer using the hinge loss with $L_2$ regularization,
$$\min_{f_\theta}\ \mathbb{E}_{\tilde P(x,y)}\big[\mathrm{loss}_{f_\theta}(X, Y)\big] + \frac{\lambda}{2}\|\theta\|_2^2, \qquad (1)$$
provides strong theoretical guarantees (Fisher consistency and universal consistency) [8, 21] and computational efficiency [1].

Many methods have been proposed to generalize the SVM to the multiclass setting. Apart from the one-vs-all and one-vs-one decomposed formulations [22], there are three main joint formulations: the WW model by Weston et al. [11], which incorporates the sum of hinge losses for all alternative labels, $\mathrm{loss}_{\mathrm{WW}}(x_i, y_i) = \sum_{j\ne y_i} [1 - (f_{y_i}(x_i) - f_j(x_i))]_+$; the CS model by Crammer and Singer [10], which uses the hinge loss of only the largest alternative label, $\mathrm{loss}_{\mathrm{CS}}(x_i, y_i) = \max_{j\ne y_i} [1 - (f_{y_i}(x_i) - f_j(x_i))]_+$; and the LLW model by Lee et al. [12], which employs an absolute hinge loss, $\mathrm{loss}_{\mathrm{LLW}}(x_i, y_i) = \sum_{j\ne y_i} [1 + f_j(x_i)]_+$, together with the constraint $\sum_j f_j(x_i) = 0$. The former two models (CS and WW) both utilize the pairwise class-based potential differences $f_{y_i}(x_i) - f_j(x_i)$ and are therefore categorized as relative margin methods. LLW, on the other hand, is an absolute margin method that only relates to $f_j(x_i)$ [15]. The three losses are compared concretely in the sketch below.

Fisher consistency, or Bayes consistency [7, 13], guarantees that minimization of a surrogate loss under the true distribution provides the Bayes-optimal classifier, i.e., minimizes the zero-one loss. If a classifier is Bayes-optimal for any possible distribution of data, it is called universally consistent. Of these methods, only LLW is Fisher consistent and universally consistent [12-14]. However, as pointed out by Doğan et al. [15], LLW's use of an absolute margin in the loss (rather than the relative margin of WW and CS) often causes it to perform poorly on datasets with low-dimensional feature spaces.
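The following minimal sketch (our own illustration, not the authors' code) computes the three surrogate losses just defined from a vector of class potentials:

```python
import numpy as np

def multiclass_hinge_losses(f, y):
    """WW, CS and LLW surrogate losses for potentials f (shape [num_classes]) and true label y.

    WW and CS use relative margins f[y] - f[j]; LLW uses absolute margins and
    assumes the potentials satisfy the sum-to-zero constraint sum_j f[j] = 0.
    """
    others = [j for j in range(len(f)) if j != y]
    margins = [f[y] - f[j] for j in others]
    loss_ww = sum(max(0.0, 1.0 - m) for m in margins)       # sum over all alternatives
    loss_cs = max(max(0.0, 1.0 - m) for m in margins)       # worst alternative only
    loss_llw = sum(max(0.0, 1.0 + f[j]) for j in others)    # absolute margin
    return loss_ww, loss_cs, loss_llw

f = np.array([0.7, -0.2, -0.5])   # potentials for 3 classes (sum to zero)
print(multiclass_hinge_losses(f, y=0))
```

The relative-margin losses (WW, CS) react only to potential gaps, while LLW penalizes the raw potentials of the alternative labels, which is the distinction driving its different empirical behavior.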
From the opposite direction, the requirements for Fisher consistency have been well characterized [13], yet this has not led to a multiclass classifier that is both Fisher consistent and performs well in practice.

2.2 Adversarial prediction games

Building on a variety of diverse formulations for adversarial prediction [23-26], Asif et al. [16] proposed an adversarial game formulation for multiclass classification with cost-sensitive loss functions. Under this formulation, the empirical training data is replaced by an adversarially chosen conditional label distribution $\check P(\check y | x)$ that must closely approximate the training data, but otherwise seeks to maximize expected loss, while an estimator player $\hat P(\hat y | x)$ seeks to minimize expected loss. For the zero-one loss, the prediction game is:
$$\min_{\hat P(\hat y|x)}\ \max_{\check P(\check y|x):\, \mathbb{E}_{\tilde P(x)\check P(\check y|x)}[\phi(X, \check Y)] = \tilde\phi}\ \mathbb{E}_{\tilde P(x)\hat P(\hat y|x)\check P(\check y|x)}\big[ I(\hat Y \ne \check Y) \big]. \qquad (2)$$
The vector of feature moments, $\tilde\phi = \mathbb{E}_{\tilde P(x,y)}[\phi(X, Y)]$, is measured from sample training data. Using minimax and strong Lagrangian duality, the optimization of Eq. (2) reduces to minimizing the equilibrium game values of a new set of zero-sum games characterized by the matrix $L'_{x_i,\theta}$:
$$\min_\theta\ \sum_i\ \max_{\check p_{x_i}}\ \min_{\hat p_{x_i}}\ \hat p_{x_i}^\top L'_{x_i,\theta}\, \check p_{x_i}; \qquad L'_{x_i,\theta} = \begin{pmatrix} \psi_{1,y_i}(x_i) & \cdots & \psi_{|\mathcal Y|,y_i}(x_i) + 1 \\ \vdots & \ddots & \vdots \\ \psi_{1,y_i}(x_i) + 1 & \cdots & \psi_{|\mathcal Y|,y_i}(x_i) \end{pmatrix}, \qquad (3)$$
where $\theta$ is a vector of Lagrangian model parameters, $\hat p_{x_i}$ is a vector representation of the conditional label distribution $\hat P(\hat Y = k | x_i)$, i.e., $\hat p_{x_i} = [\hat P(\hat Y = 1|x_i)\ \ \hat P(\hat Y = 2|x_i)\ \cdots]^\top$, and similarly for $\check p_{x_i}$. The matrix $L'_{x_i,\theta}$ is a zero-sum game matrix for each example, with $\psi_{j,y_i}(x_i) = f_j(x_i) - f_{y_i}(x_i) = \theta^\top\big(\phi(x_i, j) - \phi(x_i, y_i)\big)$. This optimization problem (Eq. (3)) is convex in $\theta$, and the inner zero-sum game can be solved using linear programming [16].

3 Risk Minimization Perspective of Adversarial Multiclass Classification

3.1 Nash equilibrium game value

Despite the differences in formulation between adversarial loss minimization and empirical risk minimization, we now recast the zero-one loss adversarial game as the solution to an empirical risk minimization problem. Theorem 1 defines the loss function that provides this equivalence by considering all possible combinations of the adversary's label assignments with non-zero probability in the Nash equilibrium of the game.^2

Theorem 1. The model parameters $\theta$ for multiclass zero-one adversarial classification are equivalently obtained from empirical risk minimization under the adversarial zero-one loss function:
$$\mathrm{AL}^{0\text{-}1}_f(x_i, y_i) = \max_{S \subseteq \{1,\dots,|\mathcal Y|\},\ S \ne \emptyset}\ \frac{\sum_{j\in S} \psi_{j,y_i}(x_i) + |S| - 1}{|S|}, \qquad (4)$$
where $S$ is any non-empty member of the powerset of classes $\{1, 2, \dots, |\mathcal Y|\}$.

^2 The proofs of this theorem and others in the paper are contained in the Supplementary Materials.

Thus, $\mathrm{AL}^{0\text{-}1}$ is the maximum value over $2^{|\mathcal Y|} - 1$ linear hyperplanes. For binary prediction tasks, there are three linear hyperplanes: $\psi_{1,y}(x)$, $\psi_{2,y}(x)$ and $\frac{\psi_{1,y}(x) + \psi_{2,y}(x) + 1}{2}$. Figure 1 shows the loss function in the space of potential differences when the true label is $y = 1$. Note that $\mathrm{AL}^{0\text{-}1}$ combines two hinge functions, at $\psi_{2,y}(x) = -1$ and $\psi_{2,y}(x) = 1$, rather than the SVM's single hinge at $\psi_{1,y}(x) = -1$. This difference from the hinge loss corresponds to the loss that is realized by randomizing label predictions.^3 For three classes, the loss function has seven facets, as shown in Figure 2a.

^3 We refer the reader to Appendix H for a comparison of the binary adversarial method and the binary SVM.

Figure 1: $\mathrm{AL}^{0\text{-}1}$ evaluated over the space of potential differences ($\psi_{j,y}(x) = f_j(x) - f_y(x)$, with $\psi_{j,j}(x) = 0$) for binary prediction tasks when the true label is $y = 1$.
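To make Eq. (4) concrete, here is a small brute-force sketch (our own; the paper's training code is not shown here) that evaluates $\mathrm{AL}^{0\text{-}1}$ by enumerating the $2^{|\mathcal Y|} - 1$ non-empty label subsets:

```python
from itertools import combinations
import numpy as np

def adversarial_zero_one_loss(f, y):
    """Evaluate AL^{0-1} of Eq. (4) by brute force over all non-empty subsets S.

    f: vector of class potentials f_j(x); y: index of the true label.
    psi[j] = f[j] - f[y] are the potential differences (psi[y] = 0).
    """
    psi = f - f[y]
    classes = range(len(f))
    return max(
        (sum(psi[j] for j in S) + len(S) - 1) / len(S)
        for size in range(1, len(f) + 1)
        for S in combinations(classes, size)
    )

f = np.array([0.7, -0.2, -0.5])
print(adversarial_zero_one_loss(f, y=0))   # AL^{0-1} value for this example
```

The enumeration is exponential in the number of classes; Section 3.3.1 below replaces it with a sorted greedy search that is provably optimal (Theorem 4).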
Figures 2a, 2b, and 2c show the similarities and differences between $\mathrm{AL}^{0\text{-}1}$ and the multiclass SVM surrogate losses based on class potential differences. Note that $\mathrm{AL}^{0\text{-}1}$ is also a relative margin loss function that utilizes the pairwise potential differences $\psi_{j,y}(x)$.

Figure 2: Loss function contour plots over the space of potential differences for a prediction task with three classes when the true label is $y = 1$, under $\mathrm{AL}^{0\text{-}1}$ (a), the WW loss (b), and the CS loss (c). (Note that $\psi_i$ in the plots refers to $\psi_{j,y}(x) = f_j(x) - f_y(x)$, with $\psi_{j,j}(x) = 0$.)

3.2 Consistency properties

Fisher consistency is a desirable property for a surrogate loss function: it guarantees that its minimizer, given the true distribution $P(x, y)$, will yield the Bayes-optimal decision boundary [13, 14]. For the multiclass zero-one loss, given that we know $P_j(x) \triangleq P(Y = j | x)$, Fisher consistency requires that $\operatorname{argmax}_j f_j^*(x) \subseteq \operatorname{argmax}_j P_j(x)$, where $f^*(x) = [f_1^*(x), \dots, f_{|\mathcal Y|}^*(x)]^\top$ is the minimizer of $\mathbb{E}[\mathrm{loss}_f(X, Y) \,|\, X = x]$. Since any constant can be added to all $f_j^*(x)$ while keeping $\operatorname{argmax}_j f_j^*(x)$ the same, we employ a sum-to-zero constraint, $\sum_{j=1}^{|\mathcal Y|} f_j(x) = 0$, to remove redundant solutions. We establish an important property of the minimizer for $\mathrm{AL}^{0\text{-}1}$ in the following theorem.

Theorem 2. The loss for the minimizer $f^*$ of $\mathbb{E}\big[\mathrm{AL}^{0\text{-}1}_f(X, Y) \,|\, X = x\big]$ resides on the hyperplane defined (in Eq. 4) by the complete set of labels, $S = \{1, \dots, |\mathcal Y|\}$.

As an illustration for the case of three classes (Figure 2a), the area described in the theorem above corresponds to the region in the middle, where the hyperplane that supports $\mathrm{AL}^{0\text{-}1}$ is $\frac{\psi_{1,y}(x) + \psi_{2,y}(x) + \psi_{3,y}(x) + 2}{3}$, and, equivalently, where $-\frac{1}{|\mathcal Y|} \le f_j(x) \le \frac{|\mathcal Y| - 1}{|\mathcal Y|}$ for all $j \in \{1, \dots, |\mathcal Y|\}$, with the constraint $\sum_j f_j(x) = 0$. Based on this restriction, we focus on the minimization of $\mathbb{E}\big[\mathrm{AL}^{0\text{-}1}_f(X, Y) \,|\, X = x\big]$ subject to $-\frac{1}{|\mathcal Y|} \le f_j(x) \le \frac{|\mathcal Y| - 1}{|\mathcal Y|}$, $\forall j \in \{1, \dots, |\mathcal Y|\}$, and the sum of the potentials being equal to zero. This minimization reduces to the following optimization:
$$\max_f\ \sum_{y=1}^{|\mathcal Y|} P_y(x) f_y(x) \quad \text{subject to:}\ -\frac{1}{|\mathcal Y|} \le f_j(x) \le \frac{|\mathcal Y| - 1}{|\mathcal Y|},\ j \in \{1, \dots, |\mathcal Y|\};\qquad \sum_{j=1}^{|\mathcal Y|} f_j(x) = 0.$$
The solution of this maximization (a linear program) satisfies $f_j^*(x) = \frac{|\mathcal Y| - 1}{|\mathcal Y|}$ if $j = \operatorname{argmax}_j P_j(x)$, and $f_j^*(x) = -\frac{1}{|\mathcal Y|}$ otherwise, which therefore implies the Fisher consistency theorem.

Theorem 3. The adversarial zero-one loss, $\mathrm{AL}^{0\text{-}1}$, from Eq. (4) is Fisher consistent.

Theorem 3 implies that $\mathrm{AL}^{0\text{-}1}$ (Eq. (4)) is classification calibrated, which indicates that minimization of this loss for all distributions on $\mathcal X \times \mathcal Y$ also minimizes the zero-one loss [21, 13]. As proven in general by Steinwart and Christmann [2] and Micchelli et al. [27], since $\mathrm{AL}^{0\text{-}1}$ (Eq. (4)) is a Lipschitz loss with constant 1, the adversarial multiclass classifier is universally consistent under the conditions specified in Corollary 1.

Corollary 1. Given a universal kernel and a regularization parameter $\lambda$ in Eq. (1) tending to zero more slowly than $\frac{1}{n}$, the adversarial multiclass classifier is also universally consistent.

3.3 Optimization

In the learning process for adversarial classification, Asif et al.
[16] requires a linear program to be solved to find the Nash equilibrium game value and strategy for every training data point in each gradient update. This requirement is computationally burdensome compared to multiclass SVMs, which must simply find potential-maximizing labels. We propose two approaches with improved efficiency, by leveraging an oracle for finding the maximization inside $\mathrm{AL}^{0\text{-}1}$ and Lagrange duality in the quadratic programming formulation.

3.3.1 Primal optimization using stochastic sub-gradient descent

The sub-gradient in the empirical risk minimization of $\mathrm{AL}^{0\text{-}1}$ includes the mean of feature differences, $\frac{1}{|R|}\sum_{j\in R} \big[\phi(x_i, j) - \phi(x_i, y_i)\big]$, where $R$ is the set that maximizes $\mathrm{AL}^{0\text{-}1}$. The set $R$ is computed by the oracle using a greedy algorithm. Given $\theta$ and a sample $(x_i, y_i)$, the algorithm calculates the potentials $\psi_{j,y_i}(x_i)$ for each label $j \in \{1, \dots, |\mathcal Y|\}$ and sorts them in non-increasing order. Starting with the empty set $R = \emptyset$, it then adds labels to $R$ in sorted order until adding a label would decrease the value of $\frac{\sum_{j\in R} \psi_{j,y_i}(x_i) + |R| - 1}{|R|}$ (a runnable sketch of this oracle is given below).

Theorem 4. The proposed greedy algorithm used by the oracle is optimal.

3.3.2 Dual optimization

In the following subsections, we focus on the dual optimization technique, as it enables us to establish convergence guarantees. We reformulate the learning algorithm (with $L_2$ regularization) as a constrained quadratic program (QP), with $\xi_i$ specifying the amount of $\mathrm{AL}^{0\text{-}1}$ incurred by each of the $n$ training examples:
$$\min_{\theta}\ \frac{1}{2}\|\theta\|^2 + C\sum_{i=1}^n \xi_i \qquad \text{subject to:}\ \ \xi_i \ge \psi_{i,k}\quad \forall i \in \{1, \dots, n\},\ k \in \{1, \dots, 2^{|\mathcal Y|} - 1\}, \qquad (5)$$
where we denote each of the $2^{|\mathcal Y|} - 1$ possible constraints for example $i$, corresponding to non-empty elements of the label powerset, as $\psi_{i,k}$ (e.g., $\psi_{i,1} = \psi_{1,y_i}(x_i)$, and $\psi_{i,2^{|\mathcal Y|}-1} = \frac{\sum_{j\in\mathcal Y}\psi_{j,y_i}(x_i) + |\mathcal Y| - 1}{|\mathcal Y|}$). Note also that non-negativity of $\xi_i$ is enforced, since $\psi_{i,y_i} = \psi_{y_i,y_i}(x_i) = 0$.

Theorem 5. Let $\nabla\psi_{i,k}$ be the partial derivative of $\psi_{i,k}$ with respect to $\theta$, i.e., $\nabla\psi_{i,k} = \frac{d\psi_{i,k}}{d\theta}$, and let $\Delta_{i,k}$ be the constant part of $\psi_{i,k}$ (for example, if $\psi_{i,k} = \frac{\psi_{1,y_i}(x_i) + \psi_{3,y_i}(x_i) + \psi_{4,y_i}(x_i) + 2}{3}$, then $\Delta_{i,k} = \frac{2}{3}$); then the corresponding dual optimization for the primal minimization (Eq. 5) is:
$$\max_{\alpha}\ \sum_{i=1}^n \sum_{k=1}^{2^{|\mathcal Y|}-1} \Delta_{i,k}\,\alpha_{i,k}\ -\ \frac{1}{2}\sum_{i,j=1}^n \sum_{k,l=1}^{2^{|\mathcal Y|}-1} \alpha_{i,k}\,\alpha_{j,l}\,\big[\nabla\psi_{i,k} \cdot \nabla\psi_{j,l}\big] \qquad (6)$$
$$\text{subject to:}\quad \alpha_{i,k} \ge 0,\quad \sum_{k=1}^{2^{|\mathcal Y|}-1} \alpha_{i,k} = C, \qquad i \in \{1, \dots, n\},\ k \in \{1, \dots, 2^{|\mathcal Y|} - 1\},$$
where $\alpha_{i,k}$ is the dual variable for the $k$-th constraint of the $i$-th sample.

Note that the dual formulation above depends only on the dot products of the constraints' partial derivatives (with respect to $\theta$) and on the constant parts of the constraints. The original primal variable $\theta$ can be recovered from the dual variables using the formula $\theta = -\sum_{i=1}^n \sum_{k=1}^{2^{|\mathcal Y|}-1} \alpha_{i,k}\, \nabla\psi_{i,k}$. Given a new datapoint $x$, de-randomized predictions are obtained from $\operatorname{argmax}_j f_j(x) = \operatorname{argmax}_j \theta^\top \phi(x, j)$.

3.3.3 Efficiently incorporating rich feature spaces using kernelization

Considering large feature spaces is important for developing an expressive classifier that can learn from large amounts of training data. Indeed, Fisher consistency requires such feature spaces for its guarantees to be meaningful. However, naïvely projecting from the original input space, $x_i$, to richer (or possibly infinite-dimensional) feature spaces, $\phi(x_i)$, can be computationally burdensome. Kernel methods enable this feature expansion by allowing the dot products of certain feature functions to be computed implicitly, i.e., $K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$.
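As a concrete rendering of the greedy oracle from Section 3.3.1 (our own sketch; variable names are illustrative), labels are scanned in order of decreasing potential difference and added while the objective of Eq. (4) keeps improving:

```python
import numpy as np

def greedy_maximizing_set(psi):
    """Greedy oracle (Theorem 4): find the subset R maximizing
    (sum_{j in R} psi[j] + |R| - 1) / |R|, scanning labels by decreasing psi."""
    order = np.argsort(-psi)          # labels sorted by non-increasing potential difference
    best_value, total, R = -np.inf, 0.0, []
    for j in order:
        candidate = (total + psi[j] + len(R)) / (len(R) + 1)   # value if j is added
        if candidate < best_value:
            break                      # adding j (or any later label) only decreases the value
        R.append(int(j))
        total += psi[j]
        best_value = candidate
    return R, best_value

psi = np.array([0.0, -0.9, -1.2])      # potential differences psi_{j,y}(x), psi[y] = 0
print(greedy_maximizing_set(psi))      # -> ([0, 1], 0.05) for this example
```

On this example the greedy result matches the brute-force enumeration shown earlier, and it runs in $O(|\mathcal Y|\log|\mathcal Y|)$ time instead of $O(2^{|\mathcal Y|})$.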
Since our dual formulation only depends on dot products, we employ kernel methods to incorporate rich feature spaces into our formulation, as stated in the following theorem.

Theorem 6. Let $\mathcal{X}$ be the input space and $K$ be a positive definite real-valued kernel on $\mathcal{X} \times \mathcal{X}$ with a mapping function $\phi(x) : \mathcal{X} \to \mathcal{H}$ that maps the input space $\mathcal{X}$ to a reproducing kernel Hilbert space $\mathcal{H}$. Then all the values in the dual optimization of Eq. (6) needed to operate in the Hilbert space $\mathcal{H}$ can be computed in terms of the kernel function $K(x_i, x_j)$ as:

$$\varphi_{i,k} \cdot \varphi_{j,l} = c_{(i,k),(j,l)} K(x_i, x_j), \qquad \zeta_{i,k} = -\sum_{j=1}^{n} \sum_{l=1}^{2^{|\mathcal{Y}|}-1} \alpha_{j,l}\, c_{(j,l),(i,k)} K(x_j, x_i) + \Delta_{i,k}, \quad (7)$$

$$f_m(x_i) = -\sum_{j=1}^{n} \sum_{l=1}^{2^{|\mathcal{Y}|}-1} \alpha_{j,l} \left[\frac{1(m \in R_{j,l})}{|R_{j,l}|} - 1(m = y_j)\right] K(x_j, x_i), \quad (8)$$

where

$$c_{(i,k),(j,l)} = \sum_{m=1}^{|\mathcal{Y}|} \left[\frac{1(m \in R_{i,k})}{|R_{i,k}|} - 1(m = y_i)\right] \left[\frac{1(m \in R_{j,l})}{|R_{j,l}|} - 1(m = y_j)\right],$$

and $R_{i,k}$ is the set of labels included in the constraint $\zeta_{i,k}$ (for example, if $\zeta_{i,k} = \frac{\psi_{1,y_i}(x_i) + \psi_{3,y_i}(x_i) + \psi_{4,y_i}(x_i) + 2}{3}$, then $R_{i,k} = \{1, 3, 4\}$); the function $1(j = y_i)$ returns 1 if $j = y_i$ and 0 otherwise, and the function $1(j \in R_{i,k})$ returns 1 if $j$ is a member of the set $R_{i,k}$ and 0 otherwise.

3.3.4 Efficient optimization using constraint generation

The number of constraints in the QP formulation above grows exponentially with the number of classes: $O(2^{|\mathcal{Y}|})$. This prevents the naive formulation from being efficient for large multiclass problems. We employ a constraint generation method to efficiently solve the dual quadratic programming formulation, similar to those used for extending the SVM to multivariate loss functions [28] and structured prediction settings [29].

Algorithm 1 Constraint generation method
Require: training data $(x_1, y_1), \dots, (x_n, y_n)$, $C$, $\epsilon$
 1: $\alpha \leftarrow 0$
 2: $\hat{A}_i \leftarrow \{\zeta_{i,k} \mid \zeta_{i,k} = \psi_{y_i, y_i}(x_i)\}$, $\forall i = 1, \dots, n$  . Actual label enforces non-negativity
 3: repeat
 4:   for $i \leftarrow 1, \dots, n$ do  . Find the most violated constraint
 5:     $a \leftarrow \arg\max_{k \mid \zeta_{i,k} \in A_i} \zeta_{i,k}$
 6:     $\xi_i \leftarrow \max_{k \mid \zeta_{i,k} \in \hat{A}_i} \zeta_{i,k}$  . Compute the example's current loss estimate
 7:     if $\zeta_{i,a} > \xi_i + \epsilon$ then
 8:       $\hat{A}_i \leftarrow \hat{A}_i \cup \{\zeta_{i,a}\}$  . Add it to the enforced constraint set
 9:       $\alpha \leftarrow$ optimize dual over $\hat{A} = \bigcup_i \hat{A}_i$
10:       Compute $\theta$ from $\alpha$: $\theta = -\sum_{i=1}^{n} \sum_{k \mid \zeta_{i,k} \in \hat{A}_i} \alpha_{i,k} \varphi_{i,k}$
11:     end if
12:   end for
13: until no $\hat{A}_i$ has changed in the iteration

Algorithm 1 incrementally expands the set of enforced constraints, $\hat{A}_i$, until no remaining constraint from the set of all $2^{|\mathcal{Y}|} - 1$ constraints (in $A_i$) is violated by more than $\epsilon$. To obtain the most violated constraint, we use the greedy algorithm described in the primal optimization. The constraint generation algorithm's stopping criterion ensures that a solution close to the optimum is returned (violating no constraint by more than $\epsilon$). Theorem 7 provides polynomial run-time convergence bounds for Algorithm 1.

Theorem 7. For any $\epsilon > 0$ and training dataset $\{(x_1, y_1), \dots, (x_n, y_n)\}$ with $U = \max_i [x_i \cdot x_i]$, Algorithm 1 terminates after incrementally adding at most $\max\{\frac{2n}{\epsilon}, \frac{4nCU}{\epsilon^2}\}$ constraints to the constraint set $\hat{A}$.

The proof of Theorem 7 follows the procedures developed by Tsochantaridis et al. [28] for bounding the running time of structured support vector machines. We observe that this bound is quite loose in practice, and the algorithm tends to converge much faster in our experiments.

4 Experiments

We evaluate the performance of the AL^{0-1} classifier and compare it with the three most popular multiclass SVM formulations: WW [11], CS [10], and LLW [12].
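A rough sketch of the constraint generation loop of Algorithm 1 follows. The restricted dual solver (`fit_dual`) and the potential computation (`potentials`) are hypothetical stand-ins for a QP solver over the enforced constraints and for the kernel evaluations of Theorem 6; only the control flow mirrors the pseudocode above, and `greedy_oracle` is the sketch from Section 3.3.1.

```python
def zeta(psi, S):
    """Constraint value for label set S: (sum_{j in S} psi_j + |S| - 1) / |S|."""
    return (sum(psi[j] for j in S) + len(S) - 1) / len(S)

def constraint_generation(X, y, C, eps, fit_dual, potentials, max_iter=100):
    """Sketch of Algorithm 1. `fit_dual(X, y, enforced, C)` is assumed to
    solve the dual QP restricted to the enforced constraint sets; `potentials`
    is assumed to return the vector psi_{., y_i}(x_i) under the current model."""
    n = len(y)
    # Start each set with the actual label: zeta = 0 enforces non-negativity.
    enforced = [[frozenset([y[i]])] for i in range(n)]
    model = fit_dual(X, y, enforced, C)
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            psi = potentials(model, X[i], y[i])
            R, zeta_a = greedy_oracle(psi)                # most violated constraint
            xi = max(zeta(psi, S) for S in enforced[i])   # current loss estimate
            if zeta_a > xi + eps:
                enforced[i].append(frozenset(R))          # enforce it
                model = fit_dual(X, y, enforced, C)
                changed = True
        if not changed:                                   # no set changed: done
            break
    return model
```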
We use 12 datasets from the UCI Machine Learning repository [30] with various sizes and numbers of classes (details in Table 1). For each dataset, we consider the methods using the original feature space (linear kernel) and a kernelized feature space using the Gaussian radial basis function kernel.

Table 1: Properties of the datasets, the number of constraints considered by the SVM models (WW/CS/LLW), the average number of constraints added to the constraint set for AL^{0-1}, and the average number of active constraints at the optimum, under both linear and Gaussian kernels.

Dataset            #class  #train  #test  #feature  SVM constr.  Added (lin.)  Added (Gauss.)  Active (lin.)  Active (Gauss.)
(1) iris              3      105      45       4         210           213            223             13             38
(2) glass             6      149      65       9         745           578            490            125            252
(3) redwine          10     1119     480      11       10071          5995           3811           1681           1783
(4) ecoli             8      235     101       7        1645           614            821            117            130
(5) vehicle           4      592     254      18        1776          1310           1201            311            248
(6) segment           7     1617     693      19        9702          4410           4312            244            469
(7) sat               7     4435    2000      36       26610         11721          11860           1524           6269
(8) optdigits        10     3823    1797      64       34407          7932          10072            597           2315
(9) pageblocks        5     3831    1642      10       15324          9459           9155            427            551
(10) libras          15      252     108      90        3528          1592           1165            389            353
(11) vertebral        3      217      93       6         434           344            342             78             86
(12) breasttissue     6       74      32       9         370           258            271             65            145

For our experimental methodology, we first make 20 random splits of each dataset into training and testing sets. We then perform two-stage, five-fold cross validation on the training set of the first split to tune each model's parameter $C$ and, under the kernelized formulation, the kernel parameter $\gamma$. In the first stage, the values for $C$ are $2^i$, $i \in \{0, 3, 6, 9, 12\}$, and the values for $\gamma$ are $2^i$, $i \in \{-12, -9, -6, -3, 0\}$. In the second stage, we select final values for $C$ from $2^i C_0$, $i \in \{-2, -1, 0, 1, 2\}$, and values for $\gamma$ from $2^i \gamma_0$, $i \in \{-2, -1, 0, 1, 2\}$, where $C_0$ and $\gamma_0$ are the best parameters obtained in the first stage. Using the selected parameters, we train each model on the 20 training sets and evaluate the performance on the corresponding testing sets. We use the Shark machine learning library [31] for the implementation of the three multiclass SVM formulations.

Despite having an exponential number of possible constraints (i.e., $n(2^{|\mathcal{Y}|} - 1)$ for $n$ examples, versus $n(|\mathcal{Y}| - 1)$ for SVMs), a much smaller number of constraints need to be considered by the AL^{0-1} algorithm in practice to realize a better approximation ($\epsilon = 0$) than Theorem 7 provides. Table 1 shows how the total number of constraints for multiclass SVM compares to the number considered in practice by our AL^{0-1} algorithm for linear and Gaussian kernel feature spaces. These range from a small fraction (0.23) of the SVM constraints for optdigits to a slightly greater number (a fraction of 1.06) for iris. More specifically, of the over 3.9 million ($= 2^{10} \cdot 3823$) possible constraints for optdigits when training the classifier, fewer than 0.3% (7932 or 10072, depending on the feature representation) are added to the constraint set during the constraint generation process. Fewer still (597 or 2315 constraints, less than 0.06%) are active in the final classifier with non-zero dual parameters. The sparsity of the dual parameters provides a key computational benefit for support vector machines over logistic regression, which has essentially all non-zero dual parameters. The small number of active constraints shown in Table 1 demonstrates that AL^{0-1} induces similar sparsity, providing efficiency when employed with kernel methods.
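For concreteness, the two-stage grid search described above can be sketched as follows; `train_eval` is a hypothetical helper returning mean five-fold cross-validation accuracy for a $(C, \gamma)$ pair, and the grids are exactly those stated in the text.

```python
def two_stage_grid(train_eval):
    """Two-stage model selection sketch matching the protocol above.
    Stage 1 scans a coarse grid; stage 2 refines around the stage-1 winner."""
    coarse_C = [2.0 ** i for i in (0, 3, 6, 9, 12)]
    coarse_g = [2.0 ** i for i in (-12, -9, -6, -3, 0)]
    C0, g0 = max(((c, g) for c in coarse_C for g in coarse_g),
                 key=lambda pair: train_eval(*pair))
    fine = [2.0 ** i for i in (-2, -1, 0, 1, 2)]
    return max(((C0 * a, g0 * b) for a in fine for b in fine),
               key=lambda pair: train_eval(*pair))
```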
We report the accuracy of each method, averaged over the 20 dataset splits, for both linear feature representations and Gaussian kernel feature representations in Table 2. We also show the accuracy averaged over all of the datasets for each method, and the number of datasets for which each method is "indistinguishably best" (the best of all four methods, or not worse than the best with statistical significance under a paired t-test with $\alpha = 0.05$), in the last two rows of the table.

Table 2: The mean and (in parentheses) the standard deviation of the accuracy for each model with linear kernel and Gaussian kernel feature representations. The last two rows report the average accuracy over all datasets and the number of datasets on which each method is best or not significantly worse than the best (paired t-test with $\alpha = 0.05$).

                  Linear kernel                                        Gaussian kernel
D       AL^{0-1}     WW          CS          LLW         AL^{0-1}     WW          CS          LLW
(1)    96.3 (3.1)  96.0 (2.6)  96.3 (2.4)  79.7 (5.5)  96.7 (2.4)  96.4 (2.4)  96.2 (2.3)  95.4 (2.1)
(2)    62.5 (6.0)  62.2 (3.6)  62.5 (3.9)  52.8 (4.6)  69.5 (4.2)  66.8 (4.3)  69.4 (4.8)  69.2 (4.4)
(3)    58.8 (2.0)  59.1 (1.9)  56.6 (2.0)  57.7 (1.7)  63.3 (1.8)  64.2 (2.0)  64.2 (1.9)  64.7 (2.1)
(4)    86.2 (2.2)  85.7 (2.5)  85.8 (2.3)  74.1 (3.3)  86.0 (2.7)  84.9 (2.4)  85.6 (2.4)  86.0 (2.5)
(5)    78.8 (2.2)  78.8 (1.7)  78.4 (2.3)  69.8 (3.7)  84.3 (2.5)  84.4 (2.6)  83.8 (2.3)  84.4 (2.6)
(6)    94.9 (0.7)  94.9 (0.8)  95.2 (0.8)  75.8 (1.5)  96.5 (0.6)  96.6 (0.5)  96.3 (0.6)  96.4 (0.5)
(7)    84.9 (0.7)  85.4 (0.7)  84.7 (0.7)  74.9 (0.9)  91.9 (0.5)  92.0 (0.6)  91.9 (0.5)  91.9 (0.4)
(8)    96.6 (0.6)  96.5 (0.7)  96.3 (0.6)  76.2 (2.2)  98.7 (0.4)  98.8 (0.4)  98.8 (0.3)  98.9 (0.3)
(9)    96.0 (0.5)  96.1 (0.5)  96.3 (0.5)  92.5 (0.8)  96.8 (0.5)  96.6 (0.4)  96.7 (0.4)  96.6 (0.4)
(10)   74.1 (3.3)  72.0 (3.8)  71.3 (4.3)  34.0 (6.4)  83.6 (3.8)  83.8 (3.4)  85.0 (3.9)  83.2 (4.2)
(11)   85.5 (2.9)  85.9 (2.7)  85.4 (3.3)  79.8 (5.6)  86.0 (3.1)  85.3 (2.9)  85.5 (3.3)  84.4 (2.7)
(12)   64.4 (7.1)  59.7 (7.8)  66.3 (6.9)  58.3 (8.1)  68.4 (8.6)  68.1 (6.5)  66.6 (8.9)  68.0 (7.2)
avg    81.59       81.02       81.25       68.80       85.14       84.82       85.00       84.93
#best  9           6           8           0           9           6           6           7

As we can see from the table, the only alternative model that is Fisher consistent (the LLW model) performs poorly on all datasets when only linear features are employed. This matches previous experimental results conducted by Doğan et al. [15] and demonstrates a weakness of using an absolute margin for the loss function (rather than the relative margins of all other methods). The AL^{0-1} classifier performs competitively with the WW and CS models, with a slight advantage on overall average accuracy and a larger number of "indistinguishably best" performances on datasets, or, equivalently, fewer statistically significant losses to any other method.

The kernel trick in the Gaussian kernel case provides access to much richer feature spaces, improving the performance of all models, and of the LLW model especially. In general, all models provide competitive results in the Gaussian kernel case. The AL^{0-1} classifier maintains a similarly slight advantage, and only provides performance that is sub-optimal (with statistical significance) on three of the twelve datasets, versus six of twelve and five of twelve for the other methods. We conclude that the multiclass adversarial method performs well in both low and high dimensional feature spaces.
Recalling the theoretical analysis of the adversarial method, it is a well-motivated (from the adversarial zero-one loss minimization) multiclass classifier that enjoys both strong theoretical properties (Fisher consistency and universal consistency) and empirical performance.

5 Conclusion

Generalizing support vector machines to multiclass settings in a theoretically sound manner remains a long-standing open problem. Though the loss function requirements guaranteeing Fisher consistency are well-understood [13], the few Fisher-consistent classifiers that have been developed (e.g., LLW) often are not competitive with inconsistent multiclass classifiers in practice. In this paper, we have sought to fill this gap between theory and practice. We have demonstrated that multiclass adversarial classification under zero-one loss can be recast from an empirical risk minimization perspective, and that its surrogate loss, AL^{0-1}, satisfies the Fisher consistency property, leading to a universally consistent classifier that also performs well in practice. We believe that this is an important contribution to understanding both adversarial methods and the generalized hinge loss. Our future work includes investigating the adversarial methods under different losses and exploring other theoretical properties of the adversarial framework, including generalization bounds.

Acknowledgments

This research was supported as part of the Future of Life Institute (futureoflife.org) FLI-RFP-AI1 program, grant #2016-158710, and by NSF grant RI-#1526379.

References

[1] Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems, pages 831–838, 1992.
[2] Ingo Steinwart and Andreas Christmann. Support Vector Machines. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387772413.
[3] Peter McCullagh and John A Nelder. Generalized linear models, volume 37. CRC Press, 1989.
[4] Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Workshop on Computational Learning Theory, pages 144–152, 1992.
[5] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[6] Yi Lin. Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery, 6(3):259–275, 2002.
[7] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[8] Ingo Steinwart. Support vector machines are universally consistent. J. Complexity, 18(3):768–791, 2002.
[9] Ingo Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Trans. Information Theory, 51(1):128–142, 2005.
[10] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2002.
[11] Jason Weston, Chris Watkins, et al. Support vector machines for multi-class pattern recognition. In ESANN, volume 99, pages 219–224, 1999.
[12] Yoonkyung Lee, Yi Lin, and Grace Wahba. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67–81, 2004.
[13] Ambuj Tewari and Peter L Bartlett. On the consistency of multiclass classification methods.
The Journal of Machine Learning Research, 8:1007–1025, 2007.
[14] Yufeng Liu. Fisher consistency of multicategory support vector machines. In International Conference on Artificial Intelligence and Statistics, pages 291–298, 2007.
[15] Ürün Doğan, Tobias Glasmachers, and Christian Igel. A unified view on multi-class support vector classification. Journal of Machine Learning Research, 17(45):1–32, 2016.
[16] Kaiser Asif, Wei Xing, Sima Behpour, and Brian D. Ziebart. Adversarial cost-sensitive classification. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2015.
[17] Hong Wang, Wei Xing, Kaiser Asif, and Brian Ziebart. Adversarial prediction games for multivariate losses. In Advances in Neural Information Processing Systems, pages 2710–2718, 2015.
[18] Flemming Topsøe. Information theoretical optimization techniques. Kybernetika, 15(1):8–27, 1979.
[19] Peter D. Grünwald and A. Phillip Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32:1367–1433, 2004.
[20] Farzan Farnia and David Tse. A minimax approach to supervised learning. In Advances in Neural Information Processing Systems, pages 4233–4241, 2016.
[21] Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. Large margin classifiers: Convex loss, low noise, and convergence rates. In Advances in Neural Information Processing Systems, pages 1173–1180, 2003.
[22] Naiyang Deng, Yingjie Tian, and Chunhua Zhang. Support vector machines: optimization based theory, algorithms, and extensions. CRC Press, 2012.
[23] Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the International Conference on Knowledge Discovery and Data Mining, pages 99–108. ACM, 2004.
[24] Anqi Liu and Brian Ziebart. Robust classification under sample selection bias. In Advances in Neural Information Processing Systems, pages 37–45, 2014.
[25] Gert RG Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, and Michael I Jordan. A robust minimax approach to classification. The Journal of Machine Learning Research, 3:555–582, 2003.
[26] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[27] Charles A. Micchelli, Yuesheng Xu, and Haizhang Zhang. Universal kernels. Journal of Machine Learning Research, 6:2651–2667, 2006.
[28] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. In JMLR, pages 1453–1484, 2005.
[29] Thorsten Joachims. A support vector method for multivariate performance measures. In Proceedings of the International Conference on Machine Learning, pages 377–384, 2005.
[30] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[31] Christian Igel, Verena Heidrich-Meisner, and Tobias Glasmachers. Shark. Journal of Machine Learning Research, 9:993–996, 2008.
Graphons, mergeons, and so on!

Justin Eldridge  Mikhail Belkin  Yusu Wang
The Ohio State University
{eldridge, mbelkin, yusu}@cse.ohio-state.edu

Abstract

In this work we develop a theory of hierarchical clustering for graphs. Our modeling assumption is that graphs are sampled from a graphon, which is a powerful and general model for generating graphs and analyzing large networks. Graphons are a far richer class of graph models than stochastic blockmodels, the primary setting for recent progress in the statistical theory of graph clustering. We define what it means for an algorithm to produce the "correct" clustering, give sufficient conditions under which a method is statistically consistent, and provide an explicit algorithm satisfying these properties.

1 Introduction

A fundamental problem in the theory of clustering is that of defining a cluster. There is no single answer to this seemingly simple question. The right approach depends on the nature of the data and the proper modeling assumptions. In a statistical setting where the objects to be clustered come from some underlying probability distribution, it is natural to define clusters in terms of the distribution itself. The task of a clustering, then, is twofold: to identify the appropriate cluster structure of the distribution, and to recover that structure from a finite sample. Thus we would like to say that a clustering is good if it is in some sense close to the ideal structure of the underlying distribution, and that a clustering method is consistent if it produces clusterings which converge to the true clustering, given larger and larger samples. Proving the consistency of a clustering method deepens our understanding of it, and provides justification for using the method in the appropriate setting.

In this work, we consider the setting in which the objects to be clustered are the vertices of a graph sampled from a graphon, a very general random graph model of significant recent interest. We develop a statistical theory of graph clustering in the graphon model; to the best of our knowledge, this is the first general consistency framework developed for such a rich family of random graphs.

The specific contributions of this paper are threefold. First, we define the clusters of a graphon. Our definition results in a graphon having a tree of clusters, which we call its graphon cluster tree. We introduce an object called the mergeon, which is a particular representation of the graphon cluster tree that encodes the heights at which clusters merge. Second, we develop a notion of consistency for graph clustering algorithms in which a method is said to be consistent if its output converges to the graphon cluster tree. Here the graphon setting poses subtle yet fundamental challenges which differentiate it from classical clustering models, and which must be carefully addressed. Third, we prove the existence of consistent clustering algorithms. In particular, we provide sufficient conditions under which a graphon estimator leads to a consistent clustering method. We then identify a specific practical algorithm which satisfies these conditions, and in doing so present a simple graph clustering algorithm which provably recovers the graphon cluster tree.

Related work. Graphons are objects of significant recent interest in graph theory, statistics, and machine learning.
The theory of graphons is rich and diverse; a graphon can be interpreted as a generalization of a weighted graph with uncountably many nodes, as the limit of a sequence of finite graphs, or, more importantly for the present work, as a very general model for generating unweighted, undirected graphs. Conveniently, any graphon can be represented as a symmetric, measurable function $W : [0,1]^2 \to [0,1]$, and it is this representation that we use throughout this paper. The graphon as a graph limit was introduced in recent years by [16], [5], and others. The interested reader is directed to the book by Lovász [15] on the subject. There has also been a considerable recent effort to produce consistent estimators of the graphon, including the work of [20], [8], [2], [18], and others. We will analyze a simple modification of the graphon estimator proposed by [21] and show that it leads to a graph clustering algorithm which is a consistent estimator of the graphon cluster tree.

Much of the previous statistical theory of graph clustering methods assumes that graphs are generated by the so-called stochastic blockmodel. The simplest form of the model generates a graph with $n$ nodes by assigning each node, randomly or deterministically, to one of two communities. An edge between two nodes is added with probability $\alpha$ if they are from the same community, and with probability $\beta$ otherwise. A graph clustering method is said to achieve exact recovery if it identifies the true community assignment of every node in the graph with high probability as $n \to \infty$. The blockmodel is a special case of a graphon model, and our notion of consistency will imply exact recovery of communities. Stochastic blockmodels are widely studied, and it is known that, for example, spectral methods like that of [17] are able to recover the communities exactly as $n \to \infty$, provided that $\alpha$ and $\beta$ remain constant, or that the gap between them does not shrink too quickly. For a summary of consistency results in the blockmodel, see [1], which also provides information-theoretic thresholds for the conditions under which exact recovery is possible. In a related direction, [4] examines the ability of spectral clustering to withstand noise in a hierarchical block model.

The density setting. The problem of defining the underlying cluster structure of a probability distribution goes back to Hartigan [12], who considered the setting in which the objects to be clustered are points sampled from a density $f : \mathcal{X} \to \mathbb{R}^+$. In this case, the high density clusters of $f$ are defined to be the connected components of the upper level sets $\{x : f(x) \ge \lambda\}$ for any $\lambda > 0$. The set of all such clusters forms the so-called density cluster tree. Hartigan [12] defined a notion of consistency for the density cluster tree, and proved that single-linkage clustering is not consistent. In recent years, [9] and [14] have demonstrated methods which are Hartigan consistent. [10] introduced a distance between a clustering of the data and the density cluster tree, called the merge distortion metric. A clustering method is said to be consistent if the trees it produces converge in merge distortion to the density cluster tree. It is shown that convergence in merge distortion is stronger than Hartigan consistency, and that the method of [9] is consistent in this stronger sense. In the present work, we will be motivated by the approach taken in [12] and [10].
We note, however, that there are significant and fundamental differences between the density case and the graphon setting. Specifically, it is possible for two graphons to be equivalent in the same way that two graphs are: up to a relabeling of the vertices. As such, a graphon $W$ is a representative of an equivalence class of graphons modulo appropriately defined relabeling. It is therefore necessary to define the clusters of $W$ in a way that does not depend upon the particular representative used. A similar problem occurs in the density setting when we wish to define the clusters not of a single density function, but rather of a class of densities which are equal almost everywhere; Steinwart [19] provides an elegant solution. But while the domain of a density is equipped with a meaningful metric (the mass of a ball around a point $x$ is the same under two equivalent densities), the ambient metric on the vertices of a graphon is not useful. As a result, approaches such as that of [19] do not directly apply to the graphon case, and we must carefully produce our own. Additionally, we will see that the procedure for sampling a graph from a graphon involves latent variables which are in principle unrecoverable from data. These issues have no analogue in the classical density setting, and present very distinct challenges.

Miscellany. Due to space constraints, most of the (rather involved) technical details are in the appendix. We will use $[n]$ to denote the set $\{1, \dots, n\}$, $\triangle$ for the symmetric difference, $\mu$ for the Lebesgue measure on $[0,1]$, and bold letters to denote random variables.

2 The graphon model

In order to discuss the statistical properties of a graph clustering algorithm, we must first model the process by which graphs are generated. Formally, a random graph model is a sequence of random variables $\mathbf{G}_1, \mathbf{G}_2, \dots$ such that the range of $\mathbf{G}_n$ consists of undirected, unweighted graphs with node set $[n]$, and the distribution of $\mathbf{G}_n$ is invariant under relabeling of the nodes; that is, isomorphic graphs occur with equal probability. A random graph model of considerable recent interest is the graphon model, in which the distribution over graphs is determined by a symmetric, measurable function $W : [0,1]^2 \to [0,1]$ called a graphon.

Informally, a graphon $W$ may be thought of as the weight matrix of an infinite graph whose node set is the continuous unit interval, so that $W(x, y)$ represents the weight of the edge between nodes $x$ and $y$. Interpreting $W(x, y)$ as a probability suggests the following graph sampling procedure: To draw a graph with $n$ nodes, we first select $n$ points $x_1, \dots, x_n$ at random from the uniform distribution on $[0,1]$; we can think of these $x_i$ as being random "nodes" in the graphon. We then sample a random graph $G$ on node set $[n]$ by admitting the edge $(i, j)$ with probability $W(x_i, x_j)$; by convention, self-edges are not sampled. It is important to note that while we begin by drawing a set of nodes $\{x_i\}$ from the graphon, the graph as given to us is labeled by integers. Therefore, the correspondence between node $i$ in the graph and node $x_i$ in the graphon is latent. It can be shown that this sampling procedure defines a distribution on finite graphs, such that the probability of the graph $G = ([n], E)$ is given by

$$P_W(\mathbf{G} = G) = \int_{[0,1]^n} \prod_{(i,j) \in E} W(x_i, x_j) \prod_{(i,j) \notin E} \left[1 - W(x_i, x_j)\right] \prod_{i \in [n]} dx_i. \quad (1)$$

A very general class of random graph models may be represented as graphons. In particular, a random graph model $\mathbf{G}_1, \mathbf{G}_2, \dots$
is said to be consistent if the random graph $\mathbf{F}_{k-1}$ obtained by deleting node $k$ from $\mathbf{G}_k$ has the same distribution as $\mathbf{G}_{k-1}$. A random graph model is said to be local if, whenever $S, T \subseteq [k]$ are disjoint, the random subgraphs of $\mathbf{G}_k$ induced by $S$ and $T$ are independent random variables. A result of Lovász and Szegedy [16] is that any consistent, local random graph model is equivalent to the distribution on graphs defined by $P_W$ for some graphon $W$; the converse is true as well. That is, any such random graph model is equivalent to a graphon.

For a fixed choice of $x_1, \dots, x_n \in [0,1]$, the integrand in Eq. (1) represents the likelihood that the graph $G$ is sampled when the probability of the edge $(i, j)$ is assumed to be $W(x_i, x_j)$. By integrating over all possible choices of $x_1, \dots, x_n$, we obtain the probability of the graph.

A particular random graph model is not uniquely defined by a graphon: it is clear from Equation 1 that two graphons $W_1$ and $W_2$ which are equal almost everywhere (i.e., differ on a set of measure zero) define the same distribution on graphs. In fact, the distribution defined by $W$ is unchanged by "relabelings" of $W$'s nodes. More formally, if $\Sigma$ is the sigma-algebra of Lebesgue measurable subsets of $[0,1]$ and $\mu$ is the Lebesgue measure, we say that a relabeling function $\varphi : ([0,1], \Sigma) \to ([0,1], \Sigma)$ is measure preserving if for any measurable set $A \in \Sigma$, $\mu(\varphi^{-1}(A)) = \mu(A)$. We define the relabeled graphon $W^\varphi$ by $W^\varphi(x, y) = W(\varphi(x), \varphi(y))$. By analogy with finite graphs, we say that graphons $W_1$ and $W_2$ are weakly isomorphic if they are equivalent up to relabeling, i.e., if there exist measure preserving maps $\varphi_1$ and $\varphi_2$ such that $W_1^{\varphi_1} = W_2^{\varphi_2}$ almost everywhere. Weak isomorphism is an equivalence relation, and most of the important properties of a graphon in fact belong to its equivalence class. For instance, a powerful result of [15] is that two graphons define the same random graph model if and only if they are weakly isomorphic.

Figure 1: (a) A graphon $W$. (b) A graphon $W'$ weakly isomorphic to $W$. (c) An instance of a graph adjacency matrix sampled from $W$.

An example of a graphon $W$ is shown in Figure 1a. It is conventional to plot the graphon as one typically plots an adjacency matrix: with the origin in the upper-left corner. Darker shades correspond to higher values of $W$. Figure 1b depicts a graphon $W'$ which is weakly isomorphic to $W$. In particular, $W'$ is the relabeling of $W$ by the measure preserving transformation $\varphi(x) = 2x \bmod 1$. As such, the graphons shown in Figures 1a and 1b define the same distribution on graphs. Figure 1c shows the adjacency matrix $A$ of a graph of size $n = 50$
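The sampling procedure described above is straightforward to implement. The following is a minimal sketch, assuming a graphon given as a Python function; the example graphon at the bottom is our own illustration, not one from the paper.

```python
import numpy as np

def sample_graph(W, n, rng=None):
    """Sample an n-node graph from a graphon W : [0,1]^2 -> [0,1].

    Draw latent nodes x_1..x_n uniformly at random, then admit each edge
    (i, j), i < j, independently with probability W(x_i, x_j). Self-edges
    are not sampled, per the convention above.
    """
    rng = rng or np.random.default_rng()
    x = rng.uniform(size=n)                     # latent node labels
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.uniform() < W(x[i], x[j]):
                A[i, j] = A[j, i] = 1
    return A, x     # note: x is latent and unobservable from A alone

# Example graphon (our own): high weight near the diagonal, low elsewhere.
W = lambda u, v: 0.9 if abs(u - v) < 0.3 else 0.1
A, x = sample_graph(W, 50)
```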
More precisely, we say that a set of nodes A is internally connected ? or, from now on, just connected ? at level ? if for every pair of nodes in A there is a path between them such that every node along the path is also in A, and the weight of every edge in the path is at least ?. Equivalently, A is connected at level ? if and only if for every partitioning of A into disjoint, non-empty sets A1 and A2 there is an edge of weight ? or greater between A1 and A2 . The clusters at level ? are then the largest connected components at level ?. A graphon is, in a sense, an in?nite weighted graph, and we will de?ne the clusters of a graphon using the example above as motivation. In doing so, we must be careful to make our notion robust to changes of the graphon on a set of zero measure, as such changes do not a?ect the graph distribution de?ned by the graphon. We base our de?nition on that of Janson [13], who de?ned what it means for a graphon to be connected as a whole. We extend the de?nition in [13] to speak of the connectivity of subsets of the graphon?s nodes at a particular height. Our de?nition is directly analogous to the notion of internal connectedness in ?nite graphs. De?nition 1 (Connectedness). Let W be a graphon, and let A ? [0, 1] be a set of positive measure. We say that A is disconnected at level ? if there exists a measurable S ? A such that 0 < ?(S ) < ?(A), and W < ? almost everywhere on S ? (A \ S ). Otherwise, we say that A is connected at level ?. We now identify the clusters of a graphon; as in the ?nite case, we will frame our de?nition in terms of maximally-connected components. We begin by gathering all subsets of [0, 1] which should belong to some cluster at level ?. Naturally, if a set is connected at level ?, it should be in a cluster at level ?; for technical reasons, we will also say that a set which is connected at all levels ?? < ? (though perhaps not at ?) should be contained in a cluster at level ?, as well. That is, for any ?, the collection A? of sets which should be contained in some cluster at level ? is A? = { A ? ? : ?(A) > 0 and A is connected at every level ?? < ?}. Now suppose A1 , A2 ? A? , and that there is a set A ? A? such that A ? A1 ? A2 . Naturally, the cluster to which A belongs should also contain A1 and A2 , since both are subsets of A. We will therefore consider A1 and A2 to be equivalent, in the sense that they should be contained in the same cluster at level ?. More formally, we de?ne a relation ?? on A? by A1 ?? A2 ?? ?A ? A? s.t. A ? A1 ? A2 . It can be veri?ed that ?? is an equivalence relation on A? ; see Claim 9 in Appendix B. Each equivalence class A in the quotient space A? /?? . consists of connected sets which should intuitively be clustered together at level ?. Naturally, we will de?ne the clusters to be the largest elements of each class; in some sense, these are the maximally-connected components at level ?. More precisely, suppose A is such an equivalence class. It is clear that in general no single member A ? A can contain all other members of A , since adding a null set (i.e., a set of measure zero) to A results in a larger set A? which is nevertheless still a member of A . However, we can ?nd a member A? ? A which contains all but a null set of every other set in A . More formally, we say that A? is an essential maximum of the class A if A? ? A and for every A ? A , ?(A \ A? ) = 0. A? is of course not unique, but it is unique up to a null set; i.e., for any two essential maxima A1 , A2 of A , we have ?(A1 ? A2 ) = 0. 
We will write the set of essential maxima of $\mathcal{A}$ as $\operatorname{ess\,max} \mathcal{A}$; the fact that the essential maxima are well-defined is proven in Claim 10 in Appendix B. We then define clusters as the maximal members of each equivalence class in $\mathbb{A}_\lambda / \sim_\lambda$:

Definition 2 (Clusters). The set of clusters at level $\lambda$ in $W$, written $\mathcal{C}_W(\lambda)$, is defined to be the countable collection $\mathcal{C}_W(\lambda) = \{ \operatorname{ess\,max} \mathcal{A} : \mathcal{A} \in \mathbb{A}_\lambda / \sim_\lambda \}$.

Note that a cluster $C$ of a graphon is not a subset of the unit interval per se, but rather an equivalence class of subsets which differ only by null sets. It is often possible to treat clusters as sets rather than equivalence classes, and we may write $\mu(C)$, $C \subseteq C'$, etc., without ambiguity. In addition, if $\varphi : [0,1] \to [0,1]$ is a measure preserving transformation, then $\varphi^{-1}(C)$ is well-defined.

For a concrete example of our notion of a cluster, consider the graphon $W$ depicted in Figure 1a. $A$, $B$, and $C$ represent sets of the graphon's nodes. By our definitions there are three clusters at level $\lambda_3$: $A$, $B$, and $C$. Clusters $A$ and $B$ merge into a cluster $A \cup B$ at level $\lambda_2$, while $C$ remains a separate cluster. Everything is joined into a cluster $A \cup B \cup C$ at level $\lambda_1$.

We have taken care to define the clusters of a graphon in such a way as to be robust to changes of measure zero to the graphon itself. In fact, clusters are also robust to measure preserving transformations. The proof of this result is non-trivial, and comprises Appendix C.

Claim 1. Let $W$ be a graphon and $\varphi$ a measure preserving transformation. Then $C$ is a cluster of $W^\varphi$ at level $\lambda$ if and only if there exists a cluster $C'$ of $W$ at level $\lambda$ such that $C = \varphi^{-1}(C')$.

Cluster trees and mergeons. The set of all clusters of a graphon at any level has hierarchical structure, in the sense that, given any pair of distinct clusters $C_1$ and $C_2$, either one is "essentially" contained within the other, i.e., $C_1 \subseteq C_2$ or $C_2 \subseteq C_1$, or they are "essentially" disjoint, i.e., $\mu(C_1 \cap C_2) = 0$, as is proven by Claim 8 in Appendix B. Because of this hierarchical structure, we call the set $\mathcal{C}_W$ of all clusters from any level of the graphon $W$ the graphon cluster tree of $W$. It is this tree that we hope to recover by applying a graph clustering algorithm to a graph sampled from $W$.

We may naturally speak of the height at which pairs of distinct clusters merge in the cluster tree. For instance, let $C_1$ and $C_2$ be distinct clusters of $\mathcal{C}$. We say that the merge height of $C_1$ and $C_2$ is the level $\lambda$ at which they are joined into a single cluster, i.e., $\max\{\lambda : C_1 \cup C_2 \subseteq C \text{ for some } C \in \mathcal{C}(\lambda)\}$. However, while the merge height of clusters is well-defined, the merge height of individual points is not.
This is because the cluster tree is not a collection of sets, but rather a collection of equivalence classes of sets, and so a point does not belong to any one cluster more than any other. Note that this is distinct from the classical density case considered in [12], [9], and [10], where the merge height of any pair of points is well-defined.

Nevertheless, consider a measurable function $M : [0,1]^2 \to [0,1]$ which assigns a merge height to every pair of points. While the value of $M$ on any given pair is arbitrary, the value of $M$ on sets of positive measure is constrained. Intuitively, if $C$ is a cluster at level $\lambda$, then we must have $M \ge \lambda$ almost everywhere on $C \times C$. If $M$ satisfies this constraint for every cluster $C$, we call $M$ a mergeon for $\mathcal{C}$, as it is a graphon which determines a particular choice for the merge heights of every pair of points in $[0,1]$. More formally:

Definition 3 (Mergeon). Let $\mathcal{C}$ be a cluster tree. A mergeon¹ of $\mathcal{C}$ is a graphon $M$ such that for all $\lambda \in [0,1]$, $M^{-1}([\lambda, 1]) = \bigcup_{C \in \mathcal{C}(\lambda)} C \times C$, where $M^{-1}([\lambda, 1]) = \{(x, y) \in [0,1]^2 : M(x, y) \ge \lambda\}$.

Figure 2: (a) Cluster tree $\mathcal{C}_W$ of $W$. (b) Mergeon $M$ of $\mathcal{C}_W$.

An example of a mergeon and the cluster tree it represents is shown in Figure 2. In fact, the cluster tree depicted is that of the graphon $W$ from Figure 1a. The mergeon encodes the heights at which clusters $A$, $B$, and $C$ merge. In particular, the fact that $M = \lambda_2$ everywhere on $A \times B$ represents the merging of $A$ and $B$ at level $\lambda_2$ in $W$.

It is clear that in general there is no unique mergeon representing a graphon cluster tree; however, the above definition implies that two mergeons representing the same cluster tree are equal almost everywhere. Additionally, we have the following two claims, whose proofs are in Appendix B.

Claim 2. Let $\mathcal{C}$ be a cluster tree, and suppose $M$ is a mergeon representing $\mathcal{C}$. Then $C \in \mathcal{C}(\lambda)$ if and only if $C$ is a cluster in $M$ at level $\lambda$. In other words, the cluster tree of $M$ is also $\mathcal{C}$.

Claim 3. Let $W$ be a graphon and $M$ a mergeon of the cluster tree of $W$. If $\varphi$ is a measure preserving transformation, then $M^\varphi$ is a mergeon of the cluster tree of $W^\varphi$.

¹ The definition given here involves a slight abuse of notation. For a precise (but more technical) version, see Appendix A.2.

4 Notions of consistency

We have so far defined the sense in which a graphon has hierarchical cluster structure. We now turn to the problem of determining whether a clustering algorithm is able to recover this structure when applied to a graph sampled from a graphon. Our approach is to define a distance between the infinite graphon cluster tree and a finite clustering. We will then define consistency by requiring that a consistent method converge to the graphon cluster tree in this distance for all inputs minus a set of vanishing probability.

Merge distortion. A hierarchical clustering $C$ of a set $S$ (or, from now on, just a clustering of $S$) is a hierarchical collection of subsets of $S$ such that $S \in C$ and for all $C, C' \in C$, either $C \subseteq C'$, $C' \subseteq C$, or $C \cap C' = \emptyset$. Suppose $C$ is a clustering of a finite set $S$ consisting of graphon nodes, i.e., $S \subset [0,1]$. How might we measure the distance between this clustering and a graphon cluster tree $\mathcal{C}$? Intuitively, the two trees are close if every pair of points in $S$ merges in $C$ at about the same level as they merge in $\mathcal{C}$. But this informal description faces two problems. First, $\mathcal{C}$ is a collection of equivalence classes of sets, and so the height at which any pair of points merges in $\mathcal{C}$ is not defined. Recall, however, that the cluster tree has an alternative representation as a mergeon. A mergeon does define a merge height for every pair of nodes in a graphon, and thus provides a solution to this first issue. Second, the clustering $C$ is not equipped with a height function, and so the height at which any pair of points merges in $C$ is also undefined. Following [10], our approach is to induce a merge height function on the clustering using the mergeon in the following way:

Definition 4 (Induced merge height). Let $M$ be a mergeon, and suppose $S$ is a finite subset of $[0,1]$. Let $C$ be a clustering of $S$. The merge height function on $C$ induced by $M$ is defined by $\hat{M}_C(s, s') = \min_{u, v \in C(s, s')} M(u, v)$ for every $(s, s') \in S \times S$, where $C(s, s')$ denotes the smallest cluster $C \in C$ which contains both $s$ and $s'$.
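Definition 4 (and the merge distortion of Definition 5, stated next) can be computed directly when the mergeon is available as a function. The following is a minimal sketch, assuming the clustering is given as a list of index sets that includes the full set $S$; names are our own illustration.

```python
import numpy as np

def induced_merge_heights(M, S, clusters):
    """Induced merge height of Definition 4. `M(u, v)` is the mergeon,
    `S` the sampled points in [0,1], `clusters` a list of index sets
    (must include the full set, so a common cluster always exists)."""
    by_size = sorted(clusters, key=len)         # smallest common cluster first
    n = len(S)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            Cab = next(c for c in by_size if a in c and b in c)
            H[a, b] = min(M(S[u], S[v]) for u in Cab for v in Cab)
    return H

def merge_distortion(M, S, clusters):
    """Merge distortion: max over distinct pairs of |M(s, s') - M_hat(s, s')|."""
    H = induced_merge_heights(M, S, clusters)
    n = len(S)
    return max(abs(M(S[a], S[b]) - H[a, b])
               for a in range(n) for b in range(n) if a != b)
```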
We measure the distance between a clustering $C$ and the cluster tree $\mathcal{C}$ using the merge distortion:

Definition 5 (Merge distortion). Let $M$ be a mergeon, $S$ a finite subset of $[0,1]$, and $C$ a clustering of $S$. The merge distortion is defined by $d_S(M, \hat{M}_C) = \max_{s, s' \in S,\, s \neq s'} |M(s, s') - \hat{M}_C(s, s')|$.

Defining the induced merge height and merge distortion in this way leads to an especially meaningful interpretation of the merge distortion. In particular, if the merge distortion between $C$ and $\mathcal{C}$ is $\epsilon$, then any two clusters of $\mathcal{C}$ which are separated at level $\lambda$ but merge below level $\lambda - \epsilon$ are correctly separated in the clustering $C$. A similar result guarantees that a cluster in $\mathcal{C}$ is connected in $C$ to within $\epsilon$ of the correct level. For a precise statement of these results, see Claim 5 in Appendix A.4.

The label measure. We will use the merge distortion to measure the distance between $C$, a hierarchical clustering of a graph, and $\mathcal{C}$, the graphon cluster tree. Recall, however, that the nodes of a graph sampled from a graphon have integer labels. That is, $C$ is a clustering of $[n]$, and not of a subset of $[0,1]$. Hence, in order to apply the merge distortion, we must first relabel the nodes of the graph, placing them in direct correspondence with nodes of the graphon, i.e., points in $[0,1]$.

Recall that we sample a graph of size $n$ from a graphon $W$ by first drawing $n$ points $x_1, \dots, x_n$ uniformly at random from the unit interval. We then generate a graph on node set $[n]$ by connecting nodes $i$ and $j$ with probability $W(x_i, x_j)$. However, the nodes of the sampled graph are not labeled by $x_1, \dots, x_n$, but rather by the integers $1, \dots, n$. Thus we may think of $x_i$ as being the "true" latent label of node $i$.

In general the latent node labeling is not recoverable from data, as is demonstrated by the figure to the right. We might suppose that the graph shown is sampled from the graphon above it, and that node 1 corresponds to $a$, node 2 to $b$, node 3 to $c$, and node 4 to $d$. However, it is just as likely that node 4 corresponds to $d'$, and so neither labeling is more "correct". It is clear, though, that some labelings are less likely than others. For instance, the existence of the edge $(1, 2)$ makes it impossible that 1 corresponds to $a$ and 2 to $c$, since $W(a, c)$ is zero.

Therefore, given a graph $G = ([n], E)$ sampled from a graphon, there are many possible relabelings of $G$ which place its nodes in correspondence with nodes of the graphon, but some are more likely than others. The merge distortion depends on which labeling of $G$ we assume, but, intuitively, a good clustering of $G$ will have small distortion with respect to highly probable labelings, and only have large distortion on improbable labelings. Our approach is to assign a probability to every pair $(G, S)$ of a graph and possible labeling. We will thus be able to measure the probability mass of the set of
If G is ?xed, integrating LW (S | G) over all S ? [0, 1]n gives the probability of G under the model de?ned by W. We may now formally de?ne our notion of consistency. First, some notation: If C is a clustering of [n] and S = (x1 , . . . , xn ), write C ? S to denote the relabeling of C by S , in which i is replaced by xi in every cluster. Then if f is a hierarchical graph clustering method, f (G) ? S is a clustering of S , ? f (G)?S denotes the merge function induced on f (G) ? S by M. and M De?nition 6 (Consistency). Let W be a graphon and M be a mergeon of W. A hierarchical graph clustering method f is said ({ to be a consistent estimator of }) the graphon cluster tree of W if for any ? f (G)?S ) > ? ? 0. ?xed ? > 0, as n ? ?, ?W,n (G, S ) : dS (M, M The choice of mergeon for the graphon W does not a?ect consistency, as any two mergeons of the same graphon di?er on a set of measure zero. Furthermore, consistency is with respect to the random graph model, and not to any particular graphon representing the model. The following claim, the proof of which is in Appendix B, makes this precise. Claim 4. Let W be a graphon and ? a measure preserving transformation. A clustering method f is a consistent estimator of the graphon cluster tree of W if and only if it is a consistent estimator of the graphon cluster tree of W ? . Consistency and the blockmodel. If a graph clustering method is consistent in the sense de?ned above, it is also consistent in the stochastic blockmodel; i.e., it ensures strict recovery of the communities with high probability as the size of the graphs grow large. For instance, suppose W is a stochastic blockmodel graphon with ? along the block-diagonal and ? everywhere else. W has two clusters at level ?, merging into one cluster at level ?. When the merge distortion between the graphon cluster tree and a clustering is less than ? ? ?, which will eventually be the case with high probability if the method is consistent, the two clusters are totally disjoint in C; this implication is made precise by Claim 5 in Appendix A.4. 5 Consistent algorithms We now demonstrate that consistent clustering methods exist. We present two results: First, we show that any method which is capable of consistently estimating the probability of each edge in a random graph leads to a consistent clustering method. We then analyze a modi?cation of an existing algorithm to show that it consistently estimates edge probabilities. As a corollary, we identify a graph clustering method which satis?es our notion of consistency. Our results will be for graphons which are piecewise Lipschitz (or weakly isomorphic to a piecewise Lipschitz graphon): De?nition 7 (Piecewise Lipschitz). We say that B = {B1 , . . . , Bk } is a block partition if each Bi is an open,?half-open, or closed interval in [0, 1] with positive measure, Bi ? B j is empty whenever i ? j, and B = [0, 1]. We say that a graphon W is piecewise c-Lipschitz if there exists a set of blocks B such that for any (x, y) and (x? , y? ) in Bi ? B j , |W(x, y) ? W(x? , y? )| ? c(|x ? x? | + |y ? y? |). Our ?rst result concerns methods which are able to consistently estimate edge probabilities in the following sense. Let S = (x1 , . . . , xn ) be an ordered set of n uniform random variables drawn from the unit interval. Fix a graphon W, and let P be the random matrix whose i j entry is given by W(xi , xj ). We say that P is the random edge probability matrix. Assuming that W has structure, it is possible to estimate P from a single graph sampled from W. 
We say that an estimator $\hat{P}$ of $P$ is consistent in max-norm if, for any $\epsilon > 0$, $\lim_{n \to \infty} P(\max_{i \neq j} |P_{ij} - \hat{P}_{ij}| > \epsilon) = 0$. The following nontrivial theorem, whose proof comprises Appendix D, states that any estimator which is consistent in this sense leads to a consistent clustering algorithm:

Theorem 1. Let $W$ be a piecewise $c$-Lipschitz graphon. Let $\hat{P}$ be a consistent estimator of $P$ in max-norm. Let $f$ be the clustering method which performs single-linkage clustering using $\hat{P}$ as a similarity matrix. Then $f$ is a consistent estimator of the graphon cluster tree of $W$.

Estimating the matrix of edge probabilities has been a direction of recent research; however, we are only aware of results which show consistency in mean squared error, that is, the literature contains estimators for which $\frac{1}{n^2} \|\hat{P} - P\|_F^2$ tends to zero in probability. One practical method is the neighborhood smoothing algorithm of [21]. The method constructs for each node $i$ in the graph $G$ a neighborhood of nodes $N_i$ which are similar to $i$, in the sense that for every $i' \in N_i$, the corresponding column $A_{i'}$ of the adjacency matrix is close to $A_i$ in a particular distance. $A_{ij}$ is clearly not a good estimate for the probability of the edge $(i, j)$, as it is either zero or one; however, if the graphon is piecewise Lipschitz, the average of $A_{i'j}$ over $i' \in N_{ij}$ will intuitively tend to the true probability. Like others, the method of [21] is proven to be consistent in mean squared error. Since Theorem 1 requires consistency in max-norm, we analyze a slight modification of this algorithm and show that it consistently estimates $P$ in this stronger sense. The technical details are in Appendix E.

Algorithm 1 Clustering by neighborhood smoothing
Require: Adjacency matrix $A$, $C \in (0, 1)$
  % Step 1: Compute the estimated edge probability matrix $\hat{P}$ using the
  % neighborhood smoothing algorithm based on [21]
 1: $n \leftarrow \operatorname{Size}(A)$
 2: $h \leftarrow C \sqrt{(\log n)/n}$
 3: for $(i, j) \in [n] \times [n]$, $i \neq j$ do
 4:   $\tilde{A} \leftarrow A$ after setting row/column $j$ to zero
 5:   for $i' \in [n] \setminus \{i, j\}$ do
 6:     $d_j(i, i') \leftarrow \max_{k \neq i, i', j} |(\tilde{A}^2/n)_{ik} - (\tilde{A}^2/n)_{i'k}|$
 7:   end for
 8:   $q_{ij} \leftarrow h$-th quantile of $\{d_j(i, i') : i' \neq i, j\}$
 9:   $N_{ij} \leftarrow \{i' \neq i, j : d_j(i, i') \le q_{ij}\}$
10: end for
11: for $(i, j) \in [n] \times [n]$ do
12:   $\hat{P}_{ij} \leftarrow \frac{1}{2} \left( \frac{1}{|N_{ij}|} \sum_{i' \in N_{ij}} A_{i'j} + \frac{1}{|N_{ji}|} \sum_{j' \in N_{ji}} A_{ij'} \right)$
13: end for
  % Step 2: Cluster $\hat{P}$ with single linkage
14: $C \leftarrow$ the single linkage clusters of $\hat{P}$
15: return $C$

Theorem 2. If the graphon $W$ is piecewise Lipschitz, the modified neighborhood smoothing algorithm in Appendix E is a consistent estimator of $P$ in max-norm.

As a corollary, we identify a practical graph clustering algorithm which is a consistent estimator of the graphon cluster tree. The algorithm is shown in Algorithm 1, and details are in Appendix E.2. Appendix F contains experiments in which the algorithm is applied to real and synthetic data.

Corollary 1. If the graphon $W$ is piecewise Lipschitz, Algorithm 1 is a consistent estimator of the graphon cluster tree of $W$.

6 Discussion

We have presented a consistency framework for clustering in the graphon model and demonstrated that a practical clustering algorithm is consistent. We now identify two interesting directions of future research. First, it would be interesting to consider the extension of our framework to sparse random graphs; many real-world networks are sparse, and the graphon generates dense graphs. Recently, however, sparse models which extend the graphon have been proposed; see [7, 6]. It would be interesting to see what modifications are necessary to apply our framework in these models.
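A simplified sketch of Algorithm 1 follows, using SciPy for the single-linkage step. It departs from the pseudocode above in one assumed simplification: row/column $j$ is not zeroed out before computing the dissimilarities, so this is an illustration of the idea rather than the exact modified estimator analyzed in Appendix E.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def neighborhood_smoothing(A, C=1.0):
    """Sketch of Step 1: estimate edge probabilities by averaging adjacency
    columns over neighborhoods of nodes with similar columns (after [21])."""
    n = A.shape[0]
    h = C * np.sqrt(np.log(n) / n)      # quantile level; assumes h < 1
    A2 = (A @ A) / n
    # D[i, i'] = max_k |(A^2/n)_{ik} - (A^2/n)_{i'k}|
    D = np.max(np.abs(A2[:, None, :] - A2[None, :, :]), axis=2)
    P_hat = np.zeros((n, n))
    for i in range(n):
        q = np.quantile(np.delete(D[i], i), h)
        Ni = [ip for ip in range(n) if ip != i and D[i, ip] <= q]
        P_hat[i] = A[Ni].mean(axis=0)   # average neighbors' adjacency rows
    return (P_hat + P_hat.T) / 2        # symmetrize, as in Step 1

def cluster_tree(A):
    """Step 2: single linkage on P_hat treated as a similarity matrix."""
    P_hat = neighborhood_smoothing(A)
    dist = 1.0 - P_hat                  # similarity -> dissimilarity
    condensed = dist[np.triu_indices_from(dist, k=1)]
    return linkage(condensed, method='single')
```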
6 Discussion

We have presented a consistency framework for clustering in the graphon model and demonstrated that a practical clustering algorithm is consistent. We now identify two interesting directions of future research. First, it would be interesting to consider the extension of our framework to sparse random graphs; many real-world networks are sparse, and the graphon generates dense graphs. Recently, however, sparse models which extend the graphon have been proposed; see [7, 6]. It would be interesting to see what modifications are necessary to apply our framework in these models.

Second, it would be interesting to consider alternative ways of defining the ground truth clustering of a graphon. Our construction is motivated by interpreting the graphon W not only as a random graph model, but also as a similarity function, which may not be desirable in certain settings. For example, consider a "bipartite" graphon W, which is zero along the block-diagonal and one elsewhere. The cluster tree of W consists of a single cluster at all levels, whereas the ideal bipartite clustering has two clusters. Therefore, consider applying a transformation $S$ to W which maps it to a "similarity" graphon. The goal of clustering then becomes the recovery of the cluster tree of $S(W)$ given a random graph sampled from W. For instance, let $S : W \mapsto W^2$, where $W^2$ is the operator square of the bipartite graphon W. The cluster tree of $S(W)$ has two clusters at all positive levels, and so represents the desired ground truth. In general, any such transformation $S$ leads to a different clustering goal. We speculate that, with minor modification, the framework herein can be used to prove consistency results in a wide range of graph clustering settings.

Acknowledgements. This work was supported by NSF grant IIS-1550757.

References

[1] Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall. Exact recovery in the stochastic block model. IEEE Trans. Inf. Theory, 62(1):471–487, 2015.
[2] Edoardo M. Airoldi, Thiago B. Costa, and Stanley H. Chan. Stochastic blockmodel approximation of a graphon: Theory and consistent estimation. In Advances in Neural Information Processing Systems 26, pages 692–700. Curran Associates, Inc., 2013.
[3] Robert B. Ash and Catherine Doléans-Dade. Probability and Measure Theory. Academic Press, 2000.
[4] Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, and Aarti Singh. Noise thresholds for spectral clustering. In Advances in Neural Information Processing Systems 24, pages 954–962. Curran Associates, Inc., 2011.
[5] C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and testing. Adv. Math., 219(6):1801–1851, December 2008.
[6] Christian Borgs, Jennifer T. Chayes, Henry Cohn, and Nina Holden. Sparse exchangeable graphs and their limits via graphon processes. arXiv:1601.07134, January 2016.
[7] François Caron and Emily B. Fox. Sparse graphs using exchangeable random measures. arXiv:1401.1137, January 2014.
[8] Stanley Chan and Edoardo Airoldi. A consistent histogram estimator for exchangeable graph models. In Proceedings of the 31st International Conference on Machine Learning, pages 208–216, 2014.
[9] Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for the cluster tree. In Advances in Neural Information Processing Systems, pages 343–351, 2010.
[10] Justin Eldridge, Mikhail Belkin, and Yusu Wang. Beyond Hartigan consistency: Merge distortion metric for hierarchical clustering. In Proceedings of the 28th Conference on Learning Theory, pages 588–606, 2015.
[11] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proc. Natl. Acad. Sci. U.S.A., 99(12):7821–7826, June 2002.
[12] J. A. Hartigan. Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 76(374):388–394, June 1981.
[13] Svante Janson. Connectedness in graph limits. arXiv:0802.3795, February 2008.
[14] Samory Kpotufe and Ulrike von Luxburg. Pruning nearest neighbor cluster trees. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 225–232, New York, NY, USA, 2011. ACM.
[15] László Lovász. Large Networks and Graph Limits, volume 60. American Mathematical Society, 2012.
[16] László Lovász and Balázs Szegedy. Limits of dense graph sequences. J. Combin. Theory Ser. B, 96(6):933–957, November 2006.
[17] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529–537, October 2001.
[18] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Ann. Stat., 39(4):1878–1915, August 2011.
[19] I. Steinwart. Adaptive density level set clustering. In Proceedings of the 24th Conference on Learning Theory, pages 703–737, 2011.
[20] Patrick J. Wolfe and Sofia C. Olhede. Nonparametric graphon estimation. arXiv:1309.5936, September 2013.
[21] Yuan Zhang, Elizaveta Levina, and Ji Zhu. Estimating network edge probabilities by neighborhood smoothing. arXiv:1509.08588, September 2015.
Probability Estimation from a Database Using a Gibbs Energy Model

John W. Miller
Microsoft Research (9/1051)
One Microsoft Way
Redmond, WA 98052

Rodney M. Goodman
Dept. of Electrical Engineering (116-81)
California Institute of Technology
Pasadena, CA 91125

Abstract

We present an algorithm for creating a neural network which produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network tied or outperformed published results on a variety of databases.

1 INTRODUCTION

This paper addresses the problem of modeling a discrete database. The database is viewed as a collection of independent samples from a probability distribution. This distribution is called the underlying distribution. In contrast, the empirical distribution is the distribution obtained if you take independent random samples from the database (with replacement). The task of creating a probability model can be separated into two parts. The first part is the problem of choosing statistics of the samples which are expected to accurately represent the underlying distribution. The second part is the problem of choosing a model which is consistent with these statistics. Under reasonable assumptions, the optimal solution to the second problem is the method of Maximum Entropy. For a broad class of statistics, the Maximum Entropy solution is a Gibbs probability distribution (Slepian, 1972).

In this paper, the background and theoretical result of a transformation from joint statistics to a Gibbs energy (or network weight) representation is presented. We then outline the experimental test results of an efficient algorithm implementing this transform without using gradient descent iteration.

2 BACKGROUND

Define a set $T$ to be the set of attributes (or fields) in a database. For a particular entry (or record) of the database, define the associated set of attribute values to be the configuration $w$ of the attributes. The set of attribute values associated with a subset $b \subseteq T$ is called a sub-configuration $w_b$. Using this set notation the Gibbs probability distribution may be defined:
$$p(w) = Z^{-1} \cdot e^{V_T(w)} \qquad (1)$$
where
$$V_T(w) = \sum_{b \subseteq T} J_b(w). \qquad (2)$$
The function $V$ is called the energy. The function $J_b$, called the potential function, defines a real value for every sub-configuration of the set $b$. $Z$ is the normalizing constant that makes the sum of probabilities of all configurations equal to unity.

Prior work in the neural network literature using the Gibbs distribution (such as the Boltzmann Machine) has primarily used second order models ($J_b = 0$ if $|b| > 2$) (Hinton, 1986). By adding new attributes not in the original database, second order potentials have been used to model complex distributions. The work presented in this paper, in contrast, uses higher order potentials to model complex probability distributions. We begin by considering the case where every potential of every order is used to model the distribution. The Principle of Inclusion-Exclusion from set theory states that the following two equations are equivalent:
$$g(A) = \sum_{b \subseteq A} f(b) \qquad (3)$$
$$f(A) = \sum_{b \subseteq A} (-1)^{|A - b|} g(b). \qquad (4)$$
The method of inverting an equation from the form of (3) into one in the form of (4) is a special case of Möbius Inversion.
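The inversion in equations (3) and (4) can be checked mechanically on small sets. The sketch below, with names of our own choosing, recovers $f$ from $g$ for a toy choice of $f$:

```python
# A small sketch of the Mobius inversion in equations (3)-(4): given
# g(A) = sum_{b subset of A} f(b), recover f(A) via the alternating sum.
from itertools import chain, combinations

def subsets(A):
    A = list(A)
    return [frozenset(s) for s in chain.from_iterable(
        combinations(A, r) for r in range(len(A) + 1))]

def mobius_invert(g, A):
    # f(A) = sum over b subset of A of (-1)^{|A - b|} g(b), equation (4)
    return sum((-1) ** (len(A) - len(b)) * g(b) for b in subsets(A))

# Toy example: f counts elements; g aggregates f over all subsets.
f = lambda b: len(b)
g = lambda A: sum(f(b) for b in subsets(A))
A = frozenset({1, 2, 3})
assert mobius_invert(g, A) == f(A)   # the inversion recovers f exactly
```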
Clifford-Hammersley (Kindermann, 1980) used this relation to invert formula (2):
$$J_A(w) = \sum_{b \subseteq A} (-1)^{|A - b|} V_b(w). \qquad (5)$$
Define the probability of a sub-configuration $p(w_b)$ to be the probability that the attributes in set $b$ take on the values defined in the configuration $w$. Using (1) to describe the probability distribution of sub-configurations, equation (5) can be written:
$$J_A(w) = \sum_{b \subseteq A} (-1)^{|A - b|} \ln(p(w_b)). \qquad (6)$$

3 A TRANSFORMATION TO GIBBS POTENTIALS

Equation (6) provides a technique for modeling distributions by potential functions rather than directly through the observable joint statistics of sets of attributes. If the model is truncated by setting high order potentials to zero, then the energy model becomes an estimate of the model obtained by collecting the joint statistics, rather than an exact equivalent. If equation (6) is used directly, the error in the energy due to setting all potentials of order $d$ to zero grows quickly with $d$. For this reason (6) must be normalized if it is going to be used in a truncated modeling scheme. A normalized version of equation (2) that corrects for the unequal number of potentials of different orders is:
$$V_A(w) = \sum_{b \subseteq A} \binom{|A| - 1}{|b| - 1}^{-1} J_b(w). \qquad (7)$$
This equation can be inverted to show the surprising result, a weight associated with $w_A$:
$$J_A(w) = \ln(p_A(w)) - (|A| - 1)^{-1} \sum_{t \in A,\; b = A - t} \ln(p_b(w)). \qquad (8)$$
For example, with three attribute values $\{x, y, z\}$, the following potentials are defined:
$$J_{\{x\}} = \ln(p(x)), \qquad J_{\{y\}} = \ln(p(y)), \qquad J_{\{z\}} = \ln(p(z)),$$
$$J_{\{xy\}} = \ln\!\left(\frac{p(xy)}{p(x)p(y)}\right), \qquad J_{\{yz\}} = \ln\!\left(\frac{p(yz)}{p(y)p(z)}\right), \qquad J_{\{xz\}} = \ln\!\left(\frac{p(xz)}{p(x)p(z)}\right),$$
$$J_{\{xyz\}} = \ln\!\left(\frac{p(xyz)}{\sqrt{p(xy)p(yz)p(xz)}}\right).$$
For a given database sample, a potential is activated if all of its defining attribute values are true for the sample. The weighted sum of all activated potentials recovers an approximation of the probability of the database sample. If all potentials of every order have been used to create the model, then this approximation is exactly the probability of the sample in the empirical distribution. The correct weighting is given by equation (7). For example, it is easily verified that:
$$\ln(p(xyz)) = \binom{2}{2}^{-1} J_{\{xyz\}} + \binom{2}{1}^{-1}\big(J_{\{xy\}} + J_{\{xz\}} + J_{\{yz\}}\big) + \binom{2}{0}^{-1}\big(J_{\{x\}} + J_{\{y\}} + J_{\{z\}}\big).$$
The Gibbs model truncated to second order potentials would estimate the probability in this example by:
$$\ln(p(xyz)) \approx \binom{2}{1}^{-1}\big(J_{\{xy\}} + J_{\{xz\}} + J_{\{yz\}}\big) + \binom{2}{0}^{-1}\big(J_{\{x\}} + J_{\{y\}} + J_{\{z\}}\big) = \ln\sqrt{p(xy)p(xz)p(yz)}.$$
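Equations (7) and (8) translate directly into code. The sketch below uses a dictionary mapping each nonempty subset of attribute values to its (empirical) probability; this representation and the function names are ours.

```python
# A sketch of equations (7) and (8): converting sub-configuration
# probabilities into potentials, and recombining potentials into the energy.
import math
from itertools import combinations

def potential(A, p):
    """Equation (8): J_A = ln p_A - (|A|-1)^{-1} sum_{t in A} ln p_{A-{t}}."""
    A = frozenset(A)
    if len(A) == 1:
        return math.log(p[A])
    return math.log(p[A]) - sum(math.log(p[A - {t}]) for t in A) / (len(A) - 1)

def energy(A, p, max_order=None):
    """Equation (7), optionally truncated at a maximum potential order."""
    A = frozenset(A)
    n = len(A)
    D = n if max_order is None else max_order
    total = 0.0
    for i in range(1, D + 1):
        weight = 1.0 / math.comb(n - 1, i - 1)   # binomial normalization
        for b in combinations(sorted(A), i):
            total += weight * potential(b, p)
    return total
# With max_order=None the model is exact: exp(energy(A, p)) equals p[A].
# Truncating at max_order=2 gives the second-order estimate shown above.
```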
4 PROOF OF THE INVERSION FORMULA

Theorem: Let $T$ be a finite set. Each element of $T$ will be called an attribute. Each attribute can take on one of a finite set of states called attribute values. A collection of attribute values for every element of $T$ is called a configuration $w$. For all $A \subseteq T$ (including both the empty set $A = \emptyset$ and the full set $A = T$), let $V_A(w)$ and $J_A(w)$ be functions mapping the states of the elements of $A$ to the real numbers. Define $\binom{m}{n} = m!/((m - n)!\, n!)$ to be "m choose n." Let $V_\emptyset(w) = 0$, $J_\emptyset(w) = 0$, and let $V_A(w) = J_A(w)$ if $|A| = 1$. Then for $|A| > 1$:
$$V_A(w) = \sum_{b \subseteq A} \binom{|A| - 1}{|b| - 1}^{-1} J_b(w) \qquad (9)$$
and
$$J_A(w) = V_A(w) - (|A| - 1)^{-1} \sum_{b \subset A,\; |b| = |A| - 1} V_b(w) \qquad (10)$$
are equivalent in that any assignment of $V_A$ and $J_A$ values for all $A \subseteq T$ will satisfy (9) if and only if they also satisfy (10).

Proof: Let $\mathcal{J}$ be any assignment of the values $J_A(w)$ for all $A \subseteq T$. Let $\mathcal{V}$ be any assignment of all the values $V_A(w)$ for all $A \subseteq T$. Then clearly (9) maps any assignment $\mathcal{J}$ to a unique $\mathcal{V}$. We will represent this mapping by the function $f$, so (9) is abbreviated $\mathcal{V} = f(\mathcal{J})$. Similarly, (10) maps any assignment $\mathcal{V}$ to a unique $\mathcal{J}$. Equation (10) will be abbreviated $\mathcal{J} = g(\mathcal{V})$. The result of Lemma C1 below, applied with the value $\mathcal{D}$ set to $n$, shows that $f(g(\mathcal{V})) = \mathcal{V}$. In Lemma C2 below, it is shown that $g(f(\mathcal{J})) = \mathcal{J}$. Therefore the equations (9) and (10) are inverse one-to-one mappings and the association of assignments between $\mathcal{J}$ and $\mathcal{V}$ is identical for the two equations. Q.E.D.

Lemma C1: Rather than simply showing $f(g(\mathcal{V})) = \mathcal{V}$, a more general result will be shown. Since the number of potentials of a given order increases exponentially with the order, it is useful to approximate the energy of a configuration by defining a maximum order $\mathcal{D}$ such that all potentials of greater order are assumed to be zero: $J_b(w) = 0$ for all $b$ such that $|b| > \mathcal{D}$. Let $\hat{V}_A(w)$ be the resulting approximation to the energy $V_A(w)$. Let $|A| = n$. Given
$$J_A(w) = V_A(w) - (n - 1)^{-1} \sum_{b \subset A,\; |b| = n - 1} V_b(w) \qquad (11)$$
and the order-$\mathcal{D}$ approximation to equation (7),
$$\hat{V}_A(w) = \sum_{i=1}^{\mathcal{D}} \binom{n - 1}{i - 1}^{-1} \sum_{b \subseteq A,\; |b| = i} J_b(w), \qquad (12)$$
then
$$\hat{V}_A(w) = \binom{n - 1}{\mathcal{D} - 1}^{-1} \sum_{b \subseteq A,\; |b| = \mathcal{D}} V_b(w).$$
Note: For the case $\mathcal{D} = n$, the approximation is exact, $\hat{V}_A(w) = V_A(w)$, and so $f(g(\mathcal{V})) = \mathcal{V}$ is shown.

The lemma's result has a simple interpretation. The energy of a configuration is approximated by a scaled average of the energies of the sub-configurations of order $\mathcal{D}$. Using equation (1) to relate energies to probabilities shows that the estimated probability is a scaled geometric mean of the order-$\mathcal{D}$ marginal probabilities.

Proof: We start with the given equation (12) for $\hat{V}_A(w)$ and use equation (11) to substitute $J_b(w)$ out of the equation:
$$\hat{V}_A(w) = \sum_{i=1}^{\mathcal{D}} \binom{n - 1}{i - 1}^{-1} \sum_{b \subseteq A,\, |b| = i} \Big( V_b(w) - (i - 1)^{-1} \sum_{c \subset b,\, |c| = i - 1} V_c(w) \Big).$$
Separate the term in the first sum where $i = \mathcal{D}$, and note that the second summation over $i$ has no terms when $i = 1$. Comparing with the claimed result, it is sufficient to show
$$\sum_{i=1}^{\mathcal{D} - 1} \binom{n - 1}{i - 1}^{-1} \sum_{b \subseteq A,\, |b| = i} V_b(w) = \sum_{i=2}^{\mathcal{D}} \binom{n - 1}{i - 1}^{-1} (i - 1)^{-1} \sum_{b \subseteq A,\, |b| = i} \; \sum_{c \subset b,\, |c| = i - 1} V_c(w).$$
The inner double summation on the right-hand side counts a given $V_c(w)$ once for every $b$ such that $c \subset b \subseteq A$ with $i = |b| = |c| + 1$. This occurs exactly $|A| - |c| = n - i + 1$ times. Thus the right-hand side equals
$$\sum_{i=2}^{\mathcal{D}} \binom{n - 1}{i - 1}^{-1} \frac{n - i + 1}{i - 1} \sum_{c \subseteq A,\, |c| = i - 1} V_c(w).$$
Now perform a change of variables. Let $j = i - 1$ on the right-hand side:
$$\sum_{j=1}^{\mathcal{D} - 1} \binom{n - 1}{j}^{-1} \frac{n - j}{j} \sum_{c \subseteq A,\, |c| = j} V_c(w).$$
Clearly both sides are identical, since
$$\binom{n - 1}{j - 1}^{-1} = \frac{n - j}{j}\binom{n - 1}{j}^{-1}. \qquad \text{Q.E.D.}$$

Lemma C2: $g(f(\mathcal{J})) = \mathcal{J}$. Let $|A| = n$. It is sufficient to show that substituting $V$ out of (10) using (9) yields an identity:
$$J_A(w) = \sum_{b \subseteq A} \binom{n - 1}{|b| - 1}^{-1} J_b(w) - (n - 1)^{-1} \sum_{b \subset A,\, |b| = n - 1} \; \sum_{c \subseteq b} \binom{n - 2}{|c| - 1}^{-1} J_c(w).$$
Separate the term in the first sum for which $b = A$:
$$J_A(w) = J_A(w) + \sum_{b \subset A} \binom{n - 1}{|b| - 1}^{-1} J_b(w) - (n - 1)^{-1} \sum_{b \subset A,\, |b| = n - 1} \; \sum_{c \subseteq b} \binom{n - 2}{|c| - 1}^{-1} J_c(w).$$
Subtract $J_A(w)$ from both sides. The right-hand side double sum counts a given $J_c(w)$ once for every $b$ such that $c \subseteq b \subset A$ with $|b| = n - 1$. This occurs $|A| - |c| = n - |c|$ times. It is therefore sufficient to show
$$\binom{n - 1}{|c| - 1}^{-1} = \frac{n - |c|}{n - 1} \binom{n - 2}{|c| - 1}^{-1}.$$
Both sides are identical by direct expansion of the binomial coefficients. Q.E.D.
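Lemma C1 can also be verified numerically. The following sketch, with conventions and names of our own, draws random potentials, builds the energies via equation (9), and checks the truncation identity for every order:

```python
# A numerical sanity check of Lemma C1 under equations (9)-(12): random
# potentials J_b for every nonempty b in A, energies V_b built via (9), and
# the order-D truncation (12) compared against the scaled sum of energies.
import math
import random
from itertools import combinations

def subsets_of_size(A, k):
    return [frozenset(s) for s in combinations(sorted(A), k)]

def build_energies(J, A):
    """V_b = sum over c in b of C(|b|-1, |c|-1)^{-1} J_c  (equation (9))."""
    V = {}
    for k in range(1, len(A) + 1):
        for b in subsets_of_size(A, k):
            V[b] = sum(J[c] / math.comb(k - 1, len(c) - 1)
                       for i in range(1, k + 1) for c in subsets_of_size(b, i))
    return V

A = frozenset(range(5))
n = len(A)
J = {b: random.gauss(0, 1)
     for k in range(1, n + 1) for b in subsets_of_size(A, k)}
V = build_energies(J, A)
for D in range(1, n + 1):
    lhs = sum(J[b] / math.comb(n - 1, len(b) - 1)
              for k in range(1, D + 1) for b in subsets_of_size(A, k))  # (12)
    rhs = sum(V[b] for b in subsets_of_size(A, D)) / math.comb(n - 1, D - 1)
    assert abs(lhs - rhs) < 1e-9   # Lemma C1 holds for every order D
```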
5 USING THE INVERSION FORMULA TO SET NETWORK WEIGHTS

Our method of probability estimation is to first collect empirical frequencies of patterns (sub-configurations) from the database. (An efficient hash table implementation of the algorithm is described in (Miller, 1993). The basic idea is to remove from the database a pattern with low potential whenever there is a hash collision which prevents a new pattern count from being stored.) Second, interpreting these frequencies as probabilities, we convert each pattern frequency to a potential using equation (8). We assume patterns with unknown or uncalculated frequencies have zero potential. Low order patterns which never occur are assigned a large negative potential (this approximation is needed to model events with zero probability in the empirical distribution). Finally, we calculate the probability of any new pattern not in the training set using the neural network implementation of equations (7) and (1).

6 RESULTS

One way to validate the performance of a probability model is to test its performance as a classifier. The probability model is used as a classifier by calculating the probabilities of each unknown class value together with the known attribute values. The most probable combination is then chosen as the predicted class. Used as a classifier, the Gibbs model tied or outperformed published results on a variety of databases. Table 1 outlines results on three datasets taken from the UC Irvine archive (Murphy, 1992). The Gibbs model results were collected from the very first experiment using the algorithm with the datasets. No difficult parameter adjustment is necessary to get the algorithm to classify at these rates. The iris database has 4 real-valued attributes. Each attribute was quantized into a decile ranking for use by the algorithm.

7 CONCLUSION

A new method of extracting a Gibbs probability model from a database has been presented. The approach uses the Principle of Inclusion-Exclusion to invert a set of collected statistics into a set of potentials for a Gibbs energy model. A hash table implementation is used to efficiently process database records in order to collect the most important potentials, or weights, which can be stored in the available memory. Although the model is designed to give accurate probability estimates rather than simply class labels, the model in practice works well as a classifier on a variety of databases.

Acknowledgements

This work is funded in part by DARPA and ONR under grant N00014-92-J-1860.

Table 1: Summary of Classification Results

Database       | A  | C | R   | Train | Test | Trials | Gibbs Rate | Compare
House Voting   | 16 | 2 | 435 | 335   | 100  | 50     | 95.3%      | 95%
Iris           | 4  | 3 | 150 | 120   | 30   | 100    | 96.3%      | n.a.
Iris           | 4  | 3 | 150 | 149   | 1    | 1000   | 97.1%      | 98.0%
Breast Cancer  | 9  | 2 | 699 | 599   | 100  | 100    | 97.3%      | n.a.
Breast Cancer  | 9  | 2 | 369 | 200   | 169  | 100    | 95.7%      | 93.7%

A = Attribute count in the database, excluding the class attribute
C = Class count
R = Record count
Train = Number of records used to create the energy for one trial
Test = Number of records tested in a single trial
Trials = Number of independent train-test trials used to calculate the rate
Gibbs Rate = Gibbs energy model classification rate
Compare = Baseline classification result of other methods (Schlimmer, 1987), (Weiss, 1992), (Zhang, 1992), respectively

References

D. Slepian, "On Maxentropic Discrete Stationary Processes," Bell System Technical Journal, 51, pp. 629–653, 1972.

G. E. Hinton and T. J.
Sejnowski, "Learning and Relearning in Boltzmann Machines," in Parallel Distributed Processing, Vol. I, pp. 282–317, Cambridge, MA: MIT Press, 1986.

R. Kindermann and J. L. Snell, Markov Random Fields and their Applications, Providence, RI: American Mathematical Society, 1980.

J. W. Miller, "Building Probabilistic Models from Databases," California Institute of Technology, Ph.D. Thesis, 1993.

P. Murphy and D. Aha, UCI Repository of Machine Learning Databases [machine-readable data repository at ics.uci.edu in directory /pub/machine-learning-databases]. Irvine, CA: University of California, Department of Information and Computer Science, 1992.

J. C. Schlimmer, "Concept Acquisition Through Representational Adjustment," University of California at Irvine, Ph.D. Thesis, 1987.

S. Weiss and I. Kapouleas, "An Empirical Comparison of Pattern Recognition, Neural Nets, and Machine Learning Classification Methods," in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Vol. 1, pp. 781–787, Los Gatos, CA: Morgan Kaufmann, 1992.

J. Zhang, "Selecting Typical Instances in Instance-Based Learning," in Proceedings of the Ninth International Machine Learning Conference, Aberdeen, Scotland, pp. 470–479, San Mateo, CA: Morgan Kaufmann, 1992.
Backprop KF: Learning Discriminative Deterministic State Estimators

Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel
{haarnoja, anuragajay, svlevine, pabbeel}@berkeley.edu
Department of Computer Science, University of California, Berkeley

Abstract

Generative state estimators based on probabilistic filters and smoothers are one of the most popular classes of state estimators for robots and autonomous vehicles. However, generative models have limited capacity to handle rich sensory observations, such as camera images, since they must model the entire distribution over sensor readings. Discriminative models do not suffer from this limitation, but are typically more complex to train as latent variable models for state estimation. We present an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators. We show that this procedure can be used to train state estimators that use complex input, such as raw camera images, which must be processed using expressive nonlinear function approximators such as convolutional neural networks. Our model can be viewed as a type of recurrent neural network, and the connection to probabilistic filtering allows us to design a network architecture that is particularly well suited for state estimation. We evaluate our approach on a synthetic tracking task with raw image inputs and on the visual odometry task in the KITTI dataset. The results show significant improvement over both standard generative approaches and regular recurrent neural networks.

1 Introduction

State estimation is an important component of mobile robotic applications, including autonomous driving and flight [22]. Generative state estimators based on probabilistic filters and smoothers are one of the most popular classes of state estimators. However, generative models are limited in their ability to handle rich observations, such as camera images, since they must model the full distribution over sensor readings. This makes it difficult to directly incorporate images, depth maps, and other high-dimensional observations. Instead, the most popular methods for vision-based state estimation (such as SLAM [22]) are based on domain knowledge and geometric principles. Discriminative models do not need to model the distribution over sensor readings, but are more complex to train for state estimation. Discriminative models such as CRFs [16] typically do not use latent variables, which means that training data must contain full state observations. Most real-world state estimation problem settings only provide partial labels. For example, we might observe noisy position readings from a GPS sensor and need to infer the corresponding velocities. While discriminative models can be augmented with latent state [18], this typically makes them harder to train.

We propose an efficient and scalable method for discriminative training of state estimators. Instead of performing inference in a probabilistic latent variable model, we construct a deterministic computation graph with equivalent representational power. This computation graph can then be optimized end-to-end with simple backpropagation and gradient descent methods. This corresponds to a type of recurrent neural network model, where the architecture of the network is informed by the
This corresponds to a type of recurrent neural network model, where the architecture of the network is informed by the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. structure of the probabilistic state estimator. Aside from the simplicity of the training procedure, one of the key advantages of this approach is the ability to incorporate arbitrary nonlinear components into the observation and transition functions. For example, we can condition the transitions on raw camera images processed by multiple convolutional layers, which have been shown to be remarkably effective for interpreting camera images. The entire network, including the observation and transition functions, is trained end-to-end to optimize its performance on the state estimation task. The main contribution of this work is to draw a connection between discriminative probabilistic state estimators and recurrent computation graphs, and thereby derive a new discriminative, deterministic state estimation method. From the point of view of probabilistic models, we propose a method for training expressive discriminative state estimators by reframing them as representationally equivalent deterministic models. From the point of view of recurrent neural networks, we propose an approach for designing neural network architectures that are well suited for state estimation, informed by successful probabilistic state estimation models. We evaluate our approach on a visual tracking problem, which requires processing raw images and handling severe occlusion, and on estimating vehicle pose from images in the KITTI dataset [8]. The results show significant improvement over both standard generative methods and standard recurrent neural networks. 2 Related Work Some of the most successful methods for state estimation have been probabilistic generative state space models (SSMs) based on filtering and smoothing (Figure 1). Kalman filters are perhaps the best known state estimators, and can be extended to the case of nonlinear dynamics through linearization and the unscented transform. Nonparametric filtering methods, such as particle filtering, are also often used for tasks with multimodal posteriors. For a more complete review of state estimation, we refer the reader to standard references on this topic [22]. xt?1 xt xt+1 ot?1 ot ot+1 Figure 1: A generative state space model with hidden states xi and observation ot generated by the model. ot are observed at both training and test time. Generative models aim to estimate the distribution over state observation sequences o1:T as originating from some underlying hidden state x1:T , which is typically taken to be the state space of the system. This becomes impractical when the observation space is extremely high dimensional, and when the observation is a complex, highly nonlinear function of the state, as in the case of vision-based state estimation, where ot corresponds to an image viewed from a robot?s on-board camera. The challenges of generative state space estimation can be mitigated by using complex observation models [14] or approximate inference [15], but building effective generative models of images remains a challenging open problem. As an alternative to generative models, discriminative models such as conditional random fields (CRFs) can directly estimate p(xt |o1:t ) [16]. A number of CRFs and conditional state space models (CSSMs) have been applied to state estimation [21, 20, 12, 17, 9], typically using a log-linear representation. 
More recently, discriminative fine-tuning of generative models with nonlinear neural network observations [6], as well as direct training of CRFs with neural network factors [7], have allowed for training of nonlinear discriminative models. However, such models have not been extensively applied to state estimation. Training CRFs and CSSMs typically requires access to true state labels, while generative models only require observations, which often makes them more convenient for physical systems where the true underlying state is unknown. Although CRFs have also been combined with latent states [18], the difficulty of CRF inference makes latent state CRF models difficult to train. Prior work has also proposed to optimize SSM parameters with respect to a discriminative loss [1]. In contrast to this work, our approach incorporates rich sensory observations, including images, and allows for training of highly expressive discriminative models. Our method optimizes the state estimator as a deterministic computation graph, analogous to recurrent neural network (RNN) training. The use of recurrent neural networks (RNNs) for state estimation has been explored in several prior works [24, 4, 23, 19], but has generally been limited to simple tasks without complex sensory inputs such as images. Part of the reason for this is the difficulty of training general-purpose RNNs. Recently, innovative RNN architectures have been successful at mitigating this problem, through models such as the long short-term memory (LSTM) [10] and the 2 ot?1 ot ot+1 zt?1 zt zt+1 xt?1 xt st?2 ot?1 ot ot+1 g? g? g? ? zt?1 st?1 yt ? st?1 xt+1 q yt?1 zt yt+1 ?yt?1 (a) zt+1 st st+1 ? st+1 st q q ?yt ?yt+1 (b) Figure 2: (a) Standard two-step engineering approach for filtering with high-dimensional observations. The generative part has hidden state xt and two observations, yt and zt , where the latter observation is actually the output of a second deterministic model zt = g? (ot ), denoted by dashed lines and trained explicitly to predict zt . (b) Computation graph that jointly optimizes both models in (a), consisting of the deterministic map g? and a deterministic filter that infers the hidden state given zt . By viewing the entire model as a single deterministic computation graph, it can be trained end-to-end using backpropagation as explained in Section 4. gated recurrent unit (GRU) [5]. LSTMs have been combined with vision for perception tasks such as activity recognition [3]. However, in the domain of state estimation, such black-box models ignore the considerable domain knowledge that is available. By drawing a connection between filtering and recurrent networks, we can design recurrent computation graphs that are particularly well suited to state estimation and, as shown in our evaluation, can achieve improved performance over standard LSTM models. 3 Preliminaries Performing state estimation with a generative model directly using high-dimensional observations ot , such as camera images, is very difficult, because these observations are typically produced by a complex and highly nonlinear process. However, in practice, a low-dimensional vector, zt , which can be extracted from ot , can fully capture the dependence of the observation on the underlying state of the system. Let xt denote this state, and let yt denote some labeling of the states that we wish to be able to infer from ot . For example, ot might correspond to pairs of images from a camera on an automobile, zt to its velocity, and yt to the location of the vehicle. 
In that case, we can first train a discriminative model g_θ(o_t) to predict z_t from o_t in a feedforward manner, and then filter the predictions to output the desired state labels y_{1:t}. For example, a Kalman filter with hidden state x_t could be trained to use the predicted z_t as observations, and then perform inference over x_t and y_t at test time. This standard approach for state estimation with high-dimensional observations is illustrated in Figure 2a. While this method may be viewed as an engineering solution without a probabilistic interpretation, it has the advantage that g_θ(o_t) is trained discriminatively, and the entire model is conditioned on o_t, with x_t acting as an internal latent variable. This is why the model does not need to represent the distribution over observations explicitly. However, the function g_θ(o_t) that maps the raw observations o_t to low-dimensional predictions z_t is not trained for optimal state estimation. Instead, it is trained to predict an intermediate variable z_t that can be readily integrated into the generative filter.

4 Discriminative Deterministic State Estimation

Our contribution is based on a generalized view of state estimation that subsumes the naïve, piecewise-trained models discussed in the previous section and allows them to be trained end-to-end using simple and scalable stochastic gradient descent methods. In the naïve approach, the observation function g_θ(o_t) is trained to directly predict z_t, since a standard generative filter model does not provide a straightforward way to optimize g_θ(o_t) with respect to the accuracy of the filter on the labels y_{1:T}. However, the filter can be viewed as a computation graph unrolled through time, as shown in Figure 2b. In this graph, the filter has an internal state defined by the posterior over x_t. For
can be optimized with gradient descent using these gradients. This is an instance of backpropagation through time (BPTT), a well known algorithm for training recurrent neural networks. Recognizing this connection between state-space models and recurrent neural networks allows us to extend this generic filtering architecture and explore the continuum of models between filters with a discriminatively trained observation model g? (ot ) all the way to fully general recurrent neural networks. In our experimental evaluation, we use a standard Kalman filter update as ?(st , zt+1 ), but we use a nonlinear convolutional neural network observation function g? (ot ). We found that this provides a good trade-off between incorporating domain knowledge and end-to-end learning for the task of visual tracking and odometry, but other variants of this model could be explored in future work. 5 Experimental Evaluation In this section, we compare our deterministic discriminatively trained state estimator with a set of alternative methods, including simple feedforward convolutional networks, piecewise-trained Kalman filter, and fully general LSTM models. We evaluate these models on two tasks that require processing of raw image input: synthetic task of tracking a red disk in the presence of clutter and severe occlusion; and the KITTI visual odometry task [8]. 5.1 State Estimation Models Our proposed model, which we call the ?backprop Kalman filter? (BKF), is a computation graph made up of a Kalman filter (KF) and a feedforward convolutional neural network that distills the observation ot into a low-dimensional signal zt , which serves as the observation for the KF. The neural network outputs both a point observation zt and an observation covariance matrix Rt . Since the network is trained together with the filter, it can learn to use the covariance matrix to communicate the desired degree of uncertainty about the observation, so as to maximize the accuracy of the final filter prediction. 4 ?xt?1 fc zt A?xt?1 reshape diag exp Lt h4 fc fc h3 ReLU fc h2 ReLU ReLU max_pool conv h1 Kalman filter resp_norm ReLU max_pool conv ot resp_norm Feedforward network ?t L Lt LT t ?0xt + Kt zt ? Cz ?0xt Rt 0 T ?0xt CT z Cz ?xt Cz + Rt T A?xt?1 A + Bw QBT w ?0xt yt ?0xt ?1  ?xt Loss PN PT i=1 1 t=1 2T N 2 (i) (i) Cy ?xt ? yt 2 Kt (I ? Kt Cz ) ?0xt ?xt ?xt?1 Figure 3: Illustration of the computation graph for the BKF. The graph is composed of a feedforward ?t part, which processes the raw images ot and outputs intermediate observations zt and a matrix L that is used to form a positive definite observation covariance matrix Rt , and a recurrent part that integrates zt through time to produce filtered state estimates. See Appendix A for details. We compare the backprop KF to three alternative state estimators: the ?feedforward model?, the ?piecewise KF?, and the ?LSTM model?. The simplest of the models, the feedforward model, does not consider the temporal structure in the task at all, and consists only of a feedforward convolutional network that takes in the observations ot and outputs a point estimate y ?t of the label yt . This approach is viable only if the label information can be directly inferred from ot , such as when tracking an object. On the other hand, tasks that require long term memory, such as visual odometry, cannot be solved with a plain feedforward network. The piecewise KF model corresponds to the simple generative approach described in Section 3, which combines the feedforward network with a Kalman ? t . 
The filter that filters the network predictions zt to produce a distribution over the state estimate x piecewise model is based on the same computation graph as the BKF, but does not optimize the filter and network together end-to-end, instead training the two pieces separately. The only difference between the two graphs is that the piecewise KF does not implement the additional pathway for propagating the uncertainty from the feedforward network into the filter, but instead, the filter needs to learn to handle the uncertainty in zt independently. An example instantiation of BKF is depicted in Figure 3. A detailed overview of the computational blocks shown in the figure is deferred to Appendix A. Finally, we compare to a recurrent neural network based on LSTM hidden units [10]. This model resembles the backprop KF, except that the filter portion of the graph is replaced with a generic LSTM layer. The LSTM model learns the dynamics from data, without incorporating the domain knowledge present in the KF. 5.2 Neural Network Design A special aspect of our network design is a novel response normalization layer that is applied to the convolutional activations before applying the nonlinearity. The response normalization transforms the activations such that the activations of layer i have always mean ?i and variance ?i2 regardless of the input to the layer. The parameters ?i and ?i2 are learned along with other parameters. This normalization is used in all of the convolutional networks in our evaluation, and resembles batch normalization [11] in its behavior. However, we found this approach to be substantially more effective for recurrent models that require backpropagation through time, compared to the more standard batch normalization approach, which is known to require additional care when applied to recurrent networks. It has been since proposed independently from our work in [2], which gives an in-depth analysis of the method. The normalization is followed by a rectified linear unit (ReLU) and a max pooling layer. 5.3 Synthetic Visual State Estimation Task Our state estimation task is meant to reflect some of the typical challenges in visual state estimation: the need for long-term tracking to handle occlusions, the presence of noise, and the need to process raw pixel data. The task requires tracking a red disk from image observations, as shown in Figure 4. Distractor disks with random colors and radii are added into the scene to occlude the red disk, and the trajectories of all disks follow linear-Gaussian dynamics, with a linear spring force that pulls the disks toward the center of the frame and a drag force that prevents high velocities. The disks can temporally leave the frame since contacts are not modeled. Gaussian noise is added to perturb the motion. While these model parameters are assumed to be known in the design of the filter, it is a straightforward to learn also the model parameters. The difficulty of the task can be adjusted by increasing or decreasing the number of distractor disks, which affects the frequency of occlusions. 5 Figure 4: Illustration of six consecutive frames of two training sequences. The objective is to track the red disk (circled in the the first frame for illustrative purposes) throughout the 100-frame sequence. The distractor disks are sampled for each sequence at random and overlaid on top of the target disk. The upper row illustrates an easy sequence (9 distractors), while the bottom row is a sequence of high difficulty (99 distractors). 
Note that the target is very rarely visible in the hardest sequences. Table 1: Benchmark Results State Estimation Model # Parameters RMS test error ?? feedforward model piecewise KF LSTM model (64 units) LSTM model (128 units) BKF (ours) 0.2322 ? 0.1316 0.1160 ? 0.0330 0.1407 ? 0.1154 0.1423 ? 0.1352 0.0537 ? 0.1235 7394 7397 33506 92450 7493 The easiest variants of the task are solvable with a feedforward estimator, while the hardest variants require long-term tracking through occlusion. To emphasize the sample efficiency of the models, we trained them using 100 randomly sampled sequences. The results in Table 1 show that the BKF outperforms both the standard probabilistic KF-based estimators and the more powerful and expressive LSTM estimators. The tracking error of the simple feedforward model is significantly larger due to the occlusions, and the model tends to predict the mean coordinates when the target is occluded. The piecewise model performs better, but because the observation covariance is not conditioned on ot , the Kalman filter learns to use a very large observation covariance, which forces it to rely almost entirely on the dynamics model for predictions. On the other hand, since the BKF learns to output the observation covariances conditioned on ot that optimize the performance of the filter, it is able to find a compromise between the observations and the dynamics model. Finally, although the LSTM model is the most general, it performs worse than the BKF, since it does not incorporate prior knowledge about the structure of the state estimation problem. 6 feedforward piecewise LSTM 10 0 RMS error To test the robustness of the estimator to occlusions, we trained each model on a training set of 1000 sequences of varying amounts of clutter and occlusions. We then evaluated the models on several test sets, each corresponding to a different level of occlusion and clutter. The tracking error as the test set difficulty is varied is shown Figure 5. Note that even in the absence of distractors, BKF and LSTM models outperform the feedforward model, since the target occasionally leaves the field of view. The performance of the piecewise KF does not change significantly as the difficulty increases: due to the high amount of clutter during training, the piecewise KF learns to use a large observation covariance and rely primarily on feedforward estimates for prediction. The BKF achieves the lowest error in nearly all cases. At the same time, the BKF also has dramatically fewer parameters than the LSTM models, since the transitions correspond to simple Kalman filter updates. 10 -1 10 -2 10 -3 0 20 40 60 80 100 # distractors Figure 5: The RMS error of various models trained on a single training set that contained sequences of varying difficulty. The models were then evaluated on several test sets of fixed difficulty. Figure 6: Example image sequence from the KITTI dataset (top row) and the corresponding difference image that is obtained by subtracting the RGB values of the previous image from the current image (bottom row). The observation ot is formed by concatenating the two images into a six-channel feature map which is then treated as an input to a convolutional neural network. The figure shows every fifth sample from the original sequence for illustrative purpose. 
Table 2: KITTI Visual Odometry Results Test 100 # training trajectories Translational Error [m/m] piecewise KF LSTM model (128 units) LSTM model (256 units) BKF (ours) Rotational Error [deg/m] piecewise KF LSTM model (128 units) LSTM model (256 units) BKF (ours) 5.4 Test 100/200/400/800 3 6 10 3 6 10 0.3257 0.5022 0.5199 0.3089 0.2452 0.3456 0.3172 0.2346 0.2265 0.2769 0.2630 0.2062 0.3277 0.5491 0.5439 0.2982 0.2313 0.4732 0.4506 0.2031 0.2197 0.4352 0.4228 0.1804 0.1408 0.5484 0.4960 0.1207 0.1028 0.3681 0.3391 0.0901 0.0978 0.3767 0.2933 0.0801 0.1069 0.4123 0.3845 0.0888 0.0768 0. 3573 0.3566 0.0587 0.0754 0.3530 0.3221 0.0556 KITTI Visual Odometry Experiment Next, we evaluated the state estimation models on visual odometry task in the KITTI dataset [8] (Figure 6, top row). The publicly available training set contains 11 trajectories of ego-centric video sequences of a passenger car driving in suburban scenes, along with ground truth position and orientation. The dataset is challenging since it is relatively small for learning-based algorithms, and the trajectories are visually very diverse. For training the Kalman filter variants, we used a simplified state-space model with three of the state variables corresponding to the vehicle?s 2D pose (two spatial coordinates and heading) and two for the forward and angular velocities. Because the dynamics model is non-linear, we equipped our model-based state estimators with extended Kalman filters, which is a straightforward addition to the BKF framework. The objective of the task is to estimate the relative change in the pose during fixed-length subsequences. However, because inferring the pose requires integration over all past observations, a simple feedforward model cannot be used directly. Instead, we trained a feedforward network, consisting of four convolutional and two fully connected layers and having approximately half a million parameters, to estimate the velocities from pairs of images at consecutive time steps. In practice, we found it better to use a difference image, corresponding to the change in the pixel intensities between the images, along with the current image as an input to the feedforward network (Figure 6). The ground truth velocities, which were used to train the piecewise KF as well as to pretrain the other models, were computed by finite differencing from the ground truth positions. The recurrent models?piecewise KF, the BKF, and the LSTM model?were then fine-tuned to predict the vehicle?s pose. Additionally, for the LSTM model, we found it crucial to pretrain the recurrent layer to predict the pose from the velocities before fine-tuning. We evaluated each model using 11-fold cross-validation, and we report the average errors of the held-out trajectories over the folds. We trained the models by randomly sampling subsequences of 100 time steps. For each fold, we constructed two test sets using the held-out trajectory: the first set contains all possible subsequences of 100 time steps, and the second all subsequences of lengths 100, 200, 400, and 800.1 We repeated each experiment using 3, 6, or all 10 of the sequences in each training fold to evaluate the resilience of each method to overfitting. 1 The second test set aims to mimic the official (publicly unavailable) test protocol. Note, however, that because the methods are not tested on the same sequences as the official test set, they are not directly comparable to results on the official KITTI benchmark. 7 Table 2 lists the cross-validation results. 
Table 2 lists the cross-validation results. As expected, the error decreases consistently as the number of training sequences grows. In each case, the BKF outperforms the other variants in predicting both the position and the heading of the vehicle. Because both the piecewise KF and the BKF incorporate domain knowledge, they are more data-efficient; indeed, the performance of the LSTM degrades faster as the number of training sequences is decreased. Although the models were trained on subsequences of 100 time steps, they were also tested on a set containing a mixture of different sequence lengths. The LSTM model generally failed to generalize to longer sequences, while the Kalman filter variants performed slightly better on mixed sequence lengths.

6 Discussion

In this paper, we proposed a discriminative approach to state estimation that consists of reformulating probabilistic generative state estimation as a deterministic computation graph. This makes it possible to train our method end-to-end using simple backpropagation through time (BPTT) methods, analogously to a recurrent neural network. In our evaluation, we present an instance of this approach that we refer to as the backprop KF (BKF), which corresponds to an (extended) Kalman filter combined with a feedforward convolutional neural network that processes raw image observations.

Our approach to state estimation has two key benefits. First, we avoid the need to construct generative state space models over complex, high-dimensional observation spaces such as raw images. Second, by reformulating the probabilistic state estimator as a deterministic computation graph, we can apply simple and effective backpropagation and stochastic gradient descent optimization methods to learn the model parameters. This avoids the usual challenges associated with inference in continuous, nonlinear conditional probabilistic models, while still preserving the same representational power as the corresponding approximate probabilistic inference method, which in our experiments corresponds to approximate Gaussian posteriors in a Kalman filter.

Our approach can also be viewed as an application of ideas from probabilistic state-space models to the design of recurrent neural networks. Since we optimize the state estimator as a deterministic computation graph, it corresponds to a particular type of deterministic neural network model. However, the architecture of this neural network is informed by principled and well-motivated probabilistic filtering models, which provides us with a natural avenue for incorporating domain knowledge into the system.

Our experimental results indicate that end-to-end training of discriminative state estimators can improve their performance substantially when compared to a standard piecewise approach, where a discriminative model is trained to process the raw observations and produce intermediate low-dimensional observations that can then be integrated into a standard generative filter. The results also indicate that, although the accuracy of the BKF can be matched by a recurrent LSTM network with a large number of hidden units, the BKF outperforms the general-purpose LSTM when the dataset is limited in size. This is because the BKF incorporates domain knowledge about the structure of probabilistic filters into the network architecture, providing it with a better inductive bias when the training data is limited, which is the case in many real-world robotic applications.

In our experiments, we primarily focused on models based on the Kalman filter.
However, our approach to state estimation can equally well be applied to other probabilistic filters for which the update equations (approximate or exact) can be written in closed form, including the information filter, the unscented Kalman filter, and the particle filter, as well as deterministic filters such as state observers or moving average processes. As long as the filter can be expressed as a differentiable mapping from the observation and previous state to the new state, we can construct and differentiate the corresponding computation graph. An interesting direction for future work is to extend discriminative state estimators with complex nonlinear dynamics and larger latent states. For example, one could explore the continuum of models that spans the space between simple KF-style state estimators and fully general recurrent networks. The trade-off between these two extremes is between generality and domain knowledge, and striking the right balance for a given problem could produce substantially improved results even with relatively modest amounts of training data.

Acknowledgments

This research was funded in part by ONR through a Young Investigator Program award, by the Army Research Office through the MAST program, and by the Berkeley DeepDrive Center.

References

[1] P. Abbeel, A. Coates, M. Montemerlo, A. Y. Ng, and S. Thrun. Discriminative training of Kalman filters. In Robotics: Science and Systems (R:SS), 2005.
[2] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[3] M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and A. Baskurt. Sequential deep learning for human action recognition. In Second International Conference on Human Behavior Understanding, pages 29-39, Berlin, Heidelberg, 2011. Springer-Verlag.
[4] O. Bobrowski, R. Meir, S. Shoham, and Y. C. Eldar. A neural network implementing optimal state estimation based on dynamic spike train decoding. In Advances in Neural Information Processing Systems (NIPS), 2007.
[5] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[6] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42, 2012.
[7] T. Do and T. Artières. Neural conditional random fields. In International Conference on Artificial Intelligence and Statistics, pages 177-184, 2010.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR), 2013.
[9] R. Hess and A. Fern. Discriminatively trained particle filters for complex multi-object tracking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 240-247. IEEE, 2009.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), 2015.
[12] M. Kim and V. Pavlovic. Conditional state space models for discriminative motion estimation. In IEEE 11th International Conference on Computer Vision (ICCV), pages 1-8. IEEE, 2007.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[14] J. Ko and D. Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75-90, 2009.
[15] R. G. Krishnan, U. Shalit, and D. Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.
[16] J. Lafferty, A. McCallum, and F. C. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning (ICML), 2001.
[17] B. Limketkai, D. Fox, and L. Liao. CRF-filters: Discriminative particle filters for sequential state estimation. In International Conference on Robotics and Automation (ICRA), 2007.
[18] L.-P. Morency, A. Quattoni, and T. Darrell. Latent-dynamic discriminative models for continuous gesture recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8. IEEE, 2007.
[19] P. Ondruska and I. Posner. Deep tracking: Seeing beyond seeing using recurrent neural networks. arXiv preprint arXiv:1602.00991, 2016.
[20] D. A. Ross, S. Osindero, and R. S. Zemel. Combining discriminative features to infer complex trajectories. In Proceedings of the 23rd International Conference on Machine Learning, pages 761-768. ACM, 2006.
[21] C. Sminchisescu, A. Kanaujia, Z. Li, and D. Metaxas. Discriminative density propagation for 3D human motion estimation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 390-397. IEEE, 2005.
[22] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, 2005.
[23] R. Wilson and L. Finkel. A neural implementation of the Kalman filter. In Advances in Neural Information Processing Systems, pages 2062-2070, 2009.
[24] N. Yadaiah and G. Sowmya. Neural network based state estimation of dynamical systems. In International Joint Conference on Neural Networks (IJCNN), 2006.
Operator Variational Inference

Rajesh Ranganath (Princeton University), Jaan Altosaar (Princeton University), Dustin Tran (Columbia University), David M. Blei (Columbia University)

Abstract

Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (OPVI), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling (allowing inference to scale to massive data), as well as objectives that admit variational programs (a rich class of posterior approximations that does not require a tractable density). We illustrate the benefits of OPVI on a mixture model and a generative model of images.

1 Introduction

Variational inference is an umbrella term for algorithms that cast Bayesian inference as optimization [10]. Originally developed in the 1990s, recent advances in variational inference have scaled Bayesian computation to massive data [7], provided black box strategies for generic inference in many models [19], and enabled more accurate approximations of a model's posterior without sacrificing efficiency [21, 20]. These innovations have both scaled Bayesian analysis and removed the analytic burdens that have traditionally taxed its practice.

Given a model of latent and observed variables p(x, z), variational inference posits a family of distributions over its latent variables and then finds the member of that family closest to the posterior, p(z | x). This is typically formalized as minimizing a Kullback-Leibler (KL) divergence from the approximating family q(·) to the posterior p(·). However, while the KL(q ‖ p) objective offers many beneficial computational properties, it is ultimately designed for convenience; it sacrifices many desirable statistical properties of the resultant approximation.

When optimizing KL, there are two issues with the posterior approximation that we highlight. First, it typically underestimates the variance of the posterior. Second, it can result in degenerate solutions that zero out the probability of certain configurations of the latent variables. While both of these issues can be partially circumvented by using more expressive approximating families, they ultimately stem from the choice of the objective. Under the KL divergence, we pay a large price when q(·) is big where p(·) is tiny; this price becomes infinite when q(·) has larger support than p(·).

In this paper, we revisit variational inference from its core principle as an optimization problem. We use operators, i.e., mappings from functions to functions, to design variational objectives, explicitly trading off computational properties of the optimization with statistical properties of the approximation. We use operators to formalize the basic properties needed for variational inference algorithms.
We further outline how to use them to define new variational objectives; as one example, we design a variational objective using a Langevin-Stein operator.

We develop operator variational inference (OPVI), a black box algorithm that optimizes any operator objective. In the context of OPVI, we show that the Langevin-Stein objective enjoys two good properties. First, it is amenable to data subsampling, which allows inference to scale to massive data. Second, it permits rich approximating families, called variational programs, which do not require analytically tractable densities. This greatly expands the class of variational families and the fidelity of the resulting approximation. (We note that the traditional KL is not amenable to using variational programs.) We study OPVI with the Langevin-Stein objective on a mixture model and a generative model of images.

Related Work. There are several threads of research in variational inference with alternative divergences. An early example is expectation propagation (EP) [16]. EP promises approximate minimization of the inclusive KL divergence KL(p ‖ q) to find overdispersed approximations to the posterior. EP hinges on local minimization with respect to subsets of data and connects to work on α-divergence minimization [17, 6]. However, it does not have convergence guarantees and typically does not minimize KL or an α-divergence because it is not a global optimization method. We note that these divergences can be written as operator variational objectives, but they do not satisfy the tractability criteria and thus require further approximations. Li and Turner [14] present a variant of α-divergences that satisfies the full requirements of OPVI. Score matching [9], a method for estimating models by matching the score function of one distribution to another that can be sampled, also falls into the class of objectives we develop. Here we show how to construct new objectives, including some not yet studied. We make explicit the requirements needed to construct objectives for variational inference. Finally, we discuss further properties that make them amenable to both scalable and flexible variational inference.

2 Operator Variational Objectives

We define operator variational objectives and the conditions needed for an objective to be useful for variational inference. We develop a new objective, the Langevin-Stein objective, and show how to place the classical KL into this class. In the next section, we develop a general algorithm for optimizing operator variational objectives.

2.1 Variational Objectives

Consider a probabilistic model p(x, z) of data x and latent variables z. Given a data set x, approximate Bayesian inference seeks to approximate the posterior distribution p(z | x), which is applied in all downstream tasks. Variational inference posits a family of approximating distributions q(z) and optimizes a divergence function to find the member of the family closest to the posterior. The divergence function is the variational objective, a function of both the posterior and the approximating distribution. Useful variational objectives hinge on two properties: first, optimizing the objective yields a good posterior approximation; second, the problem is tractable when the posterior distribution is known only up to a constant.
The classic construction that satisfies these properties is the evidence lower bound (ELBO),

E_{q(z)}[log p(x, z) − log q(z)].    (1)

It is maximized when q(z) = p(z | x), and it depends on the posterior distribution only up to a tractable constant, log p(x, z). The ELBO has been the focus of much of the classical literature. Maximizing the ELBO is equivalent to minimizing the KL divergence to the posterior, and the expectations are analytic for a large class of models [4].

2.2 Operator Variational Objectives

We define a new class of variational objectives, operator variational objectives. An operator objective has three components. The first component is an operator O^{p,q} that depends on p(z | x) and q(z). (Recall that an operator maps functions to other functions.) The second component is a family of test functions F, where each f(z) ∈ F maps realizations of the latent variables to real vectors R^d. In the objective, the operator and a function will combine in an expectation E_{q(z)}[(O^{p,q} f)(z)], designed such that values close to zero indicate that q is close to p. The third component is a distance function t(a) : R → [0, ∞), which is applied to the expectation so that the objective is non-negative. (Our example uses the square function t(a) = a².)

These three components combine to form the operator variational objective. It is a non-negative function of the variational distribution,

L(q; O^{p,q}, F, t) = sup_{f ∈ F} t( E_{q(z)}[(O^{p,q} f)(z)] ).    (2)

Intuitively, it is the worst-case expected value among all test functions f ∈ F. Operator variational inference seeks to minimize this objective with respect to the variational family q ∈ Q.

We use operator objectives for posterior inference. This requires two conditions on the operator and function family.

1. Closeness. The minimum of the variational objective is at the posterior, q(z) = p(z | x). We meet this condition by requiring that E_{p(z|x)}[(O^{p,p} f)(z)] = 0 for all f ∈ F. Thus, optimizing the objective will produce p(z | x) if it is the only member of Q with zero expectation (otherwise it will produce a distribution in the equivalence class: q ∈ Q with zero expectation). In practice, the minimum will be the closest member of Q to p(z | x).

2. Tractability. We can calculate the variational objective up to a constant without involving the exact posterior p(z | x). In other words, we do not require calculating the normalizing constant of the posterior, which is typically intractable. We meet this condition by requiring that the operator O^{p,q}, originally in terms of p(z | x) and q(z), can be written in terms of p(x, z) and q(z). Tractability also imposes conditions on F: it must be feasible to find the supremum. Below, we satisfy this by defining a parametric family for F that is amenable to stochastic optimization.

Equation 2 and the two conditions provide a mechanism to design meaningful variational objectives for posterior inference. Operator variational objectives try to match expectations with respect to q(z) to those with respect to p(z | x).

2.3 Understanding Operator Variational Objectives

Consider operators where E_{q(z)}[(O^{p,q} f)(z)] only takes positive values. In this case, distance to zero can be measured with the identity t(a) = a, so tractability implies the operator need only be known up to a constant. This family includes tractable forms of familiar divergences like the KL divergence (ELBO), Rényi's α-divergence [14], and the χ-divergence [18].
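As a concrete instance of a positive-valued, tractable objective, the sketch below forms a Monte Carlo estimate of the ELBO in Equation 1 for a diagonal Gaussian q, using reparameterized samples. The `log_joint` callable is a hypothetical stand-in for whatever log p(x, z) the model defines.

```python
import numpy as np

def elbo_estimate(log_joint, mu, log_sigma, n_samples=64, rng=None):
    """Monte Carlo estimate of the ELBO in Equation 1 for a diagonal
    Gaussian q(z) = Normal(mu, diag(exp(log_sigma))^2), using
    reparameterized samples z = mu + exp(log_sigma) * eps."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + np.exp(log_sigma) * eps
    # log q(z) for a diagonal Gaussian, evaluated at the samples.
    log_q = (-0.5 * np.sum(((z - mu) / np.exp(log_sigma)) ** 2, axis=1)
             - np.sum(log_sigma) - 0.5 * mu.size * np.log(2 * np.pi))
    return np.mean(log_joint(z) - log_q)

# Toy usage: a standard-normal "posterior", so log p(x, z) = log N(z; 0, I)
# up to an additive constant.
toy_log_joint = lambda z: -0.5 * np.sum(z ** 2, axis=1)
value = elbo_estimate(toy_log_joint, mu=np.zeros(2), log_sigma=np.zeros(2))
```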
When the expectation can take positive or negative values, operator variational objectives are closely related to Stein divergences [2]. Consider a family of scalar test functions F* that have expectation zero with respect to the posterior, E_{p(z|x)}[f*(z)] = 0. Using this family, a Stein divergence is

D_Stein(p, q) = sup_{f* ∈ F*} | E_{q(z)}[f*(z)] − E_{p(z|x)}[f*(z)] |.

Now recall the operator objective of Equation 2. The closeness condition implies that

L(q; O^{p,q}, F, t) = sup_{f ∈ F} t( E_{q(z)}[(O^{p,q} f)(z)] − E_{p(z|x)}[(O^{p,p} f)(z)] ).

In other words, operators with positive or negative expectations lead to Stein divergences with a more generalized notion of distance.

2.4 Langevin-Stein Operator Variational Objective

We developed the operator variational objective. It is a class of tractable objectives, each of which can be optimized to yield an approximation to the posterior. An operator variational objective is built from an operator, a function class, and a distance function to zero. We now use this construction to design a new type of variational objective.

An operator objective involves a class of functions that has known expectations with respect to an intractable distribution. There are many ways to construct such classes [1, 2]. Here, we construct an operator objective from the generator approach to Stein's method, applied to the Langevin diffusion. Let ∇⊤f denote the divergence of a vector-valued function f, that is, the sum of its individual gradients. Applying the generator method of Barbour [2] to the Langevin diffusion gives the operator

(O_LS^p f)(z) = ∇_z log p(x, z)⊤ f(z) + ∇⊤f.    (3)

We call this the Langevin-Stein (LS) operator. We obtain the corresponding variational objective by using the squared distance function and substituting Equation 3 into Equation 2,

L(q; O_LS^p, F) = sup_{f ∈ F} ( E_q[ ∇_z log p(x, z)⊤ f(z) + ∇⊤f ] )².    (4)

The LS operator satisfies both conditions. First, it satisfies closeness because it has expectation zero under the posterior (Appendix A) and its unique minimizer is the posterior (Appendix B). Second, it is tractable because it requires only the joint distribution. The functions f will also come from a parametric family, which we detail later. Additionally, while the KL divergence finds variational distributions that underestimate the variance, the LS objective does not suffer from that pathology. The reason is that KL is infinite when the support of q is larger than that of p; here this is not the case.

We provided one example of a variational objective using operators, which is specific to continuous variables. In general, operator objectives are not limited to continuous variables; Appendix C describes an operator for discrete variables.

2.5 The KL Divergence as an Operator Variational Objective

Finally, we demonstrate how classical variational methods fall inside the operator family. For example, traditional variational inference minimizes the KL divergence from an approximating family to the posterior [10]. This can be construed as an operator variational objective,

(O_KL^{p,q} f)(z) = log q(z) − log p(z | x)   for all f ∈ F.    (5)

This operator does not use the family of functions; it trivially maps all functions f to the same function. Further, because KL is strictly positive, we use the identity distance t(a) = a. The operator satisfies both conditions. It satisfies closeness because KL(p ‖ p) = 0. It satisfies tractability because it can be computed up to a constant when used in the operator objective of Equation 2.
Tractability comes from the fact that log p(z | x) = log p(z, x) − log p(x).

3 Operator Variational Inference

We described operator variational objectives, a broad class of objectives for variational inference. We now examine how they can be optimized. We develop a black box algorithm [27, 19] based on Monte Carlo estimation and stochastic optimization. Our algorithm applies to a general class of models and any operator objective.

Minimizing the operator objective involves two optimizations: minimizing the objective with respect to the approximating family Q and maximizing the objective with respect to the function class F (which is part of the objective).

We index the family Q with variational parameters λ and require that it satisfies properties typically assumed by black box methods [19]: the variational distribution q(z; λ) has a known and tractable density; we can sample from q(z; λ); and we can tractably compute the score function ∇_λ log q(z; λ). We index the function class F with parameters θ, and require that f_θ(·) is differentiable. In the experiments, we use neural networks, which are flexible enough to approximate a general family of test functions [8].

Given parameterizations of the variational family and test family, operator variational inference (OPVI) seeks to solve a minimax problem,

λ* = inf_λ sup_θ t( E_λ[(O^{p,q} f_θ)(z)] ).    (6)

We will use stochastic optimization [23, 13]. In principle, we can find stochastic gradients of λ by rewriting the objective in terms of the optimized value of θ, θ*(λ). In practice, however, we simultaneously solve the maximization and minimization. Though computationally beneficial, this produces saddle points. In our experiments we found it to be stable enough.

Algorithm 1: Operator variational inference
  Input: Model log p(x, z), variational approximation q(z; λ)
  Output: Variational parameters λ
  Initialize λ and θ randomly.
  while not converged do
      Compute unbiased estimates of ∇_λ L_θ from Equation 7.
      Compute unbiased estimates of ∇_θ L_λ from Equation 8.
      Update λ, θ with unbiased stochastic gradients.
  end

We derive gradients for the variational parameters λ and test function parameters θ. (We fix the distance function to be the square t(a) = a²; the identity t(a) = a also readily applies.)

Gradient with respect to λ. For a fixed test function with parameters θ, denote the objective

L_θ = t( E_λ[(O^{p,q} f_θ)(z)] ).

The gradient with respect to variational parameters λ is

∇_λ L_θ = 2 E_λ[(O^{p,q} f_θ)(z)] ∇_λ E_λ[(O^{p,q} f_θ)(z)].

Now write the second expectation with the score function gradient [19]. This gradient is

∇_λ L_θ = 2 E_λ[(O^{p,q} f_θ)(z)] E_λ[ ∇_λ log q(z; λ) (O^{p,q} f_θ)(z) + ∇_λ (O^{p,q} f_θ)(z) ].    (7)

Equation 7 lets us calculate unbiased stochastic gradients. We first generate two sets of independent samples from q; we then form Monte Carlo estimates of the first and second expectations. For the second expectation, we can use the variance reduction techniques developed for black box variational inference, such as Rao-Blackwellization [19].

We described the score gradient because it is general. An alternative is to use the reparameterization gradient for the second expectation [11, 22]. It requires that the operator be differentiable with respect to z and that samples from q can be drawn as a transformation r of a parameter-free noise source ε, i.e., z = r(ε; λ). In our experiments, we use the reparameterization gradient.
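The following sketch spells out the two-batch Monte Carlo estimator of Equation 7. The four callables are hypothetical hooks for the user's q, operator, and test function; using independent batches for the two expectations is what keeps the product of the two Monte Carlo averages unbiased for the product of expectations.

```python
import numpy as np

def grad_lambda_estimate(sample_q, op_f, score, grad_op_f, n=128):
    """Two-batch Monte Carlo estimator of Equation 7:

        grad_lam L = 2 E[(O f)(z)] * E[score(z) (O f)(z) + grad_lam (O f)(z)].

    sample_q(n)  -- draws n samples from q(z; lam)
    op_f(z)      -- evaluates (O^{p,q} f)(z) at one sample (scalar)
    score(z)     -- evaluates grad_lam log q(z; lam) (vector)
    grad_op_f(z) -- evaluates grad_lam (O^{p,q} f)(z) (vector)
    All four callables are placeholders supplied by the user.
    """
    z1, z2 = sample_q(n), sample_q(n)          # independent batches
    first = np.mean([op_f(z) for z in z1])
    second = np.mean([score(z) * op_f(z) + grad_op_f(z) for z in z2], axis=0)
    return 2.0 * first * second
```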
Gradient with respect to θ. Mirroring the notation above, the operator objective for a fixed variational λ is

L_λ = t( E_λ[(O^{p,q} f_θ)(z)] ).

The gradient with respect to test function parameters θ is

∇_θ L_λ = 2 E_λ[(O^{p,q} f_θ)(z)] E_λ[ ∇_θ (O^{p,q} f_θ)(z) ].    (8)

Again, we can construct unbiased stochastic gradients with two sets of Monte Carlo estimates. Note that gradients for the test function do not require score gradients (or reparameterization gradients) because the expectation does not depend on θ.

Algorithm. Algorithm 1 outlines OPVI. We simultaneously minimize the variational objective with respect to the variational family q while maximizing it with respect to the function class f. Given a model, operator, and function class parameterization, we can use automatic differentiation to calculate the necessary gradients [3]. Provided the operator does not require model-specific computation, this algorithm satisfies the black box criteria.

3.1 Data Subsampling and OPVI

With stochastic optimization, data subsampling scales up traditional variational inference to massive data [7, 26]. The idea is to calculate noisy gradients by repeatedly subsampling from the data set, without needing to pass through the entire data set for each gradient.

As an illustration, consider hierarchical models. Hierarchical models consist of global latent variables β that are shared across data points and local latent variables z_i, each of which is associated with a data point x_i. The model's log joint density is

log p(x_{1:n}, z_{1:n}, β) = log p(β) + Σ_{i=1}^n [ log p(x_i | z_i, β) + log p(z_i | β) ].

Hoffman et al. [7] calculate unbiased estimates of the log joint density (and its gradient) by subsampling data and appropriately scaling the sum.

We can characterize whether OPVI with a particular operator supports data subsampling. OPVI relies on evaluating the operator and its gradient at different realizations of the latent variables (Equation 7 and Equation 8). Thus we can subsample data to calculate estimates of the operator when it derives from linear operators of the log density, such as differentiation and the identity. This follows because a linear operator of a sum is a sum of linear operators, so the gradients in Equation 7 and Equation 8 decompose into sums. The Langevin-Stein and KL operators are both linear in the log density; both support data subsampling.

3.2 Variational Programs

Given an operator and variational family, Algorithm 1 optimizes the corresponding operator objective. Certain operators require the density of q. For example, the KL operator (Equation 5) requires its log density. This potentially limits the construction of rich variational approximations for which the density of q is difficult to compute.¹ Some operators, however, do not depend on having an analytic density; the Langevin-Stein (LS) operator (Equation 3) is an example. These operators can be used with a much richer class of variational approximations, those that can be sampled from but might not have analytically tractable densities. We call such approximating families variational programs.

Inference with a variational program requires the family to be reparameterizable [11, 22]. (Otherwise we would need to use the score function, which requires the derivative of the density.) A reparameterizable variational program consists of a parametric deterministic transformation R of random noise ε. Formally, let

ε ~ Normal(0, 1),   z = R(ε; λ).    (9)

This generates samples for z, is differentiable with respect to λ, and its density may be intractable.
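To make Equation 9 concrete, here is a minimal sketch of a reparameterizable variational program: standard normal noise pushed through a deterministic map R. The one-hidden-layer tanh network for R is our choice for illustration; any smooth parametric transformation would do.

```python
import numpy as np

def R(eps, lam):
    """A toy transformation R(eps; lam): one hidden tanh layer. The
    architecture is an arbitrary choice for illustration."""
    W1, b1, W2, b2 = lam
    return np.tanh(eps @ W1 + b1) @ W2 + b2

def sample_program(lam, n, dim, rng=None):
    """Draw n samples z = R(eps; lam), eps ~ Normal(0, I), per Equation 9.
    Samples are differentiable in lam, but the density of z is generally
    intractable."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((n, dim))
    return R(eps, lam)

rng = np.random.default_rng(1)
lam = (rng.standard_normal((2, 8)), np.zeros(8),
       rng.standard_normal((8, 2)), np.zeros(2))
z = sample_program(lam, n=1000, dim=2)
```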
For operators that do not require the density of q, such a program can be used as a powerful variational approximation. This is in contrast to the standard Kullback-Leibler (KL) operator.

As an example, consider the following variational program for a one-dimensional random variable. Let ε_i denote the i-th dimension of ε, and make the corresponding definition for λ:

z = 1(ε₃ > 0) R(ε₁; λ₁) − 1(ε₃ ≤ 0) R(ε₂; λ₂).    (10)

When R outputs positive values, this separates the parametrization of the density into the positive and negative halves of the reals; its density is generally intractable. In Section 4, we will use this distribution as a variational approximation.

Equation 9 contains many densities when the function class R can approximate arbitrary continuous functions. We state this formally.

Theorem 1. Consider a posterior distribution p(z | x) with a finite number of latent variables and a continuous quantile function. Assume the operator variational objective has a unique root at the posterior p(z | x) and that R can approximate continuous functions. Then there exists a sequence of parameters λ₁, λ₂, . . . in the variational program such that the operator variational objective converges to 0, and thus q converges in distribution to p(z | x).

This theorem says that we can use variational programs with an appropriate q-independent operator to approximate continuous distributions. The proof is in Appendix D.

¹ It is possible to construct rich approximating families with KL(q ‖ p), but this requires the introduction of an auxiliary distribution [15].

4 Empirical Study

We evaluate operator variational inference on a mixture of Gaussians, comparing different choices in the objective. We then study logistic factor analysis for images.

4.1 Mixture of Gaussians

Consider a one-dimensional mixture of Gaussians as the posterior of interest, p(z) = ½ Normal(z; −3, 1) + ½ Normal(z; 3, 1). The posterior contains multiple modes. We seek to approximate it with three variational objectives: Kullback-Leibler (KL) with a Gaussian approximating family, Langevin-Stein (LS) with a Gaussian approximating family, and LS with a variational program.

Figure 1: The true posterior is a mixture of two Gaussians, in green. We approximate it with a Gaussian using two operators (in blue). The density on the far right is a variational program given in Equation 10 and using the Langevin-Stein operator; it approximates the truth well. The density of the variational program is intractable. We plot a histogram of its samples and compare this to the histogram of the true posterior. [Plots omitted: three panels titled "KL", "Langevin-Stein", and "Variational Program", each over the value of the latent variable z.]

Figure 1 displays the posterior approximations. We find that the KL divergence and LS divergence choose a single mode and have slightly different variances. These operators do not produce good results because a single Gaussian is a poor approximation to the mixture. The remaining distribution in Figure 1 comes from the toy variational program described by Equation 10 with the LS operator. Because this program captures different distributions for the positive and negative halves of the real line, it is able to capture the posterior.

In general, the choice of an objective balances the statistical and computational properties of variational inference. We highlight one tradeoff: the LS objective admits the use of a variational program; however, it is more difficult to optimize than the KL objective.
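The sketch below draws samples from one version of the two-branch program in Equation 10 (with a softplus transformation for R, our choice) and evaluates a Monte Carlo estimate of the Langevin-Stein objective of Equation 4 against this mixture posterior for one fixed test function. The paper instead takes a supremum over a neural-network family of test functions; tanh here is just a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):                      # a positive-output R, our choice
    return np.log1p(np.exp(x))

def sample_program(lam1, lam2, n):
    """Equation 10: z = 1[eps3 > 0] R(eps1; lam1) - 1[eps3 <= 0] R(eps2; lam2)."""
    eps = rng.standard_normal((n, 3))
    return np.where(eps[:, 2] > 0,
                    softplus(lam1[0] * eps[:, 0] + lam1[1]),
                    -softplus(lam2[0] * eps[:, 1] + lam2[1]))

def mixture_score(z):
    """d/dz log p(z) for p(z) = 0.5 N(z; -3, 1) + 0.5 N(z; 3, 1)."""
    w1, w2 = np.exp(-0.5 * (z + 3) ** 2), np.exp(-0.5 * (z - 3) ** 2)
    return -(w1 * (z + 3) + w2 * (z - 3)) / (w1 + w2)

def ls_objective(z, f, f_prime):
    """Monte Carlo value of Equation 4 for a single scalar test function f;
    the full objective takes a supremum over a family of such f."""
    return np.mean(mixture_score(z) * f(z) + f_prime(z)) ** 2

z = sample_program(lam1=(1.0, 3.0), lam2=(1.0, 3.0), n=20000)
value = ls_objective(z, f=np.tanh, f_prime=lambda z: 1.0 - np.tanh(z) ** 2)
```

With these (hand-picked) parameters the program already concentrates mass near ±3, so the estimated objective is close to zero; optimizing λ₁, λ₂ against a family of test functions is what OPVI automates.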
4.2 Logistic Factor Analysis

Logistic factor analysis models binary vectors x_i with a matrix of parameters W and biases b,

z_i ~ Normal(0, I),    x_{i,k} ~ Bernoulli( σ(w_k⊤ z_i + b_k) ),

where z_i has fixed dimension K and σ is the sigmoid function. This model captures correlations of the entries in x_i through W.

We apply logistic factor analysis to analyze the binarized MNIST data set [24], which contains 28x28 binary pixel images of handwritten digits. (We set the latent dimensionality to 10.) We fix the model parameters to those learned with variational expectation-maximization using the KL divergence, and focus on comparing posterior inferences.

We compare the KL operator to the LS operator and study two choices of variational models: a fully factorized Gaussian distribution and a variational program. The variational program generates samples by transforming a K-dimensional standard normal input with a two-layer neural network, using rectified linear activation functions and a hidden size of twice the latent dimensionality. Formally, the variational program we use generates samples of z as follows:

z₀ ~ Normal(0, I)
h₀ = ReLU(W₀^q⊤ z₀ + b₀^q)
h₁ = ReLU(W₁^q⊤ h₀ + b₁^q)
z = W₂^q⊤ h₁ + b₂^q.

The variational parameters are the weights W^q and biases b^q. For f, we use a three-layer neural network with the same hidden size as the variational program and hyperbolic tangent activations, where unit activations were bounded to have norm two. Bounding the unit norm bounds the divergence. We used the Adam optimizer [12] with learning rates 2 × 10⁻⁴ for f and 2 × 10⁻⁵ for the variational approximation.

Table 1: Benchmarks on logistic factor analysis for binarized MNIST. The same variational approximation with LS performs worse than KL on likelihood performance. The variational program with LS performs better without directly optimizing for likelihoods.

Inference method               Completed data log-likelihood
Mean-field Gaussian + KL       -59.3
Mean-field Gaussian + LS       -75.3
Variational Program + LS       -58.9

There is no standard for evaluating generative models and their inference algorithms [25]. Following Rezende et al. [22], we consider a missing data problem. We remove half of the pixels in the test set (at random) and reconstruct them from a fitted posterior predictive distribution. Table 1 summarizes the results on 100 test images; we report the log-likelihood of the completed image. LS with the variational program performs best. It is followed by KL and the simpler LS inference. The LS performs better than KL even though the model parameters were learned with KL.
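For reference, here is a sketch of the generator just described: a K-dimensional standard normal pushed through two ReLU layers of width 2K. The weight shapes and the small-scale random initialization are our assumptions; the paper does not specify them.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def variational_program(params, n, K=10, rng=None):
    """Sample z by pushing a K-dim standard normal through the two-ReLU-layer
    generator described above (hidden size 2K)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z0 = rng.standard_normal((n, K))
    h0 = relu(z0 @ params["W0"] + params["b0"])   # (n, 2K)
    h1 = relu(h0 @ params["W1"] + params["b1"])   # (n, 2K)
    return h1 @ params["W2"] + params["b2"]       # (n, K)

rng = np.random.default_rng(1)
params = {"W0": 0.1 * rng.standard_normal((10, 20)), "b0": np.zeros(20),
          "W1": 0.1 * rng.standard_normal((20, 20)), "b1": np.zeros(20),
          "W2": 0.1 * rng.standard_normal((20, 10)), "b2": np.zeros(10)}
z = variational_program(params, n=100)
```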
5 Summary

We present operator variational objectives, a broad yet tractable class of optimization problems for approximating posterior distributions. Operator objectives are built from an operator, a family of test functions, and a distance function. We outline the connection between operator objectives and existing divergences such as the KL divergence, and develop a new variational objective using the Langevin-Stein operator. In general, operator objectives produce new ways of posing variational inference. Given an operator objective, we develop a black box algorithm for optimizing it, and we show which operators allow scalable optimization through data subsampling. Further, unlike the popular evidence lower bound, not all operators explicitly depend on the approximating density. This permits flexible approximating families, called variational programs, where the distributional form is not tractable. We demonstrate this approach on a mixture model and a factor model of images.

There are several possible avenues for future work, such as developing new variational objectives, adversarially learning model parameters with operators [5], and learning model parameters with operator variational objectives.

Acknowledgments. This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, DARPA N66001-15-C-4032, Adobe, NSERC PGS-D, Porter Ogden Jacobus Fellowship, Seibel Foundation, and the Sloan Foundation. The authors would like to thank Dawen Liang, Ben Poole, Stephan Mandt, Kevin Murphy, Christian Naesseth, and the anonymous reviewers for their helpful feedback and comments.

References
[1] Assaraf, R. and Caffarel, M. (1999). Zero-variance principle for Monte Carlo algorithms. Phys. Rev. Lett.
[2] Barbour, A. D. (1988). Stein's method and Poisson process convergence. Journal of Applied Probability.
[3] Carpenter, B., Hoffman, M. D., Brubaker, M., Lee, D., Li, P., and Betancourt, M. (2015). The Stan Math Library: Reverse-mode automatic differentiation in C++. arXiv preprint arXiv:1509.07164.
[4] Ghahramani, Z. and Beal, M. (2001). Propagation algorithms for variational Bayesian learning. In NIPS 13, pages 507-513.
[5] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Neural Information Processing Systems.
[6] Hernández-Lobato, J. M., Li, Y., Rowland, M., Hernández-Lobato, D., Bui, T., and Turner, R. E. (2015). Black-box α-divergence minimization. arXiv.org.
[7] Hoffman, M., Blei, D., Wang, C., and Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347.
[8] Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366.
[9] Hyvärinen, A. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(Apr):695-709.
[10] Jordan, M., Ghahramani, Z., Jaakkola, T., and Saul, L. (1999). Introduction to variational methods for graphical models. Machine Learning, 37:183-233.
[11] Kingma, D. and Welling, M. (2014). Auto-encoding variational Bayes. In ICLR.
[12] Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
[13] Kushner, H. and Yin, G. (1997). Stochastic Approximation Algorithms and Applications. Springer New York.
[14] Li, Y. and Turner, R. E. (2016). Rényi divergence variational inference. arXiv preprint arXiv:1602.02311.
[15] Maaløe, L., Sønderby, C. K., Sønderby, S. K., and Winther, O. (2016). Auxiliary deep generative models. arXiv preprint arXiv:1602.05473.
[16] Minka, T. P. (2001). Expectation propagation for approximate Bayesian inference. In UAI.
[17] Minka, T. P. (2004). Power EP. Technical report, Microsoft Research, Cambridge.
[18] Nielsen, F. and Nock, R. (2013). On the chi square and higher-order chi distances for approximating f-divergences. arXiv preprint arXiv:1309.3029.
[19] Ranganath, R., Gerrish, S., and Blei, D. (2014). Black box variational inference. In AISTATS.
[20] Ranganath, R., Tran, D., and Blei, D. M. (2016). Hierarchical variational models. In International Conference on Machine Learning.
[21] Rezende, D. J. and Mohamed, S. (2015). Variational inference with normalizing flows. In Proceedings of the 31st International Conference on Machine Learning (ICML-15).
[22] Rezende, D. J., Mohamed, S., and Wierstra, D.
(2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning.
[23] Robbins, H. and Monro, S. (1951). A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407.
[24] Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In International Conference on Machine Learning.
[25] Theis, L., van den Oord, A., and Bethge, M. (2016). A note on the evaluation of generative models. In International Conference on Learning Representations.
[26] Titsias, M. and Lázaro-Gredilla, M. (2014). Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971-1979.
[27] Wingate, D. and Weber, T. (2013). Automated variational inference in probabilistic programming. arXiv e-prints.
The Multiple Quantile Graphical Model

Alnur Ali (Machine Learning Department, Carnegie Mellon University; alnurali@cmu.edu), J. Zico Kolter (Computer Science Department, Carnegie Mellon University; zkolter@cs.cmu.edu), Ryan J. Tibshirani (Department of Statistics, Carnegie Mellon University; ryantibs@cmu.edu)

Abstract

We introduce the Multiple Quantile Graphical Model (MQGM), which extends the neighborhood selection approach of Meinshausen and Bühlmann for learning sparse graphical models. The latter is defined by the basic subproblem of modeling the conditional mean of one variable as a sparse function of all others. Our approach models a set of conditional quantiles of one variable as a sparse function of all others, and hence offers a much richer, more expressive class of conditional distribution estimates. We establish that, under suitable regularity conditions, the MQGM identifies the exact conditional independencies with probability tending to one as the problem size grows, even outside of the usual homoskedastic Gaussian data model. We develop an efficient algorithm for fitting the MQGM using the alternating direction method of multipliers. We also describe a strategy for sampling from the joint distribution that underlies the MQGM estimate. Lastly, we present detailed experiments that demonstrate the flexibility and effectiveness of the MQGM in modeling heteroskedastic non-Gaussian data.

1 Introduction

We consider modeling the joint distribution Pr(y₁, . . . , y_d) of d random variables, given n independent draws from this distribution y^(1), . . . , y^(n) ∈ R^d, where possibly d ≫ n. Later, we generalize this setup and consider modeling the conditional distribution Pr(y₁, . . . , y_d | x₁, . . . , x_p), given n independent pairs (x^(1), y^(1)), . . . , (x^(n), y^(n)) ∈ R^{p+d}.

Our starting point is the neighborhood selection method [28], which is typically considered in the context of multivariate Gaussian data, and seen as a tool for covariance selection [8]: when Pr(y₁, . . . , y_d) is a multivariate Gaussian distribution, it is a well-known fact that y_j and y_k are conditionally independent given the remaining variables if and only if the coefficient corresponding to y_k is zero in the (linear) regression of y_j on all other variables (e.g., [22]). Therefore, in neighborhood selection we compute, for each k = 1, . . . , d, a lasso regression of y_k on the remaining variables, in order to obtain a small set of conditional dependencies, i.e.,

minimize_{θ_k ∈ R^d}  Σ_{i=1}^n ( y_k^(i) − Σ_{j≠k} θ_{kj} y_j^(i) )² + λ ‖θ_k‖₁,    (1)

for a tuning parameter λ > 0. This strategy can be seen as a pseudolikelihood approximation [4],

Pr(y₁, . . . , y_d) ≈ ∏_{k=1}^d Pr(y_k | y₋k),    (2)

where y₋k denotes all variables except y_k. Under the multivariate Gaussian model for Pr(y₁, . . . , y_d), the conditional distributions Pr(y_k | y₋k), k = 1, . . . , d here are (univariate) Gaussians, and maximizing the pseudolikelihood in (2) is equivalent to separately maximizing the conditionals, as is precisely done in (1) (with induced sparsity), for k = 1, . . . , d.
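As a minimal sketch of neighborhood selection, the following runs the lasso regression in (1) for each variable and forms an edge set by an "OR" rule over the two regressions touching each pair. The use of scikit-learn and the OR symmetrization are our choices (Meinshausen and Bühlmann also discuss an "AND" rule).

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(Y, lam):
    """Estimate a sparse conditional-independence graph from an n x d data
    matrix Y by regressing each column on all others with the lasso (1)."""
    n, d = Y.shape
    Theta = np.zeros((d, d))
    for k in range(d):
        others = np.delete(np.arange(d), k)
        # sklearn's Lasso minimizes (1/(2n))||y - Xw||^2 + alpha*||w||_1,
        # so alpha plays the role of lambda up to the 1/(2n) scaling.
        fit = Lasso(alpha=lam, fit_intercept=False).fit(Y[:, others], Y[:, k])
        Theta[k, others] = fit.coef_
    # "OR" rule: keep edge j--k if either regression selects it.
    return (Theta != 0) | (Theta.T != 0)
```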
Neighborhood selection, and a number of related approaches that came after it (see Section 2.1), can all be thought of in this workflow. In many ways, step (ii) acts as the bottleneck here, and to derive the conditionals, we are usually limited to a homoskedastic and parametric family for the joint distribution. The approach we take in this paper differs somewhat substantially, as we begin by directly modeling the conditionals in (2), without any preconceived model for the joint distribution; in this sense, it may be seen as a type of dependency network [13] for continuous data. We also employ heteroskedastic, nonparametric models for the conditional distributions, which allows us great flexibility in learning these conditional relationships. Our method, called the Multiple Quantile Graphical Model (MQGM), is a marriage of ideas in high-dimensional, nonparametric, multiple quantile regression with those in the dependency network literature (the latter is typically focused on discrete, not continuous, data). An outline for this paper is as follows. Section 2 reviews background material, and Section 3 develops the MQGM estimator. Section 4 studies basic properties of the MQGM, and establishes a structure recovery result under appropriate regularity conditions, even for heteroskedastic, non-Gaussian data. Section 5 describes an efficient ADMM algorithm for estimation, and Section 6 presents empirical examples comparing the MQGM versus common alternatives. Section 7 concludes with a discussion.

2 Background

2.1 Neighborhood selection and related methods

Neighborhood selection has motivated a number of methods for learning sparse graphical models. The literature here is vast; we do not claim to give a complete treatment, but just mention some relevant approaches. Many pseudolikelihood approaches have been proposed, see e.g., [35, 33, 12, 24, 17, 1]. These works exploit the connection between estimating a sparse inverse covariance matrix and regression, and they vary in terms of the optimization algorithms they use and the theoretical guarantees they offer. In a clearly related but distinct line of research, [45, 2, 11, 36] proposed $\ell_1$-penalized likelihood estimation in the Gaussian graphical model, a method now generally termed the graphical lasso (GLasso). Following this, several recent papers have extended the GLasso in various ways. [10] examined a modification based on the multivariate Student t-distribution, for robust graphical modeling. [37, 46, 42] considered conditional distributions of the form $\Pr(y_1, \ldots, y_d \mid x_1, \ldots, x_p)$. [23] proposed a model for mixed (both continuous and discrete) data types, generalizing both GLasso and pairwise Markov random fields. [25, 26] used copulas for learning non-Gaussian graphical models. A strength of neighborhood-based (i.e., pseudolikelihood-based) approaches lies in their simplicity; because they essentially reduce to a collection of univariate probability models, they are in a sense much easier to study outside of the typical homoskedastic, Gaussian data setting. [14, 43, 44] elegantly studied the implications of using univariate exponential family models for the conditionals in (2). Closely related to pseudolikelihood approaches are dependency networks [13]. Both frameworks focus on the conditional distributions of one variable given all the rest; the difference lies in whether or not the model for the conditionals stems from first specifying some family of joint distributions (pseudolikelihood methods), or not (dependency networks).
Dependency networks have been thoroughly studied for discrete data, e.g., [13, 29]. For continuous data, [40] proposed modeling the mean in a Gaussian neighborhood regression as a nonparametric, additive function of the remaining variables, yielding flexible relationships; this is a type of dependency network for continuous data (though it is not described by the authors in this way). Our method, the MQGM, also deals with continuous data, and is the first to our knowledge that allows for fully nonparametric conditional distributions, as well as nonparametric contributions of the neighborhood variables, in each local model.

2.2 Quantile regression

In linear regression, we estimate the conditional mean of $y \mid x_1, \ldots, x_p$ from samples. Similarly, in quantile regression [20], we estimate the conditional $\alpha$-quantile of $y \mid x_1, \ldots, x_p$ for a given $\alpha \in [0, 1]$, formally $Q_{y|x_1,\ldots,x_p}(\alpha) = \inf\{t : \Pr(y \leq t \mid x_1, \ldots, x_p) \geq \alpha\}$, by solving the convex optimization problem
$$\operatorname*{minimize}_{\theta} \; \sum_{i=1}^n \psi_\alpha\Big( y^{(i)} - \sum_{j=1}^p \theta_j x_j^{(i)} \Big),$$
where $\psi_\alpha(z) = \max\{\alpha z, (\alpha - 1) z\}$ is the quantile loss (also called the "pinball" or "tilted absolute" loss). Quantile regression can be useful when the conditional distribution in question is suspected to be heteroskedastic and/or non-Gaussian, e.g., heavy-tailed, or if we wish to understand properties of the distribution other than the mean, e.g., tail behavior. In multiple quantile regression, we solve several quantile regression problems simultaneously, each corresponding to a different quantile level; these problems can be coupled somehow to increase efficiency in estimation (see details in the next section). Again, the literature on quantile regression is quite vast (especially that from econometrics), and we only give a short review here. A standard text is [18]. Nonparametric modeling of quantiles is a natural extension from the (linear) quantile regression approach outlined above; in the univariate case (one conditioning variable), [21] suggested a method using smoothing splines, and [38] described an approach using kernels. More recently, [19] studied the multivariate nonparametric case (more than one conditioning variable), using additive models. In the high-dimensional setting, where $p$ is large, [3, 16, 9] studied $\ell_1$-penalized quantile regression and derived estimation and recovery theory for non-(sub-)Gaussian data. We extend results in [9] to prove structure recovery guarantees for the MQGM (in Section 4.3).

3 The multiple quantile graphical model

Many choices can be made with regards to the final form of the MQGM, and to help in understanding these options, we break down our presentation in parts. First fix some ordered set $A = \{\alpha_1, \ldots, \alpha_r\}$ of quantile levels, e.g., $A = \{0.05, 0.10, \ldots, 0.95\}$. For each variable $y_k$, and each level $\alpha_\ell$, we model the conditional $\alpha_\ell$-quantile given the other variables, using an additive expansion of the form:
$$Q_{y_k | y_{-k}}(\alpha_\ell) = b^*_{\ell k} + \sum_{j \neq k} f^*_{\ell k j}(y_j), \qquad (3)$$
where $b^*_{\ell k} \in \mathbb{R}$ is an intercept term, and $f^*_{\ell k j}$, $j = 1, \ldots, d$ are smooth, but not parametric in form. In its most general form, the MQGM estimator is defined as a collection of optimization problems, over $k = 1, \ldots, d$ and $\ell = 1, \ldots, r$:
$$\operatorname*{minimize}_{b_{\ell k},\; f_{\ell k j} \in \mathcal{F}_{\ell k j}} \; \sum_{i=1}^n \psi_{\alpha_\ell}\Big( y_k^{(i)} - b_{\ell k} - \sum_{j \neq k} f_{\ell k j}(y_j^{(i)}) \Big) + \lambda_1 \sum_{j \neq k} P_1(f_{\ell k j})^\gamma + \lambda_2 \sum_{j \neq k} P_2(f_{\ell k j}). \qquad (4)$$
Here $\lambda_1, \lambda_2 \geq 0$ are tuning parameters, $\mathcal{F}_{\ell k j}$, $j = 1, \ldots, d$ are univariate function spaces, $\gamma > 0$ is a fixed exponent, and $P_1$, $P_2$ are sparsity and smoothness penalty functions, respectively.
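Since the estimator (4) is built entirely around the quantile loss of Section 2.2, a tiny numpy sketch may help. The subgradient-descent fit of a single linear conditional quantile below uses an arbitrary step-size schedule; it only shows the mechanics of the pinball loss, and is not the solver used in the paper.

    import numpy as np

    def pinball(z, alpha):
        """Quantile loss psi_alpha(z) = max{alpha * z, (alpha - 1) * z}."""
        return np.maximum(alpha * z, (alpha - 1) * z)

    def fit_linear_quantile(X, y, alpha, steps=5000, lr=0.05):
        """Subgradient descent on (1/n) * sum_i psi_alpha(y_i - x_i' theta - b)."""
        n, p = X.shape
        theta, b = np.zeros(p), 0.0
        for t in range(steps):
            r = y - X @ theta - b
            g = np.where(r > 0, alpha, alpha - 1.0)  # d psi / d r
            step = lr / np.sqrt(t + 1.0)
            # d loss / d theta = -(X' g) / n, so descent *adds* X' g / n
            theta += step * (X.T @ g) / n
            b += step * g.mean()
        return theta, b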
We give three examples below; many other variants are also possible.

Example 1: basis expansion model. Consider taking $\mathcal{F}_{\ell k j} = \mathrm{span}\{\phi_{j1}, \ldots, \phi_{jm}\}$, the span of $m$ basis functions, e.g., radial basis functions (RBFs) with centers placed at appropriate locations across the domain of variable $j$, for each $j = 1, \ldots, d$. This means that each $f_{\ell k j} \in \mathcal{F}_{\ell k j}$ can be expressed as $f_{\ell k j}(x) = \theta_{\ell k j}^T \phi^j(x)$, for a coefficient vector $\theta_{\ell k j} \in \mathbb{R}^m$, where $\phi^j(x) = (\phi_{j1}(x), \ldots, \phi_{jm}(x))$. Also consider an exponent $\gamma = 1$, and the sparsity and smoothness penalties $P_1(f_{\ell k j}) = \|\theta_{\ell k j}\|_2$ and $P_2(f_{\ell k j}) = \|\theta_{\ell k j}\|_2^2$, respectively, which are group lasso and ridge penalties, respectively. With these choices in place, the MQGM problem in (4) can be rewritten in finite-dimensional form:
$$\operatorname*{minimize}_{b_{\ell k},\; \theta_{\ell k} = (\theta_{\ell k 1}, \ldots, \theta_{\ell k d})} \; \psi_{\alpha_\ell}\big( Y_k - b_{\ell k} \mathbf{1} - \Phi \theta_{\ell k} \big) + \sum_{j \neq k} \big( \lambda_1 \|\theta_{\ell k j}\|_2 + \lambda_2 \|\theta_{\ell k j}\|_2^2 \big). \qquad (5)$$
Above, we have used the abbreviation $\psi_{\alpha_\ell}(z) = \sum_{i=1}^n \psi_{\alpha_\ell}(z_i)$ for a vector $z = (z_1, \ldots, z_n) \in \mathbb{R}^n$, and also $Y_k = (y_k^{(1)}, \ldots, y_k^{(n)}) \in \mathbb{R}^n$ for the observations along variable $k$, $\mathbf{1} = (1, \ldots, 1) \in \mathbb{R}^n$, and $\Phi \in \mathbb{R}^{n \times dm}$ for the basis matrix, with blocks of columns to be understood as $\Phi_{ij} = \phi^j(y_j^{(i)})^T \in \mathbb{R}^m$. The basis expansion model is simple and tends to work well in practice, so we focus on it for most of the paper. In principle, essentially all our results apply to the next two models we describe, as well.

Example 2: smoothing splines model. Now consider taking $\mathcal{F}_{\ell k j} = \mathrm{span}\{g^j_1, \ldots, g^j_n\}$, the span of $m = n$ natural cubic splines with knots at $y_j^{(1)}, \ldots, y_j^{(n)}$, for $j = 1, \ldots, d$. As before, we can then write $f_{\ell k j}(x) = \theta_{\ell k j}^T g^j(x)$ with coefficients $\theta_{\ell k j} \in \mathbb{R}^n$, for $f_{\ell k j} \in \mathcal{F}_{\ell k j}$. The work of [27], on high-dimensional additive smoothing splines, suggests a choice of exponent $\gamma = 1/2$, and penalties
$$P_1(f_{\ell k j}) = \|G^j \theta_{\ell k j}\|_2^2 \quad \text{and} \quad P_2(f_{\ell k j}) = \theta_{\ell k j}^T \Omega^j \theta_{\ell k j},$$
for sparsity and smoothness, respectively, where $G^j \in \mathbb{R}^{n \times n}$ is a spline basis matrix with entries $G^j_{i i'} = g^j_{i'}(y_j^{(i)})$, and $\Omega^j$ is the smoothing spline penalty matrix containing integrated products of pairs of twice differentiated basis functions. The MQGM problem in (4) can be translated into a finite-dimensional form, very similar to what we have done in (5), but we omit this for brevity.

Example 3: RKHS model. Consider taking $\mathcal{F}_{\ell k j} = \mathcal{H}_j$, a univariate reproducing kernel Hilbert space (RKHS), with kernel function $\kappa_j(\cdot, \cdot)$. The representer theorem allows us to express each function $f_{\ell k j} \in \mathcal{H}_j$ in terms of the representers of evaluation, i.e., $f_{\ell k j}(x) = \sum_{i=1}^n (\theta_{\ell k j})_i \kappa_j(x, y_j^{(i)})$, for a coefficient vector $\theta_{\ell k j} \in \mathbb{R}^n$. The work of [34], on high-dimensional additive RKHS modeling, suggests a choice of exponent $\gamma = 1$, and sparsity and smoothness penalties
$$P_1(f_{\ell k j}) = \|K^j \theta_{\ell k j}\|_2 \quad \text{and} \quad P_2(f_{\ell k j}) = \sqrt{\theta_{\ell k j}^T K^j \theta_{\ell k j}},$$
respectively, where $K^j \in \mathbb{R}^{n \times n}$ is the kernel matrix with entries $K^j_{i i'} = \kappa_j(y_j^{(i)}, y_j^{(i')})$. Again, the MQGM problem in (4) can be written in finite-dimensional form, now an SDP, omitted for brevity.
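As a companion to Example 1, the following sketch builds the basis matrix $\Phi \in \mathbb{R}^{n \times dm}$ from Gaussian RBFs. The equispaced centers and the bandwidth heuristic are assumptions made here for illustration; the paper only says centers are placed at appropriate locations.

    import numpy as np

    def rbf_basis(Y, m=10):
        """Basis matrix Phi of Example 1: column block j holds m Gaussian
        RBF features of variable y_j, evaluated at the n observations."""
        n, d = Y.shape
        Phi = np.zeros((n, d * m))
        for j in range(d):
            lo, hi = Y[:, j].min(), Y[:, j].max()
            centers = np.linspace(lo, hi, m)          # equispaced (a choice)
            width = (hi - lo) / m + 1e-12             # bandwidth heuristic
            D = Y[:, j:j + 1] - centers[None, :]      # (n, m) differences
            Phi[:, j * m:(j + 1) * m] = np.exp(-0.5 * (D / width) ** 2)
        return Phi

For target variable $k$, one would then drop column block $k$ and solve (5) over the remaining $(d-1)m$ columns.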
Structural constraints. Several structural constraints can be placed on top of the MQGM optimization problem in order to guide the estimated component functions to meet particular shape requirements. An important example are non-crossing constraints (commonplace in nonparametric, multiple quantile regression [18, 38]): here, we optimize (4) jointly over $\ell = 1, \ldots, r$, subject to
$$b_{\ell k} + \sum_{j \neq k} f_{\ell k j}(y_j^{(i)}) \;\leq\; b_{\ell' k} + \sum_{j \neq k} f_{\ell' k j}(y_j^{(i)}), \quad \text{for all } \alpha_\ell < \alpha_{\ell'}, \text{ and } i = 1, \ldots, n. \qquad (6)$$
This ensures that the estimated quantiles obey the proper ordering, at the observations. For concreteness, we consider the implications for the basis regression model, in Example 1 (similar statements hold for the other two models). For each $\ell = 1, \ldots, r$, denote by $F_{\ell k}(b_{\ell k}, \theta_{\ell k})$ the criterion in (5). Introducing the non-crossing constraints requires coupling (5) over $\ell = 1, \ldots, r$, so that we now have the following optimization problems, for each target variable $k = 1, \ldots, d$:
$$\operatorname*{minimize}_{B_k, \theta_k} \; \sum_{\ell=1}^r F_{\ell k}(b_{\ell k}, \theta_{\ell k}) \quad \text{subject to} \quad (\mathbf{1} B_k^T + \Phi \theta_k) D^T \geq 0, \qquad (7)$$
where we denote $B_k = (b_{1k}, \ldots, b_{rk}) \in \mathbb{R}^r$, $\Phi \in \mathbb{R}^{n \times dm}$ the basis matrix as before, $\theta_k \in \mathbb{R}^{dm \times r}$ given by column-stacking $\theta_{\ell k} \in \mathbb{R}^{dm}$, $\ell = 1, \ldots, r$, and $D \in \mathbb{R}^{(r-1) \times r}$ is the usual discrete difference operator. (The inequality in (7) is to be interpreted componentwise.) Computationally, coupling the subproblems across $\ell = 1, \ldots, r$ clearly adds to the overall difficulty of the MQGM, but statistically this coupling acts as a regularizer, by constraining the parameter space in a useful way, thus increasing our efficiency in fitting multiple quantile levels from the given data.

For a triplet $\ell, k, j$, monotonicity constraints are also easy to add, i.e., $f_{\ell k j}(y_j^{(i)}) \leq f_{\ell k j}(y_j^{(i')})$ for all $y_j^{(i)} < y_j^{(i')}$. Convexity constraints, where we require $f_{\ell k j}$ to be convex over the observations, for a particular $\ell, k, j$, are also straightforward. Lastly, strong non-crossing constraints, where we enforce (6) over all $z \in \mathbb{R}^d$ (not just over the observations) are also possible with positive basis functions.

Exogenous variables and conditional random fields. So far, we have considered modeling the joint distribution $\Pr(y_1, \ldots, y_d)$, corresponding to learning a Markov random field (MRF). It is not hard to extend our framework to model the conditional distribution $\Pr(y_1, \ldots, y_d \mid x_1, \ldots, x_p)$ given some exogenous variables $x_1, \ldots, x_p$, corresponding to learning a conditional random field (CRF). To extend the basis regression model, we introduce the additional parameters $\theta^x_{\ell k} \in \mathbb{R}^p$ in (5), and the loss now becomes $\psi_{\alpha_\ell}(Y_k - b_{\ell k} \mathbf{1} - \Phi \theta_{\ell k} - X \theta^x_{\ell k})$, where $X \in \mathbb{R}^{n \times p}$ is filled with the exogenous observations $x^{(1)}, \ldots, x^{(n)} \in \mathbb{R}^p$; the other models are changed similarly.

4 Basic properties and theory

4.1 Quantiles and conditional independence

In the model (3), when a particular variable $y_j$ has no contribution, i.e., satisfies $f^*_{\ell k j} = 0$ across all quantile levels $\alpha_\ell$, $\ell = 1, \ldots, r$, what does this imply about the conditional independence between $y_k$ and $y_j$, given the rest? Outside of the multivariate normal model (where the feature transformations need only be linear), nothing can be said in generality. But we argue that conditional independence can be understood in a certain approximate sense (i.e., in a projected approximation of the data generating model). We begin with a simple lemma. Its proof is elementary, and given in the supplement.

Lemma 4.1. Let $U, V, W$ be random variables, and suppose that all conditional quantiles of $U \mid V, W$ do not depend on $V$, i.e., $Q_{U|V,W}(\alpha) = Q_{U|W}(\alpha)$ for all $\alpha \in [0, 1]$. Then $U$ and $V$ are conditionally independent given $W$.

By the lemma, if we knew that $Q_{U|V,W}(\alpha) = h(\alpha, W)$ for a function $h$, then it would follow that $U, V$ are conditionally independent given $W$ (n.b., the converse is true, as well).
The MQGM problem in (4), with sparsity imposed on the coefficients, essentially aims to achieve such a representation for the conditional quantiles; of course we cannot use a fully nonparametric representation of the conditional distribution $y_k \mid y_{-k}$, and instead we use an $r$-step approximation to the conditional cumulative distribution function (CDF) of $y_k \mid y_{-k}$ (corresponding to estimating $r$ conditional quantiles), and (say) in the basis regression model, limit the dependence on conditioning variables to be in terms of an additive function of RBFs in $y_j$, $j \neq k$. Thus, if at the solution in (5) we find that $\hat{\theta}_{\ell k j} = 0$, $\ell = 1, \ldots, r$, we may interpret this to mean that $y_k$ and $y_j$ are conditionally independent given the remaining variables, but according to the distribution defined by the projection of $y_k \mid y_{-k}$ onto the space of models considered in (5) ($r$-step conditional CDFs, which are additive expansions in $y_j$, $j \neq k$). This interpretation is no more tenuous (arguably, less so, as the model space here is much larger) than that needed when applying standard neighborhood selection to non-Gaussian data.

4.2 Gibbs sampling and the "joint" distribution

When specifying a form for the conditional distributions in a pseudolikelihood approximation as in (2), it is natural to ask: what is the corresponding joint distribution? Unfortunately, for a general collection of conditional distributions, there need not exist a compatible joint distribution, even when all conditionals are continuous [41]. Still, pseudolikelihood approximations (a special case of composite likelihood approximations) possess solid theoretical backing, in that maximizing the pseudolikelihood relates closely to minimizing a certain (expected composite) Kullback-Leibler divergence, measured to the true conditionals [39]. Recently, [7, 44] made nice progress in describing specific conditions on conditional distributions that give rise to a valid joint distribution, though their work was specific to exponential families. A practical answer to the question of this subsection is to use Gibbs sampling, which attempts to draw samples consistent with the fitted conditionals; this is precisely the observation of [13], who show that Gibbs sampling from discrete conditionals converges to a unique stationary distribution, although this distribution may not actually be compatible with the conditionals. The following result establishes the analogous claim for continuous conditionals; its proof is in the supplement. We demonstrate the practical value of Gibbs sampling through various examples in Section 6.

Lemma 4.2. Assume that the conditional distributions $\Pr(y_k \mid y_{-k})$, $k = 1, \ldots, d$ take only positive values on their domain. Then, for any given ordering of the variables, Gibbs sampling converges to a unique stationary distribution that can be reached from any initial point. (This stationary distribution depends on the ordering.)
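Lemma 4.2 is all the Gibbs sampler needs. One sweep can be sketched as follows, where cond_quantiles(k, y) stands for a hypothetical callback returning the r fitted, non-crossing conditional quantiles of y_k given the current values of the other coordinates; linear interpolation between quantile levels is our own choice for inverting the r-step conditional CDF.

    import numpy as np

    def gibbs_sweep(y, cond_quantiles, alphas, rng):
        """One Gibbs pass: resample each y_k from its fitted conditional,
        represented by quantiles at the (increasing) levels alphas."""
        for k in range(len(y)):
            q = cond_quantiles(k, y)            # r quantiles given current y_{-k}
            u = rng.uniform(alphas[0], alphas[-1])
            y[k] = np.interp(u, alphas, q)      # invert the r-step CDF
        return y

Drawing u from [alpha_1, alpha_r] rather than [0, 1] avoids extrapolating beyond the fitted quantiles; this truncation is pragmatic rather than something the paper specifies.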
4.3 Graph structure recovery

When $\log d = O(n^{2/21})$, and we assume somewhat standard regularity conditions (listed as A1-A4 in the supplement), the MQGM estimate recovers the underlying conditional independencies with high probability (interpreted in the projected model space, as explained in Section 4.1). Importantly, we do not require a Gaussian, sub-Gaussian, or even parametric assumption on the data generating process; instead, we assume i.i.d. draws $y^{(1)}, \ldots, y^{(n)} \in \mathbb{R}^d$, where the conditional distributions $y_k \mid y_{-k}$ have quantiles specified by the model in (3) for $k = 1, \ldots, d$, $\ell = 1, \ldots, r$, and further, each $f^*_{\ell k j}(x) = (\theta^*_{\ell k j})^T \phi^j(x)$ for coefficients $\theta^*_{\ell k j} \in \mathbb{R}^m$, $j = 1, \ldots, d$, as in the basis expansion model. Let $E^*$ denote the corresponding edge set of conditional dependencies from these neighborhood models, i.e., $\{k, j\} \in E^* \iff \max_{\ell=1,\ldots,r} \max\{\|\theta^*_{\ell k j}\|_2, \|\theta^*_{\ell j k}\|_2\} > 0$. We define the estimated edge set $\hat{E}$ in the analogous way, based on the solution in (5). Without a loss of generality, we assume the features have been scaled to satisfy $\|\Phi_j\| \leq \sqrt{n}$ for all $j = 1, \ldots, dm$. The following is our recovery result; its proof is provided in the supplement.

Theorem 4.3. Assume $\log d = O(n^{2/21})$, and conditions A1-A4 in the supplement. Assume that the tuning parameters $\lambda_1, \lambda_2$ satisfy $\lambda_1 \gtrsim (mn \log(d^2 m r / \delta) \log^3 n)^{1/2}$ and $\lambda_2 = o(n^{41/42} / \theta^*_{\max})$, where $\theta^*_{\max} = \max_{\ell,k,j} \|\theta^*_{\ell k j}\|_2$. Then for $n$ sufficiently large, the MQGM estimate in (5) exactly recovers the underlying conditional dependencies, i.e., $\hat{E} = E^*$, with probability at least $1 - \delta$.

The theorem shows that the nonzero pattern in the MQGM estimate identifies, with high probability, the underlying conditional independencies. But to be clear, we emphasize that the MQGM estimate is not an estimate of the inverse covariance matrix itself (this is also true of neighborhood regression, SpaceJam of [40], and many other methods for learning graphical models).

5 Computational approach

By design, the MQGM problem in (5) separates into $d$ subproblems, across $k = 1, \ldots, d$ (it therefore suffices to consider only a single subproblem, so we omit notational dependence on $k$ for auxiliary variables). While these subproblems are challenging for off-the-shelf solvers (even for only moderately-sized graphs), the key terms here all admit efficient proximal operators [32], which makes operator splitting methods like the alternating direction method of multipliers [5] a natural choice. As an illustration, we consider the non-crossing constraints in the basis regression model below. Reparameterizing our problem, so that we may apply ADMM, yields:
$$\operatorname*{minimize}_{\theta_k, B_k, V, W, Z} \; \psi_A(Z) + \lambda_1 \sum_{\ell=1}^r \sum_{j=1}^d \|W_{\ell j}\|_2 + \frac{\lambda_2}{2} \|W\|_F^2 + I_+(V D^T) \quad \text{subject to} \quad V = \mathbf{1} B_k^T + \Phi \theta_k, \;\; W = \theta_k, \;\; Z = Y_k \mathbf{1}^T - \mathbf{1} B_k^T - \Phi \theta_k, \qquad (8)$$
where for brevity $\psi_A(A) = \sum_{\ell=1}^r \sum_{j=1}^d \psi_{\alpha_\ell}(A_{\ell j})$, and $I_+(\cdot)$ is the indicator function of the space of elementwise nonnegative matrices. The augmented Lagrangian associated with (8) is:
$$L_\rho(\theta_k, B_k, V, W, Z, U_V, U_W, U_Z) = \psi_A(Z) + \lambda_1 \sum_{\ell=1}^r \sum_{j=1}^d \|W_{\ell j}\|_2 + \frac{\lambda_2}{2} \|W\|_F^2 + I_+(V D^T) + \frac{\rho}{2} \Big( \|\mathbf{1} B_k^T + \Phi \theta_k - V + U_V\|_F^2 + \|\theta_k - W + U_W\|_F^2 + \|Y_k \mathbf{1}^T - \mathbf{1} B_k^T - \Phi \theta_k - Z + U_Z\|_F^2 \Big), \qquad (9)$$
where $\rho > 0$ is the augmented Lagrangian parameter, and $U_V, U_W, U_Z$ are dual variables corresponding to the equality constraints on $V, W, Z$, respectively. Minimizing (9) over $V$ yields:
$$V \leftarrow P_{\mathrm{iso}}\big( \mathbf{1} B_k^T + \Phi \theta_k + U_V \big), \qquad (10)$$
where $P_{\mathrm{iso}}(\cdot)$ denotes the row-wise projection operator onto the isotonic cone (the space of componentwise nondecreasing vectors), an $O(nr)$ operation here [15]. Minimizing (9) over $W_{\ell j}$ yields the update:
$$W_{\ell j} \leftarrow \frac{1}{1 + \lambda_2 / \rho} \left( 1 - \frac{\lambda_1 / \rho}{\|(\theta_k)_{\ell j} + (U_W)_{\ell j}\|_2} \right)_+ \big( (\theta_k)_{\ell j} + (U_W)_{\ell j} \big), \qquad (11)$$
where $(\cdot)_+$ is the positive part operator. This can be seen by deriving the proximal operator of the function $f(x) = \lambda_1 \|x\|_2 + (\lambda_2 / 2) \|x\|_2^2$. Minimizing (9) over $Z$ yields the update:
$$Z \leftarrow \mathrm{prox}_{(1/\rho) \psi_A}\big( Y_k \mathbf{1}^T - \mathbf{1} B_k^T - \Phi \theta_k + U_Z \big), \qquad (12)$$
where $\mathrm{prox}_f(\cdot)$ denotes the proximal operator of a function $f$. For the multiple quantile loss function $\psi_A$, this is a kind of generalized soft-thresholding. The proof is given in the supplement.
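The W-update (11) is a blockwise ("group") soft-threshold followed by a ridge shrink. A direct numpy transcription, storing theta_k and U_W as arrays of shape (r, d, m), a layout chosen here for convenience:

    import numpy as np

    def update_W(Theta, U_W, lam1, lam2, rho):
        """ADMM update (11): group soft-threshold each m-vector block
        (Theta + U_W)[l, j], then shrink by 1 / (1 + lam2 / rho)."""
        A = Theta + U_W                                    # shape (r, d, m)
        norms = np.linalg.norm(A, axis=2, keepdims=True)   # block norms
        scale = np.maximum(1.0 - (lam1 / rho) / np.maximum(norms, 1e-12), 0.0)
        return scale * A / (1.0 + lam2 / rho)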
Lemma 5.1. Let $P_+(\cdot)$ and $P_-(\cdot)$ be the elementwise positive and negative part operators, respectively, and let $a = (\alpha_1, \ldots, \alpha_r)$. Then $\mathrm{prox}_{t \psi_A}(A) = P_+(A - t \mathbf{1} a^T) + P_-(A - t \mathbf{1} (a - \mathbf{1})^T)$.

Finally, differentiation in (9) with respect to $B_k$ and $\theta_k$ yields the simultaneous updates:
$$\begin{pmatrix} \theta_k \\ B_k^T \end{pmatrix} \leftarrow \begin{pmatrix} \Phi^T \Phi + \tfrac{1}{2} I & \Phi^T \mathbf{1} \\ \mathbf{1}^T \Phi & \mathbf{1}^T \mathbf{1} \end{pmatrix}^{-1} \left( \frac{1}{2} \begin{bmatrix} I \\ 0 \end{bmatrix} (W - U_W) + \frac{1}{2} \begin{bmatrix} \Phi^T \\ \mathbf{1}^T \end{bmatrix} \big( Y_k \mathbf{1}^T - Z + U_Z + V - U_V \big) \right). \qquad (13)$$
A complete description of our ADMM algorithm for solving the MQGM problem is in the supplement.

Gibbs sampling. Having fit the conditionals $y_k \mid y_{-k}$, $k = 1, \ldots, d$, we may want to make predictions or extract joint distributions over subsets of variables. As discussed in Section 4.2, there is no general analytic form for these joint distributions, but the pseudolikelihood approximation underlying the MQGM suggests a natural Gibbs sampler. A careful implementation that respects the additive model in (3) yields a highly efficient Gibbs sampler, especially for CRFs; the supplement gives details.

6 Empirical examples

6.1 Synthetic data

We consider synthetic examples, comparing the MQGM to neighborhood selection (MB), the graphical lasso (GLasso), SpaceJam [40], the nonparanormal skeptic [26], TIGER [24], and neighborhood selection using the absolute loss (Laplace).

Ring example. As a simple but telling example, we drew $n = 400$ samples from a "ring" distribution in $d = 4$ dimensions. Data were generated by drawing a random angle $\theta \sim \mathrm{Uniform}(0, 1)$, a random radius $R \sim N(0, 0.1)$, and then computing the coordinates $y_1 = R \cos \theta$, $y_2 = R \sin \theta$ and $y_3, y_4 \sim N(0, 1)$, i.e., $y_1$ and $y_2$ are the only dependent variables here. The MQGM was used with $m = 10$ basis functions (RBFs), and $r = 20$ quantile levels. The left panel of Figure 1 plots samples (blue) of the coordinates $y_1, y_2$ as well as new samples from the MQGM (red) fitted to these same (blue) samples, obtained by using our Gibbs sampler; the samples from the MQGM appear to closely match the samples from the underlying ring. The main panel of Figure 1 shows the conditional dependencies recovered by the MQGM, SpaceJam, GLasso, and MB (plots for the other methods are given in the supplement), when run on the ring data. We visualize these dependencies by forming a $d \times d$ matrix with the cell $(j, k)$ set to black if $j, k$ are conditionally dependent given the others, and white otherwise. Across a range of tuning parameters for each method, the MQGM is the only one that successfully recovers the underlying conditional dependencies, at some point along its solution path. In the supplement, we present an evaluation of the conditional CDFs given by each method, when run on the ring data; again, the MQGM performs best in this setting.

Larger examples. To investigate performance at larger scales, we drew $n \in \{50, 100, 300\}$ samples from a multivariate normal and Student t-distribution (with 3 degrees of freedom), both in $d = 100$ dimensions, both parameterized by a random, sparse, diagonally dominant $d \times d$ inverse covariance matrix, following the procedure in [33, 17, 31, 1]. Over the same set of sample sizes, with $d = 100$, we also considered an autoregressive setup in which we drew samples of pairs of adjacent variables from the ring distribution. In all three data settings (normal, t, and autoregressive), we used $m = 10$ and $r = 20$ for the MQGM.
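For reference, a generator for the ring data just described. The printed parameters appear garbled by extraction, so the reading below (angle as a fraction of a full turn, radius centered at 1 rather than 0 so that a unit ring actually appears, as in Figure 1) is an assumption, not the paper's exact recipe.

    import numpy as np

    def ring_data(n=400, rng=None):
        """Ring distribution in d = 4 dimensions (Sec. 6.1): (y1, y2) lie on
        a noisy ring; y3, y4 are independent N(0, 1). Parameter readings are
        assumptions, since the printed values look extraction-damaged."""
        rng = rng or np.random.default_rng(0)
        angle = 2.0 * np.pi * rng.uniform(0.0, 1.0, size=n)  # turn fraction
        radius = 1.0 + rng.normal(0.0, 0.1, size=n)          # assumed center 1
        return np.column_stack([radius * np.cos(angle),
                                radius * np.sin(angle),
                                rng.normal(size=n),
                                rng.normal(size=n)])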
To summarize the performances, we considered a range of tuning parameters for each method, computed corresponding false and true positive rates (in detecting conditional dependencies), and then computed the corresponding area under the curve (AUC), following, e.g., [33, 17, 31, 1]. Table 1 reports the median AUCs (across 50 trials) for all three of these examples; the MQGM outperforms all other methods on the autoregressive example; on the normal and Student t examples, it performs quite competitively.

[Figure 1 appears here; its panels plot $y_1$ versus $y_2$ for the ring data, and show recovered dependency matrices across a range of $\lambda_1$ values for each method.]

Figure 1: Left: data from the ring distribution (blue) as well as new samples from the MQGM (red) fitted to the same (blue) data, obtained by using our Gibbs sampler. Right: conditional dependencies recovered by the MQGM, MB, GLasso, and SpaceJam on the ring data; black means conditional dependence. The MQGM is the only method that successfully recovers the underlying conditional dependencies along its solution path.

Table 1: AUC values for the MQGM, MB, GLasso, SpaceJam, the nonparanormal skeptic, TIGER, and Laplace for the normal, t, and autoregressive data settings; higher is better, best in bold.

              |        Normal         |       Student t       |    Autoregressive
              | n=50   n=100  n=300   | n=50   n=100  n=300   | n=50   n=100  n=300
    MQGM      | 0.953  0.976  0.988   | 0.928  0.947  0.981   | 0.726  0.754  0.955
    MB        | 0.850  0.959  0.994   | 0.844  0.923  0.988   | 0.532  0.563  0.725
    GLasso    | 0.908  0.964  0.998   | 0.691  0.605  0.965   | 0.541  0.620  0.711
    SpaceJam  | 0.889  0.968  0.997   | 0.893  0.965  0.993   | 0.624  0.708  0.854
    Nonpara.  | 0.881  0.962  0.996   | 0.862  0.942  0.998   | 0.545  0.590  0.612
    TIGER     | 0.732  0.921  0.996   | 0.420  0.873  0.989   | 0.503  0.518  0.718
    Laplace   | 0.803  0.931  0.989   | 0.800  0.876  0.991   | 0.530  0.554  0.758

[Figure 2 appears here; its panels show a map of the 10 U.S. regions, recovered dependency matrices, ADMM versus SCS objective value over wallclock time (seconds), and sampled weekly flu-like symptom rates for region 6 over the weeks of the year.]

Figure 2: Top panel and bottom row, middle panel: conditional dependencies recovered by the MQGM on the flu data; each of the first ten cells corresponds to a region of the U.S., and black means dependence. Bottom row, left panel: wallclock time (in seconds) for solving one subproblem using ADMM versus SCS. Bottom row, right panel: samples from the fitted marginal distribution of the weekly flu incidence rates at region 6; samples at larger quantiles are shaded lighter, and the median is in darker blue.

6.2 Modeling flu epidemics

We study $n = 937$ weekly flu incidence reports from September 28, 1997 through August 30, 2015, across 10 regions in the United States (see the top panel of Figure 2), obtained from [6]. We considered $d = 20$ variables: the first 10 encode the current week's flu incidence (precisely, the percentage of doctor's visits in which flu-like symptoms are presented) in the 10 regions, and the last 10 encode the same but for the prior week. We set $m = 5$, $r = 99$, and also introduced exogenous variables to encode the week numbers, so $p = 1$. Thus, learning the MQGM here corresponds to learning the structure of a spatiotemporal graphical model, and reduces to solving 20 multiple quantile regression subproblems, each of dimension $(19 \times 5 + 1) \times 99 = 9504$.
All subproblems took about 1 minute on a 6-core 3.3 GHz Core i7 X980 processor. The bottom left panel in Figure 2 plots the time (in seconds) taken for solving one subproblem using ADMM versus SCS [30], a cone solver that has been advocated as a reasonable choice for a class of problems encapsulating (4); ADMM outperforms SCS by roughly two orders of magnitude. The bottom middle panel of Figure 2 presents the conditional independencies recovered by the MQGM. Nonzero entries in the upper left $10 \times 10$ submatrix correspond to dependencies between the $y_k$ variables for $k = 1, \ldots, 10$; e.g., the nonzero (0,2) entry suggests that region 1 and 3's flu reports are dependent. The lower right $10 \times 10$ submatrix corresponds to the $y_k$ variables for $k = 11, \ldots, 20$, and the nonzero banded entries suggest that at any region the previous week's flu incidence (naturally) influences the next week's. The top panel of Figure 2 visualizes these relationships by drawing an edge between dependent regions; region 6 is highly connected, suggesting that it may be a bellwether for other regions, roughly in keeping with the current understanding of flu dynamics. To draw samples from the fitted distributions, we ran our Gibbs sampler over the year, generating 1000 total samples, making 5 passes over all coordinates between each sample, and with a burn-in period of 100 iterations. The bottom right panel of Figure 2 plots samples from the marginal distribution of the percentages of flu reports at region 6 (other regions are in the supplement) throughout the year, revealing the heteroskedastic nature of the data. For space reasons, our last example, on wind power data, is presented in the supplement.

7 Discussion

We proposed and studied the Multiple Quantile Graphical Model (MQGM). We established theoretical and empirical backing to the claim that the MQGM is capable of compactly representing relationships between heteroskedastic non-Gaussian variables. We also developed efficient algorithms for both estimation and sampling in the MQGM. All in all, we believe that our work represents a step forward in the design of flexible yet tractable graphical models.

Acknowledgements

AA was supported by DOE Computational Science Graduate Fellowship DE-FG02-97ER25308. JZK was supported by an NSF Expeditions in Computing Award, CompSustNet, CCF-1522054. RJT was supported by NSF Grants DMS-1309174 and DMS-1554123.

References

[1] Alnur Ali, Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. Generalized pseudolikelihood methods for inverse covariance estimation. Technical report, 2016. Available at http://arxiv.org/pdf/1606.00033.pdf.
[2] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485-516, 2008.
[3] Alexandre Belloni and Victor Chernozhukov. $\ell_1$-penalized quantile regression in high-dimensional sparse models. Annals of Statistics, 39(1):82-130, 2011.
[4] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society: Series B, 36(2):192-236, 1974.
[5] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[6] Centers for Disease Control and Prevention (CDC). Influenza national and regional level graphs and data, August 2015.
URL http://gis.cdc.gov/grasp/fluview/fluportaldashboard.html.
[7] Shizhe Chen, Daniela Witten, and Ali Shojaie. Selection and estimation for mixed graphical models. Biometrika, 102(1):47-64, 2015.
[8] Arthur Dempster. Covariance selection. Biometrics, 28(1):157-175, 1972.
[9] Jianqing Fan, Yingying Fan, and Emre Barut. Adaptive robust variable selection. Annals of Statistics, 42(1):324-351, 2014.
[10] Michael Finegold and Mathias Drton. Robust graphical modeling of gene networks using classical and alternative t-distributions. Annals of Applied Statistics, 5(2A):1057-1080, 2011.
[11] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[12] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Applications of the lasso and grouped lasso to the estimation of sparse graphical models. Technical report, 2010. Available at http://statweb.stanford.edu/~tibs/ftp/ggraph.pdf.
[13] David Heckerman, David Maxwell Chickering, David Meek, Robert Rounthwaite, and Carl Kadie. Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1:49-75, 2000.
[14] Holger Höfling and Robert Tibshirani. Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. Journal of Machine Learning Research, 10:883-906, 2009.
[15] Nicholas Johnson. A dynamic programming algorithm for the fused lasso and $\ell_0$-segmentation. Journal of Computational and Graphical Statistics, 22(2):246-260, 2013.
[16] Kengo Kato. Group lasso for high dimensional sparse quantile regression models. Technical report, 2011. Available at http://arxiv.org/pdf/1103.1458.pdf.
[17] Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B, 77(4):803-825, 2014.
[18] Roger Koenker. Quantile Regression. Cambridge University Press, 2005.
[19] Roger Koenker. Additive models for quantile regression: Model selection and confidence bandaids. Brazilian Journal of Probability and Statistics, 25(3):239-262, 2011.
[20] Roger Koenker and Gilbert Bassett. Regression quantiles. Econometrica, 46(1):33-50, 1978.
[21] Roger Koenker, Pin Ng, and Stephen Portnoy. Quantile smoothing splines. Biometrika, 81(4):673-680, 1994.
[22] Steffen Lauritzen. Graphical models. Oxford University Press, 1996.
[23] Jason Lee and Trevor Hastie. Structure learning of mixed graphical models. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics, pages 388-396, 2013.
[24] Han Liu and Lie Wang. TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models. Technical report, 2012. Available at http://arxiv.org/pdf/1209.2437.pdf.
[25] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2009.
[26] Han Liu, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, pages 2293-2326, 2012.
[27] Lukas Meier, Sara van de Geer, and Peter Bühlmann. High-dimensional additive modeling. Annals of Statistics, 37(6):3779-3821, 2009.
[28] Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436-1462, 2006.
[29] Jennifer Neville and David Jensen. Dependency networks for relational data. In Proceedings of the Fourth IEEE International Conference on Data Mining, pages 170-177. IEEE, 2004.
[30] Brendan O'Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Operator splitting for conic optimization via homogeneous self-dual embedding. Technical report, 2013. Available at https://stanford.edu/~boyd/papers/pdf/scs.pdf.
[31] Sang-Yun Oh, Onkar Dalal, Kshitij Khare, and Bala Rajaratnam. Optimization methods for sparse pseudolikelihood graphical model selection. In Advances in Neural Information Processing Systems 27, pages 667-675, 2014.
[32] Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123-231, 2013.
[33] Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735-746, 2009.
[34] Garvesh Raskutti, Martin Wainwright, and Bin Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Journal of Machine Learning Research, 13:389-427, 2012.
[35] Guilherme Rocha, Peng Zhao, and Bin Yu. A path following algorithm for sparse pseudo-likelihood inverse covariance estimation (SPLICE). Technical report, 2008. Available at https://www.stat.berkeley.edu/~binyu/ps/rocha.pseudo.pdf.
[36] Adam Rothman, Peter Bickel, Elizaveta Levina, and Ji Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[37] Kyung-Ah Sohn and Seyoung Kim. Joint estimation of structured sparsity and output structure in multiple-output regression via inverse covariance regularization. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 1081-1089, 2012.
[38] Ichiro Takeuchi, Quoc Le, Timothy Sears, and Alexander Smola. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231-1264, 2006.
[39] Cristiano Varin and Paolo Vidoni. A note on composite likelihood inference and model selection. Biometrika, 92(3):519-528, 2005.
[40] Arend Voorman, Ali Shojaie, and Daniela Witten. Graph estimation with joint additive models. Biometrika, 101(1):85-101, 2014.
[41] Yuchung Wang and Edward Ip. Conditionally specified continuous distributions. Biometrika, 95(3):735-746, 2008.
[42] Matt Wytock and Zico Kolter. Sparse Gaussian conditional random fields: Algorithms, theory, and application to energy forecasting. In Proceedings of the 30th International Conference on Machine Learning, pages 1265-1273, 2013.
[43] Eunho Yang, Pradeep Ravikumar, Genevera Allen, and Zhandong Liu. Graphical models via generalized linear models. In Advances in Neural Information Processing Systems 25, pages 1358-1366, 2012.
[44] Eunho Yang, Pradeep Ravikumar, Genevera Allen, and Zhandong Liu. Graphical models via univariate exponential family distributions. Journal of Machine Learning Research, 16:3813-3847, 2015.
[45] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
[46] Xiao-Tong Yuan and Tong Zhang. Partial Gaussian graphical model estimation. IEEE Transactions on Information Theory, 60(3):1673-1687, 2014.
A Consistent Regularization Approach for Structured Prediction

Carlo Ciliberto*,1 (cciliber@mit.edu)    Alessandro Rudi*,1,2 (ale_rudi@mit.edu)    Lorenzo Rosasco1,2 (lrosasco@mit.edu)

1 Laboratory for Computational and Statistical Learning - Istituto Italiano di Tecnologia, Genova, Italy & Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
2 Università degli Studi di Genova, Genova, Italy.
* Equal contribution

Abstract

We propose and analyze a regularization approach for structured prediction problems. We characterize a large class of loss functions that allows to naturally embed structured outputs in a linear space. We exploit this fact to design learning algorithms using a surrogate loss approach and regularization techniques. We prove universal consistency and finite sample bounds characterizing the generalization properties of the proposed method. Experimental results are provided to demonstrate the practical usefulness of the proposed approach.

1 Introduction

Many machine learning applications require dealing with data sets having complex structures, e.g. natural language processing, image segmentation, reconstruction or captioning, pose estimation, protein folding prediction, to name a few [1, 2, 3]. Structured prediction problems pose a challenge for classic off-the-shelf learning algorithms for regression or binary classification. This has motivated the extension of methods such as support vector machines to structured problems [4]. Dealing with structured prediction problems is also a challenge for learning theory. While the theory of empirical risk minimization provides a very general statistical framework, in practice it needs to be complemented with an ad-hoc analysis for each specific setting. Indeed, in the last few years, an effort has been made to analyze specific structured problems, such as multiclass classification [5], multi-labeling [6], ranking [7] or quantile estimation [8]. A natural question is whether a unifying learning framework can be developed to address a wide range of problems from theory to algorithms. This paper takes a step in this direction, proposing and analyzing a general regularization approach to structured prediction. Our starting observation is that for a large class of these problems, we can define a natural embedding of the associated loss functions into a linear space. This allows us to define a (least squares) surrogate problem of the original structured one, that is cast within a multi-output regularized learning framework [9, 10, 11]. We prove that by solving the surrogate, we are able to recover the exact solution of the original structured problem. The corresponding algorithm essentially generalizes approaches considered in [12, 13, 14, 15, 16]. We study the generalization properties of the proposed approach, establishing universal consistency as well as finite sample bounds. The rest of this paper is organized as follows: in Sec. 2 we introduce the structured prediction problem in its generality and present our algorithm to approach it. In Sec. 3 we introduce and discuss a surrogate framework for structured prediction, from which we derive our algorithm. In Sec. 4, we analyze the theoretical properties of the proposed algorithm. In Sec. 5 we draw connections with previous work in structured prediction. Sec. 6 reports promising experimental results on a variety of structured prediction problems. Sec. 7 concludes the paper outlining relevant directions for future research.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 A Regularization Approach to Structured Prediction

The goal of supervised learning is to learn functional relations $f : \mathcal{X} \to \mathcal{Y}$ between two sets $\mathcal{X}, \mathcal{Y}$, given a finite number of examples. In particular, in this work we are interested in structured prediction, namely the case where $\mathcal{Y}$ is a set of structured outputs (such as histograms, graphs, time sequences, points on a manifold, etc.). Moreover, structure on $\mathcal{Y}$ can be implicitly induced by a suitable loss $\triangle : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ (such as edit distance, ranking error, geodesic distance, indicator function of a subset, etc.). Then, the problem of structured prediction becomes
$$\operatorname*{minimize}_{f : \mathcal{X} \to \mathcal{Y}} \; \mathcal{E}(f), \quad \text{with} \quad \mathcal{E}(f) = \int_{\mathcal{X} \times \mathcal{Y}} \triangle(f(x), y) \, d\rho(x, y), \qquad (1)$$
and the goal is to find a good estimator for the minimizer of the above equation, given a finite number of (training) points $\{(x_i, y_i)\}_{i=1}^n$ sampled from an unknown probability distribution $\rho$ on $\mathcal{X} \times \mathcal{Y}$. In the following we introduce an estimator $\hat{f} : \mathcal{X} \to \mathcal{Y}$ to approach Eq. (1). The rest of this paper is devoted to proving that $\hat{f}$ is a consistent estimator for a minimizer of Eq. (1).

Our Algorithm for Structured Prediction. In this paper we propose and analyze the following estimator
$$\hat{f}(x) = \operatorname*{argmin}_{y \in \mathcal{Y}} \; \sum_{i=1}^n \alpha_i(x) \, \triangle(y, y_i) \quad \text{with} \quad \alpha(x) = (K + n\lambda I)^{-1} K_x \in \mathbb{R}^n, \qquad \text{(Alg. 1)}$$
given a positive definite kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and training set $\{(x_i, y_i)\}_{i=1}^n$. In the above expression, $\alpha_i(x)$ is the $i$-th entry in $\alpha(x)$, $K \in \mathbb{R}^{n \times n}$ is the kernel matrix $K_{i,j} = k(x_i, x_j)$, $K_x \in \mathbb{R}^n$ the vector with entries $(K_x)_i = k(x, x_i)$, $\lambda > 0$ a regularization parameter and $I$ the identity matrix. From a computational perspective, the procedure in Alg. 1 is divided in two steps: a learning step where input-dependent weights $\alpha_i(\cdot)$ are computed (which essentially consists in solving a kernel ridge regression problem) and a prediction step where the $\alpha_i(x)$-weighted linear combination in Alg. 1 is optimized, leading to a prediction $\hat{f}(x)$ given an input $x$. The idea of a similar two-step strategy goes back to standard approaches for structured prediction and was originally proposed in [17], where a "score" function $F(x, y)$ was learned to estimate the "likelihood" of a pair $(x, y)$ sampled from $\rho$, and then used in $\hat{f}(x) = \operatorname{argmin}_{y \in \mathcal{Y}} -F(x, y)$, to predict the best $\hat{f}(x) \in \mathcal{Y}$ given $x \in \mathcal{X}$. This strategy was extended in [4] for the popular SVMstruct and adopted also in a variety of approaches for structured prediction [1, 12, 14].

Intuition. While providing a principled derivation of Alg. 1 for a large class of loss functions is a main contribution of this work, it is useful to first consider the special case where $\triangle$ is induced by a reproducing kernel $h : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ on the output set, such that
$$\triangle(y, y') = h(y, y) - 2h(y, y') + h(y', y'). \qquad (2)$$
This choice of $\triangle$ was originally considered in Kernel Dependency Estimation (KDE) [18]. In particular, for the special case of normalized kernels (i.e. $h(y, y) = 1$ for all $y \in \mathcal{Y}$), Alg. 1 essentially reduces to [12, 13, 14] and recalling their derivation is insightful. Note that, since a kernel can be written as $h(y, y') = \langle \psi(y), \psi(y') \rangle_{\mathcal{H}_\mathcal{Y}}$, with $\psi : \mathcal{Y} \to \mathcal{H}_\mathcal{Y}$ a non-linear map into a feature space $\mathcal{H}_\mathcal{Y}$ [19], then Eq. (2) can be rewritten as
$$\triangle(f(x), y') = \| \psi(f(x)) - \psi(y') \|^2_{\mathcal{H}_\mathcal{Y}}. \qquad (3)$$
Directly minimizing the equation above with respect to $f$ is generally challenging due to the non-linearity $\psi$. A possibility is to replace $\psi \circ f$ by a function $g : \mathcal{X} \to \mathcal{H}_\mathcal{Y}$ that is easier to optimize.
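The learning step of Alg. 1 is a standard kernel ridge solve. A minimal sketch, where the Gaussian kernel and its bandwidth are our own choices (the paper only requires a positive definite kernel k):

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for rows of A, B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def learn_alpha(X_train, lam, sigma=1.0):
        """Return a function alpha(x) giving the n weights of Alg. 1."""
        n = X_train.shape[0]
        K = gaussian_kernel(X_train, X_train, sigma)
        M = np.linalg.inv(K + n * lam * np.eye(n))    # (K + n lam I)^{-1}

        def alpha(x):
            Kx = gaussian_kernel(X_train, x[None, :], sigma)[:, 0]
            return M @ Kx

        return alpha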
We can then consider the regularized problem
$$\operatorname*{minimize}_{g \in \mathcal{G}} \; \frac{1}{n} \sum_{i=1}^n \| g(x_i) - \psi(y_i) \|^2_{\mathcal{H}_\mathcal{Y}} + \lambda \| g \|^2_{\mathcal{G}}, \qquad (4)$$
with $\mathcal{G}$ a space of functions(1) $g : \mathcal{X} \to \mathcal{H}_\mathcal{Y}$ of the form $g(x) = \sum_{i=1}^n k(x, x_i) c_i$ with $c_i \in \mathcal{H}_\mathcal{Y}$ and $k$ a reproducing kernel. Indeed, in this case the solution to Eq. (4) is
$$\hat{g}(x) = \sum_{i=1}^n \alpha_i(x) \psi(y_i) \quad \text{with} \quad \alpha(x) = (K + n\lambda I)^{-1} K_x \in \mathbb{R}^n, \qquad (5)$$
where the $\alpha_i$ are the same as in Alg. 1. Since we replaced $\triangle(f(x), y)$ by $\| g(x) - \psi(y) \|^2_{\mathcal{H}_\mathcal{Y}}$, a natural question is how to recover an estimator $\hat{f}$ from $\hat{g}$. In [12] it was proposed to consider
$$\hat{f}(x) = \operatorname*{argmin}_{y \in \mathcal{Y}} \| \psi(y) - \hat{g}(x) \|^2_{\mathcal{H}_\mathcal{Y}} = \operatorname*{argmin}_{y \in \mathcal{Y}} \; h(y, y) - 2 \sum_{i=1}^n \alpha_i(x) h(y, y_i), \qquad (6)$$
which corresponds to Alg. 1 when $h$ is a normalized kernel.

(1) $\mathcal{G}$ is the reproducing kernel Hilbert space for vector-valued functions [9] with inner product $\langle k(x_i, \cdot) c_i, k(x_j, \cdot) c_j \rangle_{\mathcal{G}} = k(x_i, x_j) \langle c_i, c_j \rangle_{\mathcal{H}_\mathcal{Y}}$.

The discussion above provides an intuition on how Alg. 1 is derived but raises also a few questions. First, it is not clear if and how the same strategy could be generalized to loss functions that do not satisfy Eq. (2). Second, the above reasoning hinges on the idea of replacing $\hat{f}$ with $\hat{g}$ (and then recovering $\hat{f}$ by Eq. (6)); however, it is not clear whether this approach can be justified theoretically. Finally, we can ask what are the statistical properties of the resulting algorithm. We address the first two questions in the next section, while the rest of the paper is devoted to establishing universal consistency and generalization bounds for algorithm Alg. 1.

3 Surrogate Framework and Derivation

To derive Alg. 1 we consider ideas from surrogate approaches [20, 21, 7] and in particular [5]. The idea is to tackle Eq. (1) by substituting $\triangle(f(x), y)$ with a "relaxation" $L(g(x), y)$ on a space $\mathcal{H}_\mathcal{Y}$, that is easy to optimize. The corresponding surrogate problem is
$$\operatorname*{minimize}_{g : \mathcal{X} \to \mathcal{H}_\mathcal{Y}} \; \mathcal{R}(g), \quad \text{with} \quad \mathcal{R}(g) = \int_{\mathcal{X} \times \mathcal{Y}} L(g(x), y) \, d\rho(x, y), \qquad (7)$$
and the question is how a solution $g^*$ for the above problem can be related to a minimizer $f^*$ of Eq. (1). This is made possible by the requirement that there exists a decoding $d : \mathcal{H}_\mathcal{Y} \to \mathcal{Y}$, such that
$$\text{Fisher Consistency:} \quad \mathcal{E}(d \circ g^*) = \mathcal{E}(f^*), \qquad (8)$$
$$\text{Comparison Inequality:} \quad \mathcal{E}(d \circ g) - \mathcal{E}(f^*) \leq \varphi(\mathcal{R}(g) - \mathcal{R}(g^*)), \qquad (9)$$
hold for all $g : \mathcal{X} \to \mathcal{H}_\mathcal{Y}$, where $\varphi : \mathbb{R} \to \mathbb{R}$ is such that $\varphi(s) \to 0$ for $s \to 0$. Indeed, given an estimator $\hat{g}$ for $g^*$, we can "decode" it considering $\hat{f} = d \circ \hat{g}$ and use the excess risk $\mathcal{R}(\hat{g}) - \mathcal{R}(g^*)$ to control $\mathcal{E}(\hat{f}) - \mathcal{E}(f^*)$ via the comparison inequality in Eq. (9). In particular, if $\hat{g}$ is a data-dependent predictor trained on $n$ points and $\mathcal{R}(\hat{g}) \to \mathcal{R}(g^*)$ when $n \to +\infty$, we automatically have $\mathcal{E}(\hat{f}) \to \mathcal{E}(f^*)$. Moreover, if $\varphi$ in Eq. (9) is known explicitly, generalization bounds for $\hat{g}$ are automatically extended to $\hat{f}$. Provided with this perspective on surrogate approaches, here we revisit the discussion of Sec. 2 for the case of a loss function induced by a kernel $h$. Indeed, by assuming the surrogate $L(g(x), y) = \| g(x) - \psi(y) \|^2_{\mathcal{H}_\mathcal{Y}}$, Eq. (4) becomes the empirical version of the surrogate problem at Eq. (7) and leads to an estimator $\hat{g}$ of $g^*$ as in Eq. (5). Therefore, the approach in [12, 14] to recover $\hat{f}(x) = \operatorname{argmin}_y L(g(x), y)$ can be interpreted as the result $\hat{f}(x) = d \circ \hat{g}(x)$ of a suitable decoding of $\hat{g}(x)$. An immediate question is whether the above framework satisfies Eq. (8) and (9). Moreover, we can ask if the same idea could be applied to more general loss functions. In this work we identify conditions on $\triangle$ that are satisfied by a large family of functions and moreover allow to design a surrogate framework for which we prove Eq. (8) and (9).

The first step in this direction is to introduce the following assumption.

Assumption 1. There exists a separable Hilbert space $\mathcal{H}_\mathcal{Y}$ with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}_\mathcal{Y}}$, a continuous embedding $\psi : \mathcal{Y} \to \mathcal{H}_\mathcal{Y}$ and a bounded linear operator $V : \mathcal{H}_\mathcal{Y} \to \mathcal{H}_\mathcal{Y}$, such that
$$\triangle(y, y') = \langle \psi(y), V \psi(y') \rangle_{\mathcal{H}_\mathcal{Y}} \quad \forall y, y' \in \mathcal{Y}. \qquad (10)$$
Asm. 1 is similar to Eq. (3) and in particular to the definition of a reproducing kernel. Note however that by not requiring $V$ to be positive semidefinite (or even symmetric), we allow for a surprisingly wide range of functions beyond kernel functions. Indeed, below we give some examples of functions that satisfy Asm. 1 (see supplementary material Sec. C for more details):

Example 1. The following functions of the form $\triangle : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ satisfy Asm. 1:
The first step in this direction is to introduce the following assumption.

Assumption 1. There exists a separable Hilbert space $\mathcal{H}_\mathcal{Y}$ with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}_\mathcal{Y}}$, a continuous embedding $\psi : \mathcal{Y} \to \mathcal{H}_\mathcal{Y}$ and a bounded linear operator $V : \mathcal{H}_\mathcal{Y} \to \mathcal{H}_\mathcal{Y}$, such that
$$\triangle(y, y') = \langle \psi(y), V \psi(y') \rangle_{\mathcal{H}_\mathcal{Y}} \quad \forall y, y' \in \mathcal{Y}. \qquad (10)$$
Asm. 1 is similar to Eq. (3), and in particular to the definition of a reproducing kernel. Note, however, that by not requiring $V$ to be positive semidefinite (or even symmetric), we allow for a surprisingly wide range of functions beyond kernel functions. Indeed, below we give some examples of functions that satisfy Asm. 1 (see the supplementary material, Sec. C, for more details):

Example 1. The following functions of the form $\triangle : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ satisfy Asm. 1:
1. Any loss on $\mathcal{Y}$ of finite cardinality. Several problems belong to this setting, such as multi-class classification, multi-labeling, ranking, and predicting graphs (e.g. protein foldings).
2. Regression and classification loss functions: least-squares, logistic, hinge, $\epsilon$-insensitive, $\tau$-pinball.
3. Robust loss functions: most loss functions used for robust estimation [22], such as the absolute value, Huber, Cauchy, German-McClure, "Fair" and $L_2$-$L_1$ losses. See [22] or the supplementary material for their explicit formulation.
4. KDE: loss functions $\triangle$ induced by a kernel, as in Eq. (2).
5. Distances on histograms/probabilities: the $\chi^2$ and the Hellinger distances.
6. Diffusion distances on manifolds: the squared diffusion distance induced by the heat kernel (at time $t > 0$) on a compact Riemannian manifold without boundary [23].

The Least Squares Loss Surrogate Framework. Asm. 1 implicitly defines the space $\mathcal{H}_\mathcal{Y}$, similarly to Eq. (3). The following result motivates the choice of the least squares surrogate and moreover suggests a possible choice for the decoding.

Lemma 1. Let $\triangle : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ satisfy Asm. 1 with $\psi : \mathcal{Y} \to \mathcal{H}_\mathcal{Y}$ bounded. Then the expected risk in Eq. (1) can be written as
$$\mathcal{E}(f) = \int_{\mathcal{X}} \langle \psi(f(x)), V g^*(x) \rangle_{\mathcal{H}_\mathcal{Y}} \, d\rho_\mathcal{X}(x) \qquad (11)$$
for all $f : \mathcal{X} \to \mathcal{Y}$, where $g^* : \mathcal{X} \to \mathcal{H}_\mathcal{Y}$ minimizes
$$\mathcal{R}(g) = \int_{\mathcal{X} \times \mathcal{Y}} \| g(x) - \psi(y) \|_{\mathcal{H}_\mathcal{Y}}^2 \, d\rho(x, y). \qquad (12)$$

Lemma 1 shows how Eq. (12) arises naturally as a surrogate problem. In particular, Eq. (11) suggests choosing the decoding
$$d(h) = \operatorname*{argmin}_{y \in \mathcal{Y}} \langle \psi(y), V h \rangle_{\mathcal{H}_\mathcal{Y}} \quad \forall h \in \mathcal{H}_\mathcal{Y}, \qquad (13)$$
since $d \circ g^*(x) = \operatorname{argmin}_{y \in \mathcal{Y}} \langle \psi(y), V g^*(x) \rangle$, and therefore $\mathcal{E}(d \circ g^*) \leq \mathcal{E}(f)$ for any measurable $f : \mathcal{X} \to \mathcal{Y}$, leading to Fisher consistency. We formalize this in the following result.

Theorem 2. Let $\triangle : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ satisfy Asm. 1 with $\mathcal{Y}$ a compact set. Then, for every measurable $g : \mathcal{X} \to \mathcal{H}_\mathcal{Y}$ and $d : \mathcal{H}_\mathcal{Y} \to \mathcal{Y}$ satisfying Eq. (13), the following hold:
$$\mathcal{E}(d \circ g^*) = \mathcal{E}(f^*), \qquad (14)$$
$$\mathcal{E}(d \circ g) - \mathcal{E}(f^*) \leq c_\triangle \sqrt{ \mathcal{R}(g) - \mathcal{R}(g^*) }, \qquad (15)$$
with $c_\triangle = \|V\| \max_{y \in \mathcal{Y}} \|\psi(y)\|_{\mathcal{H}_\mathcal{Y}}$.

Thm. 2 shows that for all $\triangle$ satisfying Asm. 1, the surrogate framework identified by the surrogate in Eq. (12) and the decoding in Eq. (13) satisfies Fisher consistency, Eq. (14), and the comparison inequality, Eq. (15). We recall that a finite set $\mathcal{Y}$ is always compact and, moreover, assuming the discrete topology on $\mathcal{Y}$, any $\psi : \mathcal{Y} \to \mathcal{H}_\mathcal{Y}$ is continuous. Therefore, Thm. 2 applies in particular to any structured prediction problem on $\mathcal{Y}$ with finite cardinality. Thm. 2 suggests approaching structured prediction by first learning $\hat{g}$ and then decoding it to recover $\hat{f} = d \circ \hat{g}$. A natural question is how to choose $\hat{g}$ in order to compute $\hat{f}$ in practice. In the rest of this section we propose an approach to this problem.

Derivation of Alg. 1. Minimizing $\mathcal{R}$ in Eq. (12) corresponds to a vector-valued regression problem [9, 10, 11].
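When $\mathcal{Y}$ has finite cardinality (Example 1, item 1), Asm. 1 holds trivially with one-hot embeddings and $V$ equal to the loss matrix itself, and the decoding of Eq. (13) becomes a matrix-vector product followed by an argmin. A small illustration with hypothetical values:

```python
import numpy as np

labels = ["a", "b", "c"]
# Arbitrary loss matrix L (neither symmetry nor positive semidefiniteness is
# required of V): with psi(y) = e_y one-hot, triangle(y, y') = e_y^T L e_{y'}.
L = np.array([[0.0, 1.0, 2.0],
              [0.5, 0.0, 1.0],
              [2.0, 1.5, 0.0]])

def decode(h_vec):
    """Decoding of Eq. (13): d(h) = argmin_y <psi(y), V h> = argmin_y (L @ h)_y."""
    return labels[int(np.argmin(L @ h_vec))]

g_hat_x = np.array([0.2, 0.7, 0.1])   # stand-in surrogate prediction g(x) in R^3
print(decode(g_hat_x))                # -> "b" for these values
```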
In this work we adopt an empirical risk minimization approach to learn g? as in Eq. (4). The following result shows that combining g? with the decoding in Eq. (13) leads to the f? in Alg. 1. Lemma 3. Let 4 : Y ? Y ? R satisfy Asm. 1 with Y a compact set. Let g? : X ? HY be the minimizer of Eq. (4). Then, for all x ? X n X d ? g?(x) = argmin ?i (x) 4 (y, yi ) ?(x) = (K + n?I)?1 Kx ? Rn (16) y?Y i=1 4 Lemma 3 concludes the derivation of Alg. 1. An interesting observation is that computing f? does not require explicit knowledge of the embedding ? and the operator V , which are implicitly encoded within the loss 4 by Asm. 1. In analogy to the kernel trick [24] we informally refer to such assumption as the ?loss trick?. We illustrate this effect with an example. Example 2 (Ranking). In ranking problems the goal is to predict ordered sequences of a fixed number ` of labels. For these problems, Y corresponds to the set of all ordered sequences of ` labels and has cardinality |Y| = `!, which is typically dramatically larger than the number n of training examples (e.g. for ` = 15, `! ' 1012 ). Therefore, given an input x ? X , directly computing g?(x) ? R|Y| is impractical. On the opposite, the loss trick allows to express d ? g?(x) only in terms of the n weights ?i (x) in Alg. 1, making the computation of the argmin easier to approach in general. For details on the rank loss 4rank and the corresponding optimization over Y, we refer to the empirical analysis of Sec. 6. In this section we have shown a derivation for the structured prediction algorithm proposed in this work. In Thm. 2 we have shown how the expected risk of the proposed estimator f? is related to an estimator g? via a comparison inequality. In the following we will make use of these results to prove consistency and generalization bounds for Alg. 1. 4 Statistical Analysis In this section we study the statistical properties of Alg. 1 exploiting of the relation between the structured and surrogate problems characterized be the comparison inequality in Thm. 2. We begin our analysis by proving that Alg. 1 is universally consistent. Theorem 4 (Universal Consistency). Let 4 : Y ? Y ? R satisfy Asm. 1, X and Y be compact sets and k : X ? X ? R a continuous universal reproducing kernel2 . For any n ? N and any distribution ? on X ? Y let f?n : X ? Y be obtained by Alg. 1 with {(xi , yi )}ni=1 training points independently sampled from ? and ?n = n?1/4 . Then, lim E(f?n ) = E(f ? ) n?+? with probability 1 (17) Thm. 4 shows that, when the 4 satisfies Asm. 1, Alg. 1 approximates a solution f ? to Eq. (1) arbitrarily well, given a sufficient number of training examples. To the best of our knowledge this is the first consistency result for structured prediction in the general setting considered in this work and characterized by Asm. 1, in particular for the case of Y with infinite cardinality (dense or discrete). The No Free Lunch Theorem [25] states that it is not possible to prove uniform convergence rates for Eq. (17). However, by imposing suitable assumptions on the regularity of g ? it is possible to prove generalization bounds for g? and then, using Thm. 2, extend them to f?. To show this, it is sufficient to require that g ? belongs to G the reproducing kernel Hilbert space used in the ridge regression of Eq. (4). Note that in the proofs of Thm. 4 and Thm. 5, our analysis on g? borrows ideas from [10] and extends their result to our setting for the case of HY infinite dimensional (i.e. when Y has infinite cardinality). 
Indeed, note that in this case [10] cannot be applied to the estimator g? considered in this work (see supplementary material Sec. B.3, Lemma 18 for details). Theorem 5 (Generalization Bound). Let 4 : Y ? Y ? R satisfy Asm. 1, Y be a compact set and k : X ? X ? R a bounded continuous reproducing kernel. Let f?n denote the solution of Alg. 1 with n training points and ? = n?1/2 . If the surrogate risk R defined in Eq. (12) admits a minimizer g ? ? G, then 1 E(f?n ) ? E(f ? ) ? c? 2 n? 4 (18) holds with probability 1 ? 8e?? for any ? > 0, with c a constant not depending on n and ? . The bound in Thm. 5 is of the same order of the generalization bounds available for the least squares binary classifier [26]. Indeed, in Sec. 5 we show that in classification settings Alg. 1 reduces to least squares classification. This opens the way to possible improvements, as we discuss in the following. 2 This is a standard assumption for universal consistency (see [21]). An example of continuous universal kernel is the Gaussian k(x, x0 ) = exp(??kx ? x0 k2 ), with ? > 0. 5 Remark 1 (Better Comparison Inequality). The generalization bounds for the least squares classifier can be improved by imposing regularity conditions on ? via the Tsybakov condition [26]. This was observed in [26] for binary classification with the least squares surrogate, where a tighter comparison inequality than the one in Thm. 2 was proved. Therefore, a natural question is whether the inequality of Thm. 2 could be similarly improved, consequently leading to better rates for Thm. 5. Promising results in this direction can be found in [5], where the Tsybakov condition was generalized to the multi-class setting and led to a tight comparison inequality analogous to the one for the binary setting. However, this question deserves further investigation. Indeed, it is not clear how the approach in [5] could be further generalized to the case where Y has infinite cardinality. Remark 2 (Other Surrogate Frameworks). In this paper we focused on a least squares surrogate loss function and corresponding framework. A natural question is to ask whether other loss functions could be considered to approach the structured prediction problem, sharing the same or possibly even better properties. This question is related also to Remark 1, since different surrogate frameworks could lead to sharper comparison inequalities. This seems an interesting direction for future work. 5 Connection with Previous Work Binary and Multi-class Classification. It is interesting to note that in classification settings, Alg. 1 corresponds to the least squares classifier [26]. Indeed, let Y = {1, . . . , `} be a set of labels and consider the misclassification loss 4(y, y 0 ) = 1 for y 6= y 0 and 0 otherwise. Then 4(y, y 0 ) = ` ` e> y V ey 0 with ei ? R the i-the element of the canonical basis of R and V = 1 ? I, where I is the ` ? ` identity matrix and 1 the matrix with all entries equal to 1. In the notation of surrogate methods adopted in this work, HY = R` and ?(y) = ey . Note that both Least squares classification and our approach solve the surrogate problem at Eq. (4) n 1X kg(xi ) ? eyi k2R` + ? kgk2G n i=1 (19) to obtain a vector-valued predictor g? : X ? R` as in Eq. (5). Then, the least squares classifier c? and the decoding f? = d ? g? are respectively obtained by f?(x) = argmin V g?(x). c?(x) = argmax g?(x) i=1,...,` (20) i=1,...,` However, since V = 1 ? I, it is easy to see that c?(x) = f?(x) for all x ? X . Kernel Dependency Estimation. In Sec. 
2 we discussed the relation between KDE [18, 12] and Alg. 1. In particular, we observed that if $\triangle$ is induced by a kernel $h : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ as in Eq. (2) and $h$ is normalized, i.e. $h(y, y) = \kappa$ for all $y \in \mathcal{Y}$ with $\kappa > 0$, then the algorithm of Eq. (6) proposed in [12] leads to the same predictor as Alg. 1. Therefore, we can apply Thm. 4 and 5 to prove universal consistency and generalization bounds for methods such as [12, 14]. Some theoretical properties of KDE have been previously studied in [15] from a PAC-Bayesian perspective; however, the obtained bounds do not allow one to control the excess risk or establish consistency of the method. Moreover, note that when the kernel $h$ is not normalized, the "decoding" in Eq. (6) is not equivalent to Alg. 1. In particular, given the surrogate solution $g^*$, applying Eq. (6) leads to predictors that do not minimize Eq. (1). As a consequence, the approaches in [12, 13, 14] are not consistent in the general case.

Support Vector Machines for Structured Output. A popular approach to structured prediction is the Support Vector Machine for Structured Outputs (SVMstruct) [4], which extends ideas from the well-known SVM algorithm to the structured setting. One of the main advantages of SVMstruct is that it can be applied to a variety of problems, since it does not impose strong assumptions on the loss. In this view, our approach shares similar properties, and in particular allows $\mathcal{Y}$ of infinite cardinality. Moreover, we note that generalization studies for SVMstruct are available [3] (Ch. 11); however, it seems that these latter results do not allow one to derive universal consistency of the method.

6 Experiments

In this section we report on preliminary experiments showing the performance of the proposed approach on simulated as well as real structured prediction problems.

Table 1: Normalized $\triangle_{\mathrm{rank}}$ for ranking methods on the MovieLens dataset [29].

    Linear [7]        0.430 ± 0.004
    Hinge [27]        0.432 ± 0.008
    Logistic [28]     0.432 ± 0.012
    SVM Struct [4]    0.451 ± 0.008
    Alg. 1            0.396 ± 0.003

Table 2: Digit reconstruction using the Gaussian (KDE [18]) and Hellinger losses.

    Loss                       $\triangle_G$      $\triangle_H$      $\triangle_R$
    KDE [18] (Gaussian)        0.149 ± 0.013      0.736 ± 0.032      0.294 ± 0.012
    Alg. 1 (Hellinger)         0.172 ± 0.011      0.647 ± 0.017      0.193 ± 0.015

Ranking Movies. We considered the problem of ranking movies in the MovieLens dataset [29] (ratings from 1 to 5 of 1682 movies by 943 users). The goal was to predict the preferences of a given user, i.e. an ordering of the 1682 movies, according to the user's partial ratings. We applied Alg. 1 to the ranking problem using the rank loss [7]
$$\triangle_{\mathrm{rank}}(y, y') = \frac{1}{2} \sum_{i,j=1}^M \gamma(y')_{ij} \big( 1 - \operatorname{sign}(y_i - y_j) \big), \qquad (21)$$
where $M$ is the number of movies and $y$ is a re-ordering of the sequence $1, \ldots, M$. The scalar $\gamma(y)_{ij}$ denotes the cost (or reward) of having movie $j$ ranked higher than movie $i$. Similarly to [7], we set $\gamma(y)_{ij}$ equal to the difference of ratings (from 1 to 5) provided by the user associated to $y$. As $k$ in Alg. 1 we chose a linear kernel on features similar to those proposed in [7], computed from users' profession, age, similarity of previous ratings, etc. Since solving Alg. 1 for $\triangle_{\mathrm{rank}}$ is NP-hard (see [7]), we adopted the Feedback Arc Set approximation (FAS) proposed in [30] to approximate the $\hat{f}(x)$ of Alg. 1. Results are reported in Tab. 1, comparing Alg. 1 (Ours) with surrogate ranking methods using a Linear [7], Hinge [27] or Logistic [28] loss, and with SVMstruct [4]. We randomly sampled $n = 643$ users for training and tested on the remaining 300.
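A minimal sketch of the rank loss in Eq. (21), with $y$ represented as a vector of rank scores; the particular construction of $\gamma$ from ratings is one plausible reading of the description above and should be treated as our assumption:

```python
import numpy as np

def rank_loss(y, gamma):
    """Delta_rank of Eq. (21): 0.5 * sum_{i,j} gamma[i, j] * (1 - sign(y_i - y_j)).
    y: length-M vector of scores (higher score = ranked higher);
    gamma: M x M cost matrix, gamma[i, j] = cost of ranking j above i."""
    sign_diff = np.sign(y[:, None] - y[None, :])
    return 0.5 * np.sum(gamma * (1.0 - sign_diff))

def gamma_from_ratings(r):
    """Hypothetical choice following the text: pairwise rating differences,
    clipped at zero so that only genuine mis-orderings are penalized."""
    return np.maximum(r[None, :] - r[:, None], 0.0)
```

Note that, exactly as exploited by the "loss trick", evaluating the decoding objective of Alg. 1 only requires $\triangle_{\mathrm{rank}}(y, y_i)$ against the $n$ training rankings; the minimization over permutations is then approximated (here, via FAS [30]).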
We performed 5-fold cross-validation for model selection. We report the normalized $\triangle_{\mathrm{rank}}$, averaged over 10 trials to account for statistical variability. Interestingly, our approach appears to outperform all competitors, suggesting that Alg. 1 is a viable approach to ranking.

Image Reconstruction with Hellinger Distance. We considered the USPS digit reconstruction experiment originally proposed in [18]. The goal is to predict the lower half of an image depicting a digit, given the upper half of the same image as input. The standard approach is to use a Gaussian kernel $k_G$ on the input images and adopt KDE methods such as [18, 12, 14] with loss $\triangle_G(y, y') = 1 - k_G(y, y')$. Here we take a different approach and, following [31], interpret an image depicting a digit as a histogram and normalize it to sum to 1. Therefore $\mathcal{Y}$ is the unit simplex in $\mathbb{R}^{128}$ ($16 \times 16$ images), and we adopt the Hellinger distance
$$\triangle_H(y, y') = \sum_{i=1}^M \big| (y_i)^{1/2} - (y_i')^{1/2} \big| \quad \text{for } y = (y_i)_{i=1}^M \qquad (22)$$
to measure distances on $\mathcal{Y}$. We used the kernel $k_G$ on the input space and compared Alg. 1 using respectively $\triangle_H$ and $\triangle_G$; for $\triangle_G$, Alg. 1 corresponds to [12]. We performed digit reconstruction experiments by training on 1000 examples evenly distributed among the 10 digits of USPS and testing on 5000 images, with 5-fold cross-validation for model selection. Tab. 2 reports the performance of Alg. 1 and the KDE methods averaged over 10 runs. Performance is reported according to the Gaussian loss $\triangle_G$ and the Hellinger loss $\triangle_H$. Unsurprisingly, each method performs better than its competitor with respect to the loss it was trained for. Therefore, as a further measure of performance, we also introduced the "recognition" loss $\triangle_R$, intended as a measure of how well a predictor reconstructs an image for digit recognition purposes. To this end, we trained an automatic digit classifier and defined $\triangle_R$ to be the misclassification error of this classifier when tested on images reconstructed by the two prediction algorithms. The classifier was trained using a standard SVM [24] on a separate subset of USPS images and achieved an average 0.04% error rate on the true 5000 test images. In this case a clear difference in performance can be observed between the two loss functions, suggesting that $\triangle_H$ is more suited for the reconstruction problem.

[Figure 1: Robust estimation on the regression problem in Sec. 6 by minimizing the Cauchy loss with Alg. 1 (Ours) or the robust Nadaraya-Watson estimator (RNW), with KRLS as a baseline predictor. Left: example of one run of the algorithms. Right: average distance of the predictors to the actual function (without noise and outliers) over 100 runs, for training sets of increasing size; the right panel is reproduced as the table below.]

    n        Alg. 1           RNW              KRR
    50       0.39 ± 0.17      0.45 ± 0.18      0.62 ± 0.13
    100      0.21 ± 0.04      0.29 ± 0.04      0.47 ± 0.09
    200      0.12 ± 0.02      0.24 ± 0.03      0.33 ± 0.04
    500      0.08 ± 0.01      0.22 ± 0.02      0.31 ± 0.03
    1000     0.07 ± 0.01      0.21 ± 0.02      0.19 ± 0.02

Robust Estimation. We considered a regression problem with many outliers and evaluated Alg. 1 using the Cauchy loss (see Example 1, item 3) for robust estimation. Indeed, in this setting $\mathcal{Y} = [-M, M] \subset \mathbb{R}$ is not structured, but the non-convexity of $\triangle$ can be an obstacle to the learning process. We generated a dataset according to the model $y = \sin(6\pi x) + \epsilon + \zeta$, where $x$ was sampled uniformly on $[-1, 1]$ and $\epsilon$ according to a zero-mean Gaussian with variance 0.1. The term $\zeta$
modeled the outliers and was sampled according to a zero-mean random variable equal to 0 with probability 0.9 and to a value uniformly at random in $[-3, 3]$ with probability 0.1. We compared Alg. 1 with the robust Nadaraya-Watson estimator (RNW) [32] and with kernel ridge regression (KRR) with a Gaussian kernel as a baseline. To train Alg. 1 we used a Gaussian kernel on the input and performed predictions (i.e. solved Eq. (16)) using Matlab's FMINUNC function for unconstrained minimization. Experiments were performed with training sets of increasing size (100 repetitions each) and a test set of 1000 examples, with 5-fold cross-validation for model selection. Results are reported in Fig. 1, showing that our estimator significantly outperforms the others. Moreover, our method appears to benefit greatly from training sets of increasing size.

7 Conclusions and Future Work

In this work we considered the problem of structured prediction from a Statistical Learning Theory perspective. We proposed a learning algorithm for structured prediction that is split into a learning and a prediction step, similarly to previous methods in the literature. We studied the statistical properties of the proposed algorithm by adopting a strategy inspired by surrogate methods. In particular, we identified a large family of loss functions for which it is natural to identify a corresponding surrogate problem. This perspective allows us to provide a principled derivation of the algorithm proposed in this work. Moreover, by exploiting a comparison inequality relating the original and surrogate problems, we were able to prove universal consistency and generalization bounds under mild assumptions. In particular, the bounds proved in this work recover those already known for least squares classification, of which our approach can be seen as a generalization. We supported our theoretical analysis with experiments showing promising results on a variety of structured prediction problems. A few questions were left open. First, we ask whether the comparison inequality can be improved (under suitable hypotheses) to obtain faster generalization bounds for our algorithm. Second, the surrogate problem in our work consists of a vector-valued regression (in a possibly infinite-dimensional Hilbert space); we solved this problem by plain kernel ridge regression, but it is natural to ask whether approaches from the multi-task learning literature could lead to substantial improvements in this setting. Finally, an interesting question is whether alternative surrogate frameworks could be derived for the setting considered in this work, possibly leading to tighter comparison inequalities. We will investigate these questions in the future.

References
[1] Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on PAMI, 32(9):1627–1645, 2010.
[2] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on CVPR, pages 3128–3137, 2015.
[3] Gökhan Bakir, Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, Ben Taskar, and S.V.N. Vishwanathan. Predicting Structured Data. MIT Press, 2007.
[4] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[5] Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, and Jean-Jacques Slotine. Multiclass learning with simplex coding.
In Advances in Neural Information Processing Systems, pages 2798–2806, 2012.
[6] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. Artificial Intelligence, 2013.
[7] John C. Duchi, Lester W. Mackey, and Michael I. Jordan. On the consistency of ranking algorithms. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 327–334, 2010.
[8] Ingo Steinwart, Andreas Christmann, et al. Estimating conditional quantiles with the help of the pinball loss. Bernoulli, 17(1):211–225, 2011.
[9] Charles A. Micchelli and Massimiliano Pontil. Kernels for multi-task learning. In Advances in Neural Information Processing Systems, pages 921–928, 2004.
[10] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[11] M. Álvarez, N. Lawrence, and L. Rosasco. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. See also http://arxiv.org/abs/1106.6251.
[12] Corinna Cortes, Mehryar Mohri, and Jason Weston. A general regression technique for learning transductions. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[13] P. Geurts, L. Wehenkel, and F. d'Alché-Buc. Kernelizing the output of tree-based methods. In ICML, 2006.
[14] H. Kadri, M. Ghavamzadeh, and P. Preux. A generalized kernel approach to structured output learning. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
[15] S. Giguère, M. Marchand, K. Sylla, and F. Laviolette. Risk bounds and learning algorithms for the regression approach to structured output prediction. In ICML, JMLR Workshop and Conference Proceedings, 2013.
[16] C. Brouard, M. Szafranski, and F. d'Alché-Buc. Input output kernel regression: Supervised and semi-supervised structured output prediction with operator-valued kernels. Journal of Machine Learning Research, 17(176):1–48, 2016.
[17] Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, pages 1–8. Association for Computational Linguistics, 2002.
[18] Jason Weston, Olivier Chapelle, Vladimir Vapnik, André Elisseeff, and Bernhard Schölkopf. Kernel dependency estimation. In Advances in Neural Information Processing Systems, pages 873–880, 2002.
[19] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media, 2011.
[20] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[21] Ingo Steinwart and Andreas Christmann. Support Vector Machines. Information Science and Statistics. Springer, New York, 2008.
[22] Peter J. Huber and Elvezio M. Ronchetti. Robust Statistics. Springer, 2011.
[23] Richard Schoen and Shing-Tung Yau. Lectures on Differential Geometry, volume 2. International Press, Boston, 1994.
[24] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[25] David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341–1390, 1996.
[26] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
[27] Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Neural Information Processing Systems, pages 115–132, 1999.
[28] Ofer Dekel, Yoram Singer, and Christopher D. Manning. Log-linear models for label ranking. In Advances in Neural Information Processing Systems, 2004.
[29] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2015.
[30] Peter Eades, Xuemin Lin, and William F. Smyth. A fast and effective heuristic for the feedback arc set problem. Information Processing Letters, 47(6):319–323, 1993.
[31] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013.
[32] Wolfgang Härdle. Robust regression function estimation. Journal of Multivariate Analysis, 14(2):169–180, 1984.
Agnostic Estimation for Misspecified Phase Retrieval Models

Matey Neykov, Zhaoran Wang, Han Liu
Department of Operations Research and Financial Engineering
Princeton University, Princeton, NJ 08544
{mneykov, zhaoran, hanliu}@princeton.edu

Abstract

The goal of noisy high-dimensional phase retrieval is to estimate an $s$-sparse parameter $\beta^* \in \mathbb{R}^d$ from $n$ realizations of the model $Y = (X^\top \beta^*)^2 + \epsilon$. Based on this model, we propose a significant semi-parametric generalization called misspecified phase retrieval (MPR), in which $Y = f(X^\top \beta^*, \epsilon)$ with unknown $f$ and $\operatorname{Cov}(Y, (X^\top \beta^*)^2) > 0$. For example, MPR encompasses $Y = h(|X^\top \beta^*|) + \epsilon$ with increasing $h$ as a special case. Despite the generality of the MPR model, it eludes the reach of most existing semi-parametric estimators. In this paper, we propose an estimation procedure which consists of solving a cascade of two convex programs, and provably recovers the direction of $\beta^*$. Our theory is backed up by thorough numerical results.

1 Introduction

In scientific and engineering fields, researchers often face the problem of quantifying the relationship between a given outcome $Y$ and a corresponding predictor vector $X$, based on a sample $\{(Y_i, X_i^\top)^\top\}_{i=1}^n$ of $n$ observations. In such situations it is common to postulate a linear "working" model and search for a $d$-dimensional signal vector $\beta^*$ satisfying the following familiar relationship:
$$Y = X^\top \beta^* + \epsilon. \qquad (1.1)$$
When the predictor $X$ is high-dimensional in the sense that $d \gg n$, it is commonly assumed that the underlying signal $\beta^*$ is $s$-sparse. In a certain line of applications, such as X-ray crystallography, microscopy, diffraction and array imaging¹, one can only measure the magnitude of $X^\top \beta^*$ but not its phase (i.e., its sign in the real domain). In this case, assuming model (1.1) may not be appropriate. To cope with such applications in the high-dimensional setting, [7] proposed the thresholded Wirtinger flow (TWF), a procedure which consistently estimates the signal $\beta^*$ in the following real sparse noisy phase retrieval model:
$$Y = (X^\top \beta^*)^2 + \epsilon, \qquad (1.2)$$
where one additionally knows that the predictors have a Gaussian random design $X \sim N(0, I_d)$. In the present paper, taking an agnostic point of view, we recognize that both models (1.1) and (1.2) represent an idealized view of the data generating mechanism. In reality, the nature of the data could be better reflected through the more flexible viewpoint of a single index model (SIM):
$$Y = f(X^\top \beta^*, \epsilon), \qquad (1.3)$$
where $f$ is an unknown link function, and it is assumed that $\|\beta^*\|_2 = 1$ for identifiability. A recent line of work on high-dimensional SIMs [25, 27] showed that, under Gaussian designs, one can apply $\ell_1$ regularized least squares to successfully estimate the direction of $\beta^*$ and its support. The crucial condition allowing for this somewhat surprising application turns out to be:
$$\operatorname{Cov}(Y, X^\top \beta^*) \neq 0. \qquad (1.4)$$
While condition (1.4) is fairly generic, encompassing cases with a binary outcome, such as logistic regression and one-bit compressive sensing [5], it fails to capture the phase retrieval model (1.2).

¹ In such applications it is typically assumed that $X \in \mathbb{C}^d$ is a complex normal random vector. In this paper, for simplicity, we only consider the real case $X \in \mathbb{R}^d$.

More generally, it is easy to see that when the link function $f$ is even in its first coordinate, condition (1.4) fails to hold.
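A quick Monte Carlo check of the two covariance conditions is easy to run; the sketch below uses the noisy phase retrieval model (1.2), and all names are ours. For this model $\operatorname{Cov}(Y, X^\top \beta^*) \approx 0$ while $\operatorname{Cov}(Y, (X^\top \beta^*)^2) \approx \operatorname{Var}(Z^2) = 2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 50, 200_000, 5
beta = np.zeros(d); beta[:s] = 1 / np.sqrt(s)   # unit-norm s-sparse signal

X = rng.standard_normal((n, d))
z = X @ beta
Y = z ** 2 + 0.1 * rng.standard_normal(n)       # phase retrieval, model (1.2)

cov1 = np.cov(Y, z)[0, 1]        # condition (1.4): close to 0 for even links
cov2 = np.cov(Y, z ** 2)[0, 1]   # condition (1.5): close to Var(Z^2) = 2 > 0
print(f"Cov(Y, X'b) = {cov1:.3f},  Cov(Y, (X'b)^2) = {cov2:.3f}")
```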
The goal of the present manuscript is to formalize a class of SIMs, which includes the noisy phase retrieval model as a special case in addition to various other additive and non-additive models with even link functions, and develop a procedure that can successfully estimate the direction of ? ? up to a global sign. Formally, we consider models (1.3) with Gaussian design that satisfy the following moment assumption: Cov(Y, (X > ? ? )2 ) > 0. (1.5) Unlike (1.4), one can immediately check that condition (1.5) is satisfied by model (1.2). In ?2 we give multiple examples, both abstract and concrete, of SIMs obeying this constraint. Our second moment constraint (1.5) can be interpreted as a semi-parametric robust version of phase-retrieval. Hence, we will refer to the class of models satisfying condition (1.5) as misspecified phase retrieval (MPR) models. In this point of view it is worth noting that condition (1.4) relates to linear regression in a way similar to how condition (1.5) relates to the phase retrieval model. Our motivation for studying SIMs under such a constraint can ultimately be traced to the vast sufficient dimension reduction (SDR) literature. In particular, we would like to point out [22] as a source of inspiration. Contributions. Our first contribution is to formulate a novel and easily implementable two-step procedure, which consistently estimates the direction of ? ? in an MPR model. In the first step b , such that |b we solve a semidefinite program producing a unit vector v v> ? ? | is sufficiently large. Once such a pilot estimate is available, we consider solving an `1 regularized least squares on the b , where Y is the average of Yi ?s, to produce a second augmented outcome Yei = (Yi ? Y )Xi> v b which is then normalized to obtain the final refined estimator ?b = b/k b bk b 2 . In addition estimate b, to being universally applicable to MPR models, our procedure has an algorithmic advantage in that it relies solely on convex optimization, and as a consequence we can obtain the corresponding global minima of the two convex programs in polynomial time. Our second contribution is to rigorously demonstrate that the above procedure consistently estimates the direction of ? ? . We prove that for a given MPR model, with high probability, one has: p min??{?1,1} k?b ? ?? ? k2 . s log d/n, provided that the sample size n satisfies n & s2 log d. While the same rates (with different constants) hold for the TWF algorithm of [7] in the special case of noisy phase retrieval model, our procedure provably achieves these rates over the broader class of MPR models. Lastly, we propose an optional refinement of our algorithm, which shows improved performance in the numerical studies. Related Work. The phase retrieval model has received considerable attention in the recent years by statistics, applied mathematics as well as signal processing communities. For the non-sparse version of (1.2), efficient algorithms have been suggested based on both semidefinite programs [8, 10] and non-convex optimization methods that extend gradient descent [9]. Additionally, a non-traditional instance of phase retrieval model (which also happens to be a special case of the MPR model) was considered by [11], where the authors suggested an estimation procedure originally proposed for the problem of mixed regression. For the noisy sparse version of model (1.2), near optimal solutions were achieved with a computationally infeasible program by [20]. 
Subsequently, a tractable gradient descent approach achieving minimax optimal rates was developed by [7]. Abstracting away from the phase retrieval or linear model settings, we note that inference for SIMs in the case when d is small or fixed, has been studied extensively in the literature [e.g., 18, 24, 26, 34, among many others]. In another line of research on SDR, seminal insights shedding light on condition (1.4) can be found in, e.g., [12, 21, 23]. The modified condition (1.5) traces roots to [22], where the authors designed a procedure to handle precisely situations where (1.4) fails to hold. More recently, there have been active developments for high-dimensional SIMs. [27] and later [31] demonstrated that under condition (1.4), running the least squares with `1 regularization can obtain a consistent estimate of the direction of ? ? , while [25] showed that this procedure also recovers the signed support of the direction. Excess risk bounds were derived in [14]. Very recently, [16] extended this observation to other convex loss functions under a condition corresponding to (1.4) depending implicitly on the loss function of interest. [28] proposed a non-parametric least squares with an equality `1 constraint to handle simultaneous estimation of ? ? as well as f . [17] considered a smoothed-out U -process type of loss function with `1 regularization, and proved their approach works for a sub-class of functions satisfying condition (1.4). None of the aforementioned works on SIMs can be directly applied to tackle the MPR class (1.5). A generic procedure for estimating sparse principal eigenvectors was 2 developed in [37]. While in principle this procedure can be applied to estimate the direction in MPR models, it requires proper initialization, and in addition, it requires knowledge of the sparsity of the vector ? ? . We discuss this approach in more detail in ?4. Regularized procedures have also been proposed for specific choices of f and Y . For example, [36] studied consistent estimation under the model P(Y = 1|X) = (h(X > ? ? ) + 1)/2 with binary Y , where h : R 7? [?1, 1] is possibly unknown. Their procedure is based on taking pairs of differences in the outcome, and therefore replaces condition (1.4) with a different type of moment conditon. [35] considered the model Y = h(X > ? ? ) + ? with a known continuously differentiable and monotonic h, and developed estimation and inferential procedures based on the `1 regularized quadratic loss, in a similar spirit to the TWF algorithm suggested by [7]. In conclusion, although there exists much prior related work, to the best of our knowledge, none of the available literature discusses the MPR models in the generality we attempt in the present manuscript. Notation. We now briefly outline some commonly used notations. Other notations will be defined as needed throughout the paper. For a (sparse) vector v = (v1 , . . . , vp )> , we let Sv := supp(v) = {j : vj 6= 0} denote its support, kvkp denote the `p norm (with the usual extension when p = ?) and v?2 := vv> is a shorthand for the outer product. With a standard abuse of notation we will denote by kvk0 = |supp(v)| the cardinality of the support of v. We often use Id to denote a d ? d identity matrix. For a real random variable X, define kXk?2 = sup p?1/2 (E|X|p )1/p , kXk?1 = sup p?1 (E|X|p )1/p . p?1 p?1 Recall that a random variable is called sub-Gaussian if kXk?2 < ? and sub-exponential if kXk?1 < ? [e.g., 32]. For any integer k ? N we use the shorthand notation [k] = {1, . . . , k}. 
We also use standard asymptotic notations. Given two sequences {an }, {bn } we write an = O(bn ) if there exists a constant C < ? such that an ? Cbn , and an  bn if there exist positive constants c and C such that c < an /bn < C. Organization. In ?2 and ?3 we introduce the MPR model class and our estimation procedure, and ?3.1 is dedicated to stating the theoretical guarantees of our proposed algorithm. Simulation results are given in ?4. A brief discussion is provided in ?5. We defer the proofs to the appendices due to space limitations. 2 MPR Models In this section we formally introduce MPR models. In detail, we argue that the class of such models is sufficiently rich, including numerous models of interest. Motivated by the setup in the sparse noisy phase retrieval model (1.2), we assume throughout the remainder of the paper that X ? N (0, Id ). We begin our discussion with a formal definition. ?X Definition 2.1 (MPR Models). Assume that we are given model (1.3), where X ? N (0, Id ), ? ? and ? ? ? Rd is an s-sparse unit vector, i.e., k? ? k2 = 1. We call such a model misspecified phase retrieval (MPR) model, if the link function f and noise ? further satisfy, for Z ? N (0, 1) and K > 0, c0 := Cov(f (Z, ?), Z 2 ) > 0, kY k?1 ? K. (2.1) (2.2) Both assumptions (2.1) and (2.2) impose moment restrictions on the random variable Y . Assumption (2.1) states that Y is positively correlated with the random variable (X > ? ? )2 , while assumption (2.2) requires Y to have somewhat light-tails. Also, as mentioned in the introduction, the unit norm constraint on the vector ? ? is required for the identifiability of model (1.3). We remark that the class of MPR models is convex in the sense that if we have two MPR models f1 (X > ? ? , ?) and f2 (X > ? ? , ?), all models generated by their convex combinations ?f1 (X > ? ? , ?)+(1??)f2 (X > ? ? , ?) (? ? [0, 1]) are also MPR models. It is worth noting the > direction in (2.1) is assumed without loss of generality. If Cov(Y, (X > ? ? )2 ) < 0 one can apply the same algorithm to ?Y . However, the knowledge of the direction of the inequality is important. In the following, we restate condition (2.1) in a more convenient way, enabling us to easily calculate the explicit value of the constant c0 in several examples. Proposition 2.2. Assume that there exists a version of the map ?(z) : z 7? E[f (Z, ?)|Z = z] such that ED2 ?(Z) > 0, where D2 is the second distributional derivative of ? and Z ? N (0, 1). Then the SIM (1.3) satisfies assumption (2.1) with c0 = ED2 ?(Z). We now provide three concrete MPR models as warm up examples for the more general examples discussed in Proposition 2.3 and Remark 2.3. Consider the models: 3 Y = (X > ? ? )2 + ?, (2.3) Y = |X > ? ? | + ?, (2.4) Y = |X > ? ? + ?|, (2.5) where ? ? ? X is sub-exponential noise, i.e., k?k?1 ? K? for some K? > 0. Model (2.3) is the noisy phase retrieval model considered by [7], while models (2.4) and (2.5) were both discussed in [11], where the authors proposed a method to solve model (2.5) in the low-dimensional regime. Below we briefly explain why these models satisfy conditions (2.1) and (2.2). First, observe that in all three models we have a sum of two sub-exponential random variables, and hence by the triangle inequality it follows that the random variable Y is also sub-exponential, which implies (2.2). To understand why (2.1)? holds, by applying Proposition 2.2 we have c0 = E2 = 2 > 0 for model (2.3), c0 = E2?0 (Z) = 2/ 2? > 0 for model (2.4), and c0 = E2?0 (Z + ?) = 2E?(?) 
> 0 for model (2.5), where ?0 (?) is the Dirac delta function centered at zero, and ? is the density of the standard normal distribution. Admittedly, calculating the second distributional derivative could be a laborious task in general. In the remainder of this section we set out to find a simple to check generic sufficient condition on the link function f and error term ?, under which both (2.1) and (2.2) hold. Before giving our result note that the condition ED2 ?(Z) > 0 is implied whenever ? is strictly convex and twice differentiable. However, strictly convex functions ? may violate assumption (2.2) as they can inflate the tails of Y arbitrarily (consider, e.g., f (x, ?) = x4 + ?). Moreover, the functions in example (2.4) and (2.5) fail to be twice differentiable. In the following result we handle those two problems, and in addition we provide a more generic condition than convexity, which suffices to ensure the validity of (2.1). Proposition 2.3. The following statements hold. (i) Let the function ? defined in Proposition 2.2 be such that the map z 7? ?(z) + ?(?z) is non-decreasing on R+ 0 and and there exist z1 > z2 > 0 such that ?(z1 ) + ?(?z1 ) > ?(z2 ) + ?(?z2 ). Then (2.1) holds. (ii) A sufficient condition for (i) to hold, is that z 7? ?(z) is convex and sub-differentiable at every point z ? R, and there exists a point z0 ? R+ 0 satisfying ?(z0 ) + ?(?z0 ) > 2?(0). (iii) Assume that there exist functions g1 , g2 such that f (Z, ?) ? g1 (Z) + g2 (?), and g1 is essentially quadratic in the sense that there exists a closed interval I = [a, b] with 0 ? I, such that for all z satisfying g1 (z) ? I c we have |g1 (z)| ? Cz 2 for a sufficiently large constant C > 0, and let g2 (?) be sub-exponential. Then (2.2) holds. Remark 2.4. Proposition 2.3 shows that the class of MPR models is sufficiently broad. By (i) and (ii) it immediately follows that the additive models Y = h(X > ? ? ) + ?, (2.6) R+ 0 where the link function h is even and increasing on or convex, satisfy the covariance condition (2.1) by (i) and (ii) of Proposition 2.3 respectively. If h is also essentially quadratic and ? is subexponentially distributed, using (iii) we can deduce that Y in (2.6) is a sub-exponential random variable, and hence under these restrictions model (2.6) is an MPR model. Both examples (2.3) and (2.4) take this form. Additionally, Proposition 2.3 implies that the model Y = h(X > ? ? + ?) (2.7) satisfies (2.1), whenever the link h is a convex sub-differentiable function, such that h(z0 )+h(?z0 ) > 2h(0) for some z0 > 0, E|h(z + ?)| < ? for all z ? R and E|h(Z + ?)| < ?. This conclusion follows because under the latter conditions the function ?(z) = Eh(z + ?) satisfies (ii), which is proved in Appendix C under Lemma C.1. Moreover, if it turns out that h is essentially quadratic and h(2?) is sub-exponential, then by Jensen?s inequality we have 2h(Z +?) ? h(2Z)+h(2?) and hence (iii) implies that (2.2) is also satisfied. Model (2.5) is of the type (2.7). Unlike the additive noise models (2.6), models (2.7) allow noise corruption even within the argument of the link function. On the negative side, it should be apparent that (2.1) fails to hold in cases where ? is an odd function, i.e., ?(z) = ??(?z). In many such cases (e.g. when ? is monotone or non-constant and non-positive/nonnegative on R+ ), one would have Cov(Y, X > ? ? ) = E[?(Z)Z] 6= 0, and hence direct application of the `1 regularized least squares algorithm is possible as we discussed in the introduction. 
3 Agnostic Estimation for MPR

In this section we describe and motivate our two-step procedure, which consists of a convex relaxation and an $\ell_1$ regularized least squares program, for performing estimation in the MPR class of models described by Definition 2.1. We begin our motivation by noting that any MPR model satisfies the following inequality:
$$\operatorname{Cov}\big( (Y - \mu) X^\top \beta^*, \, X^\top \beta^* \big) = \mathbb{E}\big[ (Y - \mu) (X^\top \beta^*)^2 \big] = \operatorname{Cov}\big( f(Z, \epsilon), Z^2 \big) = c_0 > 0, \qquad (3.1)$$
where we have denoted $\mu := \mathbb{E} Y$. This simple observation plays a major role in the motivation of our procedure. Notice that, in view of condition (1.4), inequality (3.1) implies that $\ell_1$ regularized least squares would be directly applicable if, instead of observing $Y$, we had observed $\bar{Y} = g(X^\top \beta^*, \epsilon) = (Y - \mu) X^\top \beta^*$. However, there is no direct way of generating the random variable $\bar{Y}$, as doing so would require knowledge of $\beta^*$ and of the mean $\mu$. Here, we propose to first roughly estimate $\beta^*$ by a vector $\hat{v}$, use the empirical average $\bar{Y}$ of the $Y_i$'s as an estimate of $\mu$, and then obtain the $\ell_1$ regularized least squares estimate on the augmented variable $\tilde{Y} = (Y - \bar{Y}) X^\top \hat{v}$ to sharpen the convergence rate. At first glance it might appear counter-intuitive that introducing a noisy estimate of $\beta^*$ can lead to consistent estimates, as the so-defined $\tilde{Y}$ variable depends on the projection of $X$ onto $\operatorname{span}\{\beta^*, \hat{v}\}$. Decompose
$$\hat{v} = (\hat{v}^\top \beta^*) \beta^* + \hat{v}_\perp, \qquad (3.2)$$
where $\hat{v}_\perp \perp \beta^*$. To better motivate this proposal, in the following we analyze the population least squares fit based on the augmented variable $\bar{Y} = (Y - \mu) X^\top \hat{v}$, for some fixed unit vector $\hat{v}$ with decomposition (3.2). Writing out the population solution for least squares yields:
$$[\mathbb{E} X^{\otimes 2}]^{-1} \mathbb{E}[X \bar{Y}] = \underbrace{\mathbb{E}\big[ X (Y - \mu) X^\top (\hat{v}^\top \beta^*) \beta^* \big]}_{I_1} + \underbrace{\mathbb{E}\big[ X (Y - \mu) X^\top \hat{v}_\perp \big]}_{I_2}. \qquad (3.3)$$
We will now argue that the left hand side of (3.3) is proportional to $\beta^*$. First, we observe that $I_1 = c_0 (\hat{v}^\top \beta^*) \beta^*$, since multiplying by any vector $b \perp \beta^*$ yields $b^\top I_1 = 0$ by independence. Second, and perhaps more importantly, we have that $I_2 = 0$. To see this, we first take a vector $b \in \operatorname{span}\{\beta^*, \hat{v}_\perp\}^\perp$. Since the three variables $b^\top X$, $Y - \mu$ and $\hat{v}_\perp^\top X$ are independent, we have $b^\top I_2 = 0$. Multiplying by $\beta^*$ we have $\beta^{*\top} I_2 = 0$, since $\beta^{*\top} X (Y - \mu)$ is independent of $X^\top \hat{v}_\perp$. Finally, multiplying by $\hat{v}_\perp$ yields $I_2^\top \hat{v}_\perp = 0$, since $(X^\top \hat{v}_\perp)^2$ is independent of $Y - \mu$.

[Figure 1 here: panels (a) Initialization and (b) Second Step.] Figure 1: An illustration of the estimates $\hat{v}$ and $\hat{\beta}$ produced by the first and second steps of Algorithm 1. After the first step we can guarantee that the vector $\beta^*$ belongs to one of two spherical caps which contain all vectors $w$ such that $|\hat{v}^\top w| \geq \nu$ for some constant $\nu > 0$, provided that the sample size $n \gtrsim s^2 \log d$ is sufficiently large. After the second step we can guarantee that the vector $\beta^*$ belongs to one of the two spherical caps in (b), which shrink with $(n, s, d)$ at a faster rate.

It is noteworthy that the above derivation crucially relies on the fact that the $Y$ variable was centered and the vector $\hat{v}$ was fixed. In what follows we formulate a pilot procedure which produces an estimate $\hat{v}$ such that $|\hat{v}^\top \beta^*| \geq \nu > 0$. A proper initialization can be achieved using a spectral method, such as the Principal Hessian Directions (PHD) proposed by [22]. Cast into the framework of SIMs, the PHD framework implies the following simple observation:

Lemma 3.1. If we have an MPR model, then $\operatorname*{argmax}_{\|v\|_2 = 1} v^\top \mathbb{E}[Y (X^{\otimes 2} - I)] v = \pm \beta^*$.

A proof of this fact can be found in Appendix C.
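In the dense regime (small $d$), Lemma 3.1 immediately suggests a spectral pilot estimator, before one turns to the sparse case. A minimal sketch with our own names, ignoring the sparsity-inducing relaxation introduced next:

```python
import numpy as np

def phd_direction(X, Y):
    """Dense spectral pilot estimate motivated by Lemma 3.1: the leading
    eigenvector of M = n^{-1} sum_i Y_i (x_i x_i^T - I).  In population
    M equals E[Y (X X^T - I)] = c_0 * beta* beta*^T, so its top eigenvector
    is +/- beta* (the top eigenvalue is c_0 > 0)."""
    n, d = X.shape
    M = (X * Y[:, None]).T @ X / n - Y.mean() * np.eye(d)
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    v = eigvecs[:, -1]
    return v / np.linalg.norm(v)
```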
Lemma 3.1 encourages us to consider the sample version of the maximization problem
$$\operatorname*{argmax}_{\|v\|_2 = 1, \, \|v\|_0 = s} \; n^{-1} v^\top \Big[ \sum_{i=1}^n Y_i (X_i^{\otimes 2} - I) \Big] v, \qquad (3.4)$$
which targets a restricted ($s$-sparse) principal eigenvector. Unfortunately, solving such a problem is a computationally intensive task and requires knowledge of $s$. Here we take the standard route of relaxing the above problem to a convex program and solving it efficiently via semidefinite programming (SDP). A similar-in-spirit SDP relaxation for solving sparse PCA problems was originally proposed by [13]. Instead of solving (3.4), we define $\hat{\Sigma} = n^{-1} \sum_{i=1}^n Y_i (X_i^{\otimes 2} - I)$ and solve the following convex
3.1 Theoretical Guarantees

In this section we present our main theoretical results, which consist of theoretical justification of our procedures, as well as lower bounds for certain types of SIM (1.3). To simplify the presentation for this section, we slightly change the notation and assume that the sample size is $2n$, with $S_1 = [n]$ and $S_2 = \{n+1, \ldots, 2n\}$. Of course this abuse of notation does not restrict our analysis to only even sample size cases.

Our first result shows that the optimization program (3.5) succeeds in producing a vector $\hat v$ which is close to the vector $\beta^*$.

Proposition 3.2. Assume that $n$ is large enough so that $\sqrt{s\log d/n} < (1/6 - \gamma/4)\,c_0/(C_1 + C_2)$ for some small but fixed $\gamma > 0$ and constants $C_1, C_2$ (depending on $f$ and $\varepsilon$). Then there exists a value of $\lambda_n \asymp \sqrt{\log d/n}$ such that the principal eigenvector $\hat v$ of $\hat A$, the solution of (3.5), satisfies $|\hat v^\top\beta^*| \geq \nu > 0$, with probability at least $1 - 4d^{-1} - O(n^{-1})$.

Proposition 3.2 shows that the first step of Algorithm 1 narrows down the search for the direction of $\beta^*$ to a union of two spherical caps (i.e., the estimate $\hat v$ satisfies $|\hat v^\top\beta^*| \geq \nu$ for some constant $\nu > 0$; see also Figure 1a). Our main result below demonstrates that, in combination with program (3.6), this suffices to recover the direction of $\beta^*$ at an optimal rate with high probability.

Theorem 3.3. There exist values of $\lambda_n, \rho_n \asymp \sqrt{\log d/n}$ and a constant $R > 0$ depending on $f$ and $\varepsilon$, such that if $\sqrt{s\log d/n} < R$ and $\log(d)\log^2(n)/n = o(1)$, the output of Algorithm 1 satisfies:
\[
\sup_{\|\beta^*\|_2=1,\ \|\beta^*\|_0 \leq s} \mathbb{P}\Big(\min_{\sigma\in\{1,-1\}} \big\|\hat\beta - \sigma\beta^*\big\|_2 > L\sqrt{\frac{s\log d}{n}}\Big) \leq O\big(d^{-1} \vee n^{-1}\big), \tag{3.8}
\]
where $L$ is a constant depending solely on $f$ and $\varepsilon$.

We remark that although the estimation rate is of the order $\sqrt{s\log d/n}$, our procedure still requires that $s\sqrt{\log d/n}$ is sufficiently small. This phenomenon is similar to what has been observed by [7], and it is our belief that this requirement cannot be relaxed for computationally feasible algorithms. We would further like to mention that while in bound (3.8) we control the worst case probability of failure, it is less clear whether the estimate $\hat\beta$ is universally consistent (i.e., whether the sup can be moved inside the probability in (3.8)).

4 Numerical Experiments

In this section we provide numerical experiments based on the three models (2.3), (2.4) and (2.5), where the random variable $\varepsilon \sim N(0, 1)$. All models are compared with the Truncated Power Method (TPM), proposed in [37]. For model (2.3) we also compare the results of our approach to the ones given by the TWF algorithm of [7]. Our setup is as follows. In all scenarios the vector $\beta^*$ was held fixed at $\beta^* = (-s^{-1/2}, s^{-1/2}, \ldots, s^{-1/2}, 0, \ldots, 0)$, with $s$ nonzero coordinates followed by $d - s$ zeros. Since our theory requires that $n \gtrsim s^2\log d$, we set up four different sample sizes $n \approx \alpha s^2 \log d$, where $\alpha \in \{4, 8, 12, 16\}$. We let $s$ depend on the dimension $d$ and we take $s \propto \log d$. In addition to the suggested approach in Algorithm 1, we also provide results using the refinement procedure (Algorithm 2). We also provide the values of two "warm" starts of our algorithm, produced by solving program (3.5) with half and full data correspondingly. It is evident that in all scenarios the second step of Algorithms 1 and 2 outperforms the warm start from the SDP, except in Figure 2 (b), (c), where the sample size is simply too small for the warm start on half of the data to be accurate. All values we report are based on an average over 100 simulations. The SDP parameter was kept at a constant value (0.015) throughout all simulations, and we observed that varying this parameter had little influence on the final SDP solution. To select the $\rho_n$ parameter for (3.6), a pre-specified grid of parameters $\{\rho^1, \ldots, \rho^l\}$ was chosen, and the following heuristic procedure based on $K$-fold cross-validation was used.
We divide $S_2$ into $K = 5$ approximately equally sized non-intersecting sets $S_2 = \cup_{j\in[K]} \widetilde S_2^j$. For each $j \in [K]$ and $k \in [l]$ we run (3.6) on the set $\cup_{r\in[K],\, r\neq j} \widetilde S_2^r$ with a tuning parameter $\rho_n = \rho^k$ to obtain an estimate $\hat\beta_{k, -\widetilde S_2^j}$. Lemma 3.1 then justifies the following criterion to select the optimal index for choosing $\hat\rho_n = \rho^{\hat l}$:
\[
\hat l = \mathrm{argmax}_{k\in[l]} \sum_{j\in[K]} \sum_{i\in\widetilde S_2^j} Y_i\big(X_i^\top \hat\beta_{k, -\widetilde S_2^j}\big)^2
\]
(a code sketch of this selection rule appears at the end of this section). Our experience suggests this approach works well in practice provided that the values $\{\rho^1, \ldots, \rho^l\}$ are selected within an appropriate range and are of the magnitude $\sqrt{\log d/n}$. Since the TPM algorithm requires an estimate of the sparsity $s$, we tuned it as suggested in Section 4.1.2 of [37]. In particular, for each scenario we considered the set of possible sparsities $\mathcal{K} = \{s, 2s, 4s, 8s\}$. For each $k \in \mathcal{K}$ the algorithm is run on the first part of the data $S_1$ to obtain an estimate $\hat\beta_k$, and the final estimate is taken to be $\hat\beta_{\hat k}$, where $\hat k$ is given by
\[
\hat k = \mathrm{argmax}_{k\in\mathcal{K}}\; \hat\beta_k^\top\Big(|S_2|^{-1}\sum_{i\in S_2} Y_i\big(X_i^{\otimes 2} - I_d\big)\Big)\hat\beta_k.
\]
The TPM is run for 2000 iterations. In the case of phase retrieval, the TWF algorithm was also run for a total of 2000 iterations, using the tuning parameters originally suggested in [7]. As expected, the TWF algorithm, which targets the sparse phase retrieval model in particular, outperforms our approach when the sample size $n$ is small; however, our approach performs very comparably to TWF, and in fact even slightly better once we increase the sample size. It is possible that the TWF algorithm could perform better if run for longer than 2000 iterations, though in most cases it appeared to have converged to its final value. The results are visualized in Figure 2.

[Figure 2: six panels of simulation curves, one per model and dimension: (a) Model (2.3), d = 200; (b) Model (2.4), d = 200; (c) Model (2.5), d = 200; (d) Model (2.3), d = 400; (e) Model (2.4), d = 400; (f) Model (2.5), d = 400. Each panel plots $\|\hat\beta - \beta^*\|_2$ against $\alpha \in \{4, 8, 12, 16\}$, with curves Init, Second Step, Init full data, Refined and TPM (plus TWF full data in panels (a) and (d)).]

Figure 2: Simulation results for the three examples considered in Section 2, in two different settings for the dimension $d = 200, 400$. Here the parameter $\alpha \approx n/(s^2\log d)$ describes the relationship between sample size, dimension and sparsity of the signal. Algorithm 2 dominates in most settings, with exceptions when $\alpha$ is too small, in which case none of the approaches provides meaningful results.

The TPM algorithm has performance comparable to that of Algorithm 1, is always worse than the estimate produced by Algorithm 2, needs an initialization (the first step of Algorithm 1 is used), and further requires a rough knowledge of the sparsity $s$, whereas both Algorithms 1 and 2 do not require an estimate of $s$.

5 Discussion

In this paper we proposed a two-step procedure for estimation of MPR models with standard Gaussian designs. We argued that the MPR models form a rich class including numerous additive SIMs (i.e., $Y = h(X^\top\beta^*) + \varepsilon$ with an even link function $h$ that is increasing on $\mathbb{R}_+$). Our algorithm is based solely on convex optimization, and achieves optimal rates of estimation. Our procedure does require that the sample size satisfy $n \gtrsim s^2\log d$ to ensure successful initialization.
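Returning to the tuning heuristic of Section 4, here is a minimal sketch of the $K$-fold selection rule for $\rho_n$ (our own illustration; the callback fit_program_36, standing in for a solver of program (3.6), is hypothetical):

```python
import numpy as np

def select_rho(X2, Y2, v, rhos, fit_program_36, K=5):
    """Pick rho for program (3.6) by the Lemma-3.1-based K-fold score."""
    n = len(Y2)
    folds = np.array_split(np.arange(n), K)   # non-intersecting splits of S2
    scores = np.zeros(len(rhos))
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        for k, rho in enumerate(rhos):
            beta = fit_program_36(X2[train], Y2[train], v, rho)
            # Held-out score: sum_i Y_i (X_i^T beta)^2 on the left-out fold.
            scores[k] += Y2[test] @ (X2[test] @ beta) ** 2
    return rhos[int(np.argmax(scores))]
```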
The same condition has been exhibited previously, e.g., in [7] for the phase retrieval model, and in works on sparse principal components analysis [see, e.g., 3, 15, 33]. We anticipate that for a certain subclass of MPR models, the sample size requirement $n \gtrsim s^2\log d$ is necessary for computationally efficient algorithms to exist. We conjecture that models (2.3)-(2.5) are such models. It is however certainly not true that this sample size requirement holds for all models from the MPR class. For example, the following model can be solved efficiently by applying the Lasso algorithm, without requiring the sample size scaling $n \gtrsim s^2\log d$:
\[
Y = \mathrm{sign}(X^\top\beta^* + c),
\]
where $c < 0$ is fixed. This discussion leads to the important question under what conditions on the (known) link and error distribution $(f, \varepsilon)$ one can efficiently solve the SIM $Y = f(X^\top\beta^*, \varepsilon)$ with an optimal sample complexity. We would like to investigate this issue further in our future work.

Acknowledgments: The authors would like to thank the reviewers and meta-reviewers for carefully reading the manuscript and for their helpful suggestions, which improved the presentation. The authors would also like to thank Professor Xiaodong Li for kindly providing the code for the TWF algorithm.

References
[1] Adamczak, R. and Wolff, P. (2015). Concentration inequalities for non-Lipschitz functions with bounded derivatives of higher order. Probability Theory and Related Fields, 162 531-586.
[2] Amini, A. A. and Wainwright, M. J. (2008). High-dimensional analysis of semidefinite relaxations for sparse principal components. In IEEE International Symposium on Information Theory.
[3] Berthet, Q. and Rigollet, P. (2013). Complexity theoretic lower bounds for sparse principal component detection. In Conference on Learning Theory.
[4] Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics 1705-1732.
[5] Boufounos, P. T. and Baraniuk, R. G. (2008). 1-bit compressive sensing. In Annual Conference on Information Sciences and Systems.
[6] B\"uhlmann, P. and van de Geer, S. (2011). Statistics for high-dimensional data: Methods, theory and applications. Springer.
[7] Cai, T. T., Li, X. and Ma, Z. (2015). Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. arXiv:1506.03382.
[8] Cand\`es, E. J., Li, X. and Soltanolkotabi, M. (2015). Phase retrieval from coded diffraction patterns. Applied and Computational Harmonic Analysis, 39 277-299.
[9] Cand\`es, E. J., Li, X. and Soltanolkotabi, M. (2015). Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61 1985-2007.
[10] Cand\`es, E. J., Strohmer, T. and Voroninski, V. (2013). PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66 1241-1274.
[11] Chen, Y., Yi, X. and Caramanis, C. (2013). A convex formulation for mixed regression with two components: Minimax optimal rates. arXiv:1312.7006.
[12] Cook, R. D. and Ni, L. (2005). Sufficient dimension reduction via inverse regression. Journal of the American Statistical Association, 100.
[13] d'Aspremont, A., El Ghaoui, L., Jordan, M. I. and Lanckriet, G. R. (2007). A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49 434-448.
[14] Ganti, R., Rao, N., Willett, R. M. and Nowak, R. (2015). Learning single index models in high dimensions. arXiv:1506.08910.
[15] Gao, C., Ma, Z.
and Zhou, H. H. (2014). Sparse CCA: Adaptive estimation and computational barriers. arXiv:1409.8565.
[16] Genzel, M. (2016). High-dimensional estimation of structured signals from non-linear observations with general convex loss functions. arXiv:1602.03436.
[17] Han, F. and Wang, H. (2015). Provable smoothing approach in high dimensional generalized regression model. arXiv:1509.07158.
[18] Horowitz, J. L. (2009). Semiparametric and nonparametric methods in econometrics. Springer.
[19] Laurent, B. and Massart, P. (2000). Adaptive estimation of a quadratic functional by model selection. Annals of Statistics 1302-1338.
[20] Lecu\'e, G. and Mendelson, S. (2013). Minimax rate of convergence and the performance of ERM in phase recovery. arXiv:1311.5024.
[21] Li, K.-C. (1991). Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86 316-327.
[22] Li, K.-C. (1992). On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. Journal of the American Statistical Association, 87 1025-1039.
[23] Li, K.-C. and Duan, N. (1989). Regression analysis under link violation. The Annals of Statistics 1009-1052.
[24] McCullagh, P. and Nelder, J. (1989). Generalized linear models. Chapman & Hall/CRC.
[25] Neykov, M., Liu, J. S. and Cai, T. (2016). L1-regularized least squares for support recovery of high dimensional single index models with Gaussian designs. Journal of Machine Learning Research, 17 1-37.
[26] Peng, H. and Huang, T. (2011). Penalized least squares for single index models. Journal of Statistical Planning and Inference, 141 1362-1379.
[27] Plan, Y. and Vershynin, R. (2015). The generalized Lasso with non-linear observations. IEEE Transactions on Information Theory.
[28] Radchenko, P. (2015). High dimensional single index models. Journal of Multivariate Analysis, 139 266-282.
[29] Raskutti, G., Wainwright, M. J. and Yu, B. (2010). Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11 2241-2259.
[30] Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. The Annals of Statistics 1135-1151.
[31] Thrampoulidis, C., Abbasi, E. and Hassibi, B. (2015). Lasso with non-linear measurements is equivalent to one with linear measurements. arXiv:1506.02181.
[32] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027.
[33] Wang, Z., Gu, Q. and Liu, H. (2015). Sharp computational-statistical phase transitions via oracle computational model. arXiv:1512.08861.
[34] Xia, Y. and Li, W. (1999). On single-index coefficient regression models. Journal of the American Statistical Association, 94 1275-1285.
[35] Yang, Z., Wang, Z., Liu, H., Eldar, Y. C. and Zhang, T. (2015). Sparse nonlinear regression: Parameter estimation and asymptotic inference. arXiv:1511.04514.
[36] Yi, X., Wang, Z., Caramanis, C. and Liu, H. (2015). Optimal linear estimation under unknown nonlinear transform. In Advances in Neural Information Processing Systems.
[37] Yuan, X.-T. and Zhang, T. (2013). Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14 899-925.
5,630
6,095
Lifelong Learning with Weighted Majority Votes

Anastasia Pentina, IST Austria, apentina@ist.ac.at
Ruth Urner, Max Planck Institute for Intelligent Systems, rurner@tuebingen.mpg.de

Abstract
Better understanding of the potential benefits of information transfer and representation learning is an important step towards the goal of building intelligent systems that are able to persist in the world and learn over time. In this work, we consider a setting where the learner encounters a stream of tasks but is able to retain only limited information from each encountered task, such as a learned predictor. In contrast to most previous works analyzing this scenario, we do not make any distributional assumptions on the task generating process. Instead, we formulate a complexity measure that captures the diversity of the observed tasks. We provide a lifelong learning algorithm with error guarantees for every observed task (rather than on average). We show sample complexity reductions in comparison to solving every task in isolation in terms of our task complexity measure. Further, our algorithmic framework can naturally be viewed as learning a representation from encountered tasks with a neural network.

1 Introduction
Machine learning has made significant progress in understanding both theoretical and practical aspects of solving a single prediction problem from a set of annotated examples. However, if we aim at building autonomous agents, capable of persisting in the world, we need to establish methods for continuously learning various tasks over time [25, 26]. There is no hope of initially providing, for example, an autonomous robot with sufficiently rich prior knowledge to solve any problem that it may encounter during the course of its life. Therefore, an important goal of machine learning research is to replicate humans' ability to learn from experience and to reuse knowledge from previously encountered tasks for solving new ones more efficiently. This is aimed at in lifelong learning or learning to learn, where a learning algorithm is assumed to encounter a stream of tasks and aims to exploit commonalities between them by transferring information from earlier tasks to later ones. The first theoretical formulation of this framework was proposed by Baxter [4]. In that model, tasks are generated by a probability distribution and the goal, given a sample of tasks from this distribution, is to perform well in expectation over tasks. Under certain assumptions, such as a shared good hypothesis set, this model allows for sample complexity savings [4]. However, good performance in expectation is often too weak a requirement. To stay with the robot example, failure on a single task may cause severe malfunction, and even the end of the robot's life. Moreover, the theoretical analysis of this model relies on the assumption that the learner maintains access to training data for all previously observed tasks, which allows one to formulate a joint optimization problem. However, it is unlikely that an autonomous robot is able to keep all this data. Thus, we instead focus on a streaming setting for lifelong learning, where the learner can only retain learned models from previously encountered tasks. These models have a much more compact description than the joint training data. Specifically, we are interested in analysis and performance guarantees in the scenario that 1) tasks arrive one at a time without distributional or i.i.d.
assumptions, 2) the learner can only keep the learned hypotheses from previously observed tasks, and 3) error bounds are required for every single task, rather than on average. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

The first analysis of this challenging setting was recently provided by Balcan et al. [3]. That work demonstrates sample complexity improvements for learning linear halfspaces (and some boolean function classes) in the lifelong learning setting in comparison to solving each task in isolation, under the assumption that the tasks share a common low dimensional representation. However, the analysis relies on the marginal distributions of all tasks being isotropic log-concave. It was stated as an open challenge in that work whether similar guarantees (error bounds for every task, while only keeping limited information from earlier tasks) were possible under less restrictive distributional assumptions.

In this work, we (partially) answer this question in the positive. We do so by proposing to learn with weighted majority votes rather than linear combinations over linear predictors. We show that the shift from linear combinations to majority votes introduces stability to the learned ensemble that allows exploiting it for later tasks. Additionally, we show that this stability is achieved for any ground hypothesis class. We formulate a relatedness assumption on the sequence of tasks (similar to one used in [3]) that captures how suitable to lifelong learning a sequence of tasks is. With this, we prove that sample complexity savings through lifelong learning are obtained for arbitrary marginal distributions (provided that these marginal distributions are related in terms of their discrepancy [5, 17]). This is a significant generalization towards more practically relevant scenarios.

Summary of our work. We employ a natural algorithmic paradigm, similar to the one in [3]. The algorithm maintains a set of base hypotheses from some fixed ground hypothesis class H. These base hypotheses are predictors learned on previous tasks. For each new task, the algorithm first attempts to achieve good prediction performance with a weighted majority vote over the current base hypotheses, and uses this predictor for the task if successful. Otherwise (if no majority vote classifier achieves high accuracy), the algorithm resorts to learning a classifier from the ground class for this task. This classifier is then added to the set of base hypotheses, to be used for subsequent tasks. We describe this algorithm in Section 4.1; a schematic sketch is shown below. If the ground class is the class of linear predictors, this algorithm is actually learning a neural network. Each base classifier becomes a node in a hidden middle layer, which represents a learned feature representation of the neural net. A new task is then either solved by employing the representation learned from previous tasks (the current middle layer) and just learning task-specific weights for the last layer, or, in case this is not possible, it extends the current representation. See also Section 4.2. This paradigm yields sample complexity savings if the tasks encountered are related in the sense that, for many tasks, good classification accuracy can be achieved with a weighted majority vote over previously learned models. We formally capture this property as an effective dimension of the sequence of tasks.
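The paradigm just summarized can be sketched as a simple loop (a schematic of our own; erm_ground, erm_majority and empirical_error are hypothetical stand-ins for the ERM and validation steps that the paper makes precise in Section 4):

```python
def lifelong_majority_votes(tasks, erm_ground, erm_majority,
                            empirical_error, eps):
    """Schematic lifelong learner: reuse a weighted majority vote over the
    stored base predictors when it is accurate enough; otherwise learn a
    fresh predictor from the ground class and add it to the base."""
    base, outputs = [], []
    for sample in tasks:                      # one labelled sample per task
        if base:
            g = erm_majority(base, sample)    # best vote over current base
            if empirical_error(g, sample) <= eps:
                outputs.append(g)             # task solved by reuse
                continue
        g = erm_ground(sample)                # fall back to the ground class
        base.append(g)                        # the representation grows
        outputs.append(g)
    return outputs
```

Note that only the predictors in base are kept between tasks; the per-task training samples are discarded, which is exactly the limited-memory constraint of the streaming setting.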
We prove in Section 4.3 that if this effective dimension is bounded by $k$, then the total sample complexity for learning $n$ tasks with this paradigm is upper bounded by $\tilde O\big((nk + \mathrm{VC}(H)k^2)/\varepsilon\big)$, a reduction from $\tilde O\big(n\,\mathrm{VC}(H)/\varepsilon\big)$, the sample complexity of learning $n$ tasks individually without retaining information from previous tasks. The main technical difficulty is to control the propagation of errors. Since every task is represented by a finite training set, the learner has access only to approximations of the true labeling functions, which may degrade the quality of this collection of functions as a "basis" for new tasks. Balcan et al. [3] control this error propagation using elegant geometric arguments for linear predictors under isotropic log-concave distributions. We show that moving from linear combinations to majority votes yields the required stability for the quality of the representation under arbitrary distributions. Finally, while we first present our algorithm and results for known upper bounds of $k$ base tasks and $n$ tasks in total, we also provide a variation of the algorithm that does not need to know the number of tasks or the complexity parameter of the task sequence. We show that similar sample complexity improvements are achievable in this setting in Section 5.

2 Related Work
Lifelong learning. While there are many ways in which prediction tasks may be related [20], most of the existing approaches to transfer or lifelong learning exploit possible similarities between the optimal predictors for the considered tasks. In particular, one widely used relatedness assumption is that these predictors can be described as linear or sparse combinations of some common metafeatures, and the corresponding methods aim at learning these representations [10, 2, 15, 3]. Though this idea was originally used in the multi-task setting, it was later extended to lifelong learning by Eaton et al. [9], who proposed a method for sequentially updating the underlying representation as new tasks arrive. These settings were theoretically analyzed in a series of works [18, 19, 23, 21] that have demonstrated that information transfer can lead to provable sample complexity reductions compared to solving each task independently. However, all these results rely on Baxter's model of lifelong learning and therefore assume access to the training data for all (observed) tasks and provide guarantees only on average performance over all tasks. An exception is [6], where the authors provide error guarantees for every task in the multi-task scenario. However, these guarantees are due to the relatedness assumption used, which implies that all tasks have the same expected error. The task relatedness assumption that we employ is related to the one used in [1] for multi-task learning with expert advice. There the authors consider a setting where there exists a small subset of experts that perform well on all tasks. Similarly, we assume that there is a small subset of base tasks, such that the remaining ones can be solved well using majority votes over the corresponding base hypotheses.

Majority votes. Weighted majority votes are a theoretically well understood type of ensemble predictor that is widely used in practice. In particular, they are employed in boosting [11]. They are also often considered in works that utilize PAC-Bayesian techniques [22, 12]. Majority votes are also used in the concept drift setting [14].
The corresponding method, conceptually similar to the one proposed here, dynamically updates a set of experts and uses their weighted majority votes for making predictions.

3 Formal Setting
3.1 General notation and background
We let $\mathcal{X} \subseteq \mathbb{R}^d$ denote a domain set and let $\mathcal{Y}$ denote a label set. A hypothesis is a function $h : \mathcal{X} \to \mathcal{Y}$, and a hypothesis class $H$ is a set of hypotheses. We model learning tasks as pairs $\langle D, h^*\rangle$ of a distribution $D$ over $\mathcal{X}$ and a labeling function $h^* : \mathcal{X} \to \mathcal{Y}$. The quality of a hypothesis is measured by a loss function $\ell : \mathcal{Y}\times\mathcal{Y} \to \mathbb{R}_+$. We deal with binary classification tasks, that is, $\mathcal{Y} = \{-1, 1\}$, under the 0/1-loss function, that is, $\ell(y, y') = [\![y \neq y']\!]$ (we let $[\![\cdot]\!]$ denote the indicator function). The risk of a hypothesis $h$ with respect to task $\langle D, h^*\rangle$ is defined as its expected loss: $L_{D,h^*}(h) := \mathbb{E}_{x\sim D}[\ell(h(x), h^*(x))]$. Given a sample $S = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, the empirical risk of $h$ with respect to $S$ is
\[
L_S(h) := \frac{1}{n}\sum_{i=1}^n \ell(h(x_i), y_i).
\]
For binary classification, the sample complexity of learning a hypothesis class is characterized (that is, upper and lower bounded) by the VC-dimension of the class [27]. We will employ the following generalization bounds for classes of finite VC-dimension:

Theorem 1 (Corollaries 5.2 and 5.3 in [7]). Let $H$ be a class of binary functions with a finite VC-dimension. There exists a constant $C$ such that for any $\delta \in (0, 1)$ and any task $\langle D, h^*\rangle$, with probability at least $1 - \delta$ over a training set $S$ of size $n$, sampled i.i.d. from $\langle D, h^*\rangle$:
\[
L_{D,h^*}(\hat h) \leq L_S(\hat h) + \sqrt{L_S(\hat h)\,\varepsilon} + \varepsilon, \tag{1}
\]
\[
L_S(\hat h) \leq L_{D,h^*}(\hat h) + \sqrt{L_{D,h^*}(\hat h)\,\varepsilon} + \varepsilon, \tag{2}
\]
\[
L_{D,h^*}(\hat h) \leq \inf_{h\in H} L_D(h) + \sqrt{\inf_{h\in H} L_D(h)\,\varepsilon} + \varepsilon, \tag{3}
\]
where $\hat h \in \mathrm{argmin}_{h\in H} L_S(h)$ is an empirical risk minimizer and
\[
\varepsilon = C\,\frac{\mathrm{VC}(H)\log(n) + \log(1/\delta)}{n}. \tag{4}
\]
In the realizable case ($h^* \in H$), the above bounds imply that the sample complexity is upper bounded by $\tilde O\big((\mathrm{VC}(H) + \log(1/\delta))/\varepsilon\big)$.
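For intuition, the error term (4) can be inverted numerically to see how large a training set the bounds require; the lifelong algorithm later draws, for each task, a sample just large enough that this term falls below its per-task target. A small sketch (the constant C is unspecified in Theorem 1, so the value below is an assumption):

```python
import math

def eps_bar(vc, n, delta, C=1.0):
    """Estimation error term of Theorem 1: C (VC log n + log(1/delta)) / n."""
    return C * (vc * math.log(n) + math.log(1.0 / delta)) / n

def sample_size(vc, delta, target, C=1.0):
    """Smallest power-of-two n with eps_bar(vc, n, delta) <= target."""
    n = 2
    while eps_bar(vc, n, delta, C) > target:
        n *= 2
    return n

# e.g. a per-task target of eps/(8k) with eps = 0.05 and k = 8:
print(sample_size(vc=10, delta=0.01, target=0.05 / (8 * 8)))
```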
Intuitively, one would expect that the information transfer is beneficial if only a few times throughout the course of learning information obtained from the already solved tasks will not be sufficient to solve the current one. In order to formalize this intuition, we use the following (pseudo-)metric over the hypothesis class with respect to a marginal distribution D: dD (h, h0 ) = E Jh(x) 6= h0 (x)K. x?D (6) Further, we can define a distance of a hypothesis to a hypothesis space as dD (h, H0 ) = min dD (h, h0 ) 0 0 h ?H (7) and the distance between two sets of hypotheses as dD (H, H0 ) = max dD (h, H0 ) = max min dD (h, h0 ). 0 0 h?H h ?H h?H (8) Note that the latter is not necessarily a metric over subsets of the hypothesis space. However, it does satisfy the triangle inequality (see Section 1 in the supplementary material). Now we can formulate the diversity measure for a sequence of learning tasks that we will employ. Note that the concepts below are closely related to the ones used in [3] for the case of linear predictors and linear combinations over these. Definition 1. A sequence of learning tasks hD1 , h?1 i, . . . , hDn , h?n i is ?-separated, if for every i dDi (h?i , MV(h?1 , . . . , h?i?1 )) > ?. Definition 2. A sequence of learning tasks hD1 , h?1 i, . . . , hDn , h?n i has ?-effective dimension k, if the largest ?-separated subsequence of these tasks has length k. Formally, we will assume that the ?-effective dimension k of the observed sequence of tasks is relatively small for a sufficiently small ?. Note that this assumption can also be seen as a relaxation of the one used in [8]. There the authors assumed that there exists a set of k hypothesis such that every task can be well solved by one of them. This would correspond to substituting the sets of weighted majority votes MV(h?1 , . . . , h?i?1 ) by just the collections {h?1 , . . . , h?i?1 } in the above definitions. Moreover, we will assume that the marginal distributions have small discrepancy with respect to the hypothesis set H: discH (Di , Dj ) = max |dDi (h, h0 ) ? dDj (h, h0 )|. (9) 0 h,h ?H This is a measure of task relatedness that has been introduced in [13] and shown to be beneficial in the context of domain adaptation [5, 17]. Note, however, that we do not make any assumptions on the marginal distributions D1 , . . . , Dn themselves. 4 4 4.1 Algorithm and complexity guarantees The algorithm We employ a natural algorithmic paradigm, which is similar to the one in [3]. Algorithm 1 below provides pseudocode for our procedure. The algorithm takes as parameters a class H, which we call the ground class, accuracy and confidence parameters  and ?, as well as a task horizon n (the number of tasks to be solved) and a parameter k (a guessed upper bound on the number of tasks that will not be solvable as majority votes over earlier tasks). In Section 5 we present a version that does not need to know n and k in advance. Algorithm 1 Lifelong learning of majority votes 1: Input parameters H, n, k, , ? 2: set ? 0 = ?/(2n), 0 = /(8k) 3: draw a training set S1 from hD1 , h?1 i, such that ?1 := ?(VC(H), ? 0 , |S1 |) ? 0 4: g1 = arg minh?H LS1 (h) ? = 1, i1 = 1, h ? 1 = g1 5: set k 6: for i = 2 to n do ? 1, . . . , h ? ? )), ? 0 , |Si |) ? 7: draw a training set Si from hDi , h?i i, such that ?i := ?(VC(MV(h k 8: gi = arg minh?MV(h? 1 ,...,h? ? ) LSi (h) k p 9: if LSi (gi ) + LSi (gi ) ? ?i + ?i >  then 10: draw a training set Si from hDi , h?i i, such that ?i := ?(VC(H), ? 0 , |Si |) ? 0 11: gi = arg minh?H LSi (h) ? ? 
= gi , i? = i 12: set k? = k? + 1, h k k 13: end if 14: end for 15: return g1 , . . . , gn  40 ? 1, . . . , h ? ? ) from During the course of its ?life?, the algorithm maintains a set of base hypotheses (h k the ground class, which are predictors learned on previous tasks. In order to solve the first task, it 0 uses the hypothesis set H and a large enough training set S1 to ensure the error guarantee  ? /8k ? 1 ? H is the first member with probability at least 1 ? ? 0 , where ? 0 = ?/2n. The learned hypothesis h of the set of base hypotheses. For each new task i, the algorithm first attempts to achieve good prediction performance (up to error ) with a weighted majority vote over the base hypotheses, i.e. it ? 1, . . . , h ? ? ), and uses the obtained predictor for the attempts to learn this task using the class MV(h k task if successful. Otherwise (if no majority vote classifier achieves high accuracy), the algorithm resorts to learning a classifier from the base class for this task, which is then added to the set of base hypotheses, to be used for subsequent tasks. The error guarantees are ensured with Theorem 1 by choosing the training sets Si large enough so that ?i := ?(VC(Hi ), ? 0 , |Si |) := C VC(Hi ) log(|Si |) + log(1/? 0 ) ? c, |Si | where Hi is either the ground class H or the set of weighted majority votes over the current set of ? 1, . . . , h ? ? ), and constant c is set according to case, see pseudocode. base hypotheses MV(h k While this approach is very natural, the challenge is to analyze it and to specify the parameters. In particular, we need to ensure that the algorithm will not have to search over (potentially large) hypothesis set H too often and, consequently, will lead to provable sample complexity reductions over solving each task independently. The following theorem summarizes the performance guarantees for Algorithm 1 (the proof is in Section 4.3). Theorem 2. Consider running Algorithm 1 on a sequence of tasks with ?-effective dimension at most k and discH (Di , Dj ) ? ? for all i, j. Then, if ? ? /4 and k? < /8, with probability at least 1 ? ?: ? The error of every task is bounded: LDi ,h?i (gi ) ?  for every i = 1, . . . , n.   2 ? nk+VC(H)k . ? The total number of labeled examples used is O  5 Discussion Note that if we assume that all tasks by H, independently learning them  are realizable  VC(H)n ? . The sample complexity of learning n up to error  would have sample complexity O    2 ? nk+VC(H)k . This is a tasks in the lifelong learning regime with our paradigm in contrast is O  significant reduction if the effective dimension of the task sequence k is small in comparison to the total number n of tasks, as well as the complexity measure VC(H) of the ground class. That is, if most tasks are learnable as combination of previously stored base predictors, much less data is required overall. Note that for all those tasks that are solved as majority votes, our algorithm and analysis actually require realizability only by the class of k-majority votes over H and not by the ground class tasks independently under this assumption, has sample complexity  H. Learning the n  ? VC(H)k + (n?k)VC(H)k . In contrast, the lifelong learning method gradually identifies the O   relevant set of base predictors and thereby reduces the number of required examples. 4.2 Neural networks If the ground class is the class of linear predictors, our algorithm is actually learning a neural network (with sign() as the activation function). 
Each base classifier becomes a new node in a hidden middle layer. Thus, the maintained set of base classifiers can be viewed as feature representation in the neural net, which was learned based on the encountered tasks. A new task is then either solved by employing the representation learned from previous tasks (the current middle layer), and just learning task specific weights for the last layer; or, in case this is not possible, a fresh linear classifier is learned, and added as a node to the middle layer. Thus, in this case, the feature representation is extended. 4.3 Analysis We start with presenting the following two lemmas that show how to control the error propagation of the learned representations (sets of base classifiers). We then proceed to the proof of Theorem 2. Lemma 1. Let V = MV(h1 , . . . , hk , g) and V? = MV(h1 , . . . , hk , g?). Then, for any distribution D: dD (V, V? ) ? dD (g, g?). (10) Proof. By the definition of dD (V, V? ) there exists u ? V such that: dD (V, V? ) = dD (u, V? ). (11) Pk Pk We can represent u as u = sign( i=1 ?i hi + ?g) and let u1 = i=1 ?i hi . Note that while all hi -s, g and g? are assumed to take values in {?1, 1}, u1 can take values in R. Then: ? ? dD (u, V? ) = min dD (u, h) ? V? h? ? max min ? h?MV(u g) 1 ,? min ? h?MV(u1 ,g) h?MV(u g) 1 ,? ? dD (u, h) ? = dD (MV(u1 , g), MV(u1 , g?)). dD (h, h) Now we show that for any ?1 u1 + ?2 g ? MV(u1 , g) there exists a close hypothesis in MV(u1 , g?). In particular, this hypothesis is ?1 u1 + ?2 g?: dD (?1 u1 + ?2 g, ?1 u1 + ?2 g?) = = E Jsign(?1 u1 (x) + ?2 g(x)) 6= sign(?1 u1 (x) + ?2 g?(x))K x?D E J?12 u21 (x) + ?1 ?2 u1 (x)g(x) + ?1 ?2 u1 (x)? g (x) + ?22 g(x)? g (x) < 0K. x?D Note that for every x on which g and g? agree, i.e. g(x)? g (x) = 1, we obtain: ?12 u21 (x) + ?1 ?2 u1 (x)g(x) + ?1 ?2 u1 (x)? g (x) + ?22 g(x)? g (x) = (?1 u1 (x) + ?2 g(x))2 ? 0. Therefore: dD (?1 u1 + ?2 g, ?1 u1 + ?2 g?) ? E Jg(x) 6= g?(x)K = dD (g, g?). x?D 6 (12) ? 1, . . . , h ? k ). For any distribution D, if Lemma 2. Let Vk = MV(h1 , . . . , hk ) and V?k = MV(h P k ? i ) ? i for every i = 1, . . . , k, then dD (Vk , V?k ) ? dD (hi , h i . i=1 For the proof see Section 2 in the supplementary material. Proof of Theorem 2. 1. First, note that for every task Algorithm 1 solves at most 2 estimation problems with a probability of failure ? 0 for each of them. Therefore, with a union bound argument, the probability of any of these estimations being wrong is at most 2 ? n ? ? 0 = ?. Thus, from now we assume that all the estimations were correct, that is, the high probability events of Theorem 1 hold. 2. To see that the error of every encountered task is bounded by , note that there are p two cases. For tasks i that are solved by a majority vote over previous tasks, we have LSi (gi ) + LSi (gi ) ? ?i + ?i ? . In this case, Equation (1) in Theorem 1 implies LDi ,h?i (gi ) ? . For tasks i that are not solved ? 1, . . . , h ? ? )), ? 0 , m) ? /8k. as a majority vote over previous tasks, we have ?i = ?(VC(MV(h k Since task i is realizable by the base class H, we have inf h?H LDi ,h?i (h) = 0, and thus Equation (3) of Theorem 1 implies LDi ,h?i (gi ) ? /8k < . 3. To upper bound the sample complexity we first prove that the number k? of tasks, which are not learned as majority votes over previous tasks, is at most k. For that we use induction showing that for ? when we create a new h ? ? from the i? -th task, we have that every k? ? k, k k dDi? (h?ik? , MV(h?i1 , . . . , h?ik?1 )) > ?. (13) ? k This implies k? ? 
k by invoking that the ?-effective dimension of the sequence of encountered tasks is at most k. To proceed to the induction, note that for k? = 1, the claim follows immediately. Consider k? > 1. If ? ? , it means that the condition in line 9 is true, which is: we create a new h k q LSi? (gik? ) + LSi? (gik? ) ? ?i + ?i > . (14) k k Therefore LSi? (gik? ) > 0.83. Consequently, due to (2), LDi? ,h?i (gik? ) > 0.67. Finally, by (3), ? k k k ? 1, . . . , h ? ? that inf g LD ,h? (g) > 0.5. Therefore there is no majority vote predictor based on h i? k k?1 i? k leads to error less than /2 on the problem ik? . In other words: ? 1, . . . , h ? ? )) > /2. dD (h? , MV(h ik ? i? k k?1 (15) Now, by way of contradiction, suppose that dDi? (h?ik? , MV(h?i1 , . . . , h?ik?1 )) ? ?. By construction ? k ? ? 0 ? for every j = 1, . . . , k ? 1 dD (h , hj ) ?  ? /8k. By the definition of discrepancy and the ij ij assumption on the marginal distributions it follows that for all j: ? j ) ? dD (h? , h ? j ) + discH (Di , Di ) ? 0 + ?. dDi? (h?ij , h ij ij j ? k k (16) Therefore by Lemma 2: ? 1, . . . , h ? ? )) ? k(0 + ?). dDi? (MV(h?i1 , . . . , h?ik?1 ), MV(h k ? k Consequently, by using the triangle inequality: ? 1, . . . , h ? ? )) ? ? + k(0 + ?) ? /4 + /8 + /8 = /2, dD (h? , MV(h i? k ik ? k?1 (17) (18) which is in contradiction with (15). 4. The total sample complexity of Algorithm 1 consists of two parts. First, for every task Algorithm 1 checks, whether it can be solved by a majority vote over the base, at most k? predictors. For that it employs Theorem 1 and therefore needs the following number of samples: !   ? log(2n/?) nk? log k? log(k? log k) nk ? ? O =O . (19)   Second, there are at most k? tasks that satisfy the condition in line 9 and are learned using the 0 hypothesis = /(8k). Therefore the corresponding sample complexity  ? set H with estimation   error  2  kVC(H) log(2n/?) VC(H)k ? is: O =O . /(8k)  7 5 Lifelong learning with unknown horizon In this section we present a modification of Algorithm 1 for the case when the total number of tasks n and the complexity of the task sequence k are not known in advance. The main difference between Algorithm 2 and Algorithm 1 is that with unknown n and k the learner has to adopt the parameters ? 0 and 0 on the fly. We show that this can be done by the doubling trick that is often used in online learning. Theorem 3 summarizes the resulting guarantees (the proof can be found in the supplementary material, Section 3). Algorithm 2 Lifelong learning of majority votes with unkown horizon 1: Input parameters H, , ? 2: set ?1 = ?/2, 01 = /16 3: draw a training set S1 from hD1 , h?1 i of size m, such that ?(VC(H), ?1 , m) ? 01 (see (4)) 4: g1 = arg minh?H LS1 (h) ? = 1, i1 = 1, h ? 1 = g1 5: set k 6: for i = 2 to n do 7: set l = blog ic, m = blog k? + 1c ?  8: set ?i = 22l+2 , 0i = 22m+4 ? 1, . . . , h ? ? )), ?i , m) ? draw a training set Si from hDi , h?i i of size m, such that ?(VC(MV(h 9: k /40 (see (4)) 10: gi = arg minh?MV(h? 1 ,...,h? ? ) LSi (h) k p 11: if LSi (gi ) + LSi (gi ) ? ? + ? >  then 12: draw a training set Si from hDi , h?i i of size m, such that ?(VC(H), ?i , m) ? 0i (see (4)) 13: gi = arg minh?H LSi (h) ? ? = gi , i? = i 14: set k? = k? + 1, h k k 15: end if 16: end for 17: return g1 , . . . , gn Theorem 3. Consider running Algorithm 2 on a sequence of tasks with ?-effective dimension at most k and discH (Di , Dj ) ? ? for all i, j. Then, if ? ? /4 and k? < /8, with probability at least 1 ? ?: ? 
- The error of every task is bounded: $L_{D_i, h_i^*}(g_i) \leq \varepsilon$ for every $i = 1, \ldots, n$.
- The total number of labeled examples used is $\tilde O\big((nk + \mathrm{VC}(H)k^3)/\varepsilon\big)$.

6 Conclusion
In this work, we have shown sample complexity improvements with lifelong learning in the challenging, yet as argued important, setting where tasks arrive in a stream (without assumptions on the task generating process), where the learner is only allowed to maintain limited amounts of information from previously encountered tasks, and where high performance is required for every single task, rather than on average. While such improvements had been established in very specific settings [3], our work shows they are possible in much more general and realistic scenarios. We hope that this will open the door for more work in this area of machine lifelong learning and lead to better understanding of how and when learning machines can benefit from past experience. An intriguing direction is to investigate whether there exists a more general characterization of ensemble methods and/or data distributions that would lead to benefits with lifelong learning. Another one is to better understand lifelong learning with neural networks, analyzing cases of more complex network structures and activation functions, an area where current machine learning practice yields exciting successes, but little is understood.

Acknowledgments
This work was in parts funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 308036.

References
[1] J. Abernethy, P. Bartlett, and A. Rakhlin. Multitask learning with expert advice. In Workshop on Computational Learning Theory (COLT), 2007.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning (ML), 2008.
[3] M.-F. Balcan, A. Blum, and S. Vempala. Efficient representations for lifelong learning and autoencoding. In Workshop on Computational Learning Theory (COLT), 2015.
[4] J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research (JAIR), 12, 2000.
[5] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Machine Learning (ML), 2010.
[6] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. 2003.
[7] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[8] K. Crammer and Y. Mansour. Learning multiple tasks using shared hypotheses. In Conference on Neural Information Processing Systems (NIPS), 2012.
[9] E. Eaton and P. L. Ruvolo. ELLA: An efficient lifelong learning algorithm. In International Conference on Machine Learning (ICML), 2013.
[10] A. Evgeniou and M. Pontil. Multi-task feature learning. In Conference on Neural Information Processing Systems (NIPS), 2007.
[11] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning (ICML), 1996.
[12] P. Germain, A. Habrard, F. Laviolette, and E. Morvant. A new PAC-Bayesian perspective on domain adaptation. In International Conference on Machine Learning (ICML), 2016.
[13] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In International Conference on Very Large Data Bases (VLDB), 2004.
[14] J. Z. Kolter and M. A. Maloof. Dynamic weighted majority: An ensemble method for drifting concepts.
Journal of Machine Learning Research (JMLR), 8:2755-2790, Dec. 2007.
[15] A. Kumar and H. Daum\'e III. Learning task grouping and overlap in multi-task learning. In International Conference on Machine Learning (ICML), 2012.
[16] I. Kuzborskij and F. Orabona. Stability and hypothesis transfer learning. In International Conference on Machine Learning (ICML), 2013.
[17] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Workshop on Computational Learning Theory (COLT), 2009.
[18] A. Maurer. Transfer bounds for linear feature learning. Machine Learning, 75(3):327-350, 2009.
[19] A. Maurer, M. Pontil, and B. Romera-Paredes. Sparse coding for multitask and transfer learning. In International Conference on Machine Learning (ICML), 2013.
[20] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, Oct. 2010.
[21] A. Pentina and S. Ben-David. Multi-task and lifelong learning of kernels. In Algorithmic Learning Theory (ALT), 2015.
[22] A. Pentina and C. H. Lampert. A PAC-Bayesian bound for lifelong learning. In International Conference on Machine Learning (ICML), 2014.
[23] M. Pontil and A. Maurer. Excess risk bounds for multitask learning with trace norm regularization. In Workshop on Computational Learning Theory (COLT), 2013.
[24] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014.
[25] S. Thrun and T. M. Mitchell. Lifelong robot learning. Robotics and Autonomous Systems, 15(1-2):25-46, 1995.
[26] S. Thrun and L. Pratt. Learning to learn. Kluwer Academic Publishers, 1998.
[27] V. N. Vapnik and A. J. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264-280, 1971.
5,631
6,096
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

Jiajun Wu* (MIT CSAIL), Chengkai Zhang* (MIT CSAIL), Tianfan Xue (MIT CSAIL), William T. Freeman (MIT CSAIL, Google Research), Joshua B. Tenenbaum (MIT CSAIL)

Abstract

We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.

1 Introduction

What makes a 3D generative model of object shapes appealing? We believe a good generative model should be able to synthesize 3D objects that are both highly varied and realistic. Specifically, for 3D objects to have variations, a generative model should be able to go beyond memorizing and recombining parts or pieces from a pre-defined repository to produce novel shapes; and for objects to be realistic, there need to be fine details in the generated examples.

In the past decades, researchers have made impressive progress on 3D object modeling and synthesis [Van Kaick et al., 2011, Tangelder and Veltkamp, 2008, Carlson, 1982], mostly based on meshes or skeletons. Many of these traditional methods synthesize new objects by borrowing parts from objects in existing CAD model libraries. Therefore, the synthesized objects look realistic, but not conceptually novel.

Recently, with the advances in deep representation learning and the introduction of large 3D CAD datasets like ShapeNet [Chang et al., 2015, Wu et al., 2015], there have been some inspiring attempts in learning deep object representations based on voxelized objects [Girdhar et al., 2016, Su et al., 2015a, Qi et al., 2016]. Different from part-based methods, many of these generative approaches do not explicitly model the concept of parts or retrieve them from an object repository; instead, they synthesize new objects based on learned object representations. This is a challenging problem because, compared to the space of 2D images, it is more difficult to model the space of 3D shapes due to its higher dimensionality. Their current results are encouraging, but often there still exist artifacts (e.g., fragments or holes) in the generated objects.

In this paper, we demonstrate that modeling volumetric objects in a generative-adversarial manner could be a promising solution to generate objects that are both novel and realistic.

* Equal contribution. Emails: {jiajunwu, ckzhang, tfxue, billf, jbt}@mit.edu
Our approach combines the merits of both generative-adversarial modeling [Goodfellow et al., 2014, Radford et al., 2016] and volumetric convolutional networks [Maturana and Scherer, 2015, Wu et al., 2015]. Different from traditional heuristic criteria, generative-adversarial modeling introduces an adversarial discriminator to classify whether an object is synthesized or real. This could be a particularly favorable framework for 3D object modeling: as 3D objects are highly structured, a generative-adversarial criterion, but not a voxel-wise independent heuristic one, has the potential to capture the structural difference of two 3D objects. The use of a generative-adversarial loss may also avoid possible criterion-dependent overfitting (e.g., generating mean-shape-like blurred objects when minimizing a mean squared error).

Modeling 3D objects in a generative-adversarial way offers additional distinctive advantages. First, it becomes possible to sample novel 3D objects from a probabilistic latent space such as a Gaussian or uniform distribution. Second, the discriminator in the generative-adversarial approach carries informative features for 3D object recognition, as demonstrated in experiments (Section 4). From a different perspective, instead of learning a single feature representation for both generating and recognizing objects [Girdhar et al., 2016, Sharma et al., 2016], our framework learns disentangled generative and discriminative representations for 3D objects without supervision, and applies them on generation and recognition tasks, respectively.

We show that our generative representation can be used to synthesize high-quality realistic objects, and our discriminative representation can be used for 3D object recognition, achieving comparable performance with recent supervised methods [Maturana and Scherer, 2015, Shi et al., 2015], and outperforming other unsupervised methods by a large margin. The learned generative and discriminative representations also have wide applications. For example, we show that our network can be combined with a variational autoencoder [Kingma and Welling, 2014, Larsen et al., 2016] to directly reconstruct a 3D object from a 2D input image. Further, we explore the space of object representations and demonstrate that both our generative and discriminative representations carry rich semantic information about 3D objects.

2 Related Work

Modeling and synthesizing 3D shapes. 3D object understanding and generation is an important problem in the graphics and vision community, and the relevant literature is very rich [Carlson, 1982, Tangelder and Veltkamp, 2008, Van Kaick et al., 2011, Blanz and Vetter, 1999, Kalogerakis et al., 2012, Chaudhuri et al., 2011, Xue et al., 2012, Kar et al., 2015, Bansal et al., 2016, Wu et al., 2016]. Since decades ago, AI and vision researchers have made inspiring attempts to design or learn 3D object representations, mostly based on meshes and skeletons. Many of these shape synthesis algorithms are nonparametric and they synthesize new objects by retrieving and combining shapes and parts from a database. Recently, Huang et al. [2015] explored generating 3D shapes with pre-trained templates and producing both object structure and surface geometry. Our framework synthesizes objects without explicitly borrowing parts from a repository, and requires no supervision during training.

Deep learning for 3D data. The vision community has witnessed rapid development of deep networks for various tasks. In the field of 3D object recognition,
Li et al. [2015], Su et al. [2015b], Girdhar et al. [2016] proposed to learn a joint embedding of 3D shapes and synthesized images, Su et al. [2015a], Qi et al. [2016] focused on learning discriminative representations for 3D object recognition, Wu et al. [2016], Xiang et al. [2015], Choy et al. [2016] discussed 3D object reconstruction from in-the-wild images, possibly with a recurrent network, and Girdhar et al. [2016], Sharma et al. [2016] explored autoencoder-based networks for learning voxel-based object representations. Wu et al. [2015], Rezende et al. [2016], Yan et al. [2016] attempted to generate 3D objects with deep networks, some using 2D images during training with a 3D to 2D projection layer. Many of these networks can be used for 3D shape classification [Su et al., 2015a, Sharma et al., 2016, Maturana and Scherer, 2015], 3D shape retrieval [Shi et al., 2015, Su et al., 2015a], and single image 3D reconstruction [Kar et al., 2015, Bansal et al., 2016, Girdhar et al., 2016], mostly with full supervision. In comparison, our framework requires no supervision for training, is able to generate objects from a probabilistic space, and comes with a rich discriminative 3D shape representation.

Learning with an adversarial net. Generative Adversarial Nets (GAN) [Goodfellow et al., 2014] proposed to incorporate an adversarial discriminator into the procedure of generative modeling. More recently, LAPGAN [Denton et al., 2015] and DC-GAN [Radford et al., 2016] adopted GAN with convolutional networks for image synthesis, and achieved impressive performance. Researchers have also explored the use of GAN for other vision problems. To name a few, Wang and Gupta [2016] discussed how to model image style and structure with sequential GANs, Li and Wand [2016] and Zhu et al. [2016] used GAN for texture synthesis and image editing, respectively, and Im et al. [2016] developed a recurrent adversarial network for image generation. While previous approaches focus on modeling 2D images, we discuss the use of an adversarial component in modeling 3D objects.

3 Models

In this section we introduce our model for 3D object generation. We first discuss how we build our framework, 3D Generative Adversarial Network (3D-GAN), by leveraging previous advances on volumetric convolutional networks and generative adversarial nets. We then show how to train a variational autoencoder [Kingma and Welling, 2014] simultaneously so that our framework can capture a mapping from a 2D image to a 3D object.

3.1 3D Generative Adversarial Network (3D-GAN)

As proposed in Goodfellow et al. [2014], the Generative Adversarial Network (GAN) consists of a generator and a discriminator, where the discriminator tries to classify real objects and objects synthesized by the generator, and the generator attempts to confuse the discriminator. In our 3D Generative Adversarial Network (3D-GAN), the generator G maps a 200-dimensional latent vector z, randomly sampled from a probabilistic latent space, to a 64 × 64 × 64 cube, representing an object G(z) in 3D voxel space. The discriminator D outputs a confidence value D(x) of whether a 3D object input x is real or synthetic.

[Figure 1: The generator in 3D-GAN, mapping z through feature maps of size 512 × 4 × 4 × 4, 256 × 8 × 8 × 8, 128 × 16 × 16 × 16 and 64 × 32 × 32 × 32 to an object G(z) in a 64 × 64 × 64 voxel space. The discriminator mostly mirrors the generator.]

Following Goodfellow et al. [2014], we use binary cross entropy as the classification loss, and present our overall adversarial loss function as

$$L_{\text{3D-GAN}} = \log D(x) + \log\left(1 - D(G(z))\right), \qquad (1)$$

where $x$ is a real object in a $64 \times 64 \times 64$ space, and $z$ is a randomly sampled noise vector from a distribution $p(z)$. In this work, each dimension of $z$ is i.i.d. uniform over $[0, 1]$.
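To make Eq. (1) concrete, here is a minimal sketch of the two players' losses. PyTorch is our framework choice for the sketches in this document, not something the paper prescribes, and the helper names are ours:

```python
import torch

def adversarial_losses(D, G, x_real, z):
    """Eq. (1): D maximizes log D(x) + log(1 - D(G(z))); G tries to confuse D.

    D is assumed to map a batch of 1x64x64x64 voxel grids to confidences in
    (0, 1), and G to map 200-d latent vectors to voxel grids.
    """
    eps = 1e-8                              # numerical floor inside the logs
    d_real, d_fake = D(x_real), D(G(z))
    loss_d = -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
    loss_g = torch.log(1.0 - d_fake + eps).mean()   # minimized by the generator
    return loss_d, loss_g

# z is sampled i.i.d. uniform over [0, 1], as stated above:
# z = torch.rand(batch_size, 200)
```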
Network structure. Inspired by Radford et al. [2016], we design an all-convolutional neural network to generate 3D objects. As shown in Figure 1, the generator consists of five volumetric fully convolutional layers of kernel sizes 4 × 4 × 4 and strides 2, with batch normalization and ReLU layers added in between and a Sigmoid layer at the end. The discriminator basically mirrors the generator, except that it uses Leaky ReLU [Maas et al., 2013] instead of ReLU layers. There are no pooling or linear layers in our network. More details can be found in the supplementary material.

Training details. A straightforward training procedure is to update both the generator and the discriminator in every batch. However, the discriminator usually learns much faster than the generator, possibly because generating objects in a 3D voxel space is more difficult than differentiating between real and synthetic objects [Goodfellow et al., 2014, Radford et al., 2016]. It then becomes hard for the generator to extract signals for improvement from a discriminator that is way ahead, as all examples it generated would be correctly identified as synthetic with high confidence. Therefore, to keep the training of both networks in pace, we employ an adaptive training strategy: for each batch, the discriminator only gets updated if its accuracy in the last batch is not higher than 80%. We observe this helps to stabilize the training and to produce better results. We set the learning rate of G to 0.0025, D to 10^-5, and use a batch size of 100. We use ADAM [Kingma and Ba, 2015] for optimization, with β = 0.5.
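A sketch of this architecture and schedule, under stated assumptions: the channel progression follows Figure 1, while the first layer's stride-1, zero-padding setting (to lift the 1 × 1 × 1 input to 4 × 4 × 4) and the Leaky ReLU slope of 0.2 are our choices:

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """z (200-d) -> 512x4^3 -> 256x8^3 -> 128x16^3 -> 64x32^3 -> 1x64^3 grid."""
    def __init__(self, z_dim=200):
        super().__init__()
        def up(cin, cout, stride, pad):
            return [nn.ConvTranspose3d(cin, cout, 4, stride, pad),
                    nn.BatchNorm3d(cout), nn.ReLU(inplace=True)]
        self.net = nn.Sequential(
            *up(z_dim, 512, 1, 0),               # 1^3  -> 4^3
            *up(512, 256, 2, 1),                 # 4^3  -> 8^3
            *up(256, 128, 2, 1),                 # 8^3  -> 16^3
            *up(128, 64, 2, 1),                  # 16^3 -> 32^3
            nn.ConvTranspose3d(64, 1, 4, 2, 1),  # 32^3 -> 64^3
            nn.Sigmoid())

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator3D(nn.Module):
    """Mirrors the generator, with Leaky ReLU instead of ReLU."""
    def __init__(self):
        super().__init__()
        def down(cin, cout):
            return [nn.Conv3d(cin, cout, 4, 2, 1),
                    nn.BatchNorm3d(cout), nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(
            *down(1, 64), *down(64, 128), *down(128, 256), *down(256, 512),
            nn.Conv3d(512, 1, 4, 1, 0), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator3D(), Discriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2.5e-3, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-5, betas=(0.5, 0.999))
# Adaptive schedule: opt_d.step() is called only when the discriminator's
# accuracy on the previous batch was at most 80%; opt_g.step() runs every batch.
```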
3.2 3D-VAE-GAN

We have discussed how to generate 3D objects by sampling a latent vector z and mapping it to the object space. In practice, it would also be helpful to infer these latent vectors from observations. For example, if there exists a mapping from a 2D image to the latent representation, we can then recover the 3D object corresponding to that 2D image.

Following this idea, we introduce 3D-VAE-GAN as an extension to 3D-GAN. We add an additional image encoder E, which takes a 2D image x as input and outputs the latent representation vector z. This is inspired by VAE-GAN proposed by Larsen et al. [2016], which combines VAE and GAN by sharing the decoder of VAE with the generator of GAN. The 3D-VAE-GAN therefore consists of three components: an image encoder E, a decoder (the generator G in 3D-GAN), and a discriminator D.

The image encoder consists of five spatial convolution layers with kernel sizes {11, 5, 5, 5, 8} and strides {4, 2, 2, 2, 1}, respectively. There are batch normalization and ReLU layers in between, and a sampler at the end to sample a 200-dimensional vector used by the 3D-GAN. The structures of the generator and the discriminator are the same as those in Section 3.1.

Similar to VAE-GAN [Larsen et al., 2016], our loss function consists of three parts: an object reconstruction loss $L_{\text{recon}}$, a cross entropy loss $L_{\text{3D-GAN}}$ for 3D-GAN, and a KL divergence loss $L_{\text{KL}}$ to restrict the distribution of the output of the encoder. Formally, these loss functions write as

$$L = L_{\text{3D-GAN}} + \alpha_1 L_{\text{KL}} + \alpha_2 L_{\text{recon}}, \qquad (2)$$

where $\alpha_1$ and $\alpha_2$ are weights of the KL divergence loss and the reconstruction loss. We have

$$L_{\text{3D-GAN}} = \log D(x) + \log\left(1 - D(G(z))\right), \qquad (3)$$
$$L_{\text{KL}} = D_{\text{KL}}\!\left(q(z \mid y) \,\|\, p(z)\right), \qquad (4)$$
$$L_{\text{recon}} = \left\| G(E(y)) - x \right\|_2, \qquad (5)$$

where $x$ is a 3D shape from the training set, $y$ is its corresponding 2D image, and $q(z \mid y)$ is the variational distribution of the latent representation $z$. The KL-divergence pushes this variational distribution towards the prior distribution $p(z)$, so that the generator can sample the latent representation $z$ from the same distribution $p(z)$. In this work, we choose $p(z)$ to be a multivariate Gaussian distribution with zero mean and unit variance. For more details, please refer to Larsen et al. [2016].

Training 3D-VAE-GAN requires both 2D images and their corresponding 3D models. We render 3D shapes in front of background images (16,913 indoor images from the SUN database [Xiao et al., 2010]) in 72 views (from 24 angles and 3 elevations). We set α1 = 5, α2 = 10^-4, and use a similar training strategy as in Section 3.1. See our supplementary material for more details.
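A compact sketch of the combined objective of Eqs. (2)-(5). The encoder E is assumed to return the mean and log-variance of q(z | y), and sampling uses the standard reparameterization trick; both are implementation choices following VAE-GAN rather than details stated here:

```python
import torch
import torch.nn.functional as F

def vae_gan_loss(E, G, D, y_img, x_vox, a1=5.0, a2=1e-4):
    """L = L_3D-GAN + a1 * L_KL + a2 * L_recon (Eqs. 2-5)."""
    mu, logvar = E(y_img)                                  # parameters of q(z|y)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_hat = G(z)
    # Closed-form KL(q(z|y) || N(0, I)) for a diagonal Gaussian posterior.
    l_kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    l_recon = F.mse_loss(x_hat, x_vox)                     # squared voxel error
    eps = 1e-8
    l_gan = -(torch.log(D(x_vox) + eps)
              + torch.log(1.0 - D(x_hat) + eps)).mean()
    return l_gan + a1 * l_kl + a2 * l_recon
```

In practice the adversarial term is optimized per player as in Section 3.1; the sketch collapses it into a single scalar for brevity.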
4 Evaluation

In this section, we evaluate our framework from various aspects. We first show qualitative results of generated 3D objects. We then evaluate the unsupervisedly learned representation from the discriminator by using it as features for 3D object classification. We show both qualitative and quantitative results on the popular benchmark ModelNet [Wu et al., 2015]. Further, we evaluate our 3D-VAE-GAN on 3D object reconstruction from a single image, and show both qualitative and quantitative results on the IKEA dataset [Lim et al., 2013].

4.1 3D Object Generation

Figure 2 shows 3D objects generated by our 3D-GAN. For this experiment, we train one 3D-GAN for each object category. For generation, we sample 200-dimensional vectors following an i.i.d. uniform distribution over [0, 1], and render the largest connected component of each generated object. We compare 3D-GAN with Wu et al. [2015], the state-of-the-art in 3D object synthesis from a probabilistic space, and with a volumetric autoencoder, whose variants have been employed by multiple recent methods [Girdhar et al., 2016, Sharma et al., 2016]. Because an autoencoder does not restrict the distribution of its latent representation, we compute the empirical distribution p0(z) of the latent vector z of all training examples, fit a Gaussian distribution g0 to p0, and sample from g0. Our algorithm produces 3D objects with much higher quality and more fine-grained details.

Compared with previous works, our 3D-GAN can synthesize high-resolution 3D objects with detailed geometries. Figure 3 shows both high-res voxels and down-sampled low-res voxels for comparison. Note that it is relatively easy to synthesize a low-res object, but it is much harder to obtain a high-res one due to the rapid growth of 3D space. However, object details are only revealed in high resolution.

A natural concern to our generative model is whether it is simply memorizing objects from training data. To demonstrate that the network can generalize beyond the training set, we compare synthesized objects with their nearest neighbors in the training set. Since the retrieval objects based on ℓ2 distance in the voxel space are visually very different from the queries, we use the output of the last convolutional layer in our discriminator (with a 2x pooling) as features for retrieval instead. Figure 2 shows that generated objects are similar, but not identical, to the nearest examples in the training set.

[Figure 2: Objects generated by 3D-GAN from vectors (guns, chairs, cars, sofas, tables; 64 × 64 × 64), without a reference image/object; for the last two objects in each row, the nearest neighbor retrieved from the training set is shown. The generated objects are similar, but not identical, to examples in the training set. For comparison, objects generated by the previous state-of-the-art [Wu et al., 2015] (30 × 30 × 30; results supplied by the authors) and by autoencoders trained on a single object category, with latent vectors sampled from the empirical distribution, are shown. See text for details.]

[Figure 3: Each object at high resolution (64 × 64 × 64) on the left and at low resolution (down-sampled to 16 × 16 × 16) on the right. While humans can perceive object structure at a relatively low resolution, fine details and variations only appear in high-res objects.]

4.2 3D Object Classification

We then evaluate the representations learned by our discriminator. A typical way of evaluating representations learned without supervision is to use them as features for classification. To obtain features for an input 3D object, we concatenate the responses of the second, third, and fourth convolution layers in the discriminator, and apply max pooling of kernel sizes {8, 4, 2}, respectively. We use a linear SVM for classification (a sketch of this pipeline is given at the end of this subsection).

Data. We train a single 3D-GAN on the seven major object categories (chairs, sofas, tables, boats, airplanes, rifles, and cars) of ShapeNet [Chang et al., 2015]. We use ModelNet [Wu et al., 2015] for testing, following Sharma et al. [2016], Maturana and Scherer [2015], Qi et al. [2016].† Specifically, we evaluate our model on both ModelNet10 and ModelNet40, two subsets of ModelNet that are often used as benchmarks for 3D object classification. Note that the training and test categories are not identical, which also shows the out-of-category generalization power of our 3D-GAN.

Table 1: Classification accuracy on the ModelNet dataset. Our 3D-GAN outperforms other unsupervised learning methods by a large margin, and is comparable to some recent supervised learning frameworks.

    Supervision      Pretraining | Method                              | ModelNet40 | ModelNet10
    Category labels  ImageNet    | MVCNN [Su et al., 2015a]            | 90.1%      | -
                                 | MVCNN-MultiRes [Qi et al., 2016]    | 91.4%      | -
    Category labels  None        | 3D ShapeNets [Wu et al., 2015]      | 77.3%      | 83.5%
                                 | DeepPano [Shi et al., 2015]         | 77.6%      | 85.5%
                                 | VoxNet [Maturana and Scherer, 2015] | 83.0%      | 92.0%
                                 | ORION [Sedaghat et al., 2016]       | -          | 93.8%
    Unsupervised                 | SPH [Kazhdan et al., 2003]          | 68.2%      | 79.8%
                                 | LFD [Chen et al., 2003]             | 75.5%      | 79.9%
                                 | T-L Network [Girdhar et al., 2016]  | 74.4%      | -
                                 | VConv-DAE [Sharma et al., 2016]     | 75.5%      | 80.5%
                                 | 3D-GAN (ours)                       | 83.3%      | 91.0%

[Figure 4: ModelNet40 classification with limited training data (10 to 160, and all, objects per class), comparing 3D-GAN, VoxNet and VConv-DAE.]
[Figure 5: The effects of individual dimensions of the object vector.]
[Figure 6: Intra/inter-class interpolation between object vectors.]

† For ModelNet, there are two train/test splits typically used. Qi et al. [2016], Shi et al. [2015], Maturana and Scherer [2015] used the train/test split included in the dataset, which we also follow; Wu et al. [2015], Su et al. [2015a], Sharma et al. [2016] used 80 training points and 20 test points in each category for experiments, possibly with viewpoint augmentation.
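As promised above, a sketch of the unsupervised feature pipeline; `disc_blocks` is assumed to be a list of the discriminator's convolutional blocks, so intermediate activations can be read off layer by layer:

```python
import torch
import torch.nn.functional as F
from sklearn.svm import LinearSVC

POOL = {1: 8, 2: 4, 3: 2}   # max-pool kernel per block index (2nd, 3rd, 4th)

def shape_features(disc_blocks, x_vox):
    """Concatenate max-pooled responses of the discriminator's second, third
    and fourth convolution layers (kernel sizes 8, 4 and 2, respectively)."""
    feats, h = [], x_vox
    for i, block in enumerate(disc_blocks):
        h = block(h)
        if i in POOL:
            feats.append(F.max_pool3d(h, POOL[i]).flatten(1))
    return torch.cat(feats, dim=1).detach().cpu().numpy()

# The features are then fed to a linear SVM:
# clf = LinearSVC().fit(shape_features(blocks, X_train), y_train)
```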
Results. We compare with the state-of-the-art methods [Wu et al., 2015, Girdhar et al., 2016, Sharma et al., 2016, Sedaghat et al., 2016] and show per-class accuracy in Table 1. Our representation outperforms other features learned without supervision by a large margin (83.3% vs. 75.5% on ModelNet40, and 91.0% vs. 80.5% on ModelNet10) [Girdhar et al., 2016, Sharma et al., 2016]. Further, our classification accuracy is also higher than some recent supervised methods [Shi et al., 2015], and is close to the state-of-the-art voxel-based supervised learning approaches [Maturana and Scherer, 2015, Sedaghat et al., 2016]. Multi-view CNNs [Su et al., 2015a, Qi et al., 2016] outperform us, though their methods are designed for classification, and require rendered multi-view images and an ImageNet-pretrained model.

3D-GAN also works well with limited training data. As shown in Figure 4, with roughly 25 training samples per class, 3D-GAN achieves comparable performance on ModelNet40 with other unsupervised learning methods trained with at least 80 samples per class.

4.3 Single Image 3D Reconstruction

As an application, we show that the 3D-VAE-GAN can perform well on single image 3D reconstruction. Following previous work [Girdhar et al., 2016], we test it on the IKEA dataset [Lim et al., 2013], and show both qualitative and quantitative results.

Data. The IKEA dataset consists of images with IKEA objects. We crop the images so that the objects are centered in the images. Our test set consists of 1,039 objects cropped from 759 images (supplied by the author). The IKEA dataset is challenging because all images are captured in the wild, often with heavy occlusions. We test on all six categories of objects: bed, bookcase, chair, desk, sofa, and table.

Results. We show our results in Figure 7 and Table 2, with performance of a single 3D-VAE-GAN jointly trained on all six categories, as well as the results of six 3D-VAE-GANs separately trained on each class. Following Girdhar et al. [2016], we evaluate results at resolution 20 × 20 × 20, use the average precision as our evaluation metric, and attempt to align each prediction with the ground truth over permutations, flips, and translational alignments (up to 10%), as IKEA ground truth objects are not in a canonical viewpoint. In all categories, our model consistently outperforms the previous state-of-the-art in voxel-level prediction and other baseline methods.‡

Table 2: Average precision for voxel prediction on the IKEA dataset.‡

    Method                               | Bed  | Bookcase | Chair | Desk | Sofa | Table | Mean
    AlexNet-fc8 [Girdhar et al., 2016]   | 29.5 | 17.3     | 20.4  | 19.7 | 38.8 | 16.0  | 23.6
    AlexNet-conv4 [Girdhar et al., 2016] | 38.2 | 26.6     | 31.4  | 26.6 | 69.3 | 19.1  | 35.2
    T-L Network [Girdhar et al., 2016]   | 56.3 | 30.2     | 32.9  | 25.8 | 71.7 | 23.3  | 40.0
    3D-VAE-GAN (jointly trained)         | 49.1 | 31.9     | 42.6  | 34.8 | 79.8 | 33.1  | 45.2
    3D-VAE-GAN (separately trained)      | 63.2 | 46.3     | 47.2  | 40.7 | 78.8 | 42.3  | 53.1

[Figure 7: Qualitative results of single image 3D reconstruction on the IKEA dataset.]

‡ For methods from Girdhar et al. [2016], the mean values in the last column are higher than the originals in their paper, because we compute per-class accuracy instead of per-instance accuracy.

5 Analyzing Learned Representations

In this section, we look deep into the representations learned by both the generator and the discriminator of 3D-GAN. We start with the 200-dimensional object vector, from which the generator produces various objects. We then visualize neurons in the discriminator, and demonstrate that these units capture informative semantic knowledge of the objects, which justifies its good performance on object classification presented in Section 4.

5.1 The Generative Representation

We explore three methods for understanding the latent space of vectors for object generation.
We first visualize what an individual dimension of the vector represents; we then explore the possibility of interpolating between two object vectors and observe how the generated objects change; last, we present how we can apply shape arithmetic in the latent space.

Visualizing the object vector. To visualize the semantic meaning of each dimension, we gradually increase its value, and observe how it affects the generated 3D object. In Figure 5, each column corresponds to one dimension of the object vector, where the red region marks the voxels affected by changing values of that dimension. We observe that some dimensions in the object vector carry semantic knowledge of the object, e.g., the thickness or width of surfaces.

Interpolation. We show results of interpolating between two object vectors in Figure 6. Earlier works demonstrated interpolation between two 2D images of the same category [Dosovitskiy et al., 2015, Radford et al., 2016]. Here we show interpolations both within and across object categories. We observe that for both cases walking over the latent space gives smooth transitions between objects.

Arithmetic. Another way of exploring the learned representations is to show arithmetic in the latent space. Previously, Dosovitskiy et al. [2015], Radford et al. [2016] presented that their generative nets are able to encode semantic knowledge of chair or face images in their latent space; Girdhar et al. [2016] also showed that the learned representation for 3D objects behaves similarly. We show our shape arithmetic in Figure 8. Different from Girdhar et al. [2016], all of our objects are randomly sampled, requiring no existing 3D CAD models as input.

[Figure 8: Shape arithmetic for chairs and tables. The left images show that the obtained "arm" vector can be added to other chairs, and the right ones show that the "layer" vector can be added to other tables.]

5.2 The Discriminative Representation

We now visualize the neurons in the discriminator. Specifically, we would like to show what input objects, and which parts of them, produce the highest intensity values for each neuron. To do that, for each neuron in the second to last convolutional layer of the discriminator, we iterate through all training objects and exhibit the ones activating the unit most strongly. We further use guided back-propagation [Springenberg et al., 2015] to visualize the parts that produce the activation.

[Figure 9: Objects and parts that activate specific neurons in the discriminator. For each neuron, we show five objects that activate it most strongly, with colors representing gradients of activations with respect to input voxels.]

Figure 9 shows the results. There are two main observations: first, for a single neuron, the objects producing the strongest activations have very similar shapes, showing the neuron is selective in terms of the overall object shape; second, the parts that activate the neuron, shown in red, are consistent across these objects, indicating the neuron is also learning semantic knowledge about object parts.
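The latent-space walks and shape arithmetic of Section 5.1 amount to simple vector operations in front of the generator; a minimal sketch (helper names are ours):

```python
import torch

def interpolate(G, z_a, z_b, steps=8):
    """Linear walk between two object vectors, as in Figure 6."""
    ts = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    zs = (1.0 - ts) * z_a + ts * z_b     # (steps, 200) latent vectors
    return G(zs)                         # one generated object per step

def arithmetic(G, z_base, z_direction):
    """Shape arithmetic as in Figure 8: add a semantic direction, e.g. an
    'arm' vector obtained as a difference of two object vectors."""
    return G((z_base + z_direction).view(1, -1))
```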
6 Conclusion

In this paper, we proposed 3D-GAN for 3D object generation, as well as 3D-VAE-GAN for learning an image to 3D model mapping. We demonstrated that our models are able to generate novel objects and to reconstruct 3D objects from images. We showed that the discriminator in GAN, learned without supervision, can be used as an informative feature representation for 3D objects, achieving impressive performance on shape classification. We also explored the latent space of object vectors, and presented results on object interpolation, shape arithmetic, and neuron visualization.

Acknowledgement

This work is supported by NSF grants #1212849 and #1447476, ONR MURI N00014-16-1-2007, the Center for Brain, Minds and Machines (NSF STC award CCF-1231216), Toyota Research Institute, Adobe, Shell, IARPA MICrONS, and a hardware donation from Nvidia.

References

Aayush Bansal, Bryan Russell, and Abhinav Gupta. Marr revisited: 2D-3D alignment via surface normal prediction. In CVPR, 2016.
Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH, 1999.
Wayne E Carlson. An algorithm and data structure for 3D object synthesis using surface patch intersections. In SIGGRAPH, 1982.
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
Siddhartha Chaudhuri, Evangelos Kalogerakis, Daphne Koller, and Vladlen Koltun. Probabilistic reasoning for assembly-based 3D modeling. ACM TOG, 30(4):35, 2011.
Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On visual similarity based 3D model retrieval. CGF, 2003.
Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In ECCV, 2016.
Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Haibin Huang, Evangelos Kalogerakis, and Benjamin Marlin. Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces. CGF, 34(5):25-38, 2015.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
Evangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun. A probabilistic model for component-based shape synthesis. ACM TOG, 31(4):55, 2012.
Abhishek Kar, Shubham Tulsiani, Joao Carreira, and Jitendra Malik. Category-specific object reconstruction from a single image. In CVPR, 2015.
Michael Kazhdan, Thomas Funkhouser, and Szymon Rusinkiewicz. Rotation invariant spherical harmonic representation of 3D shape descriptors. In SGP, 2003.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.
Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016.
Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J Guibas. Joint embeddings of shapes and images via CNN image purification. ACM TOG, 34(6):234, 2015.
Joseph J. Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing IKEA objects: Fine pose estimation. In ICCV, 2013.
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.
Daniel Maturana and Sebastian Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In IROS, 2015.
Charles R Qi, Hao Su, Matthias Niessner, Angela Dai, Mengyuan Yan, and Leonidas J Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In CVPR, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
Danilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3D structure from images. In NIPS, 2016.
Nima Sedaghat, Mohammadreza Zolfaghari, and Thomas Brox. Orientation-boosted voxel nets for 3D object recognition. arXiv preprint arXiv:1604.03351, 2016.
Abhishek Sharma, Oliver Grau, and Mario Fritz. VConv-DAE: Deep volumetric shape learning without object labels. arXiv preprint arXiv:1604.03755, 2016.
Baoguang Shi, Song Bai, Zhichao Zhou, and Xiang Bai. DeepPano: Deep panoramic representation for 3-D shape recognition. IEEE SPL, 22(12):2339-2343, 2015.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop, 2015.
Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In ICCV, 2015a.
Hao Su, Charles R Qi, Yangyan Li, and Leonidas Guibas. Render for CNN: Viewpoint estimation in images using CNNs trained with rendered 3D model views. In ICCV, 2015b.
Johan WH Tangelder and Remco C Veltkamp. A survey of content based 3D shape retrieval methods. Multimedia Tools and Applications, 39(3):441-471, 2008.
Oliver Van Kaick, Hao Zhang, Ghassan Hamarneh, and Daniel Cohen-Or. A survey on shape correspondence. CGF, 2011.
Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
Jiajun Wu, Tianfan Xue, Joseph J Lim, Yuandong Tian, Joshua B Tenenbaum, Antonio Torralba, and William T Freeman. Single image 3D interpreter network. In ECCV, 2016.
Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In CVPR, 2015.
Yu Xiang, Wongun Choi, Yuanqing Lin, and Silvio Savarese. Data-driven 3D voxel patterns for object category recognition. In CVPR, 2015.
Jianxiong Xiao, James Hays, Krista Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
Tianfan Xue, Jianzhuang Liu, and Xiaoou Tang. Example-based 3D object reconstruction from line drawings. In CVPR, 2012.
Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3D object reconstruction without 3D supervision. In NIPS, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016.
Learning Sparse Gaussian Graphical Models with Overlapping Blocks

Mohammad Javad Hosseini (Department of Computer Science & Engineering, University of Washington, Seattle), Su-In Lee (Department of Computer Science & Engineering and Department of Genome Sciences, University of Washington, Seattle)
{hosseini, suinlee}@cs.washington.edu

Abstract

We present a novel framework, called GRAB (GRaphical models with overlApping Blocks), to capture densely connected components in a network estimate. GRAB takes as input a data matrix of p variables and n samples and jointly learns both a network of the p variables and densely connected groups of variables (called "blocks"). GRAB has four major novelties as compared to existing network estimation methods: 1) It does not require blocks to be given a priori. 2) Blocks can overlap. 3) It can jointly learn a network structure and overlapping blocks. 4) It solves a joint optimization problem with the block coordinate descent method that is convex in each step. We show that GRAB reveals the underlying network structure substantially better than four state-of-the-art competitors on synthetic data. When applied to cancer gene expression data, GRAB outperforms its competitors in revealing known functional gene sets and potentially novel cancer driver genes.

1 Introduction

Many real-world networks contain subsets of variables densely connected to one another, a property called modularity (Fig 1A); however, standard network inference methods do not incorporate this property. As an example, biologists are increasingly interested in understanding how thousands of genes interact with each other on the basis of gene expression data that measure expression levels of p genes across n samples. This has stimulated considerable research into the structure estimation of a network from high-dimensional data (p ≫ n). It is well-known that the network structure corresponds to the non-zero pattern of the inverse covariance matrix, Σ⁻¹ [1]. Thus, obtaining a sparse estimate of Σ⁻¹ by using an ℓ1 penalty has been a standard approach to inferring a network, a method called graphical lasso [2]. However, applying an ℓ1 penalty to each edge fails to reflect the fact that genes involved in similar functions are more likely to be connected with each other, and that how genes are organized into functional modules is often not known.

We present a novel structural prior, called the GRAB prior, which encourages the network estimate to be dense within a block (i.e., a subset of variables) and sparse between blocks, where blocks are not given a priori. Fig 1B illustrates the effectiveness of the GRAB prior (bottom) in a high-dimensional setting (p = 200 and n = 100), where it is difficult to reveal the true underlying network by using the graphical lasso (GLasso) (top).

The major novelty of GRAB is four-fold. First, unlike previous work [3, 4, 5], GRAB allows each variable to belong to more than one block, which is an important property of many real-world networks. For example, genes important in disease processes are often involved in multiple functional modules [6], and identifying such genes would be of great scientific interest (Section 4.2). Although existing methods to learn non-overlapping blocks allow edges between different blocks, they use stronger regularization parameters for between-block edges, which decreases the power to detect variables associated with multiple blocks.
Second, GRAB jointly learns the network structure and the assignment of variables into overlapping blocks (Fig 2). Existing methods to incorporate blocks in network learning either use blocks given a priori or use a sequential approach to learn blocks and then learn a network given the blocks held fixed. Interestingly, the GRAB algorithm can be viewed as a generalization of the joint learning of the distance metric among p variables and graph-cut clustering of p variables into blocks (Section 3.4).

Third, GRAB solves a joint optimization problem with the block coordinate descent method that is convex in each step. This is a powerful feature that is difficult to achieve with existing methods to cluster variables into blocks. This property guarantees the convergence of the learning algorithm (Section 3).

Finally, the GRAB framework we present in this paper uses the Gaussian graphical model as a baseline model. However, the GRAB prior, formulated as $\operatorname{tr}(ZZ^{\top}|\Theta|)$ (Section 2.2), can be used in any kind of network model, such as pairwise Markov random fields.

In the following sections, we show that GRAB outperforms the graphical lasso [2] and existing methods to learn blocks and network estimates [3, 4] on synthetic data and cancer gene expression data. We also demonstrate GRAB's potential to identify novel genes that drive cancer.

[Figure 1: (A) A network with overlapping blocks (top) and its adjacency matrix (bottom). (B) Network estimates of GLasso (top) and GRAB (bottom) in a toy example.]

[Figure 2: The GRAB framework, an iterative algorithm that jointly learns Θ and Z: the Θ-step uses regularization parameters based on Z, and the Z-step uses |Θ| as the similarity between the ith and jth variables, with the sparsity pattern encouraged by ZZ^T.]

2 GGM with Overlapping Blocks

2.1 Background: High-Dimensional Gaussian Graphical Model (GGM)

We aim to learn a GGM of p variables on the basis of n observations (p ≫ n). That is, suppose that X^(1), ..., X^(n) are i.i.d. N(μ, Σ), where μ ∈ R^p and Σ is a p × p positive definite matrix. It is well known that the sparsity pattern of Σ⁻¹ determines the conditional independence structure of the p variables; there is an edge between the ith and jth variables if and only if the (i, j) element of Σ⁻¹ is non-zero [1]. A number of authors have proposed to estimate Σ⁻¹ using the graphical lasso [2, 7, 8]:

$$\underset{\Theta \succ 0}{\text{maximize}} \;\; \log\det\Theta - \operatorname{tr}(S\Theta) - \lambda\|\Theta\|_1, \qquad (1)$$

where the solution $\hat{\Theta}$ is an estimate of Σ⁻¹, S denotes the empirical covariance matrix, and λ is a nonnegative tuning parameter that controls the strength of the ℓ1 penalty applied to the elements of Θ. This amounts to maximizing a penalized log-likelihood.
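For reference, the graphical lasso baseline of Eq. (1) is available off the shelf; a small illustration using scikit-learn (our tooling choice, not the paper's):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))         # n = 100 samples of p = 30 variables

model = GraphicalLasso(alpha=0.1).fit(X)   # alpha plays the role of lambda
Theta_hat = model.precision_               # sparse estimate of the inverse covariance
edges = np.abs(Theta_hat) > 1e-6           # network: off-diagonal non-zeros
np.fill_diagonal(edges, False)
print(f"{edges.sum() // 2} edges recovered")
```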
Then, the (i, j) element (ZZ| )ij = ?K k=1 Zik Zjk (the dot product of Zi and Zj ) represents the similarity between variables Xi and Xj in their embeddings. To more clearly understand the impact of the GRAB prior on the sparsity structure of ?, let us assume a hard assignment model in which we assign variables to blocks. Then, Z becomes a binary matrix and the sparsity pattern of ZZ| would indicate the region covered by all K blocks (Fig 2A-B). Then, jointly learning Z and ? to increase ?i,j (ZZ| )ij |?ij | would encourage ? to have a sparsity structure imposed by (ZZ| ). In the continuous case, it would encourage |?ij | to be non-zero when Xi and Xj have similar embeddings (i.e., a dot product of Zi and Zj is large). Incorporating the GRAB prior into Eq (1) as a structural prior leads to: ? ? maximize log det ? tr(S?) k?k1 tr ZZ| |?| , ??0,Z2D where is a non-negative tuning parameter. We can re-write Eq (2) as: ? X ? maximize log det ? tr(S?) 1 (ZZ| )ij |?ij |. ??0,Z2D i,j (2) (3) We use the value of the sparsity tuning parameter 1 (ZZ| )ij for each (i, j) element ?ij . A network edge that corresponds to two variables with similar embeddings would be penalized less. p?K The set D ? [ 1, 1] contains matrices Z satisfying the following constraints: (a) kZi k2 ? 1, where Zi denotes the ith row of Z. This constraint ensures the regularization parameters of all (i, j) pairs of variables are non-negative. (b) kZkF ? . In addition to the variable specific constraint on each Zi in (a), we need a global constraint on Z to prevent all regularization parameters from becoming zero (8i, j : (ZZ T )ij = 1). (c) kZk2 ? ? , where k.k2 of a matrix is its maximum singular value. This constraint prevents the case where all variables are assigned to one block. There are two hyperparameters, and ? ; however we describe below that we set ? = pK and that has an effect to guarantee p that there are at least K non-empty blocks. In our experiments, we set the hyper-parameter = p2 , which, intuitively, would allow each variable to get on average half of Pp its largest possible squared norm. Given that kZk2F = i=1 i2 where i is the ith singular value Pp of Z, from the constraint (b), i=1 i2 ? 2 . We set ? = pK , where ? means the upper bound of the maximum singular value, given the constraint (c). This means that there would be at least K non-empty blocks given that the constraint (b) is tight. We show in Section 3 that this choice of hyperparameters makes our learning algorithm simpler (see Lemma 3.2). 2.3 Probabilistic Interpretation The joint distribution over X, ? and Z is as: P (X, ?, Z) = P (X|?)P (?|Z)P (Z). The first two terms, log det(?) trace(S?), in Eq (3) correspond to log P (X|?), the log-likelihood of GGM given a particular parameter ? (i.e., an estimate of ? 1 ), as described in Section 2.1. For ? ? 0, Q P (?|Z) = P (?ij |Z), where P (?ij |Z) represents a conditional probability over ?ij given the block assignment scores of Xi and Xj . We useQthe Laplacian prior with the sparsity parameter value 1 (1 (ZZ| )ij ). For ? ? 0, P (?|Z) is: D (ZZ| )ij ))|?ij |), where D is the (i,j) exp( ( (1 normalization constant. The prior probability P (Z) is proportional to D. 2.4 Related Work To our knowledge, GRAB is the first attempt to jointly learn the overlapping blocks and the structure of a conditional dependence network such as a GGM. 
Related work consists of 3 categories: 3 1) Learning blocks with a network held fixed: This category includes (a) stochastic block model (SBM) [9], (b) spectral clustering [10], and (c) a screening rule to identify non-overlapping blocks based on the empirical covariance matrix [11]. 2) Learning a network with blocks given a priori and held fixed: This category includes a) a method to solve graphical lasso with group `1 penalty to encourage group sparsity of edges within pairs of blocks [12], and b) an efficient learning algorithm for GGMs given a set of overlapping blocks [13]. 3) Learning non-overlapping blocks first and then the network given the blocks: (a) Marlin et al. (2009) extend the prior work [12] to identify non-overlapping blocks which are then used to learn a network [3]. (b) Another method assigns each variable to one block, and use different regularization parameters for within-block and between-block edges [14]. (c) Tan et al. (2015) propose to use hierarchical clustering (complete-linkage and average-linkage) to cluster variables into non-overlapping blocks, and apply graphical lasso to each block [4]. 3 GRAB Learning Algorithm 3.1 Overview Our learning algorithm jointly learns the block assignment scores Z and the network estimate ? by solving Eq (2). We adopt the block coordinate descent (BCD) method to iteratively learn Z and ?. Our learning algorithm essentially performs adaptive distance (similarity) metric learning and clustering of variables into blocks simultaneously (Section 3.4). Given the current assignment of variables into blocks, Z, we learn a network among variables, ?. Then, |?| is used as a similarity matrix among variables to update the assignment of variables to blocks, Z. We iterate until convergence. Convergence is theoretically guaranteed. Since our objective function is continuous on a compact level set, based on Theorem 4.1 in [15], the solution sequence of our method is defined and bounded. Every coordinate block found by the ?-step and Z-step is a stationary point of GRAB. We indeed observed the value of the objective function monotonically increases until convergence. In the following, we show that the BCD method will be convex in each step. We first re-write Eq (2) with all the constraints explicitly: maximize log det ? ??0,Z ? tr(S?) k?k1 tr ZZ| |?| ? subject to kZk2 ? ?, kZi k2 ? 1, kZkF ? , (i 2 {1, . . . p})). Now, we state the following lemma, the proof of which can be found in the Appendix. Lemma 3.1 Eq (4) is equivalent to the following: maximize log det ? ??0,W?0 tr(S?) subject to rank(W) ? K, W ? k?k1 tr W|?| ? ? 2 I, diag(W) ? 1, tr(W) ? (4) (5) 2 , where W is a p ? p matrix, K means the number of blocks, and I is the identity matrix of size p.1 Corollary 3.1.1 Suppose that (?? , W? ) is the optimal solution of the optimization problem (5). p ? ? Then, ? , Z = U D is the optimal solution of problem 4, where U 2 Rp?K is a matrix with columns containing K eigenvectors of W corresponding to the largest eigenvalues and D is a diagonal matrix of the corresponding eigenvalues. 3.2 Learning ? (?-step) To estimate ? given Z, based on Eq (3), we solve the following problem: P maximize log det ? tr(S?) (i,j) ?ij |?ij |, ??0 | (ZZ )ij ). (6) where ?ij = (1 This is the graphical lasso with edge-specific regularization parameters ?ij . Eq (6) is a convex problem and we solve it by adopting a standard solver for graphical lasso [16]. 
¹ In this paper, we assume diag is an operator that maps a vector to a diagonal matrix with the vector as its diagonal, and maps a matrix to a vector containing its diagonal.

3.3 Learning Z (Z-step)

Here we describe how to learn Z given Θ. Instead of solving (4), we solve (5), because (5) is a convex optimization problem with respect to W. Interestingly, we can remove the rank constraint rank(W) ≤ K; in Lemma 3.2, we show that with the choice τ = β/√K, the rank constraint is automatically satisfied. This leads to the following optimization problem:

  max_{W ⪰ 0}  tr(W|Θ|)   (7)
  subject to  W ⪯ τ²I,  diag(W) ≤ 1,  tr(W) ≤ β².

This W-step is a semi-definite programming problem. We solve the dual of Eq (7), which leads to an efficient optimization problem.² We introduce three dual variables: 1) a matrix Y ⪰ 0 for the spectral-norm constraint, 2) a vector v ∈ R₊^p for the constraints on the diagonal, and 3) a scalar y ≥ 0 for the constraint on the trace. The Lagrangian is:

  L(W, Y, v, y) = tr(W|Θ|) + tr((τ²I − W)Y) + y(β² − tr(W)) + vᵀ(1 − diag(W)).   (8)

The dual function is:

  sup_{W ⪰ 0} tr(W|Θ|) + tr((τ²I − W)Y) + y(β² − tr(W)) + vᵀ(1 − diag(W))
    = sup_{W ⪰ 0} tr(W(|Θ| − Y − yI − diag(v))) + τ² tr(Y) + yβ² + vᵀ1
    = τ² tr(Y) + yβ² + vᵀ1   if Y ⪰ |Θ| − yI − diag(v),
      +∞                     otherwise.   (9)

Consequently, we get the following dual problem for Eq (7):

  min_{Y, y, v}  τ² tr(Y) + yβ² + vᵀ1   (10)
  subject to  Y ⪰ |Θ| − yI − diag(v),  Y ⪰ 0,  y ≥ 0,  v ≥ 0.

Eq (10) has a closed-form solution in Y and y when v is fixed. The dual problem then boils down to:

  min_{v ≥ 0}  g(v) = τ² Σ_{i=1}^K (C)_{+,i} + vᵀ1,   (11)

where we have replaced τ² with β²/K (because τ = β/√K). We define C = |Θ| − diag(v) and assume it has eigenvalues (σ₁, ..., σ_p) in descending order, with (C)_{+,i} = max(0, σ_i). We solve Eq (11) by the projected subgradient descent method, where the subgradient direction is:

  ∇_v g(v) = −τ² diag(U_C 1_K(D_C) U_Cᵀ) + 1.   (12)

D_C is the diagonal matrix of the eigenvalues in descending order and U_C is the matrix containing the orthonormal eigenvectors of C as its columns. We define 1_K(D_C) as a binary diagonal matrix of size p whose jth diagonal element equals 1 if and only if j ≤ K and σ_j > 0. After finding the optimal v*, the optimal solution W* can be obtained by:

  W* = argmax_{W ⪰ 0}  tr(W(|Θ| − diag(v*)))   (13)
  subject to  W ⪯ τ²I,  tr(W) ≤ β².

One can see that the solution of problem (13) is W* = τ² U_{C*} 1_{β²/τ²}(D_{C*}) U_{C*}ᵀ = τ² U_{C*} 1_K(D_{C*}) U_{C*}ᵀ, where C*, U_{C*}, D_{C*} and 1_K(D_{C*}) are defined analogously to (12). By definition, 1_K(·) is a diagonal matrix with at most K nonzero elements. Therefore, W* has rank at most K, which means that we do not need the rank constraint on W. This leads to the following lemma.

Lemma 3.2 If we set τ = β/√K in (5), the constraint rank(W) ≤ K is automatically satisfied.

Finally, we construct Z* = U√D as instructed in Corollary 3.1.1. Note that in the intermediate iterations we do not need to compute Z; we only need to construct the matrix Z* to find the overlapping blocks after the learning algorithm converges.³

² The primal problem has a strictly feasible solution εI, where ε is a small number and I is the identity matrix; therefore strong duality holds.
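A minimal NumPy sketch of the Z-step under the choice τ = β/√K follows; helper names and the fixed iteration budget are our assumptions, and the paper's stopping rule on the relative objective change would replace the fixed loop in practice.

```python
import numpy as np

def z_step(absTheta, K, beta, tau, n_iter=200):
    """Projected subgradient descent on the dual (Eq 11), then recover W*, Z*."""
    p = absTheta.shape[0]

    def top_k_eig(C):
        evals, U = np.linalg.eigh(C)               # ascending order
        order = np.argsort(evals)[::-1]            # make it descending
        evals, U = evals[order], U[:, order]
        top = (np.arange(p) < K) & (evals > 0)     # the indicator 1_K(D_C)
        return evals, U, top

    v = np.zeros(p)
    for t in range(1, n_iter + 1):
        _, U, top = top_k_eig(absTheta - np.diag(v))
        # Eq (12): grad = -tau^2 diag(U 1_K U^T) + 1
        grad = -tau**2 * np.einsum('ij,j,ij->i', U, top.astype(float), U) + 1.0
        v = np.maximum(v - grad / np.sqrt(t), 0.0)  # step 1/sqrt(t), keep v >= 0

    # Eq (13): W* = tau^2 U_C* 1_K(D_C*) U_C*^T
    _, U, top = top_k_eig(absTheta - np.diag(v))
    W = tau**2 * (U[:, top] @ U[:, top].T)
    # Corollary 3.1.1: Z* = U sqrt(D) from the top-K eigenpairs of W*
    w_evals, w_U = np.linalg.eigh(W)
    order = np.argsort(w_evals)[::-1][:K]
    return w_U[:, order] * np.sqrt(np.maximum(w_evals[order], 0.0))
```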
³ The source code is available at: http://suinlee.cs.washington.edu/software/grab

3.4 A special case: K-way graph cut algorithm

Here, we show that the GRAB algorithm generalizes the K-way graph cut algorithm in two ways: 1) GRAB allows each variable to be in multiple blocks with soft membership; and 2) GRAB updates a network structure Θ, used as a similarity matrix, in each iteration. The proof is in the Appendix.

Lemma 3.3 Say that we use a binary matrix Z (hard assignment) with the following constraints: a) for all variables i, ‖Z_i‖₂ ≤ 1, where Z_i denotes the ith row of Z; b) for all blocks k, ‖Z^k‖₂ ≥ 1, where Z^k denotes the kth column of Z. This means that each variable can belong to only one block (i.e., non-overlapping blocks) and each block has at least one variable. Then GRAB is equivalent to iterating between a K-way graph cut on |Θ| to find Z and solving the graphical lasso problem to find Θ.

4 Experimental Results

We present results on synthetically generated data and real data.

Comparison. Three state-of-the-art competitors are considered: UGL1, unknown group ℓ₁ regularization [3]; CGL, cluster graphical lasso [4]; and GLasso, standard graphical lasso [2]. CGL has two variants depending on the type of hierarchical clustering used: average linkage clustering (CGL:ALC) and complete linkage clustering (CGL:CLC). Each method selects the regularization parameter using the standard cross-validation (or held-out validation) procedure. CGL and UGL1 have their own ways of selecting the number of blocks K [4, 3]. GRAB selects K based on the validation-set log-likelihood at initialization.

We initialize GRAB by constructing the Z matrix. We first perform spectral clustering on |S|, where S denotes the empirical covariance matrix, then add overlap by assigning a random subset of variables to the clusters with the highest average correlation. We then project the Z matrix onto the convex set defined in Section 2.2 and form W = ZZᵀ (an initialization sketch is given below). In the Z-step of the GRAB learning algorithm, we use step size 1/√t, where t is the iteration number, and iterate until the relative change in the objective function is less than 10⁻⁶ (Section 3.3). We use the warm-start technique between the BCD iterations.

Evaluation criteria. In the synthetic data experiments (Section 4.1), we evaluate each method based on the learned network, with the optimal regularization parameter chosen for each method based only on the training set. For the AML dataset (Section 4.2), we evaluate the learned blocks over varying regularization parameters (x-axis) to better illustrate the differences among the methods in terms of their performance. In all experiments, we standardize the data and show the average results over 10 runs, with the standard deviations as error bars.

4.1 Synthetic Data Experiments

Data generation. We first generate κ overlapping blocks forming a chain, a random tree, or a lattice. In each case, two neighboring blocks overlap each other by o (the ratio of the variables shared between two overlapping blocks). Then, we randomly generate a true underlying network of p variables with density 20% and convert it to a precision matrix following the procedure of [17]. We generate 100 training samples and 50 validation samples from the multivariate Gaussian distribution with mean zero and covariance matrix equal to the inverse of the precision matrix. We consider a varying number of true blocks κ ∈ {9, 25, 49} with overlap ratio o = .25. For κ = 25, we also consider o ∈ {.1, .25, .4}.
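Returning to the initialization described under "Comparison" above, here is a minimal sketch using scikit-learn's spectral clustering; the overlap fraction, seed handling, and helper names are our assumptions, and `project_to_D` is the projection sketch from Section 2.2.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def init_Z(S, K, beta, tau, overlap=0.1, seed=0):
    """Initialize Z: spectral clustering on |S|, add random overlap, project onto D."""
    rng = np.random.default_rng(seed)
    p = S.shape[0]
    labels = SpectralClustering(n_clusters=K, affinity='precomputed',
                                random_state=seed).fit_predict(np.abs(S))
    Z = np.zeros((p, K))
    Z[np.arange(p), labels] = 1.0
    # assign a random subset of variables to the cluster with which they have
    # the highest average absolute correlation (besides their own cluster)
    for i in rng.choice(p, size=int(overlap * p), replace=False):
        score = np.full(K, -np.inf)
        for k in range(K):
            members = (labels == k) & (np.arange(p) != i)
            if members.any():
                score[k] = np.abs(S[i, members]).mean()
        score[labels[i]] = -np.inf
        Z[i, int(np.argmax(score))] = 1.0
    return project_to_D(Z, beta, tau)
```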
We vary the number of variables p ∈ {400, 800} for the lattice-structured blocks. The results on the chain and random-tree blocks are similar, so we provide only the results for p = 400 for these block structures. For all methods, we considered regularization parameters λ ∈ [.02, .4] with step size .02.

Results. Fig 3 compares the five methods when the regularization parameter is selected for each method based on the 50 validation samples. Each of the four plots corresponds to a different block structure or number of variables. Each bar group corresponds to a particular (κ, o, μ), for which we computed the modularity measure μ as the fraction of edges that fall within groups minus the expected fraction if edges were distributed at random, as was done by [18].

[Figure 3 panels: Lattice (p = 400, n = 100), Lattice (p = 800, n = 100), Chain (p = 400, n = 100), and Random (p = 400, n = 100); bar groups are labeled by the number of blocks (κ between 9 and 50), overlap ratio o ∈ {.1, .25, .4}, and modularity μ (between 0.85 and 0.97).]

Figure 3: Comparison based on average network-recovery F1 on synthetic data from lattice blocks, when p = 400 (first panel) and p = 800 (second panel), chain blocks (third panel), and random blocks (fourth panel) when p = 400. Each bar group corresponds to a particular (number of blocks κ, overlap ratio o, modularity μ).

Fig 3 shows how accurately each method recovers the true network. For each method m, we compared the learned edge set E_m with the edge set E of the underlying network. By comparing E_m and E, we can compute the precision and recall of network recovery. Since it is not enough to achieve only high precision or only high recall, we use the F1 score (F-measure) = 2·pr·rec/(pr + rec) as an evaluation metric (see the sketch below). A number of authors have shown that identifying the underlying network structure is very challenging in the high-dimensional setting, resulting in low accuracies even on synthetic data [14, 19, 4]. Our results also show that the F1 scores for the network are lower than 0.40. Despite that, GRAB identifies network edges much more accurately than its competitors.

4.2 Cancer Gene Expression Data

We consider the MILE data [20], which measure the mRNA expression levels of 16,853 genes in 541 patients with acute myeloid leukemia (AML), an aggressive blood cancer. For a better visualization of the network in limited space (Fig 5), we selected 500 genes⁴, consisting of the 488 highest-varying genes in MILE and 12 genes highly associated with AML: FLT3, NPM1, CEBPA, KIT, N-RAS, MLL, WT1, IDH1/2, TET2, DNMT3A, and ASXL1. These genes were identified by [21], in a large study on 1,185 patients with AML, as significantly mutated in these AML patients, and they are well known to play a significant role in driving AML. Here, we evaluate GRAB and the other methods qualitatively in terms of how useful each method is for cancer biologists to make discoveries from data. For that, we fix the number of blocks to K = 10 across all methods, such that we get an average of over 50 variables per block, which is considered close to the average number of genes in known pathways [22]. We varied K and obtained similar results. Genes in the same block are likely to share similar functions.
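For concreteness, a minimal sketch of the network-recovery F1 used in Section 4.1, assuming NumPy and treating any off-diagonal entry whose magnitude exceeds a small threshold as a recovered edge:

```python
import numpy as np

def edge_recovery_f1(Theta_hat, Theta_true, thresh=1e-8):
    """Precision, recall, and F1 of the recovered off-diagonal edge set."""
    p = Theta_hat.shape[0]
    iu = np.triu_indices(p, k=1)                 # count each undirected edge once
    pred = np.abs(Theta_hat[iu]) > thresh
    true = np.abs(Theta_true[iu]) > thresh
    tp = np.sum(pred & true)
    prec = tp / max(pred.sum(), 1)
    rec = tp / max(true.sum(), 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-12)
    return prec, rec, f1
```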
Statistical significance of the overlap between gene clusters (here, blocks) and known functional gene sets has been widely used as an evaluation criterion [23, 5]. We first show how to obtain blocks from the learned Z.

Obtaining blocks from Z. After the GRAB algorithm converges, we obtain a network estimate Θ and a block membership matrix Z. We find K overlapping blocks satisfying two constraints: a) the maximum total number of assignments is C; and b) each variable is assigned to at least 1 block. Here, we used C = 1.3p. We perform the following greedy procedure (see the code sketch at the end of this subsection): 1) We first run the k-means clustering⁵ algorithm on the p rows of the matrix Z. 2) We compute the similarity of variable i to block B_k as (1/|B_k|) Σ_{j∈B_k} (ZZᵀ)_ij, where |B_k| is the number of variables in B_k. Then, we add overlap by assigning C − p variables to the blocks with the highest similarity.

To evaluate the blocks, we used 4,722 curated gene sets from the molecular signatures database [24] and computed a p-value to measure the significance of the overlap between each block and each gene set. We consider (block, gene set) pairs with false discovery rate (FDR)-corrected p < 0.05 to be significantly overlapping pairs. When a block significantly overlaps with a gene set, we consider the gene set to be revealed by the corresponding block. We compare GRAB with the methods introduced in Section 4.1. Since we only need the blocks for this experiment, we added two more competitors: the k-means and spectral clustering methods applied to |S|, where S denotes the empirical covariance matrix.

Fig 4 shows the number of gene sets that are revealed by any block (FDR-corrected p < 0.05) for each method. GRAB significantly outperforms the others, which indicates the importance of learning overlapping blocks; GRAB's overlapping blocks reveal the known functional organization of genes better than the other methods. Fig 4 shows the average results of 10 random initializations. Fig 5 compares the networks Θ learned by GLasso (A) and GRAB (B) when the regularization parameters are set such that the networks show a similar level of sparsity. For GRAB, we removed the between-block edges and reordered the genes such that genes in the same blocks tend to appear next to each other. GRAB shows a more interpretable network structure, highlighting the genes that belong to multiple blocks.

The key innovation of GRAB is to allow for overlap between blocks. Interestingly, the 12 well-known AML genes are significantly enriched among the genes assigned to 3 or more blocks: FLT3, NPM1, TET2 and DNMT3A belong to 3 blocks, while there are only 24 such genes out of the 500 (p-value: 0.001) (Fig 5B). This supports our claim that variables assigned to multiple blocks are likely important. Out of the 24 genes assigned to 3 blocks, 12 are known to be involved in myeloid differentiation (the process impaired in AML) or in other types of cancer. This can lead to new discoveries of the genes that drive AML. These genes include CCNA1, which has been shown to be significantly differentially expressed in the AML of some patients [25]. TSPAN7 is expressed in acute myelocytic leukemia patients⁶. Several genes are associated with other types of cancer: for example, CCL20 is associated with pancreatic cancer [26], ELOVL7 is involved in prostate cancer growth [27], and SCRN1 is a novel marker for prognosis in colorectal cancer [28]. These genes, assigned to many blocks and implicated in other cancers or leukemias, can lead to the discovery of novel AML driver genes.

⁴ GRAB runs for 0.5–1.5 hours for 500 genes and up to 20 hours for 2,000 genes on a computer with a 2.5 GHz Intel Core i5 processor.
⁵ This resembles spectral clustering (equivalently, k-means on eigenvectors of the Laplacian matrix).
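A sketch of the greedy block-extraction procedure described above, assuming NumPy and scikit-learn; note that the overlap step here is a static greedy pass (block similarities are not recomputed as variables are added), which is a simplification.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_blocks(Z, K, C=None, seed=0):
    """Greedy extraction of K overlapping blocks from the learned Z."""
    p = Z.shape[0]
    C = int(1.3 * p) if C is None else C                 # C = 1.3p assignments
    labels = KMeans(n_clusters=K, random_state=seed).fit_predict(Z)
    blocks = [set(np.flatnonzero(labels == k)) for k in range(K)]
    sim = Z @ Z.T                                        # (Z Z^T)_ij similarities
    # similarity of variable i to block B_k: (1/|B_k|) sum_{j in B_k} (ZZ^T)_ij
    scores = np.full((p, K), -np.inf)
    for k, B in enumerate(blocks):
        if B:
            scores[:, k] = sim[:, list(B)].mean(axis=1)
    scores[np.arange(p), labels] = -np.inf               # skip existing assignments
    # add the C - p highest-scoring (variable, block) pairs as overlap
    for f in np.argsort(scores, axis=None)[::-1][:max(C - p, 0)]:
        i, k = np.unravel_index(f, scores.shape)
        blocks[k].add(int(i))
    return blocks
```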
[Figure 4 and Figure 5 appear here; Figure 5 highlights the genes NPM1, DNMT3A, TET2, and FLT3.]

Figure 4: Average number of gene sets highly associated with blocks at a varying regularization parameter. The cross-validation results are consistent with these results.

Figure 5: Learned networks of (A) GLasso and (B) GRAB. For GRAB, we have sorted the genes based on the blocks and highlighted the following 4 genes (out of the 12 genes highly associated with AML) that belong to many blocks: NPM1, FLT3, DNMT3A and TET2.

5 Discussion and Future Work

We present a novel general framework, called GRAB, that can explicitly model densely connected network components that can overlap with each other in a graphical model. The novel GRAB structural prior encourages the network estimate to be dense within each block (i.e., a densely connected group of variables) and sparse between variables in different blocks. The GRAB learning algorithm adopts BCD and is convex in each step. We demonstrate the effectiveness of our framework on synthetic data and a cancer gene expression dataset. Our framework is general and can be applied to other kinds of graphical models, such as pairwise Markov random fields.

Acknowledgements: We give warm thanks to Reza Eghbali and Amin Jalali for many useful discussions. This work was supported by the National Science Foundation grant DBI-1355899 and the American Cancer Society Research Scholar Award 127332-RSG-15-097-01-TBG.

⁶ http://www.genecards.org/cgi-bin/carddisp.pl?gene=TSPAN7

References

[1] S. L. Lauritzen. Graphical Models. Oxford Science Publications, 1996.
[2] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9:432–441, 2007.
[3] B. M. Marlin and K. P. Murphy. Sparse Gaussian graphical models with unknown block structure. pages 705–712, 2009.
[4] K. M. Tan, D. Witten, and A. Shojaie. The cluster graphical lasso for improved estimation of Gaussian graphical models. Computational Statistics & Data Analysis, 85:23–36, 2015.
[5] S. Celik, B. A. Logsdon, and S.-I. Lee. Efficient dimensionality reduction for high-dimensional network estimation. ICML, 2014.
[6] A. Lasorella, R. Benezra, and A. Iavarone. The ID proteins: master regulators of cancer stem cells and tumour aggressiveness. Nature Reviews Cancer, 14(2):77–91, 2014.
[7] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(10):19–35, 2007.
[8] A. Rothman, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
[9] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: Some first steps. Social Networks, 5(2):109–137, 1983.
[10] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[11] D. M. Witten, J. H. Friedman, and N. Simon. New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4):892–900, 2011.
[12] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. UAI, 2008.
[13] M. Grechkin, M. Fazel, D. Witten, and S.-I. Lee. Pathway graphical lasso. 2015.
[14] C. Ambroise, J. Chiquet, and C. Matias. Inferring sparse Gaussian graphical models with latent structure. Electron. J. Statist., 3:205–238, 2009.
[15] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494, 2001.
[16] C.-J. Hsieh, I. S. Dhillon, P. K. Ravikumar, and M. A. Sustik. Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems, pages 2330–2338, 2011.
[17] Q. Liu and A. T. Ihler. Learning scale free networks by reweighted l1 regularization. In International Conference on Artificial Intelligence and Statistics, pages 40–48, 2011.
[18] M. E. J. Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
[19] K. Mohan, M. Chung, S. Han, D. Witten, S.-I. Lee, and M. Fazel. Structured learning of Gaussian graphical models. In NIPS, pages 620–628, 2012.
[20] T. Haferlach, A. Kohlmann, L. Wieczorek, et al. Clinical utility of microarray-based gene expression profiling in the diagnosis and subclassification of leukemia. Journal of Clinical Oncology, 28(15):2529–2537, 2010.
[21] Y. Shen, Y.-M. Zhu, X. Fan, et al. Gene mutation patterns and their prognostic impact in a cohort of 1185 patients with acute myeloid leukemia. Blood, 118(20):5593–5603, 2011.
[22] E. Segal, M. Shapira, A. Regev, D. Pe'er, D. Botstein, D. Koller, and N. Friedman. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166–176, 2003.
[23] S.-I. Lee and S. Batzoglou. ICA-based clustering of genes from microarray expression data. In Advances in Neural Information Processing Systems, volume 16, 2003.
[24] A. Liberzon, A. Subramanian, R. Pinchback, H. Thorvaldsdóttir, P. Tamayo, and J. P. Mesirov. Molecular signatures database (MSigDB) 3.0. Bioinformatics, 27(12):1739–1740, 2011.
[25] Y. Fang, L. N. Xie, X. M. Liu, et al. Dysregulated module approach identifies disrupted genes and pathways associated with acute myelocytic leukemia. Eur Rev Med Pharmacol Sci, 19(24):4811–4826, 2015.
[26] C. Rubie, V. O. Frick, P. Ghadjar, et al. CCL20/CCR6 expression profile in pancreatic cancer. 2010.
[27] K. Tamura, A. Makino, et al. Novel lipogenic enzyme ELOVL7 is involved in prostate cancer growth through saturated long-chain fatty acid metabolism. Cancer Research, 69(20):8133–8140, 2009.
[28] N. Miyoshi, H. Ishii, K. Mimori, et al. SCRN1 is a novel marker for prognosis in colorectal cancer. Journal of Surgical Oncology, 101(2):156–159, 2010.
5,633
6,098
Discriminative Gaifman Models

Mathias Niepert
NEC Labs Europe
Heidelberg, Germany
mathias.niepert@neclabs.eu

Abstract

We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations, which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.

1 Introduction

Knowledge bases are attracting considerable interest both from industry and academia [2, 6, 15, 10]. Instances of knowledge bases are the web graph, social and citation networks, and multi-relational knowledge graphs such as Freebase [2] and YAGO [11]. Large knowledge bases motivate the development of scalable machine learning models that can reason about objects as well as their properties and relationships. Research in statistical relational learning (SRL) has focused on particular formalisms such as Markov logic [22] and ProbLog [8] and is often concerned with improving the efficiency of inference and learning [14, 28]. The scalability problems of these statistical relational languages, however, remain an obstacle and have prevented a wider adoption. Another line of work focuses on efficient relational machine learning models that perform well on a particular task such as knowledge base completion and relation extraction. Examples are knowledge base factorization and embedding approaches [5, 21, 23, 26] and random-walk based ML models [15, 10]. We aim to advance the state of the art in relational machine learning by developing efficient models that learn knowledge base embeddings that are effective for probabilistic query answering on the one hand, and interpretable and widely applicable on the other.

Gaifman's locality theorem [9] is a result in the area of finite model theory [16]. The Gaifman graph of a knowledge base is the undirected graph whose nodes correspond to objects and in which two nodes are connected if the corresponding objects co-occur as arguments of some relation. Gaifman's locality theorem states that every first-order sentence is equivalent to a Boolean combination of sentences whose quantifiers range over local neighborhoods of the Gaifman graph. With this paper, we aim to explore Gaifman locality from a machine learning perspective. If every first-order sentence is equivalent to a Boolean combination of sentences whose quantifiers range over local neighborhoods only, we ought to be able to develop models that learn effective representations from these local neighborhoods. There is increasing evidence that learning representations that are built up from local structures can be highly successful. Convolutional neural networks, for instance, learn features over locally connected regions of images. The aim of this work is to investigate the effectiveness and efficiency of machine learning models that perform learning and inference within and across
The aim of this work is to investigate the effectiveness and efficiency of machine learning models that perform learning and inference within and across 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. locally connected regions of knowledge bases. This is achieved by combining relational features that are often used in statistical relatinal learning with novel ideas from the area of deep learning. The following problem motivates Gaifman models. Problem 1. Given a knowledge base (relational structure, mega-example, knowledge graph) or a collection of knowledge bases, learn a relational machine learning model that supports complex relational queries. The model learns a probability for each tuple in the query answer. Note that this is a more general problem than knowledge base completion since it includes the learning of a probability distribution for a complex relational query. The query corresponding to knowledge base completion is r(x, y) for logical variables x and y, and relation r. The problem also touches on the problem of open-world probabilistic KBs [7] since tuples whose prior probability is zero will often have a non-zero probability in the query answer. 2 Background We first review some important concepts and notation in first-order logic. 2.1 Relational First-order Logic An atom r(t1 , ..., tn ) consists of predicate r of arity n followed by n arguments, which are either elements from a finite domain D = {a, b, ...} or logical variables {x, y, ...}. We us the terms domain element and object synonymously. A ground atom is an atom without logical variables. Formulas are built from atoms using the usual Boolean connectives and existential and universal quantification. A free variable in a first-order formula is a variable x not in the scope of a quantifier. We write ?(x, y) to denote that x, y are free in ?, and free(?) to refer to the free variables of ?. A substitution replaces all occurrences of logical variable x by t in some formula ? and is denoted by ?[x/t]. A vocabulary consists of a finite set of predicates R and a domain D. Every predicate r is associated with a positive integer called the arity of r. A R-structure (or knowledge base) D consists of the domain D, a set of predicates R, and an interpretation. The Herbrand base of D is the set of all ground atoms that can be constructed from R and D. The interpretation assigns a truth value to every atom in the Herbrand base by specifying rD ? Dn for each n-ary predicate r ? R. For a formula ?(x1 , ..., xn ) and a structure D, we write D |= ?(d1 , ..., dn ) to say that D satisfies ? if the variables x1 , ..., xn are substituted with the domain elements d1 , ...., dn . We define ?(D) := {(d1 , ..., dn ) ? Dn | D |= ?(d1 , ..., dn )}. For the R-structure D and C ? D, hCiD denotes the substructure induced by C on D, that is, the R-structure C with domain C and rC := rD ? Cn for every n-ary r ? R. 2.2 Gaifman?s Locality Theorem The Gaifman graph of a R-structure D is the graph GD with vertex set D and an edge between two vertices d, d0 ? D if and only if there exists an r ? R and a tuple (d1 , ..., dk ) ? rD such that d, d0 ? {d1 , ..., dk }. Figure 1 depicts a fragment of a knowledge base and the corresponding Gaifman graph. The distance dD (d1 , d2 ) between two elements d1 , d2 ? D of a structure D is the length of the shortest path in GD connecting d1 and d2 . For r ? 1 and d ? D, we define the r-neighborhood of d to be Nr (d) := {x ? D | dD (d, x) ? r}. 
We refer to r also as the depth of the neighborhood. Let d = (d₁, ..., dₙ) ∈ Dⁿ. The r-neighborhood of d is defined as

  N_r(d) = ⋃_{i=1}^n N_r(dᵢ).

For the Gaifman graph in Figure 1, we have that N₁(d₄) = {d₁, d₂, d₅} and N₁((d₁, d₂)) = {d₁, ..., d₆}. φ^{N_r(x)} is the formula obtained from φ(x) by relativizing all quantifiers to N_r(x), that is, by replacing every subformula of the form ∃y ψ(x, y, z) by ∃y (d_D(x, y) ≤ r ∧ ψ(x, y, z)) and every subformula of the form ∀y ψ(x, y, z) by ∀y (d_D(x, y) ≤ r → ψ(x, y, z)). A formula ψ(x) of the form φ^{N_r(x)}, for some φ(x), is called r-local. Whether an r-local formula ψ(x) holds depends only on the r-neighborhood of x, that is, for every structure D and every d ∈ D we have D ⊨ ψ(d) if and only if ⟨N_r(d)⟩ ⊨ ψ(d).

[Figure 1 shows the knowledge base fragment given by the atoms listed in the sketch above, together with its Gaifman graph; Figure 2 is a log-log plot of the number of nodes against node degree.]

Figure 1: A knowledge base fragment for the pair (d1, d2) and the corresponding Gaifman graph.
Figure 2: The degree distribution of the Gaifman graph for the Freebase fragment FB15k.

For r, k ≥ 1 and ψ(x) being r-local, a local sentence is of the form

  ∃x₁ ⋯ ∃x_k ( ⋀_{1 ≤ i < j ≤ k} d_D(xᵢ, xⱼ) > 2r  ∧  ⋀_{1 ≤ i ≤ k} ψ(xᵢ) ).

We can now state Gaifman's locality theorem.

Theorem 1. [9] Every first-order sentence is equivalent to a Boolean combination of local sentences.

Gaifman's locality theorem states that any first-order sentence can be expressed as a Boolean combination of r-local sentences defined for neighborhoods of objects that are mutually far apart (have distance at least 2r + 1). Now, a novel approach to (statistical) relational learning would be to consider a large set of objects (or tuples of objects) and learn models from their local neighborhoods in the Gaifman graphs. It is this observation that motivates Gaifman models.

3 Learning Gaifman Models

Instead of taking the costly approach of applying relational learning and inference directly to entire knowledge bases, the representations of Gaifman models are learned bottom up, by performing inference and learning within bounded-size, locally connected regions of Gaifman graphs. Each Gaifman model specifies the data-generating process from a given knowledge base (or collection of knowledge bases), a set of relational features, and an ML model class used for learning.

Definition 1. Given an R-structure D, a discriminative Gaifman model for D is a tuple (q, r, k, Φ, M) as follows:
• q is a first-order formula called the target query with at least one free variable;
• r is the depth of the Gaifman neighborhoods;
• k is the size-bound of the Gaifman neighborhoods;
• Φ is a set of first-order formulas (the relational features);
• M is the base model class (loss, hyper-parameters, etc.).

Throughout the rest of the paper, we will provide detailed explanations of the different parameters of Gaifman models and their interaction with data generation, learning, and inference. During the training of Gaifman models, neighborhoods are generated for tuples of objects d ∈ Dⁿ based on the parameters r and k. We first describe the procedure for arbitrary tuples d of objects and will later explain where these tuples come from. For a given tuple d, the r-neighborhood of d within the Gaifman graph is computed. This results in the set of objects N_r(d). Now, from this neighborhood we sample w neighborhoods consisting of at most k objects.
Sampling bounded-size sub-neighborhoods from N_r(d) is motivated as follows:

1. The degree distribution of Gaifman graphs is often skewed (see Figure 2), that is, the number of other objects a domain element is related to varies heavily. Generating smaller, bounded-size neighborhoods allows the transfer of learned representations between more and less connected objects. Moreover, the sampling strategy makes Gaifman models more robust to object uncertainty [19]. We show empirically that larger values for k reduce the effectiveness of the learned models for some knowledge bases.
2. Relational learning and inference is performed within the generated neighborhoods. N_r(d) can be very large, even for r = 1 (see Figure 2), and we want full control over the complexity of the computational problems.
3. Even for a single object tuple d we can generate a large number of training examples if |N_r(d)| > k. This mitigates the risk of overfitting. The number of training examples per tuple strongly influences the models' accuracy.

We can now define the set of (r, k)-neighborhoods generated from an r-neighborhood:

  N_{r,k}(d) := { N | N ⊆ N_r(d) and |N| = k }  if |N_r(d)| ≥ k;  { N_r(d) }  otherwise.

For a given tuple of objects d, Algorithm 1 returns a set of w neighborhoods drawn from N_{r,k}(d) such that the number of objects for each dᵢ is the same in expectation. The formulas in the set Φ are indexed and of the form φᵢ(s₁, ..., sₙ, u₁, ..., u_m) with sⱼ ∈ free(q) and uⱼ ∉ free(q). For every tuple d = (d₁, ..., dₙ), generated neighborhood N ∈ N_{r,k}(d), and φᵢ ∈ Φ, we perform the substitution [s₁/d₁, ..., sₙ/dₙ] and relativize φᵢ's quantifiers to N, resulting in φᵢ^N[s₁/d₁, ..., sₙ/dₙ], which we write as φᵢ^N[s/d]. Let ⟨N⟩ be the substructure induced by N on D. For every formula φᵢ(s₁, ..., sₙ, u₁, ..., u_m) and every n ∈ N^m, we now have that D ⊨ φᵢ^N[s/d, u/n] if and only if ⟨N⟩ ⊨ φᵢ^N[s/d, u/n]. In other words, satisfaction is now checked locally within the neighborhoods N, by deciding whether ⟨N⟩ ⊨ φᵢ^N[s/d, u/n].

The relational semantics of Gaifman models is based on the set of formulas Φ. The feature vector v = (v₁, ..., v_{|Φ|}) for tuple d and neighborhood N ∈ N_{r,k}(d), written as v_N, is constructed as follows:

  vᵢ := |φᵢ^N[s/d](⟨N⟩)|  if |free(φᵢ[s/d])| > 0;
  vᵢ := 1                 if ⟨N⟩ ⊨ φᵢ^N[s/d];
  vᵢ := 0                 otherwise.

That is, if φᵢ^N[s/d] has free variables, vᵢ is equal to the number of groundings of φᵢ[s/d] that are satisfied within the neighborhood substructure ⟨N⟩; if φᵢ[s/d] has no free variables, vᵢ = 1 if and only if φᵢ^N[s/d] is satisfied within the neighborhood substructure ⟨N⟩; and vᵢ = 0 otherwise. The neighborhood representations v capture r-local formulas and help the model learn formula combinations that are associated with negative and positive examples. For the right choices of the parameters r and k, the neighborhood representations of Gaifman models capture the relational structure associated with positive and negative examples.

Deciding D ⊨ φ for a structure D and a first-order formula φ is referred to as model checking, and computing |φ(D)| is called φ-counting. The combined complexity of model checking is PSPACE-complete [29] and there exists a ‖D‖^{O(‖φ‖)} algorithm for both problems, where ‖·‖ is the size of an encoding. Clearly, for most real-world KBs this is not feasible. For Gaifman models, however, where the neighborhoods are bounded-size, typically 10 ≤ |N| = k ≤ 100, the above representation can be computed very efficiently for a large class of relational features. We can now state the following complexity result.

Theorem 2. Let D be a relational structure (knowledge base), let d be the size of the largest r-neighborhood of D's Gaifman graph, and let s be the greatest encoding size of any formula in Φ. For a Gaifman model with parameters r and k, the worst-case complexity for computing the feature representations of N neighborhoods is O(N(d + |Φ|kˢ)).

Existing SRL approaches could be applied to the generated neighborhoods, treating each as a possible world for structure and parameter learning. However, our goal is to learn relational models that utilize embeddings computed by multi-layered neural networks.
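As an illustration of the neighborhood representations, the following sketch computes a feature vector for a pair d = (s₁, s₂) within one sampled neighborhood, restricted to a handful of the formula types discussed in Section 3.2; the general model-checking machinery is broader than this, and the encoding of atoms as tuples is our assumption.

```python
def feature_vector(atoms_in_N, d, relations):
    """Feature vector v_N for a pair inside one sampled neighborhood (a sketch)."""
    s1, s2 = d
    facts = {(rel,) + tuple(args) for rel, args in atoms_in_N}
    v = []
    for r in relations:
        v.append(1 if (r, s1, s2) in facts else 0)           # r(s1, s2)
        v.append(1 if (r, s2, s1) in facts else 0)           # r(s2, s1)
        # counting feature |{x : r(s1, x)}| (a formula with one free variable)
        v.append(sum(1 for f in facts
                     if f[0] == r and len(f) == 3 and f[1] == s1))
        # existential path feature: exists x with r(s1, x) and r(x, s2)
        mids = {f[2] for f in facts if f[0] == r and len(f) == 3 and f[1] == s1}
        v.append(1 if any((r, m, s2) in facts for m in mids) else 0)
    return v
```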
We can now state the following complexity result. Theorem 2. Let D be a relational structure (knowledge base), let d be the size of the largest rneighborhood of D?s Gaifman graph, and let s be the greatest encoding size of any formula in ?. For a Gaifman model with parameters r and k, the worst-case complexity for computing the feature representations of N neighborhoods is O(N (d + |?|k s )). Existing SRL approaches could be applied to the generated neighborhoods, treating each as a possible world for structure and parameter learning. However, our goal is to learn relational models that utilize embeddings computed by multi-layered neural networks. 4 Algorithm 1 G EN N EIGHS: Computes a list of w neighborhoods of size k for an input tuple d. ? 1: input: tuple d ? Dn , parameters r, k, and w 2: S = [ ] 3: while |S| < w do 4: S=? 5: N = Nr (d) 6: for all i ? {1, ..., n} do 7: U = min(bk/nc, |Nr (di )|) elements ... ... Wn ! M Figure 3: Learning of a Gaifman model. sampled uniformly from Nr (di ) N =N \U S =S?U U = min(|S| ? k, |N |) elements sampled uniformly from N 11: S =S?U 12: S=S+S 13: return S 8: 9: 10: 3.1 W1 ? ? W1 ... Wn ! M Figure 4: Inference with a Gaifman model. Learning Distributions for Relational Queries Let q be a first-order formula (the relational query) and S(q) the result set of the query, that is, all groundings that render the formula satisfied in the knowledge base. The feature representations generated for tuples of objects d ? S(q) serve as positive training examples. The Gaifman models? aim is to learn neighborhood embeddings that capture local structure of tuples for which we know that the target query evaluates to true. Similar to previous work, we generate negative examples by corrupting tuples that correspond to positive examples. The corruption mechanism takes a positive input tuple d = (d1 , ..., dn ) and substitutes, for each i ? {1, ..., n}, the domain element di with objects sampled from D while keeping the rest of the tuple fixed. The discriminative Gaifman model performs the following steps. 1. Evaluate the target query q and compute the result set S(q) 2. For each tuple d in the result set S(q): ? ? Nr,k (d) with Algorithm 1; each ? Compute N , a multiset of w neighborhoods N such neighborhood serves as a positive training example ? , a multiset of w ? for corrupted versions of d ? Compute N ? neighborhoods N ? Nr,k (d) with Algorithm 1; each such neighborhood serves as a negative training example ? Perform model checking and counting within the neighborhoods to compute the feature ? ? representations vN and vN ? for each N ? N and N ? N , respectively 3. Learn a ML model with the generated positive and negative training examples. Learning the final Gaifman model depends on the base ML model class M and its loss function. We obtained state of the art results with neural networks, gradient-based learning, and categorical cross-entropy as loss function ? ? X X L = ?? log pM (vN ) + log(1 ? pM (vN ? ))? , ? N ? N? N?N where pM (vN ) is the probability the model returns on input vN . However, other loss functions are possible. The probability of a particular substitution of the target query to be true is now P (q[s/d] = True) = E [pM (vN )]. N?N(r,k) (d) The expected probability of a representation of a neighborhood drawn uniformly at random from N(r,k) (d). It is now possible to generate several neighborhoods N and their representations vN to 5 estimate P (q[s/d] = True), simply by averaging the neighborhoods? probabilities. 
We have found experimentally that a single neighborhood already leads to highly accurate results but also that more neighborhood samples further improve the accurracy. Let us emphasize again the novel semantics of Gaifman models. Gaifman models generate a large number of small, bounded-size structures from a large structure, learn a representation for these bounded-size structures, and use the resulting representation to answer queries concerning the original structure as a whole. The advantages are model weight sharing across a large number of neighborhoods and efficiency of the computational problems. Figure 3 and Figure 4 illustrate learning from bounded-size neighborhood structures and inference in Gaifman models. 3.2 Structure Learning Structure learning is the problem of determining the set of relational features ?. We provide some directions and leave the problem to future work. Given a collection of bounded-size neighborhoods of the Gaifman graph, the goal is to determine suitable relational features for the problem at hand. There is a set of features which we found to be highly effective. For example, formulas of the form ?x r(s1 , x), ?x r(s1 , x) ? r(x, s2 ), and ?x, y r1 (s1 , x) ? r2 (x, y) ? r3 (y, s2 ) for all relations. The latter formulas capture fixed-length paths between s1 and s2 in the neighborhoods. Hence, Path Ranking type features [15] can be used in Gaifman models as a particular relational feature class. For path formulas with several different relations we cannot include all |R|3 combinations and, hence, we have to determine a subset occurring in the training data. Fortunately, since the neighborhood size is bounded, it is computationally feasible to compute frequent paths in the neighborhoods and to use these as features. The complexity of this learning problem is in the number of elements in the neighborhood and not in the number of all objects in the knowledge base. Relation paths that do not occur in the data can be discarded. Gaifman models can also use features of the form ?x, y r(x, y) ? r(y, x), ?x, y r(x, y), and ?x, y, z r(x, y) ? r(y, z) ? r(x, z), to name but a few. Moreover, features with free variables, such as r(s1 , x) are counting features (here: the r out-degree of s1 ). It is even computationally feasible to include specific second-order features (for instance, quantifiers ranging over R) and aggregations of feature values. 3.3 Prior Confidence Values, Types, and Numerical Attributes Numerous existing knowledge bases assign confidence values (probabilities, weights, etc.) to their statements. Gaifman models can incorporate confidence values during the sampling and learning process. Instead of adding random noise to the representations, which we have found to be beneficial, noise can be added inversely proportional to the confidence values. Statements for which the prior confidence values are lower are more likely to be dropped out during training than statements with higher confidence values. Furthermore, Gaifman models can directly incorporate object types such as Actor and Action Movie as well as numerical features such as location and elevation. One simply has to specify a fixed position in the neighborhood representation v for each object position within the input tuples d. 4 Related Work Recent work on relational machine learning for knowledge graphs is surveyed in [20]. We focus on a select few methods we deem most related to Gaifman models and refer the interested reader to the above article. 
A large body of work exists on learning inference rules from knowledge bases. Examples include [31] and [1] where inference rules of length one are learned; and [25] where general inference rules are learned by applying a support threshold. Their method does not scale to large KBs and depends on predetermined thresholds. Lao et al. [15] train a logistic regression classifier with path features to perform KB completion. The idea is to perform a random walk between objects and to exploit the discovered paths as features. SFE [10] improves PRA by making the generation of random walks more efficient. More recent embedding methods have combined paths in KBs with KB embedding methods [17]. Gaifman models support a much broader class of relational features subsuming path features. For instance, Gaifman models incorporate counting features that have shown to be beneficial for relational models. 6 Latent feature models learn features for objects and relations that are not directly observed in the data. Examples of latent feature models are tensor factorization [21, 23, 26] and embedding models [5, 3, 4, 18, 13, 27]. The majority of these models can be understood as more or less complex neural networks operating on object and relation representations. Gaifman models can also be used to learn knowledge base embeddings. Indeed, one can show that it generalizes or complements existing approaches. For instance, the universal schema [23] considers pairs of objects where relation membership variables comprise the model?s features. We have the following interesting relationship between universal schemas [23] and Gaifman models. Given a knowledge base D. The Gaifman S model for D with r = 0, k = 2, ? = r?R {r(s1 , s2 ), r(s2 , s1 )}, w = 1 and w ? = 0 is equivalent to the Universal Schema [23] for D up to the base model class M. More recent methods combine embedding methods and inference-based logical approaches for relation extraction [24]. Contrary to most existing multi-relational ML models [20], Gaifman models natively support higher-arity relations, functional and type constraints, numerical features, and complex target queries. 5 Experiments Table 1: The statistics of the data sets. The aim of the experiments is to understand the efficiency and effectiveness of Gaifman models for Dataset |D| |R| # train # test typical knowledge base inference problems. We WN18 40,943 18 141,442 5,000 evaluate the proposed class of models with two data FB15k 14,951 1,345 483,142 59,071 sets derived from the knowledge bases W ORD N ET and F REEBASE [2]. Both data sets consist of a list of statements r(d1 , d2 ) that are known to be true. For a detailed description of the data sets, whose statistics are listed in Table 1, we refer the reader to previous work [4]. After training the models, we perform entity prediction as follows. For each statement r(d1 , d2 ) in the test set, d2 is replaced by each of the KB?s objects in turn. The probabilities of the resulting statements are predicted and sorted in descending order. Finally, the rank of the correct statement within this ordered list is determined. The same process is repeated now with replacements of d1 . We compare Gaifman models with q = r(x, y) to state of the art knowledge base completion approaches which are listed in Table 2. We trained Gaifman models with r = 1 and different values for k, w, and w. ? We use a neural network architecture with two hidden layers, each having 100 units and sigmoid activations, dropout of 0.2 on the input layer, and a softmax layer. 
We trained one model per relation and left the hyper-parameters fixed across models. We did not perform structure learning and instead used the following set of relational features:

  Φ := ⋃_{r∈R, i∈{1,2}} { r(s₁, s₂), r(s₂, s₁), ∃x r(x, sᵢ), ∃x r(sᵢ, x), ∃x (r(s₁, x) ∧ r(x, s₂)), ∃x (r(s₂, x) ∧ r(x, s₁)) }.

We performed runtime experiments to evaluate the models' efficiency. Embedding models have the advantage that one dot product for every candidate object is sufficient to compute the score for the corresponding statement, and we need to assess the performance of Gaifman models in this context. All experiments were run on commodity hardware with 64G RAM and a single 2.8 GHz CPU. To compute the probabilities, we averaged the probabilities of N = 1, 2, or 3 generated (r, k)-neighborhoods.

Table 2 lists the experimental results for different parameter settings [N, k, w, w̃]. The Gaifman models achieve the highest hits@10 and hits@1 values for both data sets. As expected, the more neighborhood samples are used to compute the probability estimate (N = 1, 2, 3), the better the result. When the entire 1-neighborhood is considered (k = ∞), the performance for WN18 does not deteriorate as it does for FB15k. This is due to the fact that objects in WN18 have on average few neighbors. FB15k has more variance in its Gaifman graph's degree distribution (see Figure 2), which is reflected in the better performance for smaller k values. The experiments also show that it is beneficial to generate a large number of representations (both positive and negative ones). The performance improves with a larger number of training examples.

[Figure 5 plots query answers per second (×10⁴) against k ∈ {5, 10, 20, 50, 100, ∞} for WN18 and FB15k.]

Figure 5: Query answers per second rates for different values of the parameter k.

Table 2: Results of the entity prediction experiments. (Hits@1 is reported only for a subset of the methods; "-" marks unreported values.)

                            WN18                         FB15k
  Method                    Mean rank  Hits@10  Hits@1   Mean rank  Hits@10  Hits@1
  RESCAL [21]               1,163      52.8     -        683        44.1     -
  SE [5]                    985        80.5     -        162        39.8     -
  LFM [12]                  456        81.6     -        164        33.1     -
  TransE [4]                251        89.2     8.9      51         71.5     28.1
  TransR [18]               219        91.7     -        78         65.5     -
  DistMult [30]             902        93.7     76.1     97         82.8     44.3
  Gaifman [1, ∞, 1, 5]      298        93.9     75.8     124        78.1     59.8
  Gaifman [1, 20, 1, 2]     357        88.1     66.8     114        79.2     60.1
  Gaifman [1, 20, 5, 25]    392        93.6     76.4     97         82.1     65.6
  Gaifman [2, 20, 5, 25]    378        93.9     76.7     84         83.4     68.5
  Gaifman [3, 20, 5, 25]    352        93.9     76.1     75         84.2     69.2

The runtime experiments demonstrate that Gaifman models perform inference very efficiently for k ≤ 20. Figure 5 depicts the number of query answers the Gaifman models are able to serve per second, averaged over relation types. A query answer returns the probability for one object pair. These numbers include neighborhood generation and network inference. The results are promising, with about 5000 query answers per second (averaged across relation types) as long as k remains small. Since most object pairs of WN18 have a 1-neighborhood whose size is smaller than 20, the answers-per-second rates for k > 20 are not reduced as drastically as for FB15k.

6 Conclusion and Future Work

Gaifman models are a novel family of relational machine learning models that perform learning and inference within and across locally connected regions of relational structures. Future directions of research include structure learning, more sophisticated base model classes, and application of Gaifman models to additional relational ML problems.
Acknowledgements

Many thanks to Alberto García-Durán, Mohamed Ahmed, and Kristian Kersting for their helpful feedback.

References

[1] J. Berant, I. Dagan, and J. Goldberger. Global learning of typed entailment rules. In Annual Meeting of the Association for Computational Linguistics, pages 610–619, 2011.
[2] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD, pages 1247–1250, 2008.
[3] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. Joint learning of words and meaning representations for open-text semantic parsing. In Conference on Artificial Intelligence and Statistics, pages 127–135, 2012.
[4] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems, pages 2787–2795, 2013.
[5] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. Learning structured embeddings of knowledge bases. In AAAI Conference on Artificial Intelligence, 2011.
[6] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka, and T. M. Mitchell. Toward an architecture for never-ending language learning. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
[7] I. I. Ceylan, A. Darwiche, and G. Van den Broeck. Open-world probabilistic databases. In Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR), 2016.
[8] A. Dries, A. Kimmig, W. Meert, J. Renkens, G. Van den Broeck, J. Vlasselaer, and L. De Raedt. ProbLog2: Probabilistic logic programming. Lecture Notes in Computer Science, 9286:312–315, 2015.
[9] H. Gaifman. On local and non-local properties. In Proceedings of the Herbrand Symposium, Logic Colloquium, volume 81, pages 105–135, 1982.
[10] M. Gardner and T. M. Mitchell. Efficient and expressive knowledge base completion using subgraph feature extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1488–1498, 2015.
[11] J. Hoffart, F. M. Suchanek, K. Berberich, and G. Weikum. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artif. Intell., 194:28–61, 2013.
[12] R. Jenatton, N. L. Roux, A. Bordes, and G. R. Obozinski. A latent factor model for highly multi-relational data. In Neural Information Processing Systems, pages 3167–3175, 2012.
[13] G. Ji, K. Liu, S. He, and J. Zhao. Knowledge graph completion with adaptive sparse transfer matrix. In D. Schuurmans and M. P. Wellman, editors, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 985–991, 2016.
[14] K. Kersting. Lifted probabilistic inference. In European Conference on Artificial Intelligence, pages 33–38, 2012.
[15] N. Lao, T. Mitchell, and W. W. Cohen. Random walk inference and learning in a large scale knowledge base. In Empirical Methods in Natural Language Processing, pages 529–539, 2011.
[16] L. Libkin. Elements Of Finite Model Theory. Springer-Verlag, 2004.
[17] Y. Lin, Z. Liu, H. Luan, M. Sun, S. Rao, and S. Liu. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 705–714, 2015.
[18] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu. Learning entity and relation embeddings for knowledge graph completion. In AAAI Conference on Artificial Intelligence, pages 2181–2187, 2015.
[19] B. C. Milch. Probabilistic Models with Unknown Objects. PhD thesis, 2006.
[19] B. C. Milch. Probabilistic Models with Unknown Objects. PhD thesis, 2006.
[20] M. Nickel, K. Murphy, V. Tresp, and E. Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
[21] M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way model for collective learning on multi-relational data. In International Conference on Machine Learning (ICML), pages 809–816, 2011.
[22] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[23] S. Riedel, L. Yao, B. M. Marlin, and A. McCallum. Relation extraction with matrix factorization and universal schemas. In HLT-NAACL, 2013.
[24] T. Rocktäschel, S. Singh, and S. Riedel. Injecting logical background knowledge into embeddings for relation extraction. In Conference of the North American Chapter of the ACL (NAACL), 2015.
[25] S. Schoenmackers, O. Etzioni, D. S. Weld, and J. Davis. Learning first-order Horn clauses from web text. In Conference on Empirical Methods in Natural Language Processing, pages 1088–1098, 2010.
[26] R. Socher, D. Chen, C. D. Manning, and A. Ng. Reasoning with neural tensor networks for knowledge base completion. In Neural Information Processing Systems, pages 926–934, 2013.
[27] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, volume 48, pages 2071–2080, 2016.
[28] G. Van den Broeck. Lifted inference and learning in statistical relational models. 2013.
[29] M. Y. Vardi. The complexity of relational query languages. In ACM Symposium on Theory of Computing, pages 137–146, 1982.
[30] B. Yang, W.-t. Yih, X. He, J. Gao, and L. Deng. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations, 2015.
[31] A. Yates and O. Etzioni. Unsupervised resolution of objects and relations on the web. In Conference of the North American Chapter of the Association for Computational Linguistics, 2007.
Professor Forcing: A New Algorithm for Training Recurrent Networks

Anirudh Goyal*, Alex Lamb*, Ying Zhang, Saizheng Zhang, Aaron Courville and Yoshua Bengio¹
MILA, Université de Montréal; ¹CIFAR
{anirudhgoyal9119, alex6200, ying.zhlisa, saizhenglisa, aaron.courville, yoshua.umontreal}@gmail.com
* Indicates first authors; ordering determined by coin flip.

Abstract

The Teacher Forcing algorithm trains recurrent networks by supplying observed sequence values as inputs during training and using the network's own one-step-ahead predictions to do multi-step sampling. We introduce the Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from the network over multiple time steps. We apply Professor Forcing to language modeling, vocal synthesis on raw waveforms, handwriting generation, and image generation. Empirically we find that Professor Forcing acts as a regularizer, improving test likelihood on character-level Penn Treebank and sequential MNIST. We also find that the model qualitatively improves samples, especially when sampling for a large number of time steps. This is supported by human evaluation of sample quality. Trade-offs between Professor Forcing and Scheduled Sampling are discussed. We produce T-SNEs showing that Professor Forcing successfully makes the dynamics of the network during training and sampling more similar.

1 Introduction

Recurrent neural networks (RNNs) have become the generative models of choice for sequential data (Graves, 2012), with impressive results in language modeling (Mikolov, 2010; Mikolov and Zweig, 2012), speech recognition (Bahdanau et al., 2015; Chorowski et al., 2015), machine translation (Cho et al., 2014a; Sutskever et al., 2014; Bahdanau et al., 2014), handwriting generation (Graves, 2013), image caption generation (Xu et al., 2015; Chen and Lawrence Zitnick, 2015), etc. The RNN models the data via a fully-observed directed graphical model: it decomposes the distribution over the discrete time sequence $y_1, y_2, \ldots, y_T$ into an ordered product of conditional distributions over tokens

$$P(y_1, y_2, \ldots, y_T) = P(y_1) \prod_{t=2}^{T} P(y_t \mid y_1, \ldots, y_{t-1}).$$

By far the most popular training strategy is via the maximum likelihood principle. In the RNN literature, this form of training is also known as teacher forcing (Williams and Zipser, 1989), due to the use of the ground-truth samples $y_t$ being fed back into the model to be conditioned on for the prediction of later outputs. These fed-back samples force the RNN to stay close to the ground-truth sequence. When using the RNN for prediction, the ground-truth sequence is not available for conditioning, and we sample from the joint distribution over the sequence by sampling each $y_t$ from its conditional distribution given the previously generated samples. Unfortunately, this procedure can result in problems in generation, as small prediction errors compound in the conditioning context. This can lead to poor prediction performance as the RNN's conditioning context (the sequence of previously generated samples) diverges from sequences seen during training. Recently, Bengio et al. (2015) proposed to remedy this issue by mixing two kinds of inputs during training: those from the ground-truth training sequence and those generated from the model.
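To make the two input regimes concrete, the sketch below (a toy tabular "model", our illustration rather than the paper's code) contrasts a teacher-forced rollout with the mixed regime of Bengio et al. (2015): at each step the next input is the ground-truth token with probability p, and the model's own prediction otherwise.

```python
import random

def rollout(model_step, y_true, p_ground_truth=1.0, seed=0):
    """p_ground_truth = 1.0 is pure teacher forcing; 0.0 is fully free-running."""
    random.seed(seed)
    y_prev, predictions = y_true[0], []
    for t in range(1, len(y_true)):
        probs = model_step(y_prev)            # P(y_t | y_{t-1}) for a toy bigram model
        y_hat = max(probs, key=probs.get)     # greedy "sample" for brevity
        predictions.append(y_hat)
        # Scheduled-sampling-style mixing of ground truth and model output.
        y_prev = y_true[t] if random.random() < p_ground_truth else y_hat
    return predictions

# Toy bigram model: next-token probabilities depend only on the previous token.
table = {"a": {"a": 0.1, "b": 0.9}, "b": {"a": 0.8, "b": 0.2}}
print(rollout(lambda y: table[y], list("abab"), p_ground_truth=1.0))  # teacher forcing
print(rollout(lambda y: table[y], list("abab"), p_ground_truth=0.0))  # free-running
```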
However, when the model generates several consecutive $y_t$'s, it is not clear anymore that the correct target (in terms of its distribution) remains the one in the ground-truth sequence. This is mitigated in various ways, by making the self-generated subsequences short and by annealing the probability of using self-generated vs. ground-truth samples. However, as remarked by Huszár (2015), scheduled sampling yields a biased estimator, in that even as the number of examples and the capacity go to infinity, this procedure may not converge to the correct model. It is, however, good to note that experiments with scheduled sampling clearly showed some improvements in terms of the robustness of the generated sequences, suggesting that something indeed needs to be fixed (or replaced) in maximum-likelihood (or teacher forcing) training of generative RNNs.

In this paper, we propose an alternative way of training RNNs which explicitly seeks to make the generative behavior and the teacher-forced behavior match as closely as possible. This is particularly important to allow the RNN to continue generating robustly well beyond the length of the sequences it saw during training. More generally, we argue that this approach helps to better model long-term dependencies by using a training objective that is not solely focused on predicting the next observation, one step at a time. Our work provides the following contributions regarding this new training framework:

- We introduce a novel method for training generative RNNs called Professor Forcing, meant to improve long-term sequence sampling from recurrent networks. We demonstrate this with a human evaluation of sample quality, through a study with human evaluators.
- We find that Professor Forcing can act as a regularizer for recurrent networks. This is demonstrated by improvements in test likelihood on character-level Penn Treebank, Sequential MNIST generation, and speech synthesis. Interestingly, we also find that training performance can be improved, and we conjecture that this is because longer-term dependencies can be captured more easily.
- When running an RNN in sampling mode, the region occupied by the hidden states of the network diverges from the region occupied when doing teacher forcing. We empirically study this phenomenon using T-SNEs and show that it can be mitigated by using Professor Forcing.
- In some domains the sequences available at training time are shorter than the sequences that we want to generate at test time. This is usually the case in long-term forecasting tasks (climate modeling, econometrics). We show how Professor Forcing can be used to improve performance in this setting. Note that scheduled sampling cannot be used for this task, because it still uses the observed sequence as targets for the network.

2 Proposed Approach: Professor Forcing

The basic idea of Professor Forcing is simple: while we do want the generative RNN to match the training data, we also want the behavior of the network (both in its outputs and in the dynamics of its hidden states) to be indistinguishable whether the network is trained with its inputs clamped to a training sequence (teacher forcing mode) or whether its inputs are self-generated (free-running generative mode).
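As a minimal illustration of these two modes (a hypothetical scalar "cell", not the paper's architecture), the same generator can be run once clamped to a training sequence and once on its own samples, yielding the two behavior sequences in question:

```python
# Toy sketch: collect the hidden-state "behavior" of one generator under
# teacher forcing (inputs clamped to data) and under free-running sampling.
def behavior(cell, x_seq, y_true=None):
    h, y, states = 0.0, 0.0, []
    for t, x in enumerate(x_seq):
        h, y_hat = cell(h, x, y)                 # update state, predict next output
        states.append(round(h, 4))               # the discriminator will see these states
        y = y_true[t] if y_true is not None else y_hat   # clamp vs. self-feed
    return states

# Hypothetical leaky cell: the state mixes its past, the input, and the fed-back output.
def cell(h, x, y):
    h_new = 0.5 * h + 0.3 * x + 0.2 * y
    return h_new, 1.0 if h_new > 0.4 else 0.0    # toy thresholded prediction

x = [1.0, 0.0, 1.0, 0.0]
print(behavior(cell, x, y_true=[1.0, 1.0, 0.0, 1.0]))  # teacher-forced behavior
print(behavior(cell, x))                                # free-running behavior
```

Even for this toy cell the two printed state sequences diverge; Professor Forcing aims to make their distributions indistinguishable.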
Because we can only compare the distribution of these sequences, it makes sense to take advantage of the generative adversarial networks (GANs) framework (Goodfellow et al., 2014) to achieve that second objective of matching the two distributions over sequences (the one observed in teacher forcing mode vs. the one observed in free-running mode). Hence, in addition to the generative RNN, we train a second model, which we call the discriminator, and which can also process variable-length inputs. In the experiments we use a bidirectional RNN architecture for the discriminator, so that it can combine evidence at each time step t from both the past and the future of the behavior sequence.

2.1 Definitions and Notation

Let the training distribution provide (x, y) pairs of input and output sequences (possibly there are no inputs at all). An output sequence y can also be generated by the generator RNN when given an input sequence x, according to the sequence-to-sequence model distribution $P_{\theta_g}(y|x)$. Let $\theta_g$ be the parameters of the generative RNN and $\theta_d$ be the parameters of the discriminator. The discriminator is trained as a probabilistic classifier that takes as input a behavior sequence b derived from the generative RNN's activity (hidden states and outputs) when it either generates or is constrained by a sequence y, possibly in the context of an input sequence x (often but not necessarily of the same length). The behavior sequence b is either the result of running the generative RNN in teacher forcing mode (with y from a training sequence with input x), or in free-running mode (with y self-generated according to $P_{\theta_g}(y|x)$, with x from the training sequence). The function $B(x, y, \theta_g)$ outputs the behavior sequence (chosen hidden states and output values) given the appropriate data (where x always comes from the training data, but y either comes from the data or is self-generated). Let D(b) be the output of the discriminator, estimating the probability that b was produced in teacher-forcing mode, given that half of the examples seen by the discriminator are generated in teacher forcing mode and half in free-running mode. Note that in the case where the generator RNN does not have any conditioning input, the sequence x is empty. Note also that the generated output sequences could have a different length than the conditioning sequence, depending on the task at hand.

2.2 Training Objective

The discriminator parameters $\theta_d$ are trained as one would expect, i.e., to maximize the likelihood of correctly classifying a behavior sequence:

$$C_d(\theta_d \mid \theta_g) = \mathbb{E}_{(x,y)\sim \text{data}}\Big[-\log D(B(x,y,\theta_g),\theta_d) + \mathbb{E}_{y\sim P_{\theta_g}(y|x)}\big[-\log\big(1 - D(B(x,y,\theta_g),\theta_d)\big)\big]\Big]. \quad (1)$$

Practically, this is achieved with a variant of stochastic gradient descent, with minibatches formed by combining N sequences obtained in teacher-forcing mode and N sequences obtained in free-running mode, with y sampled from $P_{\theta_g}(y|x)$. Note also that as $\theta_g$ changes, the task optimized by the discriminator changes too, and it has to track the generator, as in other GAN setups; hence the notation $C_d(\theta_d \mid \theta_g)$.

The generator RNN parameters $\theta_g$ are trained to (a) maximize the likelihood of the data and (b) fool the discriminator. We considered two variants of the latter. The negative log-likelihood objective (a) is the usual teacher-forced training criterion for RNNs:

$$\mathrm{NLL}(\theta_g) = \mathbb{E}_{(x,y)\sim \text{data}}\big[-\log P_{\theta_g}(y|x)\big]. \quad (2)$$
Regarding (b), we consider a training objective that only tries to change the free-running behavior so that it better matches the teacher-forced behavior, considering the latter fixed:

$$C_f(\theta_g \mid \theta_d) = \mathbb{E}_{x \sim \text{data},\, y \sim P_{\theta_g}(y|x)}\big[-\log D(B(x, y, \theta_g), \theta_d)\big]. \quad (3)$$

In addition (and optionally), we can ask the teacher-forced behavior to be indistinguishable from the free-running behavior:

$$C_t(\theta_g \mid \theta_d) = \mathbb{E}_{(x,y) \sim \text{data}}\big[-\log\big(1 - D(B(x, y, \theta_g), \theta_d)\big)\big]. \quad (4)$$

In our experiments we either perform stochastic gradient steps on $\mathrm{NLL} + C_f$ or on $\mathrm{NLL} + C_f + C_t$ to update the generative RNN parameters, while we always take gradient steps on $C_d$ to update the discriminator parameters.

Figure 1: Architecture of Professor Forcing. The aim is to learn correct one-step predictions so as to obtain the same recurrent network dynamics whether in open-loop (teacher forcing) mode or in closed-loop (free-running generative) mode. An open-loop generator performs one-step-ahead prediction; recursively composing these outputs performs multi-step (closed-loop) prediction and can generate new sequences. A classifier is trained to distinguish open-loop (teacher forcing) from closed-loop (free-running) dynamics as a function of the sequence of hidden states and outputs; the closed-loop generator is optimized to fool this classifier, the open-loop generator is optimized with teacher forcing, and the two generators share all parameters.

3 Related Work

Professor Forcing is an adversarial method for learning generative models that is closely related to Generative Adversarial Networks (Goodfellow et al., 2014) and Adversarial Domain Adaptation (Ajakan et al., 2014; Ganin et al., 2015). Our approach is similar to generative adversarial networks (GANs) because both use a discriminative classifier to provide gradients for training a generative model. However, Professor Forcing is different because the classifier discriminates between hidden states from sampling mode and teacher forcing mode, whereas the GAN's classifier discriminates between real samples and generated samples. One practical advantage of Professor Forcing over GANs is that Professor Forcing can be used to learn a generative model over discrete random variables without requiring approximate backpropagation through discrete spaces (Bengio et al., 2013). Adversarial Domain Adaptation uses a classifier to discriminate between the hidden states of the network with inputs from the source domain and the hidden states of the network with inputs from the target domain. However, this method was not applied in the context of generative models and, more specifically, not to the task of improving long-term generation from recurrent networks.

Alternative non-adversarial methods have been explored for improving long-term generation from recurrent networks. The scheduled sampling method (Bengio et al., 2015), which is closely related to SEARN (Daumé et al., 2009) and DAGGER (Ross et al., 2010), involves randomly using the network's predictions as its inputs (as in sampling mode) with some probability that increases over the course of training. This forces the network to stay in a reasonable regime when receiving its own predictions as inputs instead of observed inputs. While Scheduled Sampling shows improvement on some tasks, it is not a consistent estimation strategy.
This limitation arises because the outputs sampled from the network could correspond to a distribution that is not consistent with the sequence that the network is trained to generate. This issue is discussed in detail in Huszár (2015). A practical advantage of Scheduled Sampling over Professor Forcing is that Scheduled Sampling does not require the additional overhead of training a discriminator network. Finally, the idea of matching the behavior of the model when it is generating in a free-running way with its behavior when it is constrained by the observed data (being clamped on the "visible units") is precisely what one obtains when zeroing the maximum-likelihood gradient of undirected graphical models with latent variables, such as the Boltzmann machine. Training Boltzmann machines amounts to matching the sufficient statistics (which summarize the behavior of the model) in both the "teacher-forced" (positive phase) and "free-running" (negative phase) modes.

4 Experiments

4.1 Networks Architecture and Professor Forcing Setup

The neural networks and Professor Forcing setup used in the experiments are as follows. The generative RNN has a single hidden layer of gated recurrent units (GRU), previously introduced by Cho et al. (2014b) as a computationally cheaper alternative to LSTM units (Hochreiter and Schmidhuber, 1997). At each time step, the generative RNN reads an element $x_t$ of the input sequence (if any) and an element of the output sequence $y_t$ (which either comes from the training data or was generated at the previous step by the RNN). It then updates its state $h_t$ as a function of its previous state $h_{t-1}$ and of the current input $(x_t, y_t)$, and computes a probability distribution

$$P_{\theta_g}(y_{t+1} \mid h_t) = P_{\theta_g}(y_{t+1} \mid x_1, \ldots, x_t, y_1, \ldots, y_t)$$

over the next element of the output. For discrete outputs this is achieved by a softmax / affine layer on top of $h_t$, with as many outputs as the size of the set of values that $y_t$ can take. In free-running mode, $y_{t+1}$ is then sampled from this distribution and used as part of the input for the next time step; otherwise, the ground truth $y_{t+1}$ is used. The behavior function B used in the experiments outputs the pre-tanh activations of the GRU states for the whole sequence considered, and optionally the softmax outputs for the next-step prediction, again for the whole sequence.

The discriminator architecture we used for these experiments is based on a bidirectional recurrent neural network, which comprises two RNNs (again, two GRU networks), one running forward in time over the input sequence b, and one running backwards in time over the same input. The hidden states of these two RNNs are concatenated at each time step and fed to a multi-layer neural network shared across time (the same network is used for all time steps). That MLP has three layers, each composing an affine transformation and a rectifier (ReLU). Finally, the output layer composes an affine transformation and a sigmoid that outputs D(b). When the discriminator is too poor, the gradient it propagates into the generator RNN could be detrimental. For this reason, we back-propagate from the discriminator into the generator RNN only when the discriminator's classification accuracy is greater than 75%. On the other hand, when the discriminator is too successful at identifying fake inputs, we found that it would also hurt to continue training it, so when its accuracy is greater than 99% we do not update the discriminator.
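The gating logic just described can be summarized as follows. This is a runnable toy sketch with stand-in losses and update steps, following the thresholds stated in the text rather than the actual Theano implementation:

```python
# Stand-in models: step() represents one optimizer update on the given loss.
class Toy:
    def __init__(self):
        self.updates = 0
    def step(self, loss):
        self.updates += 1

def professor_forcing_step(gen, disc, disc_accuracy, nll, c_f, c_d):
    gen.step(nll)                    # teacher-forced likelihood term, Eq. (2)
    if disc_accuracy > 0.75:         # discriminator informative: let its gradient in
        gen.step(c_f)                # fooling term, Eq. (3)
    if disc_accuracy < 0.99:         # discriminator not saturated: keep training it
        disc.step(c_d)               # classification term, Eq. (1)

gen, disc = Toy(), Toy()
professor_forcing_step(gen, disc, disc_accuracy=0.85, nll=1.2, c_f=0.7, c_d=0.6)
print(gen.updates, disc.updates)     # -> 2 1
```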
Both networks are trained by minibatch stochastic gradient descent, with adaptive learning rates and momentum determined by the Adam algorithm (Kingma and Ba, 2014). All of our experiments were implemented using the Theano framework (Al-Rfou et al., 2016).

4.2 Character-Level Language Modeling

We evaluate Professor Forcing on character-level language modeling on the Penn Treebank corpus, which has an alphabet size of 50 and consists of 5059k characters for training, 396k characters for validation and 446k characters for test. We divide the training set into non-overlapping sequences of length 500. During training, we monitor the negative log-likelihood (NLL) of the output sequences. The final model is evaluated with the bits-per-character (BPC) metric. The generative RNN implements a single-hidden-layer GRU with 1024 hidden units. We use the Adam algorithm for optimization with a learning rate of 0.0001. We feed both the hidden states and the character-level embeddings into the discriminator. All the layers in the discriminator consist of 2048 hidden units, and the output activation of the last layer is clipped between -10 and 10.

Figure 2: Penn Treebank likelihood curves as a function of the number of iterations. Left: training negative log-likelihood. Right: validation BPC.

We see that the training cost of the Professor Forcing network decreases faster than that of the teacher forcing network. The training time of our model is about 3 times that of teacher forcing, since our model includes a sampling phase as well as passing the hidden states corresponding to the free-running and teacher-forcing phases to the discriminator. The final BPC on the validation set was 1.50 for our baseline and 1.48 with Professor Forcing. On word-level Penn Treebank we did not observe any difference between Teacher Forcing and Professor Forcing. One possible explanation for this difference is the increased importance of long-term dependencies in character-level language modeling.

Figure 3: T-SNE visualization of hidden states; left: with teacher forcing, right: with Professor Forcing. Red dots correspond to teacher forcing hidden states, while the gold dots correspond to free-running mode. At t = 500, the closed-loop and open-loop hidden states clearly occupy distinct regions with teacher forcing, meaning that during sampling the network enters a region distinct from the one seen during teacher forcing training. With Professor Forcing, these regions now largely overlap. We computed 30 T-SNEs for Teacher Forcing and 30 T-SNEs for Professor Forcing and found that the mean centroid distance was reduced from 3000 to 1800 (a 40% relative reduction). The mean distance from a hidden state in the training network to a hidden state in the sampling network was reduced from 22.8 with Teacher Forcing to 16.4 with Professor Forcing (vocal synthesis).

Table 1: Test set negative log-likelihood evaluations on Sequential MNIST.

Method                                            MNIST NLL
DBN 2hl (Germain et al., 2015)                    ≈ 84.55
NADE (Larochelle and Murray, 2011)                88.33
EoNADE-5 2hl (Raiko et al., 2014)                 84.68
DLGM 8 leapfrog steps (Salimans et al., 2014)     ≈ 85.51
DARN 1hl (Gregor et al., 2015)                    ≈ 84.13
DRAW (Gregor et al., 2015)                        ≤ 80.97
Pixel RNN (van den Oord et al., 2016)             79.2
Professor Forcing (ours)                          79.58

4.3 Sequential MNIST

We evaluated Professor Forcing on the task of sequentially generating the pixels of MNIST digits, using the standard binarized MNIST dataset of Murray and Salakhutdinov (2009).
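For reference, here is a minimal sketch of the Sequential MNIST setup as we read it (synthetic stand-in data and predictions, our assumptions): each 28×28 binary digit becomes a 784-step sequence, and the model's NLL is a sum of per-pixel Bernoulli terms.

```python
import numpy as np

rng = np.random.default_rng(0)
digit = (rng.random((28, 28)) < 0.2).astype(float)   # stand-in binarized digit
seq = digit.reshape(-1)                              # 784-step pixel sequence

# Stand-in per-step predictions P(pixel_t = 1 | pixels_<t) from some model.
p = np.clip(rng.random(seq.shape), 1e-6, 1 - 1e-6)
nll = -np.sum(seq * np.log(p) + (1 - seq) * np.log(1 - p))
print(f"sequence length: {seq.size}, NLL: {nll:.1f} nats")
```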
We selected hyperparameters for our model on the validation set and elected to use 512 hidden states and a learning rate of 0.0001. For all experiments we used a 3-layer GRU as our generator. Unlike in our other experiments, we used a convolutional network for the discriminator instead of a bidirectional RNN, as the pixels have a 2D spatial structure. In Table 1, we note that our model achieves the second-best reported likelihood on this task, after the PixelRNN, which used a significantly more complicated architecture for its generator (van den Oord et al., 2016). Combining Professor Forcing with the PixelRNN would be an interesting area for future research. However, the PixelRNN parallelizes computation in the teacher forcing network in a way that does not carry over to the sampling network. Because Professor Forcing requires running the sampling network during training, naively combining Professor Forcing with the PixelRNN would be very slow.

Figure 4: Samples with Teacher Forcing (left) and Professor Forcing (right) on Sequential MNIST.

Table 2: Human evaluation study results for handwriting generation.

Response                               Percent    Count
Professor Forcing Much Better          19.7       151
Professor Forcing Slightly Better      57.2       439
Teacher Forcing Slightly Better        18.9       145
Teacher Forcing Much Better            4.3        33
Total                                  100.0      768

4.4 Handwriting Generation

With this task we wanted to investigate whether Professor Forcing could be used to perform domain adaptation from a training set with short sequences to sampling much longer sequences. We train the Teacher Forcing model on only 50 steps of text-conditioned handwriting (corresponding to a few letters) and then sample for 1000 time steps. We let the model learn a sequence of (x, y) coordinates together with binary indicators of pen-up vs. pen-down, using the standard IAM-OnDB handwriting dataset, which consists of 13,040 handwritten lines written by 500 writers (Liwicki and Bunke, 2005). For our teacher forcing model, we use the open-source implementation of Brebisson (2016) with its hyperparameters, which are based on the model in Graves (2013). For the Professor Forcing model, we sample for 1000 time steps and run a separate discriminator on non-overlapping segments of length 50 (the number of steps used in the teacher forcing model). We performed a human evaluation study on the handwriting samples: we gave 48 volunteers 16 randomly selected Professor Forcing samples, randomly paired with 16 Teacher Forcing samples, and asked them to indicate which sample was of higher quality and whether it was "much better" or "slightly better". Both models had equal training time, and samples were drawn using the same procedure. Volunteers were not aware of which samples came from which model; see Table 2 for results.

4.5 Music Synthesis on Raw Waveforms

We considered the task of vocal synthesis on raw waveforms. For this task we used three hours of monk chanting audio scraped from YouTube (https://www.youtube.com/watch?v=9-pD28iSiTU). We sampled the audio at a rate of 1 kHz and took four seconds for each training and validation example. On each time step of the raw audio waveform we binned the signal's value into 8000 bins with boundaries drawn uniformly between the smallest and largest signal values in the dataset. We then model the raw audio waveform as a 4000-length sequence with 8000 potential values on each time step.

Figure 6: Music synthesis. Left: training likelihood curves. Right: validation likelihood curves.
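A short sketch of this preprocessing under one reading of the text (uniformly spaced bin edges between the dataset's extreme signal values; a synthetic waveform as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
waveform = rng.standard_normal(4000)            # stand-in for 4 s of audio at 1 kHz
edges = np.linspace(waveform.min(), waveform.max(), 8000 + 1)
tokens = np.clip(np.digitize(waveform, edges) - 1, 0, 7999)
print(tokens.shape, tokens.min(), tokens.max()) # a 4000-step sequence over 8000 values
```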
We evaluated the quality of our vocal synthesis model using two criteria. First, we demonstrated a regularizing effect and an improvement in negative log-likelihood. Second, we observed an improvement in the quality of samples. We included a few randomly selected samples in the supplementary material and also performed a human evaluation of the samples. Visual inspection of samples is known to be a flawed method for evaluating generative models, because a generative model could simply memorize a small number of examples from the training set (or slightly modified examples from the training set) and achieve high sample quality; this issue is discussed in Theis et al. (2015). However, this is unlikely to be an issue with our evaluation, because our method also improved validation set likelihood, whereas a model that achieves quality samples by dropping coverage would have poorer validation set likelihood.

We performed the human evaluation by asking 29 volunteers to listen to five randomly selected teacher forcing samples and five randomly selected professor forcing samples (included in the supplementary materials) and then rate each sample from 1 to 3 on the basis of quality. The annotators were given the samples in random order and were not told which samples came from which algorithm. The human annotators gave the Professor Forcing samples an average score of 2.20, whereas they gave the Teacher Forcing samples an average score of 1.30.

Figure 7: Human evaluator ratings for vocal synthesis samples (higher is better). The height of each bar is the mean of the ratings and the error bar shows the spread of one standard deviation.

5 Conclusion

The idea of matching the behavior of a model when it is running on its own (making predictions, generating samples, etc.) versus when it is forced to be consistent with observed data is an old and powerful one. In this paper we introduce Professor Forcing, a novel instance of this idea where the model of interest is a recurrent generative one, and which relies on training an auxiliary model, the discriminator, to spot the differences in behavior between these two modes. A major motivation for this approach is that the discriminator can look at the statistics of the behavior and not just at the single-step predictions, forcing the generator to behave the same when it is constrained by the data and when it is left generating outputs by itself for sequences that can be much longer than the training sequences. This naturally produces better generalization over sequences that are much longer than the training sequences, as we have found. We have also found that it helped to generalize better in terms of one-step prediction (log-likelihood), even though we are adding a possibly conflicting term to the log-likelihood training objective. This suggests that it acts like a regularizer, but a very interesting one, because it can also greatly speed up convergence in terms of the number of training updates. We validated the advantage of Professor Forcing over traditional teacher forcing on a variety of sequential learning and generative tasks, with particularly impressive results in acoustic generation, where the training sequences are much shorter (because of memory constraints) than the length of the sequences we actually want to generate.

Acknowledgments

We thank Martin Arjovsky, Dzmitry Bahdanau, Nan Rosemary Ke, José Manuel Rodríguez Sotelo, Alexandre de Brébisson, Olexa Bilaniuk, Hal Daumé III, Kari Torkkola, and David Krueger.

References
Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., and Marchand, M. (2014). Domain-Adversarial Neural Networks. ArXiv e-prints.
Al-Rfou, R., Alain, G., Almahairi, A., et al. (2016). Theano: A Python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688.
Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Bahdanau, D., Chorowski, J., Serdyuk, D., Brakel, P., and Bengio, Y. (2015). End-to-end attention-based large vocabulary speech recognition. arXiv preprint arXiv:1508.04395.
Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., Courville, A., and Bengio, Y. (2016). An Actor-Critic Algorithm for Sequence Prediction. ArXiv e-prints.
Bengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179.
Bengio, Y., Léonard, N., and Courville, A. (2013). Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. ArXiv e-prints.
Brebisson, A. (2016). Conditional handwriting generation in Theano. https://github.com/adbrebs/handwriting.
Chen, X. and Lawrence Zitnick, C. (2015). Mind's eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2422–2431.
Cho, K., Van Merriënboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014a). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014b). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Chorowski, J. K., Bahdanau, D., Serdyuk, D., Cho, K., and Bengio, Y. (2015). Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pages 577–585.
Daumé, III, H., Langford, J., and Marcu, D. (2009). Search-based Structured Prediction. ArXiv e-prints.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2015). Domain-Adversarial Training of Neural Networks. ArXiv e-prints.
Germain, M., Gregor, K., Murray, I., and Larochelle, H. (2015). MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial networks. In NIPS 2014.
Graves, A. (2012). Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence. Springer.
Graves, A. (2013). Generating sequences with recurrent neural networks. Technical report, arXiv:1308.0850.
Gregor, K., Danihelka, I., Graves, A., and Wierstra, D. (2015). DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.
Huszár, F. (2015). How (not) to train your generative model: Scheduled sampling, likelihood, adversary? ArXiv e-prints.
Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Larochelle, H. and Murray, I. (2011). The neural autoregressive distribution estimator.
Liwicki, M. and Bunke, H. (2005). IAM-OnDB: an on-line English sentence database acquired from handwritten text on a whiteboard. In Eighth International Conference on Document Analysis and Recognition (ICDAR'05), pages 956–961, Vol. 2.
Mikolov, T. (2010). Recurrent neural network based language model.
Mikolov, T. and Zweig, G. (2012). Context dependent recurrent neural network language model.
Murray, I. and Salakhutdinov, R. R. (2009). Evaluating probabilities under high-dimensional latent variable models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1137–1144. Curran Associates, Inc.
Raiko, T., Yao, L., Cho, K., and Bengio, Y. (2014). Iterative neural autoregressive distribution estimator NADE-k. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 325–333. Curran Associates, Inc.
Ross, S., Gordon, G. J., and Bagnell, J. A. (2010). A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. ArXiv e-prints.
Salimans, T., Kingma, D. P., and Welling, M. (2014). Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Theis, L., van den Oord, A., and Bethge, M. (2015). A note on the evaluation of generative models. ArXiv e-prints.
van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. (2016). Pixel Recurrent Neural Networks. ArXiv e-prints.
Williams, R. J. and Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2), 270–280.
Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044.
HOW THE CATFISH TRACKS ITS PREY: AN INTERACTIVE "PIPELINED" SYSTEM MAY DIRECT FORAGING VIA RETICULOSPINAL NEURONS.

Jagmeet S. Kanwal
Dept. of Cellular & Structural Biology, Univ. of Colorado, Sch. of Medicine, 4200 East Ninth Ave., Denver, CO 80262.

ABSTRACT

Ictalurid catfish use a highly developed gustatory system to localize, track and acquire food from their aquatic environment. The neural organization of the gustatory system illustrates well the importance of the four fundamental ingredients (representation, architecture, search and knowledge) of an "intelligent" system. In addition, the "pipelined" design of the architecture illustrates how a goal-directed system effectively utilizes interactive feedback from its environment. Anatomical analysis of the neural networks involved in target-tracking indicated that reticular neurons within the medullary region of the brainstem mediate connections between the gustatory (sensory) inputs and the motor outputs of the spinal cord. Electrophysiological analysis suggested that these neurons integrate selective spatio-temporal patterns of sensory input transduced through a rapidly adapting peripheral filter (responding tonically only to a continuously increasing stimulus concentration). The connectivity and response patterns of reticular cells and the nature of the peripheral taste response suggest a unique "gustation-seeking" function of reticulospinal cells, which may enable a catfish to continuously track a stimulus source once its directionality has been computed.

INTRODUCTION

Food search is an example of a broad class of behaviors generally classified as goal-directed behaviors. Goal-directed behavior is frequently exhibited by animals, humans and some machines. Although a preprogrammed, hard-wired machine may achieve a particular goal in a relatively short time, the general and heuristic nature of complex goal-directed tasks is best exhibited by animals and best studied in some of the less advanced animal species, such as fishes, where anatomical, electrophysiological and behavioral analyses can be performed relatively accurately and easily. Food search, which may lead to food acquisition and ingestion, is critical for the survival of an organism and, therefore, only highly successful systems are selected during the evolution of a species. The act of food search may be classified into two distinct phases: (i) orientation, and (ii) tracking (navigation and homing). In the channel catfish (the animal model utilized for this study), locomotion (swimming) is primarily controlled by the large forked caudal fin, which also mediates turning and directional swimming. © American Institute of Physics 1988

Both these forms of movement, which constitute the essential movements of target-tracking, involve control of the hypaxial/epiaxial muscles of the flank. The alternate contraction of these muscles causes caudal fin undulations. Each cycle of the caudal fin undulation provides either a symmetrical or an asymmetrical bilateral thrust. The former provides a net thrust forward, along the longitudinal axis of the fish, causing it to move ahead, while the latter biases the direction of movement towards the right or left side of the fish.
Fig. 1. Schematic representation of possible pathways for the gustatory modulation of foraging in the catfish (HRP injection and recording sites indicated).

Ictalurid catfishes possess a well-developed gustatory system and use it to locate and acquire food from their aquatic environment1,2. Behavioral evidence also indicates that ictalurid catfishes can detect small intensity (stimulus concentration) differences across their barbels (interbarbel intensity differences), and may use this or other extraoral taste information to compute directionality in space and track a gustatory stimulus source1. In other words, based upon the analysis of locomotion, it may be inferred that during food search the gustatory sense of the catfish influences the duration and degree of asymmetrical or symmetrical undulations of the caudal fin, besides controlling reflex turns of the head and flank. Since directional swimming is ultimately dependent upon movement of the large caudal fin, it may be postulated that, if the gustatory system is to coordinate food tracking, gustato-spinal connections exist up to the level of the caudal fin of the catfish (Fig. 1).

The objectives of this study were (i) to reconsider the functional organization of the gustatory system within the constraints of the four fundamental ingredients (representation, architecture, search and knowledge) of a naturally or artificially "intelligent" agent, (ii) to test the existence of the postulated gustato-spinal connections, and (iii) to delineate as far as possible, using neuroanatomical and electrophysiological techniques, the neural mechanism(s) involved in the control of goal-directed (foraging) behavior.

ORGANIZATIONAL CONSIDERATIONS

I. REPRESENTATION

Representation refers to the translation of a particular task into information structures and information processes, and determines to a great extent the efficiency and efficacy with which a solution to the task can be generated4. The elaborate and highly sensitive taste system of an ictalurid catfish consists of an extensive array of chemo- and mechanosensory receptors distributed over most of the extraoral as well as oral regions of the epithelium2,5. Peripherally, branches of the facial nerve (which innervates all extraoral taste buds) respond to a wide range of stimulus (amino acid) concentrations7, i.e. from 10^-9 to 10^-3 M. The taste activity, however, adapts rapidly (phasic response) to ongoing stimulation at the same concentration (Fig. 2) and responds tonically only to continuously increasing concentrations of stimuli, such as L-arginine and L-alanine.

Fig. 2. Integrated facial taste recordings during continuous application of amino acids to the palate and nasal barbel, showing the phasic nature of the taste responses of the ramus palatinus (rp) and ramus ophthalmicus superficialis (ros), respectively.
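As a toy model of such a rapidly adapting filter (our modeling assumption, not taken from the paper), the response can be written as a leaky integration of the positive rate of change of concentration, so that a concentration step produces a transient (phasic) response while a steady ramp produces a sustained (tonic) one:

```python
def phasic_response(concentration, decay=0.5):
    """Leaky integration of the positive increments of the stimulus."""
    resp, prev, out = 0.0, concentration[0], []
    for c in concentration:
        resp = decay * resp + max(c - prev, 0.0)   # respond only to increases
        out.append(round(resp, 3))
        prev = c
    return out

print(phasic_response([0, 1, 1, 1, 1, 1]))  # step: transient response, then decay
print(phasic_response([0, 1, 2, 3, 4, 5]))  # ramp: sustained (tonic) response
```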
Gustatory information from the extraoral and oral epithelium is "pipelined" into two separate subsystems, facial and glossopharyngeal-vagal, respectively. Each subsystem processes a subset of the incoming information (extraoral or oral) and coordinates a different component of food acquisition. Food search is accomplished by the extraoral subsystem, while selective ingestion is accomplished by the oral subsystem (Fig. 3). The extraoral gustatory information terminates in the facial lobe, where it is represented as a well-defined topographic map [9,10], while the oral information terminates in the adjacent vagal lobe, where it is represented as a relatively diffuse map [11].

II. ARCHITECTURE

The information represented in an information structure eventually requires an operating frame (architecture) within which to select and carry out the various processes. In ictalurid catfish, partially processed information from the primary gustatory centers (facial and vagal lobes) in the medullary region of the brainstem converges along ascending and descending pathways (Fig. 4). One of the centers in the ascending pathways is the secondary gustatory nucleus in the isthmic region, which is connected to the corresponding nucleus of the opposite side via a large commissure [12,13]. Facial and vagal gustatory information crosses over to the opposite side via this commissure, thus making it possible for neurons to extract information about interbarbel or interflank intensity differences. Although neurons in this region are known to have large receptive fields [14], the exact function of this large commissural nucleus is not yet clearly established. It is quite clear, however, that gustatory information is at first "pipelined" into separate regions where it is processed in parallel [15] before converging onto neurons in the ascending (isthmic) and descending (reticular) processors as well as other regions within the medulla. The "pipelined" architecture underscores the need for differential processing of subsets of sensory inputs, which are consequently integrated to coordinate temporal transitions between the various components of goal-directed behavior.

[Figs. 3 and 4 appear here: Fig. 3, schematic relating sensory input (extraoral taste buds, VII, facial lobe; oral taste buds, IX-X, vagal lobe) to the behavioral output components (food search and pick-up vs. selective ingestion); Fig. 4, schematic of the ascending and descending central gustatory pathways.]

III. SEARCH

An important task underlying all "intelligent" goal-directed activity is that of search. In artificial systems this involves application of several general problem-solving methods such as means-end analysis, generate-and-test methods and heuristic search methods. No attempt, as yet, has been made to fit any of these models to the food-tracking behavior of the catfish. However, behavioral observations suggest that the catfish uses a combinatorial approach, resulting in a different yet optimal foraging strategy each time. What is interesting about biological models is that the intrinsic search strategy is expressed extrinsically by the behavior of the animal which, with a few precautions, can be observed quite easily. In addition, simple manipulations of either the animal or its environment can provide interesting data about the search
strategy/ies being used by the animal, which in turn can highlight some of the computational (neuronal) search strategies adopted by the brain; e.g., the catfish seems to minimize the probability of failure by continuously interacting with the environment so as to be able to correct any computational or knowledge-based errors.

IV. KNOWLEDGE

If an "intelligent" goal-directed system resets to zero knowledge before each search trial, its success would depend entirely upon the information obtained over the time period of a search. Such a system would also require a labile architecture to process the varying sets of information generated during each search. For such a system, the solution space can become very large and, given the constraints of time (generally an important criterion in biological systems), this can lead to continuous failure. For these reasons, knowledge becomes an important ingredient of an "intelligent" agent, since it can keep the search under control. For the gustatory system of the catfish too, randomly accessible knowledge, in combination with the immediately available information about the target, may play a critical role in the adoption of a successful search strategy. Although a significant portion of this knowledge is probably learned, it is not yet clear where and how this knowledge is stored in the catfish brain. The reduction in the solution space for a catfish which has gradually learned to find food in its environment may be attributed to the increase in the amount of knowledge, which to some extent may involve a restructuring of the neural networks during development.

EXPERIMENTAL METHODS

The methods employed for the present study are only briefly introduced here. Neuroanatomical tracing techniques exploit the phenomenon of axonal transport. Crystals of the enzyme horseradish peroxidase (HRP), or some other substance, when injected at a small locus in the brain, are taken up by the damaged neurons and transported anterogradely and retrogradely from cell bodies and/or axons at the injection site. In the present study, small superficial injections of HRP (Sigma, Type VI) were made at various loci in the facial lobe (FL) in separate animals. After a survival period of 3 to 5 days, the animals were sacrificed and the brains sectioned and reacted for visualization of the neuronal tracer. In this manner, complex neural circuits can be gradually delineated. Electrophysiological recordings from neurons in the central nervous system were obtained using heat-pulled glass micropipettes. These glass electrodes had a tip diameter of approximately 1 um and an impedance of less than 1 megohm when filled with an electrolyte (3M KCl or 3M NaCl). Chemical stimulation of the receptive fields was accomplished by injection of stimuli (amino acids, amino acid mixtures and liver or bait-extract solutions) into a continuous flow of well-water over the receptive epithelium. Tactile stimulation was performed by gentle strokes of a sable hair brush or a glass probe.

EXPERIMENTAL OBSERVATIONS

Injections of HRP into the spinal cord labelled two relevant populations of cells: (i) in the ipsilateral reticular formation at the level of the facial lobe (FL), and (ii) a few large scattered cells within the ipsilateral, rostral portion of the lateral lobule of the FL (Fig. 5).
Injection of HRP at several sites within the FL resulted in the identification of a small region in the FL from where anterogradely filled fibers project to the reticular formation (Fig. 5). Superimposition of these injection sites onto the anatomical map of the extraoral surface of the catfish indicated that this small region, within the facial lobe, corresponds to the snout region of the extraoral surface.

[Fig. 5 appears here: facio-reticular and facio- & reticulo-spinal projections, with HRP injection sites marked.]

Fig. 5. Schematic chartings showing labelled cell bodies (squares) and fibers (dots) in transverse sections through the medulla. Abbreviations: CB, cerebellum; LL, lateral line lobe; FL, facial lobe; RF, reticular formation; SpC, spinal cord; VL, vagal lobe.

[Figs. 6A and 6B appear here: Fig. 6A, receptive fields of reticular units over the snout and flank; Fig. 6B, sample unit responses to a water squirt to the head, a gliding touch to the flank, liver extract and an amino acid mixture applied to the snout, touch to the snout, and a control application.]

Multiunit electrophysiological recordings from various anteroposterior levels of the reticular formation indicated that the snout region (upper lip and proximal portion of the maxillary barbels) of the catfish projects to a disproportionately large region of the reticular formation, along with a mixed representation of the flank (Fig. 6A). Single unit recordings indicated that some neurons have receptive fields restricted to a bilateral portion of the snout region, while others had large receptive fields extending over the whole flank or over an anteroposterior half of the body (Fig. 6B).

DISCUSSION

The experimental results obtained here suggest that facial lobe projections to the reticular formation form a functional connection. The reticular neurons project to the spinal cord and, most likely, influence the general cycle of swimming-related activity of motoneurons within the spinal cord [16]. The disproportionately large representation of the snout region within the medullary reticular formation, as determined electrophysiologically, is consistent with the anatomical data indicating that most of the fibers projecting to the reticular formation originate from cells in that portion of the facial lobe where the snout region is mapped. The lateral lobule of the facial lobe gives rise to a second pathway which projects directly into the spinal cord, up to the level of the anterior end of the caudal fin, and may coordinate reflexive turning. The significance of the present results is best understood when considered together with previously known information about the anatomy and electrophysiology of the gustatory system. The information presented above is used to propose a model (Fig. 7) for a mechanism that may be involved during the homing phase of target tracking by the catfish.
During homing, which refers to the last phase of target-tracking during food search, it may be assumed that the fish is rapidly approaching its target or moving through a steep signal intensity (stimulus concentration) gradient. The data presented above suggest that a neuronal mechanism exists which helps the catfish to lock on to the target during homing. This proposal is based upon the following considerations:

1. Owing to the rapidly adapting response of the peripheral filter, a tonic level of activity in the facial lobe input can occur only when the animal is moving through an increasing concentration gradient of the gustatory stimulus.

2. Facial lobe neurons, which receive inputs from the snout region, project to a group of cells in the reticular formation. Activity in the facio-reticular pathway causes a suppression in the spontaneous activity of the reticular neurons.

3. Direct and/or indirect spinal projections from the reticular neurons are involved in the modulation of activity of those spinal motoneurons which coordinate swimming.

Thus, it may be hypothesized that during complete suppression of activity in a specific reticulospinal pathway, the fish swims straight ahead, but during excitation of certain reticulospinal neurons the fish turns, as dictated by the pattern of activation.

[Fig. 7 appears here: schematic of the proposed "gustation-seeking" reticulospinal mechanism.]

The snout region of the catfish has special significance because of its extensive representation in the reticular formation. In case the fish makes a random or computational error while approaching its target, the snout is the first region to move out of the stimulus gradient. Thus, the spinal motoneurons, teleologically speaking, "seek" a gustatory stimulus in order to suppress activity of certain reticulospinal neurons, which in turn reduce variations in the pattern of activity of swimming-related spinal motoneurons. Accordingly, in a situation where the fish is rapidly approaching a target, i.e., under the specific conditions of a continuously rising stimulus concentration at the snout region and an absence of a stimulus intensity difference across the barbels, there is a locking of the movement of the body (of the fish) towards the stationary or moving target (food or prey). It should be pointed out, however, that the empirical data available so far only offer clues to the target-tracking mechanism proposed here. Clearly, more research is needed to validate this proposal and to identify other mechanisms of target-tracking utilized by this biological system.
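Read as an algorithm, considerations 1-3 describe a simple feedback controller: tonic snout input (a rising concentration) suppresses reticulospinal activity and locks the fish on a straight course, while loss of that input releases reticulospinal firing and produces a turn. The sketch below is a speculative, deliberately coarse rendering of that control loop; the turn rule and thresholds are assumptions for illustration only.

def reticulospinal_steering(conc_at_snout, interbarbel_diff, prev_conc):
    """One control step of the proposed 'gustation-seeking' loop.
    Returns a turn command in [-1, 1] (0 = swim straight ahead)."""
    rising = conc_at_snout > prev_conc  # tonic facial-lobe drive (consideration 1)
    if rising and abs(interbarbel_diff) < 1e-3:
        # Facio-reticular suppression (consideration 2): reticulospinal
        # neurons silent, symmetric caudal-fin thrust, locked on target.
        return 0.0
    # Reticulospinal release (consideration 3): bias the undulation
    # toward the side with the stronger taste signal.
    return 1.0 if interbarbel_diff > 0 else -1.0

# Toy episode: concentration rises while headed at the source, so the
# fish holds course; when the snout leaves the gradient it turns back.
prev, track = 0.0, []
for conc, diff in [(0.1, 0.0), (0.2, 0.0), (0.15, 0.4), (0.3, 0.0)]:
    track.append(reticulospinal_steering(conc, diff, prev))
    prev = conc
print(track)  # [0.0, 0.0, 1.0, 0.0]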
This research was supported in part by NIH Grant NS15258 to T. E. Finger.

REFERENCES

1. P. B. Johnsen and J. H. Teeter, J. Comp. Physiol. 140, 95 (1981).
2. J. Atema, Brain Behav. Evol. 4, 273-294 (1971).
3. J. E. Bardach et al., Science 155, 1276-1278 (1967).
4. A. Newell, McGraw-Hill Encyclopedia of Electronics and Computers (1984), p. 71-74.
5. C. J. Herrick, Bull. U.S. Fish Comm. 22, 237-272 (1904).
6. J. Caprio, Comp. Biochem. Physiol. 52A, 247-251 (1975).
7. C. J. Davenport and J. Caprio, J. Comp. Physiol. 147, 217 (1982).
8. J. S. Kanwal and J. Caprio, Brain Res. 406, 105-112 (1987).
9. T. E. Finger, J. Comp. Neurol. 165, 513-526 (1976).
10. T. Marui and J. Caprio, Brain Res. 231, 185-190 (1982).
11. J. S. Kanwal and J. Caprio, J. Neurobiol., in press (1988).
12. C. J. Herrick, J. Comp. Neurol. 15, 375-456 (1905).
13. C. J. Herrick, J. Comp. Neurol. 16, 403-440 (1906).
14. C. F. Lamb and J. Caprio, ISOT, #P70 (1986).
15. T. E. Finger and Y. Morita, Science 227, 776-778 (1985).
16. P. S. G. Stein, Handbook of the Spinal Cord (Marcel Dekker Inc., N.Y., 1984), p. 647.
Parameterising Feature Sensitive Cell Formation in Linsker Networks in the Auditory System

Lance C. Walton, University of Kent at Canterbury, Canterbury, Kent, England
David L. Bisset, University of Kent at Canterbury, Canterbury, Kent, England

Abstract

This paper examines and extends the work of Linsker (1986) on self-organising feature detectors. Linsker concentrates on the visual processing system, but infers that the weak assumptions made will allow the model to be used in the processing of other sensory information. This claim is examined here, with special attention paid to the auditory system, where there is much lower connectivity and therefore more statistical variability. On-line training is utilised, to obtain an idea of training times. These are then compared to the time available to pre-natal mammals for the formation of feature sensitive cells.

1 INTRODUCTION

Within the last thirty years, a great deal of research has been carried out in an attempt to understand the development of cells in the pathways between the sensory apparatus and the cortex in mammals. For example, theories for the development of feature detectors were forwarded by Nass and Cooper (1975), by Grossberg (1976) and more recently by Obermayer et al (1990). Hubel and Wiesel (1961) established the existence of several different types of feature sensitive cell in the visual cortex of cats. Various subsequent experiments have shown that a considerable amount of development takes place before birth (i.e. without environmental input). This must either be dependent on a genetic predisposition for individual cells to develop in an appropriate way without external influence, or on some low-level rules sufficient to create the required cell morphologies in the presence of random action potentials. Although there is a great deal of a priori information concerning axon growth and synapse arborisation (governed by chemical means in the brain), it is difficult to conceive of a biological system that could use genetic information to directly manipulate the spatial information about the pre-synaptic target with respect to the axon with which the synapse is made. However, there is considerable random activity in the sensory apparatus that could be used to effect synaptic development.

Various authors have constructed models that deal with different aspects of self-organisation of this kind, and some have pointed out the value of these types of cells in pattern classification problems (Grossberg 1976), but either the biological plausibility of these models is questionable, or the subject of pre-natal development (i.e. without environmental input) is not addressed. In this paper, the networks of Linsker (1986) will be examined. Although these networks have been analysed quite extensively by Linsker, and also by MacKay and Miller (1990), the biological aspects of parameter ranges and choices have only been touched upon. It is our aim in this paper to add further detail in this area by examining the one-dimensional case, which represents the auditory pathways.

2 LINSKER NETWORKS

The network is based on a Multi Layer Perceptron, with feed-forward connections in all layers, and lateral connections (inhibition and excitation) in higher layers. The neural outputs are sums of the weighted inputs, and the weights develop according to a constrained Hebbian rule. Each layer is lettered for reference starting from A, and subsequent layers are lettered B, C, D etc.
The superscript M will be used to refer to an arbitrary layer, and L is used to refer to the previous layer. Each layer has a set of parameters which are the same for all neurons in that layer. Connectivity is random but is based on a Gaussian density distribution, exp(−r²/r_M²), where r_M is the arbor radius for layer M. Each layer is a rectangular array of neurons (or a vector of neurons in the one-dimensional case). The layers are assumed to be large enough so that edge effects are not important or do not occur. Layers develop one at a time, starting from the B layer. The A layer is an input layer, which is divided into boxes, within each of which activity is uniform. This is biologically realistic, since sensory neurons fan out to a number of cells (an average of 10 in the cochlea), each of which takes input from only one sensory cell. Hence the input layer for the network acts like a layer of tonotopically organised neurons.

3 NETWORK DEVELOPMENT

The output of a neuron in layer M is given by

    F_n^{Mπ} = R_a + R_b Σ_j c_{nj} F_{pre(nj)}^{Lπ},    (1)

where π indexes a pattern presentation, the subscript n indexes the M layer neurons, R_a and R_b are layer parameters, and F_{pre(nj)}^{Lπ} is the output of the L layer neuron which is pre-synaptic to the j'th input of the n'th M layer neuron. The synaptic weights develop according to a constrained Hebbian learning rule,

    (Δc_{ni})^π = k_a + k_b (F_n^{Mπ} − F_0^M)(F_{pre(ni)}^{Lπ} − F_0^L),    (2)

where (Δc_{ni})^π is the change in the i'th weight of neuron n, and k_a, k_b, F_0^M, F_0^L are layer parameters. Synaptic weights are constrained to lie within the range (n_em − 1, n_em). (In this work, n_em = 0.5.)

Linsker (1986a) derives an Ensemble Averaged Development equation which shows how development depends on the parameters, and how correlations develop between spatially proximate neurons in layers beyond the first. In so doing, the number of parameters is reduced from five per layer to two per layer, and therefore the equation is a very useful aid in understanding the self-organising nature of this model. The development equation is

    ċ_{ni} = K_1 + K_2 c̄_n + (1/N_M) Σ_j Q^L_{pre(ni),pre(nj)} c_{nj},    (3)

    Q^L_{ij} := ⟨(F_i^{Lπ} − F̄^L)(F_j^{Lπ} − F̄^L)⟩ / f_0²,    (4)

where N_M is the number of synaptic connections to an M layer neuron, F̄^L is the average output activity in the L layer, c̄_n is the average weight of neuron n, and

    K_1 = [k_a + k_b (R_a − F_0^M)(F̄^L − F_0^L)] / (N_M k_b R_b f_0²),    (5)

    K_2 = F̄^L (F̄^L − F_0^L) / f_0².    (6)

Here f_0 is a unit of activity used to normalise the two-point correlation function; in this work it is chosen to set Q^L_{ii} = 1. Angle brackets denote an average taken over the ensemble of input patterns.
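A direct transcription of equations (1) and (2) makes the training loop concrete: each presentation propagates activity through the Gaussian-arbor connections and then applies the clipped Hebbian update. The sketch below follows that recipe for a single layer in the one-dimensional case; the specific parameter values are placeholders, not the ones used in the simulations reported later.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_connections(n_post, n_pre, r_arbor, n_syn):
    """Sample n_syn pre-synaptic indices per post-synaptic neuron with
    Gaussian density exp(-r^2 / r_arbor^2) around its position."""
    pre = np.empty((n_post, n_syn), dtype=int)
    for n in range(n_post):
        offsets = rng.normal(0.0, r_arbor / np.sqrt(2.0), size=n_syn)
        pre[n] = np.clip(np.round(n + offsets), 0, n_pre - 1).astype(int)
    return pre

def train_layer(F_pre, pre, Ra=0.0, Rb=1.0, ka=0.0, kb=1e-3,
                F0M=0.0, F0L=0.5, n_em=0.5):
    """On-line Hebbian development following equations (1) and (2)."""
    n_post, n_syn = pre.shape
    c = rng.uniform(n_em - 1.0, n_em, size=(n_post, n_syn))  # initial weights
    for f in F_pre:                       # f: L-layer activities for one pattern
        F_post = Ra + Rb * (c * f[pre]).sum(axis=1)              # equation (1)
        dc = ka + kb * (F_post[:, None] - F0M) * (f[pre] - F0L)  # equation (2)
        c = np.clip(c + dc, n_em - 1.0, n_em)                    # weight constraint
    return c

pre = gaussian_connections(n_post=50, n_pre=200, r_arbor=10.0, n_syn=20)
patterns = rng.integers(0, 2, size=(1000, 200)).astype(float)  # binary inputs
weights = train_layer(patterns, pre)
print(weights.shape, weights.min(), weights.max())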
4 MORPHOLOGICAL REGIMES

From equation 3, an expression can be found for the average weight value c̄ in a layer, and therefore certain properties of the system can be described. Although MacKay and Miller (1990) have described the regimes with the aid of eigenvalues and eigenfunctions, there is a much simpler method which will provide the same information. For an all-excitatory (AE) layer, the average weight value is equal to n_em. Since all weights are equal to n_em, the summation in equation 3 can be re-written as Σ_j Q^L_{pre(ni),pre(nj)} c_{nj} = n_em N_M q̄, where q̄ denotes the average of Q^L over the connected pairs (a quantity determined by the arbor radii r_B and r_C and by N_C). A similar expression can be found for all-inhibitory (AI) layers, and therefore the K_1–K_2 plane can be sub-divided into three regions which will yield AE cells, AI cells, and mixed-mode cells (see Figure 1).

The plane can be divided further for the mixed-mode cell type in the C layer. On-center and off-center cells develop close to the AE and AI boundaries respectively. MacKay and Miller have shown why these cells develop and have placed a theoretical lower bound on c̄ which agrees with experimental data. However, in so doing, the effect of the intercept on the K_2 axis was deemed small, due to a large number of synaptic connections. This approximation depends upon the large number of connections between the B and C layers. In the auditory case, the number of connections is smaller, and it is possible that this assumption no longer holds.

From equation 3, it can be seen that movement into the on-centre region from the AE region causes the value of Σ_j Q^L_{pre(ni),pre(nj)} c_{nj} to decrease. This has the effect of moving the intercept of the constant-c̄ line from K_2 = q̄ towards K_2 = 0. K_2 finally reaches 0 when c̄ = 0, and then begins to move back towards q̄ as the AI regime is approached.

This has two potentially important effects. Firstly, it means that the tolerance of K_2 varies with K_1; for a particular value of K_1, there are upper and lower limits on the value of K_2 which will allow maturation of on-center cells. This range of values (i.e. the difference between the limits) varies in a linear way with K_1, but the ratio of the range to a value of K_2 which is within the range (i.e. the center value) is not linear with K_1. Here, tolerance is defined as that ratio. Secondly, there is a region of negative K_2 where the nature of the cell morphology which will be produced is unknown. It is therefore important that |K_2| should be larger than this value in order to produce on-center or off-center cells reliably.

MacKay and Miller use |K_2| → ∞ in their analysis. Unfortunately, this would require the fundamental network parameter F_0^L → ∞ from equation 6, and therefore is an unsuitable choice. It is reasonable to assume that F_0^L is of the same order as F̄^L, and hence an order for K_2 can be established. For a concrete example, assume inputs are binary (giving f_0² = 0.25) and F_0^L = F̄^L × 1.2; this will ensure K_2 < 0 (equation 6) while adhering to the assumption made above. Equation 6 now gives the order |K_2| = 0.2. To find the value of q̄, which will place a lower bound on |K_2|, a particular system should be chosen. The auditory system is chosen here.

[Figure 1 appears here: the K_1–K_2 plane divided into AE, AI and mixed-mode regions.]

Figure 1: Graph of morphological regions for the C layer.
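The concrete example above is easy to verify numerically from equation (6). The snippet below does so, and also evaluates K_1 from equation (5) for an illustrative (assumed) parameter setting; it is a sanity check of the reconstructed formulas rather than part of the original analysis.

# Equation (6): K2 = F_bar_L * (F_bar_L - F0_L) / f0^2
F_bar_L = 0.5          # mean activity of binary {0,1} inputs with p = 0.5
F0_L = 1.2 * F_bar_L   # the paper's concrete choice
f0_sq = 0.25           # binary inputs: Var(F) = 0.25, so f0^2 = 0.25 sets Q_ii = 1
K2 = F_bar_L * (F_bar_L - F0_L) / f0_sq
print(K2)              # -0.2, i.e. |K2| = 0.2 as stated

# Equation (5) for an assumed, purely illustrative parameter set:
ka, kb, Ra, Rb, F0_M, N_M = 0.0, 1e-3, 0.0, 1.0, 0.0, 50
K1 = (ka + kb * (Ra - F0_M) * (F_bar_L - F0_L)) / (N_M * kb * Rb * f0_sq)
print(K1)              # 0.0 for this choice of ka, Ra, F0_M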
There are approximately 3000 inner hair cells in the cochlea, each of which fans out to an average of 10 neurons (which sets our box size p = 10). These neurons take input from only one hair cell. The anteroventral cochlear nucleus takes input from this layer of cells, with a fan-in N_B ≈ 50 (cf. the value of N_B = 1000 in Linsker (1986a)). The assumption is made that the three sections of the cochlear nucleus each contain approximately the same number of cells. With this smaller number of connections, the correlation function for this layer is somewhat coarser, and does not follow the theoretical curve for the continuum limit so well. In addition, the on-center cells found in the posteroventral cochlear nucleus and the dorsal nucleus have centres with a tuning-curve response Q of about 2.5, which corresponds to about 2000 B layer cells.

If it is assumed that the surround of the cell is half the width of the core, then a C cell draws on a total of about 3000 B-layer neurons. Simulations here use N_C = 100, which is a realistic number of connections in the context of a one-dimensional network. In general, the arbor radius increases as layers become closer to the cortex. From Linsker, r_C/r_B = 3; r_B is therefore equal to 1000. This yields the average number of connections to a given B cell from a particular A box being approximately unity, which agrees well with the condition expressed by Linsker.

Using the expression above, q̄ can be calculated as approximately 1.5 × 10⁻³. This value is certainly insignificant with respect to the value of K_2 = 0.2 quoted earlier, and therefore any effects due to the summation term in equation 3 can be ignored in the calculation of c̄ for this system. This means that the original approximation still holds even in this low-connectivity case.

5 SIMULATION RESULTS

A network was trained using the connectivity stated above to give various values of c̄ with K_2 = 0.2. To obtain an idea of the total number of presentations that were required to train the network, without any artifacts that might be produced as a result of batch training, the original network equations were used. In all of these simulations, R_a, F_0^M = 0, so that the value of K_1 could be easily controlled. The findings were that the maximum value of k_b was about 10⁻³, which required 2.5 million pattern presentations to mature the network. With this value, on-center cells with an average weight value less than about 0.3 would not mature. However, as the value of k_b was decreased (keeping K_1 constant), the value of c̄ could be made lower, at the expense of more pattern presentations. The figures obtained for the maturation of feature sensitive cells are extremely biologically realistic in the light of the number of pattern presentations available to an average mammal. For example, the foetal cat has sufficient time for about 25 million presentations (assuming 10 presentations per second).
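The feasibility claim above reduces to simple arithmetic, spelled out below; the 10 presentations-per-second rate and the gestation window are the paper's own figures, and the rest is direct calculation.

presentations_needed = 2.5e6      # to mature the network at kb = 1e-3
rate_per_second = 10              # assumed presentation rate
seconds_needed = presentations_needed / rate_per_second
print(seconds_needed / 3600, "hours of random activity needed")   # ~69.4 hours

available = 25e6                  # presentations available to a foetal cat
print(available / presentations_needed, "x safety margin")        # 10x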
6 CONCLUSION

We have shown that the class of network developed by Linsker is extendable to the auditory system, where the number and density of synapses is considerably smaller than in the visual case. It has also been shown that the time for layer maturation by this method is sufficiently short even for mammals with a relatively short gestation period, and therefore should also be sufficient in mammals with longer foetal development times. We conclude that the model is therefore a good representation of feature detector development in the pre-natal mammal.

References

Grossberg S. (1976) - On the Development of Feature Detectors in the Visual Cortex with Applications to Learning and Reaction-Diffusion Systems, Biological Cybernetics 21, 145-159

Grossberg S. (1976) - Adaptive Pattern Classification and Universal Recoding: 1. Parallel Development and Coding of Neural Feature Detectors, Biological Cybernetics 23, 121-134

Hubel D. H. and Wiesel T. N. (1961) - Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex, Journal of Physiology 160, 106-154

Kalil R. E. (1989) - Synapse Formation in the Developing Brain, Scientific American, December 1989, 38-45

Klinke R. (1986) - Physiology of Hearing. In Schmidt R. F. (ed.), Fundamentals of Sensory Physiology, 199-223

MacKay D. J. C. and Miller K. D. (1990) - Analysis of Linsker's Simulations of Hebbian Rules, Neural Computation 2, 173-187

von der Malsburg C. (1979) - Development of Ocularity Domains and Growth Behaviour of Axon Terminals, Biological Cybernetics 32, 49-62

Linsker R. (1986a) - From Basic Network Principles to Neural Architecture: Emergence of Spatial-Opponent Cells, Proceedings of the National Academy of Sciences (USA) 83, 7508-7512

Linsker R. (1986b) - From Basic Network Principles to Neural Architecture: Emergence of Orientation-Selective Cells, Proceedings of the National Academy of Sciences (USA) 83, 8390-8394

Linsker R. (1986c) - From Basic Network Principles to Neural Architecture: Emergence of Orientation Columns, Proceedings of the National Academy of Sciences (USA) 83, 8779-8783

Nass M. M. and Cooper L. N. (1975) - A Theory for the Development of Feature Detecting Cells in the Visual Cortex, Biological Cybernetics 19, 1-18

Obermayer K., Ritter H. and Schulten K. (1990) - Development and Spatial Structure of Cortical Feature Maps: A Model Study, NIPS 3, 11-17

Sloman A. (1989) - On Designing a Visual System (Towards a Gibsonian Computational Model of Vision), Journal of Experimental and Theoretical Artificial Intelligence 1, 289-337

Tanaka S. (1990) - Interaction among Ocularity, Retinotopy and On-Center/Off-Center Pathways During Development, NIPS 3, 18-25
Active Nearest-Neighbor Learning in Metric Spaces

Aryeh Kontorovich, Department of Computer Science, Ben-Gurion University of the Negev, Beer Sheva 8499000, Israel
Sivan Sabato, Department of Computer Science, Ben-Gurion University of the Negev, Beer Sheva 8499000, Israel
Ruth Urner, Max Planck Institute for Intelligent Systems, Department for Empirical Inference, Tübingen 72076, Germany

Abstract

We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier. We give prediction error guarantees that depend on the noisy-margin properties of the input sample, and are competitive with those obtained by previously proposed passive learners. We prove that the label complexity of MARMANN is significantly lower than that of any passive learner with similar error guarantees. Our algorithm is based on a generalized sample compression scheme and a new label-efficient active model-selection procedure.

1 Introduction

In this paper we propose a non-parametric pool-based active learning algorithm for general metric spaces, which outputs a nearest-neighbor classifier. The algorithm is named MArgin Regularized Metric Active Nearest Neighbor (MARMANN). In pool-based active learning [McCallum and Nigam, 1998] a collection of random examples is provided, and the algorithm can interactively query an oracle to label some of the examples. The goal is good prediction accuracy, while keeping the label complexity (the number of queried labels) low. MARMANN receives a pool of unlabeled examples in a general metric space, and outputs a variant of the nearest-neighbor classifier. The algorithm obtains a prediction error guarantee that depends on a noisy-margin property of the input sample, and has a provably smaller label complexity than any passive learner with a similar guarantee.

The theory of active learning has received considerable attention in the past decade [e.g., Dasgupta, 2004, Balcan et al., 2007, 2009, Hanneke, 2011, Hanneke and Yang, 2015]. Active learning has been mostly studied in a parametric setting (that is, learning with respect to a fixed hypothesis class with a bounded capacity). Various strategies have been analyzed for parametric classification [e.g., Dasgupta, 2004, Balcan et al., 2007, Gonen et al., 2013, Balcan et al., 2009, Hanneke, 2011, Awasthi et al., 2013]. An active model selection procedure has also been developed for the parametric setting [Balcan et al., 2010]. However, the number of labels used there depends quadratically on the number of possible model classes, which is prohibitive in our non-parametric setting.

The potential benefits of active learning for non-parametric classification in metric spaces are less well understood. The paradigm of cluster-based active learning [Dasgupta and Hsu, 2008] has been shown to provide label savings under some distributional clusterability assumptions [Urner et al., 2013, Kpotufe et al., 2015]. Certain active learning methods for nearest neighbor classification are known to be Bayes consistent [Dasgupta, 2012], and an active querying rule, based solely on information in the unlabeled data, has been shown to be beneficial for nearest neighbors under covariate shift [Berlind and Urner, 2015].
Castro and Nowak [2007] analyze minimax rates for a class of distributions in Euclidean space, characterized by decision boundary regularity and noise conditions. However, no active non-parametric strategy for general metric spaces, with label complexity guarantees for general distributions, has been proposed so far. Here, we provide the first such algorithm and guarantees.

The passive nearest-neighbor classifier is popular among theorists and practitioners alike [Fix and Hodges, 1989, Cover and Hart, 1967, Stone, 1977, Kulkarni and Posner, 1995]. This paradigm is applicable in general metric spaces, and its simplicity is an attractive feature for both implementation and analysis. When appropriately regularized [e.g. Stone, 1977, Devroye and Györfi, 1985, von Luxburg and Bousquet, 2004, Gottlieb et al., 2010, Kontorovich and Weiss, 2015] this type of learner can be made Bayes consistent. Another desirable property of nearest-neighbor-based methods is their ability to generalize at a rate that scales with the intrinsic data dimension, which can be much lower than that of the ambient space [Kpotufe, 2011, Gottlieb et al., 2014a, 2016a, Chaudhuri and Dasgupta, 2014]. Furthermore, margin-based regularization makes nearest neighbors ideally suited for sample compression, which yields a compact representation, faster classification runtime, and improved generalization performance [Gottlieb et al., 2014b, Kontorovich and Weiss, 2015]. The resulting error guarantees can be stated in terms of the sample's noisy-margin, which depends on the distances between differently-labeled examples in the input sample.

Our contribution. We propose MARMANN, a non-parametric pool-based active learning algorithm that obtains an error guarantee competitive with that of a noisy-margin-based passive learner, but can provably use significantly fewer labels. This is the first non-parametric active learner for general metric spaces that achieves prediction error that is competitive with passive learning for general distributions, and provably improves label complexity.

Our approach. Previous passive learning approaches to classification using nearest-neighbor rules under noisy-margin assumptions [Gottlieb et al., 2014b, 2016b] provide statistical guarantees using sample compression bounds [Graepel et al., 2005]. The finite-sample guarantees depend on the number of noisy labels relative to an optimal margin scale. A central challenge in the active setting is performing model selection (selecting the margin scale) with a low label complexity. A key insight that we exploit in this work is that by designing a new labeling scheme for the compression set, we can construct the compression set and estimate its error with label-efficient procedures. We obtain statistical guarantees for this approach using a generalized sample compression analysis. We derive a label-efficient (as well as computationally efficient) active model-selection procedure. This procedure finds a good scale by estimating the sample error for some scales, using a small number of active querying rounds. Crucially, unlike cross-validation, our model-selection procedure does not require a number of labels that depends on the worst possible scale, nor does it test many scales. This allows our label complexity bounds to be low, and to depend only on the final scale selected by the algorithm. Our error guarantee is a constant factor over the error guarantee of the passive learner of Gottlieb et al. [2016b]. An approach similar to Gottlieb et al.
[2016b], proposed in Gottlieb et al. [2014a], has been shown to be Bayes consistent [Kontorovich and Weiss, 2015]. The Bayes-consistency of the passive version of our approach is the subject of ongoing work.

Paper outline. We define the setting and notations in Section 2. In Section 3 we provide our main result, Theorem 3.2, giving error and label complexity guarantees for MARMANN. Section 4 shows how to set the nearest neighbor rule for a given scale, and Section 5 describes the model selection procedure. Some of the analysis is omitted due to lack of space. The full analysis is available at Kontorovich et al. [2016].

2 Setting and notations

We consider learning in a general metric space (X, ρ), where X is a set and ρ is the metric on X. Our problem setting is that of classification of the instance space X into some finite label set Y. Assume that there is some distribution D over X × Y, and let S ~ D^m be a labeled sample of size m, where m is an integer. Denote the sequence of unlabeled points in S by U(S). We sometimes treat S and U(S) as multisets, since the order is unimportant. The error of a classifier h : X → Y on D is denoted err(h, D) := P[h(X) ≠ Y], where (X, Y) ~ D. The empirical error on a labeled sample S instantiates to err(h, S) = (1/|S|) Σ_{(X,Y)∈S} I[h(X) ≠ Y]. A passive learner receives a labeled sample S_in as input. An active learner receives the unlabeled part of the sample U_in := U(S_in) as input, and is allowed to adaptively select examples from U_in and request their label from S_in. When either learner terminates, it outputs a classifier ĥ : X → Y, with the goal of achieving a low err(ĥ, D). An additional goal of the active learner is to achieve a performance competitive with that of the passive learner, while querying considerably fewer labels.

The diameter of a set A ⊆ X is defined by diam(A) := sup_{a,a'∈A} ρ(a, a'). Denote the index of the closest point in U to x ∈ X by κ(x, U) := argmin_{i : x_i ∈ U} ρ(x, x_i). We assume here and throughout this work that when there is more than one minimizer for ρ(x, x_i), ties are broken arbitrarily (but in a consistent fashion). For a set Z ⊆ X, denote κ(Z, U) := {κ(z, U) | z ∈ Z}. Any labeled sample S = ((x_i, y_i))_{i∈[k]} naturally induces the nearest-neighbor classifier h^nn_S : X → Y, via h^nn_S(x) := y_{κ(x, U(S))}.

For x ∈ X and t > 0, denote by ball(x, t) the (closed) ball of radius t around x: ball(x, t) := {x' ∈ X | ρ(x, x') ≤ t}. The doubling dimension, the effective dimension of the metric space, which controls generalization and runtime performance of nearest-neighbors [Kpotufe, 2011, Gottlieb et al., 2014a], is defined as follows. Let λ = λ(X) be the smallest number such that every ball in X can be covered by λ balls of half its radius, where all balls are centered at points of X. Formally,

    λ(X) := min{λ ∈ ℕ : ∀x ∈ X, r > 0, ∃x_1, ..., x_λ ∈ X : ball(x, r) ⊆ ∪_{i=1}^{λ} ball(x_i, r/2)}.

Then the doubling dimension of X is defined by ddim(X) := log₂ λ. In line with modern literature, we work in the low-dimension, big-sample regime, where the doubling dimension is assumed to be constant, and hence sample complexity and algorithmic runtime may depend on it exponentially. This exponential dependence is unavoidable, even under margin assumptions, as previous analysis [Kpotufe, 2011, Gottlieb et al., 2014a] indicates.

A set A ⊆ X is t-separated if inf_{a,a'∈A : a≠a'} ρ(a, a') ≥ t. For A ⊆ B ⊆ X, the set A is a t-net of B if A is t-separated and B ⊆ ∪_{a∈A} ball(a, t).
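A t-net as just defined can be built greedily: scan the points, and keep a point only if it is at least t-far from every point kept so far. The kept points are t-separated by construction and cover the rest, so they form a t-net. The brute-force sketch below illustrates the definition only; the more efficient constructions cited in the next paragraph avoid its quadratic cost, and the function names here are ours, not the authors' implementation.

from math import dist  # Euclidean metric as a stand-in for a general rho

def net(U, t, rho=dist):
    """Greedy t-net of the point multiset U under metric rho."""
    centers = []
    for x in U:
        if all(rho(x, c) >= t for c in centers):  # keep x only if t-separated
            centers.append(x)
    return centers  # every u in U is within t of some center

def partition(U, centers, rho=dist):
    """Par(U, t): index of the nearest net point, ties broken by order."""
    return [min(range(len(centers)), key=lambda i: rho(u, centers[i])) for u in U]

U = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.05, 0.1), (3.0, 3.0)]
N = net(U, t=0.5)
print(N)                    # [(0.0, 0.0), (1.0, 0.0), (3.0, 3.0)]
print(partition(U, N))      # [0, 0, 1, 1, 2]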
Constructing a minimum size t-net for a general set B is NP-hard [Gottlieb and Krauthgamer, 2010]; however, efficient procedures exist for constructing some t-net [Krauthgamer and Lee, 2004, Gottlieb et al., 2014b]. The size of any t-net is at most 2^{ddim(B)} times the smallest possible size (see Kontorovich et al. [2016]). In addition, the size of any t-net is at most ⌈diam(B)/t⌉^{ddim(X)+1} [Krauthgamer and Lee, 2004]. Throughout the paper, we fix a deterministic procedure for constructing a t-net, and denote its output for a multiset U ⊆ X by Net(U, t). Let Par(U, t) be a partition of X into regions induced by Net(U, t), that is: for Net(U, t) = {x_1, ..., x_N}, define Par(U, t) := {P_1, ..., P_N}, where P_i = {x ∈ X | κ(x, Net(U, t)) = i}. For t > 0, denote N(t) := |Net(U_in, t)|. For a labeled multiset S ⊆ X × Y and y ∈ Y, denote S^y := {x | (x, y) ∈ S}; in particular, U(S) = ∪_{y∈Y} S^y.

3 Main results

Non-parametric binary classification admits performance guarantees that scale with the sample's noisy-margin [von Luxburg and Bousquet, 2004, Gottlieb et al., 2010, 2016b]. We say that a labeled multiset S is (ν, t)-separated, for ν ∈ [0, 1] and t > 0 (representing a margin t with noise ν), if one can remove a ν-fraction of the points in S, and in the resulting multiset, points with different labels are at least t-far from each other. Formally, S is (ν, t)-separated if there exists a subsample S̃ ⊆ S such that |S \ S̃| ≤ ν|S| and ∀y₁ ≠ y₂ ∈ Y, a ∈ S̃^{y₁}, b ∈ S̃^{y₂}, we have ρ(a, b) ≥ t. For a given labeled sample S, denote by ν(t) the smallest value ν such that S is (ν, t)-separated. Gottlieb et al. [2016b] propose a passive learner with the following guarantees as a function of the separation of S. Setting γ := m/(m − N), define the following form of a generalization bound:

    GB(ε, N, δ, m, k) := γε + 3γ √( ε((N + 1) log(mk) + log(1/δ)) / (2(m − N)) ) + 2γ ((N + 1) log(mk) + log(1/δ)) / (3(m − N)).

Theorem 3.1 (Gottlieb et al. [2016b]). Let m be an integer, Y = {0, 1}, δ ∈ (0, 1). There exists a passive learning algorithm that returns a nearest-neighbor classifier h^nn_{S_pas}, where S_pas ⊆ S_in, such that, with probability 1 − δ,

    err(h^nn_{S_pas}, D) ≤ min_{t>0 : N(t)<m} GB(ν(t), N(t), δ, m, 1).

The passive algorithm of Gottlieb et al. [2016b] generates S_pas of size approximately N(t) for the optimal scale t > 0 (found by searching over all scales), removing the |S_in|·ν(t) points that obstruct the t-separation between different labels in S_in, and then selecting a subset of the remaining labeled examples to form S_pas, so that the examples are a t-net for S_in.
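Since GB is just a closed-form expression, it is convenient to have it as a function. The sketch below transcribes the formula above (the constants follow our reconstruction of the garbled display equation, so treat them as indicative rather than authoritative) and evaluates the trade-off that drives scale selection: a coarser scale shrinks the net size N but inflates the noise ν(t).

from math import log, sqrt

def GB(eps, N, delta, m, k=1):
    """Generalization bound GB(eps, N, delta, m, k); gamma = m / (m - N)."""
    gamma = m / (m - N)
    L = ((N + 1) * log(m * k) + log(1 / delta)) / (m - N)  # complexity term
    return gamma * eps + 3 * gamma * sqrt(eps * L / 2) + 2 * gamma * L / 3

# A fine scale: large net, tiny noise. A coarse scale: small net, more noise.
m, delta = 10000, 0.05
print(GB(eps=0.01, N=2000, delta=delta, m=m))  # fine scale: vacuous bound
print(GB(eps=0.05, N=50, delta=delta, m=m))    # coarse scale often wins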
2G ??O err(h, min GB(?(t), N (t), ?, m, 1) , t>0:N (t)<m and the number of labels from Sin requested by MARMANN is at most    1 1 3 m ? O log ( ) log( ) + mG . ? ? ? G G Here O(?) hides only universal numerical constants. To observe the advantages of MARMANN over a passive learner, consider a scenario in which ? the upper bound GB of Theorem 3.1, as well as the Bayes error of D, are of order ?(1/ m). ? ? = ?(1/ m) as well. Therefore, MARMANN obtains a prediction error guarantee of Then G ? ? ?m) labels instead of m. Moreover, ?(1/ m), similarly to the passive learner, but it uses only ?( no learner that selects labels randomly from Sin can compete with MARMANN: In Kontorovich et al. [2016] we adapt an argument of Devroye et al. [1996] to show that for any passive learner ? ?m) random labels from Sin , there exists a distribution D with the above properties, that uses ?( ? ?1/4 ), a decay rate which is for which the prediction error of the passive ? learner in this case is ?(m almost quadratically slower than the O(1/ m) rate achieved by MARMANN. Thus, the guarantees of MARMANN cannot be matched by any passive learner. MARMANN operates as follows. First, a scale t? > 0 is selected, by calling t? ? SelectScale(?), where SelectScale is our model selection procedure. SelectScale has access to Uin , and queries labels from Sin as necessary. It estimates the generalization error bound GB for several different scales, and executes a procedure similar to binary search to identify a good scale. The binary search keeps the number of estimations (and thus requested labels) small. Crucially, our estimation procedure is designed to prevent the search from spending a number of labels that depends on the net size of the smallest possible scale t, so that the total label complexity of MARMANN depends only on error of the selected t?. Second, the selected scale t? is used to generate the compression set by calling S? ? GenerateNNSet(t?, [N (t?)], ?), where GenerateNNSet is our compression set generation procedure. For clarity of presentation, we first introduce in Section 4 the procedure GenerateNNSet, which determines the compression set for a given scale, and then in Section 5, we describe how SelectScale chooses the appropriate scale. 4 Active nearest-neighbor at a given scale The passive learner of Gottlieb et al. [2014a, 2016b] generates a compression set by first finding and removing from Sin all points that obstruct (?, t)-separation at a given scale t > 0. We propose below a different approach for generating a compression set, which seems more conducive to active learning: as we show below, it also generates a low-error nearest neighbor rule, just like the passive approach. At the same time, it allows us to estimate the error on many different scales using few label queries. A small technical difference, which will be evident below, is that in this new approach, examples in the compression set might have a different label than their original label in Sin . Standard sample compression analysis [e.g. Graepel et al., 2005] assumes that the classifier is determined by a small number of labeled examples from Sin . This does not allow the examples in the compression set to have a different label than their original label in Sin . Therefore, we require a slight generalization of previous compression analysis, which allows setting arbitrary labels for examples that are assigned to the compression set. The following theorem quantifies the effect of this change on generalization. 4 Theorem 4.1. Let m ? 
4 Active nearest-neighbor at a given scale

The passive learner of Gottlieb et al. [2014a, 2016b] generates a compression set by first finding and removing from S_in all points that obstruct (ν, t)-separation at a given scale t > 0. We propose below a different approach for generating a compression set, which seems more conducive to active learning: as we show below, it also generates a low-error nearest-neighbor rule, just like the passive approach. At the same time, it allows us to estimate the error on many different scales using few label queries. A small technical difference, which will be evident below, is that in this new approach, examples in the compression set might have a different label than their original label in S_in. Standard sample compression analysis [e.g. Graepel et al., 2005] assumes that the classifier is determined by a small number of labeled examples from S_in. This does not allow the examples in the compression set to have a different label than their original label in S_in. Therefore, we require a slight generalization of previous compression analysis, which allows setting arbitrary labels for examples that are assigned to the compression set. The following theorem quantifies the effect of this change on generalization.

Theorem 4.1. Let m ≥ |Y| be an integer, δ ∈ (0, 1/4), and let S_in ∼ D^m. With probability at least 1 − δ, if there exist N < m and S ∈ (X × Y)^N such that U(S) ⊆ U_in and ε := err(h^nn_S, S_in) ≤ 1/2, then

err(h^nn_S, D) ≤ GB(ε, N, δ, m, |Y|) ≤ 2GB(ε, N, 2δ, m, 1).

The proof is similar to that of standard sample compression schemes. If the compression set includes only the original labels, the compression analysis of Gottlieb et al. [2016b] gives the bound GB(ε, N, δ, m, 1). Thus the effect of allowing the labels to change is only logarithmic in |Y|, and does not appreciably degrade the prediction error.

We now describe the generation of the compression set for a given scale t > 0. Recall that ν(t) is the smallest value for which S_in is (ν, t)-separated. We define two compression sets. The first one, denoted S_a(t), represents an ideal compression set, which induces an empirical error of at most ν(t), but calculating it might require many labels. The second compression set, denoted S̃_a(t), represents an approximation to S_a(t), which can be constructed using a small number of labels, and induces a sample error of at most 4ν(t) with high probability. MARMANN constructs only S̃_a(t), while S_a(t) is defined for the sake of analysis only.

We first define the ideal set S_a(t) := {(x_1, y_1), ..., (x_N, y_N)}. The examples in S_a(t) are the points in Net(U_in, t/2), and the label of each example x_i is the majority label out of the examples in S_in to which x_i is closest. Formally, {x_1, ..., x_N} := Net(U_in, t/2), and for i ∈ [N], y_i := argmax_{y ∈ Y} |S^y ∩ P_i|, where P_i = {x ∈ X | κ(x, Net(U_in, t/2)) = i} ∈ Par(U_in, t/2). For i ∈ [N], let Γ_i := S^{y_i} ∩ P_i. The following lemma bounds the empirical error of h^nn_{S_a(t)}.

Lemma 4.2. For every t > 0, err(h^nn_{S_a(t)}, S_in) ≤ ν(t).

Proof. Since Net(U_in, t/2) is a t/2-net, diam(P) ≤ t for any P ∈ Par(U_in, t/2). Let S̃ ⊆ S be a subsample that witnesses the (ν(t), t)-separation of S, so that |S̃| ≥ m(1 − ν(t)) and, for any two points (x, y), (x′, y′) ∈ S̃, if ρ(x, x′) ≤ t then y = y′. Denote Ũ := U(S̃). Since max_{P ∈ Par(U_in, t/2)} diam(P) ≤ t, for any i ∈ [N] all the points in Ũ ∩ P_i must have the same label in S̃. Therefore, there is a y ∈ Y such that Ũ ∩ P_i ⊆ S̃^y ∩ P_i. Hence |Ũ ∩ P_i| ≤ |Γ_i|. It follows that

m · err(h^nn_{S_a(t)}, S_in) ≤ |S| − Σ_{i∈[N]} |Γ_i| ≤ |S| − Σ_{i∈[N]} |Ũ ∩ P_i| = |S| − |Ũ| = |S| − |S̃| ≤ m · ν(t).

Dividing by m we get the statement of the lemma.

Now, calculating S_a(t) requires knowing most of the labels in S_in. MARMANN constructs instead an approximation S̃_a(t), in which the examples are the points in Net(U_in, t/2) (so that U(S̃_a(t)) = U(S_a(t))), but the labels are determined using a bounded number of labels requested from S_in. The labels in S̃_a(t) are calculated by the simple procedure GenerateNNSet given in Alg. 1. The empirical error of the output of GenerateNNSet is bounded in Theorem 4.3 below.1 A technicality in Alg. 1 requires explanation: in MARMANN, the generation of S̃_a(t) will be split into several calls to GenerateNNSet, so that different calls determine the labels of different points in S̃_a(t). Therefore GenerateNNSet has an additional argument I, which specifies the indices of the points in Net(U_in, t/2) for which the labels should be returned this time. Crucially, if during the run of MARMANN, GenerateNNSet is called again for the same scale t and the same point in Net(U_in, t/2), then GenerateNNSet returns the same label that it returned before, rather than recalculating it using fresh labels from S_in; a code sketch of this behaviour follows below.
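A minimal Python rendering of Alg. 1, reusing the net helpers sketched earlier. It assumes a label oracle over S_in that returns the label of a given input index, and takes Q as specified in the algorithm (rounded up, an assumption, since the source does not state the rounding); the module-level cache makes the returns-the-same-label-again behaviour explicit.

```python
import math
import random
from collections import Counter

_label_cache = {}  # (t, i) -> label returned previously for this scale/point

def generate_nn_set(t, I, delta, U_in, label_oracle, dist=math.dist):
    """Sketch of Alg. 1 (GenerateNNSet): estimate the majority label of each
    requested region of Par(U_in, t/2) from Q random label queries."""
    m = len(U_in)
    net = build_net(U_in, t / 2, dist)
    # Region of each input point: index of its nearest net point.
    regions = [partition_index(x, net, dist) for x in U_in]
    Q = math.ceil(18 * math.log(2 * m ** 3 / delta))
    S = []
    for i in I:
        if (t, i) not in _label_cache:  # never recompute with fresh labels
            P_i = [j for j, r in enumerate(regions) if r == i]
            draws = [label_oracle(random.choice(P_i)) for _ in range(Q)]
            _label_cache[(t, i)] = Counter(draws).most_common(1)[0][0]
        S.append((net[i], _label_cache[(t, i)]))
    return S
```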
This guarantees that despite the randomness in GenerateNNSet, the full S?a (t) is well-defined within any single run of MARMANN, and is distributed like the output of GenerateNNSet(t, [N (t/2)], ?), which is convenient for the analysis. Theorem 4.3. Let S?a (t) be the output of GenerateNNSet(t, [N (t/2)], ?). With a probability at least ? nn 1 ? 2m 2 , we have err(hS , Sin ) ? 4?(t). Denote this event by E(t). 1 In the case of binary labels (|Y| = 2), the problem of estimating Sa (t) can be formulated as a special case of the benign noise setting for parametric active learning, for which tight lower and upper bounds are provided in Hanneke and Yang [2015]. However, our case is both more general (as we allow multiclass labels) and more specific (as we are dealing with a specific hypothesis class). Thus we provide our own procedure and analysis. 5 Algorithm 1 GenerateNNSet(t, I, ?) input Scale t > 0, a target set I ? [N (t/2)], confidence ? ? (0, 1). output A labeled set S ? X ? Y of size |I| {x1 , . . . , xN } ? Net(Uin , t/2), {P1 , . . . , PN } ? Par(Uin , t/2), S ? () for i ? I do if y?i has not already been calculated for Uin with this values of t then   Draw Q := 18 log(2m3 /?) points uniformly at random from Pi and query their labels. Let y?i be the majority label observed in these Q queries. end if S ? S ? {(xi , y?i )}. end for Output S Proof. By Lemma 4.2, err(hnn Sa (t) , Sin ) ? ?(t). In Sa (t), the labels assigned to each point in Net(Uin , t/2) are the majority labels (based on Sin ) of the points in the regions in Par(Uin , t/2). Denote the majority label for region Pi by yi := argmaxy?Y |S y ? Pi |. We now compare these labels to the labels y?i assigned by Alg. 1. Let p(i) = |?i |/|Pi | be the fraction of points in Pi which are labeled by the majority label yi . Let p?(i) be the fraction of labels equal to yi out of those queried by Alg. 1 in round i. Let ? := 1/6. By Hoeffding?s inequality and union bounds, we have that with a Q ? probability of at least 1 ? N (t/2) exp(? 18 p(i) ? p(i)| ? ?. ) ? 1 ? 2m 2 , we have maxi?[N (t/2)] |? 0 0 Denote this ?good? event by E . We now prove that E ? E(t). Let J ? [N (t/2)] = {i | p?(i) > 12 }. It can be easily seen that y?i = yi for all i ? J. Therefore, for all x such that ?(x, U(Sa (t))) ? J, nn nn hnn / J] + err(hnn S (x) = hSa (t) (x), and hence err(hS , Uin ) ? PX?Uin [?(X, U(Sa (t))) ? Sa (t) , Uin ). 0 The second term is at most ?(t), and it remains P to bound the 0first term, on the condition that E 1holds. 1 We have PX?U [?(X, U(Sa (t))) ? / J] = m |P |. If E holds, then for any i ? / J, p(i) ? i i?J / 2 + ?, therefore |Pi | ? |?i | = (1 ? p(i))|Pi | ? ( 12 ? ?)|Pi |. Therefore 1? 1 X 1 X |?i | ? |Pi |( 12 ? ?) = PX?U [?(X, U(Sa (t))) ? / J]( 12 ? ?). m m i?J / i?J / On the other hand, as in the proof of Lemma 4.2, 1 ? PX?U [?(X, S) ? / J] ? 5 ?(t) 1 2 ?? 1 m P = 3?(t). It follows that under i?[N (t/2)] |?i | ? ?(t). Thus, E 0 , err(hnn S , Uin ) ? 4?(t). under E 0 , Model Selection We now show how to select the scale t? that will be used to generate the output nearest-neighbor rule. The main challenge is to do this with a low label complexity: Generating the full classification rule for scale t requires a number of labels that depends on N (t), which might be very large. We would like the label complexity of MARMANN to depend only on N (t?) (where t? is the selected scale), ? Therefore, during model selection we can only invest a bounded number which is of the order mG. of labels in each tested scale. 
In addition, to keep the label complexity low, we cannot test all scales. For t > 0, let S?a (t) be the model that MARMANN would generate if the selected scale were set to t. Our model selection procedure performs a search, similar to binary search, over the possible scales. For each tested scale t, the procedure estimates (t) := err(hnn ?a (t) , S) within a certain accuracy, using S an estimation procedure we call EstimateErr. EstimateErr outputs an estimate ?(t) of (t), up to a given accuracy ? > 0, using labels requested from Sin . It draws random examples from Sin , asks for their label, and calls GenerateNNSet (which also might request labels) to find the prediction error of hnn ?(t) is set to this prediction error. The number ?a (t) on these random examples. The estimate  S of random examples drawn by EstimateErr is determined based on the accuracy ?, using empirical Bernstein bounds [Maurer and Pontil, 2009]. Theorem 5.1 gives a guarantee for the accuracy and label complexity of EstimateErr. The full implementation of EstimateErr and the proof of Theorem 5.1 can be found in the long version of this paper Kontorovich et al. [2016]. 6 Theorem 5.1. Let t, ? > 0 and ? ? (0, 1), and let ?(t) ? EstimateErr(t, ?, ?). Let Q be as defined in Alg. 1. The following properties (which we denote below by V (t)) hold with a probability of ? ? 1 ? 2m 2 over the randomness of EstimateErr (and conditioned on Sa (t)). 1. If ?(t) ? ?, then (t) ? 5?/4. Otherwise, 4(t) 5 ? ?(t) ? 4(t) 3 . 2 2. EstimateErr requests at most 520(Q+1) log( 1040m ) ?? 0 ?0 labels, where ? 0 := max(?, (t)). The model selection procedure SelectScale, given in Alg. 2, implements its search based on the guarantees in Theorem 5.1. First, we introduce some notation. Let G? = mint GB(?(t), N (t), ?, m, 1). ? We would like MARMANN to obtain a generalization guarantee that is competitive p with G . Denote 1 2 3 ? ?(t) := ((N (t) + 1) log(m) + log( ? ))/m, and let G(, t) :=  + 3 ?(t) + 2 ?(t). Note that for all , t, m GB(, N (t), ?, m, 1) = G(, t). m ? N (t) When referring to G(?(t), t), G((t), t), or G(? (t), t) we omit the second t for brevity. Instead of directly optimizing GB, we will select a scale based on our estimate G(? (t)) of G((t)).  Let Dist denote the set of pairwise distances in the unlabeled dataset Uin (note that |Dist| < m 2 ). We remove from Dist some distances, so that the remaining distances have a net size N (t) that is monotone non-increasing in t. We also remove values with a very large net size. Concretely, define Distmon := Dist \ {t | N (t) + 1 > m/2} \ {t | ?t0 ? Dist, t0 < t and N (t0 ) < N (t)}. Then for all t, t0 ? Distmon such that t0 < t, we have N (t0 ) ? N (t). The output of SelectScale is always a value in Distmon . The following lemma shows that it suffices to consider these scales. Lemma 5.2. Assume m ? 6 and let t?m ? argmint?Dist G(?(t)). If G? ? 1/3 then t?m ? Distmon . Proof. Assume by way of contradiction that t?m ? Dist \ Distmon . First, since G(?(t?m )) ? G? ? N (t? )+1 1/3 we have m?Nm(t? ) log(m) ? 12 . Therefore, since m ? 6, it is easy to verify N (t?m ) + 1 ? m/2. m Therefore, by definition of Distmon there exists a t ? t?m with ?(t) < ?(t?m ). Since ?(t) is monotone over all of t ? Dist, we also have ?(t) ? ?(t?m ). Now, ?(t) < ?(t?m ) and ?(t) ? ?(t?m ) together imply that G(?(t)) < G(?(t?m )), a contradiction. Hence, t?m ? Distmon . SelectScale follows a search similar to binary search, however the conditions for going right and for going left are not complementary. 
The search ends when either none of these two conditions hold, or when there is nothing left to try. The final output of the algorithm is based on minimizing G(? (t)) over some of the values tested during search. For c > 0, define ?(c) := 1 + implications 2 3c + ?3 2c and ?? (c) :=  ? c?(t) ? ?(c) ? G(, t) and 1 c + 2 3 + ?3 . 2c For all t,  > 0 we have the ?(t) ? c ? ?? (c)?(t) ? G(, t). (1) The following lemma uses Eq. (1) to show that the estimate G(? (t)) is close to the true G((t)). Lemma 5.3. Let t > 0, ? ? (0, 1), and suppose that SelectScale calls ?(t) ? EstimateErr(t, ?(t), ?). Suppose that V (t) as defined in Theorem 5.1 holds. Then 16 G(? (t)) ? G((t)) ? 6.5G(? (t)). Proof. Under V (t), we have that if ?(t) < ?(t) then (t) ? 54 ?(t). In this case, G((t)) ? ?? (4/5)?(t) ? 4.3?(t), by Eq. (1). Therefore G((t)) ? 3?4.3 (t)). In addition, G((t)) ? 32 ?(t) 2 G(? (t)). Therefore G((t)) ? (from the definition of G), and by Eq. (1) and ?? (1) ? 4, ?(t) ? 14 G(? 1 4 G(?  (t)). On the other hand, if  ? (t) ? ?(t), then by Theorem 5.1 (t) ? ?(t) ? 43 (t). Therefore 6 5 4 5 G(? (t)) ? 3 G((t)) and G((t)) ? 4 G(? (t)). Taking the worst-case of both possibilities, we get the bounds in the lemma. The next theorem bounds the label complexity of SelectScale. Let Ttest ? Distmon be the set of scales that are tested during SelectScale (that is, their ?(t) was estimated). 7 Algorithm 2 SelectScale(?) input ? ? (0, 1) output Scale t? T ? Distmon , # T maintains the current set of possible scales while T = 6 ? do t ? the median value in T # break ties arbitrarily ?(t) ? EstimateErr(t, ?(t), ?). if ?(t) < ?(t) then T ? T \ [0, t] # go right in the binary search 11 else if ?(t) > 10 ?(t) then T ? T \ [t, ?) # go left in the binary search else t0 ? t, T0 ? {t0 }. break from loop end if end while if T0 was not set yet then If the algorithm ever went to the right, let t0 be the last value for which this happened, and let T0 := {t0 }. Otherwise, T0 := ?. end if Let TL be the set of all t that were tested and made the search go left Output t? := argmint?TL ?T0 G(? (t)) Theorem 5.4. Suppose that the event V (t) defined in Theorem 5.1 holds for all t ? Ttest for the calls ?(t) ? EstimateErr(t, ?(t), ?). If the output of SelectScale is t?, then the number of labels requested by SelectScale is at most 19240|Ttest |(Q + 1) 38480m2 1 log( ), G((t?)) ?G((t?)) where Q is as defined in Alg. 1. The following theorem provides a competitive error guarantee for the selected scale t?. Theorem 5.5. Suppose that V (t) and E(t), defined in Theorem 5.1 and Theorem 4.3, hold for all values t ? Ttest , and that G? ? 1/3. Then SelectScale outputs t? ? Distmon such that GB((t?), N (t?), ?, m, 1) ? O(G? ), where O(?) hides numerical constants only. The idea of the proof is as follows: First, we show (using Lemma 5.3) that it suffices to prove that G(?(t?m )) ? O(G(? (t?))) to derive the bound in the theorem. Now, SelectScale ends in one of two cases: either T0 is set within the loop, or T = ? and T0 is set outside the loop. In the first case, neither of the conditions for turning left and turning right holds for t0 , so we have ?(t0 ) = ?(?(t0 )) (where ? hides numerical constants). We show that in this case, whether t?m ? t0 or t?m ? t0 , G(?(t?m )) ? O(G(? (t0 ))). In the second case, there exist (except for edge cases, which are also handled) two values t0 ? T0 and t1 ? TL such that t0 caused the binary search to go right, and t1 caused it to go left, and also t0 ? t1 , and (t0 , t1 ) ? Distmon = ?. 
We use these facts to show that for t?m ? t1 , G(?(t?m )) ? O(G(? (t1 ))), and for t?m ? t0 , G(?(t?m )) ? O(G(? (t0 ))). Since t? minimizes over a set that includes t0 and t1 , this gives G(?(t?m )) ? O(G(? (t?))) in all cases. The proof of the main theorem, Theorem 3.2, which gives the guarantee for MARMANN, is almost immediate from Theorem 4.1, Theorem 4.3, Theorem 5.5 and Theorem 5.4. Acknowledgements Sivan Sabato was partially supported by the Israel Science Foundation (grant No. 555/15). Aryeh Kontorovich was partially supported by the Israel Science Foundation (grants No. 1141/12 and 755/15) and a Yahoo Faculty award. We thank Lee-Ad Gottlieb and Dana Ron for helpful discussions. 8 References P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with malicious noise. CoRR, abs/1307.8371, 2013. M. Balcan, S. Hanneke, and J. W. Vaughan. The true sample complexity of active learning. Machine Learning, 80(2-3):111?139, 2010. M.-F. Balcan, A. Broder, and T. Zhang. Margin-based active learning. In COLT, 2007. M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. J. Comput. Syst. Sci., 75(1), 2009. C. Berlind and R. Urner. Active nearest neighbors in changing environments. In ICML, pages 1870?1879, 2015. R. M. Castro and R. D. Nowak. Learning Theory: 20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA; June 13-15, 2007. Proceedings, chapter Minimax Bounds for Active Learning, pages 5?19. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007. K. Chaudhuri and S. Dasgupta. Rates of convergence for nearest neighbor classification. In NIPS, 2014. T. M. Cover and P. E. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13:21?27, 1967. S. Dasgupta. Analysis of a greedy active learning strategy. In NIPS, pages 337?344, 2004. S. Dasgupta. Consistency of nearest neighbor classification under selective sampling. In COLT, 2012. S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In ICML, pages 208?215, 2008. L. Devroye and L. Gy?rfi. Nonparametric density estimation: the L1 view. Wiley Series in Probability and Mathematical Statistics: Tracts on Probability and Statistics. John Wiley & Sons, Inc., New York, 1985. L. Devroye, L. Gy?rfi, and G. Lugosi. A probabilistic theory of pattern recognition, volume 31 of Applications of Mathematics (New York). Springer-Verlag, New York, 1996. ISBN 0-387-94618-7. E. Fix and J. Hodges, J. L. Discriminatory analysis. nonparametric discrimination: Consistency properties. International Statistical Review / Revue Internationale de Statistique, 57(3):pp. 238?247, 1989. A. Gonen, S. Sabato, and S. Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. Journal of Machine Learning Research, 14(1):2583?2615, 2013. L. Gottlieb and R. Krauthgamer. Proximity algorithms for nearly-doubling spaces. In APPROX-RANDOM, pages 192?204, 2010. L. Gottlieb, L. Kontorovich, and R. Krauthgamer. Efficient classification for metric data. In COLT, pages 433?440, 2010. L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Efficient classification for metric data. IEEE Transactions on Information Theory, 60(9):5750?5759, 2014a. L. Gottlieb, A. Kontorovich, and P. Nisnevitch. Near-optimal sample compression for nearest neighbors. In NIPS, pages 370?378, 2014b. L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Adaptive metric dimensionality reduction. Theoretical Computer Science, pages 105?118, 2016a. L. Gottlieb, A. 
Kontorovich, and P. Nisnevitch. Nearly optimal classification for semimetrics. In Artificial Intelligence and Statistics (AISTATS), 2016b. T. Graepel, R. Herbrich, and J. Shawe-Taylor. PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification. Machine Learning, 59(1-2):55?76, 2005. S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333?361, 2011. S. Hanneke and L. Yang. Minimax analysis of active learning. JMLR, 16:3487?3602, 2015. A. Kontorovich and R. Weiss. A Bayes consistent 1-NN classifier. In AISTATS, 2015. A. Kontorovich, S. Sabato, and R. Urner. Active nearest-neighbor learning in metric spaces. CoRR, abs/1605.06792, 2016. URL http://arxiv.org/abs/1605.06792. S. Kpotufe. k-NN regression adapts to local intrinsic dimension. In NIPS, 2011. S. Kpotufe, R. Urner, and S. Ben-David. Hierarchical label queries with data-dependent partitions. In COLT, pages 1176?1189, 2015. R. Krauthgamer and J. R. Lee. Navigating nets: Simple algorithms for proximity search. In 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 791?801, Jan. 2004. S. R. Kulkarni and S. E. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Transactions on Information Theory, 41(4):1028?1039, 1995. A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009. A. K. McCallum and K. Nigam. Employing EM and pool-based active learning for text classification. In ICML, 1998. C. J. Stone. Consistent nonparametric regression. The Annals of Statistics, 5(4):595?620, 1977. R. Urner, S. Wulff, and S. Ben-David. PLAL: cluster-based active learning. In COLT, pages 376?397, 2013. U. von Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research, 5:669?695, 2004. 9
Relevant sparse codes with variational information bottleneck Matthew Chalk IST Austria Am Campus 1 A - 3400 Klosterneuburg, Austria Olivier Marre Institut de la Vision 17, Rue Moreau 75012, Paris, France Gasper Tkacik IST Austria Am Campus 1 A - 3400 Klosterneuburg, Austria Abstract In many applications, it is desirable to extract only the relevant aspects of data. A principled way to do this is the information bottleneck (IB) method, where one seeks a code that maximizes information about a ?relevance? variable, Y , while constraining the information encoded about the original data, X. Unfortunately however, the IB method is computationally demanding when data are high-dimensional and/or non-gaussian. Here we propose an approximate variational scheme for maximizing a lower bound on the IB objective, analogous to variational EM. Using this method, we derive an IB algorithm to recover features that are both relevant and sparse. Finally, we demonstrate how kernelized versions of the algorithm can be used to address a broad range of problems with non-linear relation between X and Y . 1 Introduction An important problem, for both humans and machines, is to extract relevant information from complex data. To do so, one must be able to define which aspects of data are relevant and which should be discarded. The ?information bottleneck? (IB) approach, developed by Tishby and colleagues [1], provides a principled way to approach this problem. The idea behind the IB approach is to use additional ?variables of interest? to determine which aspects of a signal are relevant. For example, for speech signals, variables of interest could be the words being pronounced, or alternatively, the speaker identity. One then seeks a coding scheme that retains maximal information about these variables of interest, constrained on the information encoded about the input. The IB approach has been used to tackle a wide variety of problems, including filtering, prediction and learning [2-5]. However, it quickly becomes intractable with high-dimensional and/or non-gaussian data. Consequently, previous research has primarily focussed on tractable cases, where the data comprises a countably small number of discrete states [1-5], or is gaussian [6]. Here, we extend the IB algorithm of Tishby et al. [1] using a variational approximation. The algorithm maximizes a lower bound on the IB objective function, and is closely related to variational EM. Using this approach, we derive an IB algorithm that can be effectively applied to ?sparse? data in which input and relevance variables are generated by sparsely occurring latent features. The resulting solutions share many properties with previous sparse coding models, used to model early sensory processing [7]. However, unlike these sparse coding models, the learned representation depends on: (i) the relation between the input and variable of interest; (ii) the trade-off between encoding quality and compression. Finally, we present a kernelized version of the algorithm, that can be applied to a large range of problems with non-linear relation between the input data and variables of interest. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2 Variational IB Let us define an input variable X, as well as a ?relevance variable?, Y , with joint distribution p (y, x). The goal of the IB approach is to compress the variable X through another variable R, while conserving information about Y . 
Mathematically, we seek an encoding model, p(r|x), that maximizes:

L_{p(r|x)} = I(R; Y) − γ I(R; X) ≡ ⟨log p(y|r) − log p(y) + γ log p(r) − γ log p(r|x)⟩_{p(r,x,y)},   (1)

where 0 < γ < 1 is a Lagrange multiplier that determines the strength of the bottleneck. Tishby and colleagues showed that the IB loss function can be optimized by applying iterative updates:

p_{t+1}(r|x) ∝ p_t(r) exp( −(1/γ) ∫_y p(y|x) log [ p(y|x) / p_t(y|r) ] ),
p_{t+1}(r) = ∫_x p(x) p_{t+1}(r|x),   and   p_{t+1}(y|r) = ∫_x p(y|x) p_{t+1}(x|r)   [1].

Unfortunately however, when p(x, y) is high-dimensional and/or non-gaussian these updates become intractable, and approximations are required. Due to the positivity of the KL divergence, we can write ⟨log q(·)⟩_{p(·)} ≤ ⟨log p(·)⟩_{p(·)} for any approximative distribution q(·). This allows us to formulate a variational lower bound for the IB objective function:

L̃_{p(r|x), q(y|r), q(r)} = (1/N) Σ_{n=1}^N ⟨log q(y_n|r) + γ log q(r) − γ log p(r|x_n)⟩_{p(r|x_n)} ≤ L_{p(r|x)},   (2)

where q(y_n|r) and q(r) are variational distributions, and we have replaced the expectation over p(x, y) with the empirical expectation over training data. (Note that, for notational simplicity, we have also omitted the constant term, H_Y = −⟨log p(y)⟩_{p(y)}.)

Setting q(y_n|r) ← p(y_n|r) and q(r) ← p(r) fully tightens the bound (so that L̃ = L), and leads to the iterative algorithm of Tishby et al. However, when these exact updates are not possible, one can instead choose a restricted class of distributions q(y|r) ∈ Q_{y|r} and q(r) ∈ Q_r for which inference is tractable. Thus, to maximize L̃ with respect to parameters θ of the encoding distribution p(r|x, θ), we repeat the following steps until convergence:

• For fixed θ, find {q_new(y|r), q_new(r)} = argmax_{q(y|r) ∈ Q_{y|r}, q(r) ∈ Q_r} L̃
• For fixed q(y|r) and q(r), find θ = argmax_θ L̃.

We note that using a simple approximation for the decoding distribution, q(y|r), can carry additional benefits, besides rendering the IB algorithm tractable. Specifically, while an advantage of mutual information is its generality, in certain cases this can also be a drawback. That is, because Shannon information does not make any assumptions about the code, it is not always apparent how information should be best extracted from the responses: just because information is "there" does not mean we know how to get at it. In contrast, using a simple approximation for the decoding distribution, q(y|r) (e.g. linear gaussian), constrains the IB algorithm to find solutions where information about Y can be easily extracted from the responses (e.g. via linear regression).

3 Sparse IB

In previous work on gaussian IB [6], responses were equal to a linear projection of the input, plus noise: r = Wx + ξ, where W is an N_r × N_x matrix of encoding weights, and ξ ∼ N(ξ|0, Σ), where Σ is an N_r × N_r covariance matrix. When the joint distribution, p(x, y), is gaussian, it follows that the marginal and decoding distributions, p(r) and p(y|r), are also gaussian, and the parameters of the encoding distribution, W and Σ, can be found analytically.

To illustrate the capabilities of the variational algorithm, while permitting comparison to gaussian IB, we begin by adding a single degree of complexity. In common with gaussian IB, we consider a linear gaussian encoder, p(r|x) = N(r|Wx, Σ), and decoder, q(y|r) = N(y|Ur, Λ); a schematic of the resulting alternation is sketched below.
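As a concrete, deliberately generic illustration of the alternation above, the following skeleton alternates a variational step over (q(y|r), q(r)) with an encoder update. The three callables are placeholders to be supplied by a specific model family; the linear gaussian case of Section 3 admits closed-form versions of both steps.

```python
def variational_ib(theta0, update_q, update_theta, objective, tol=1e-6):
    """Generic coordinate-ascent skeleton for the variational IB bound.

    update_q(theta)         -> variational distributions (q_y_given_r, q_r)
                               maximizing the bound for fixed encoder theta
    update_theta(theta, q)  -> encoder parameters maximizing the bound for
                               fixed variational distributions
    objective(theta, q)     -> value of the lower bound L-tilde in Eq. (2)
    """
    theta, prev = theta0, -float("inf")
    while True:
        q = update_q(theta)              # step 1: tighten the bound in q
        theta = update_theta(theta, q)   # step 2: improve the encoder
        val = objective(theta, q)
        if val - prev < tol:             # stop when the bound stops improving
            return theta, q
        prev = val
```

Because each step can only increase the same bound, the loop terminates at a local optimum of L̃, mirroring variational EM.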
However, unlike gaussian IB, we use a student-t distribution to approximate the response marginal: q(r) = Π_i Student(r_i | 0, η_i², ν_i), with scale and shape parameters, η_i and ν_i, respectively. When the shape parameter, ν_i, is small, the student-t distribution is heavy-tailed, or "sparse", compared to a gaussian distribution. Thus, we call the resulting algorithm "sparse IB". Unlike gaussian IB, the introduction of a student-t marginal means the IB algorithm cannot be solved analytically, and one requires approximations.

3.1 Iterative algorithm

Recall that the IB objective function consists of two terms: I(R; Y) and I(R; X). We begin by describing how to optimize the lower and upper bound of each of these two terms with respect to the variational distributions q(y|r) and q(r), respectively. The first term of the IB objective function is bounded from below by:

I(R; Y) ≥ −(1/2) log|Λ| − (1/(2N)) Σ_n ⟨(y_n − Ur)^T Λ^{−1} (y_n − Ur)⟩_{p(r|x_n)} + const.   (3)

Maximizing the lower bound on I(R; Y) with respect to the decoding parameters, U and Λ, gives:

Λ = C_yy − U W C_xy,   U = C_xy^T W^T (W C_xx W^T + Σ)^{−1},   (4)

where C_yy = (1/N) Σ_n y_n y_n^T, C_xy = (1/N) Σ_n x_n y_n^T, and C_xx = (1/N) Σ_n x_n x_n^T.

Unfortunately, it is not straightforward to express the bound on I(R; X) in closed form. Instead, we use an additional variational approximation, utilising the fact that the student-t distribution can be expressed as an infinite mixture of gaussians: Student(r|0, η², ν) = ∫_λ N(r|0, η²/λ) Gamma(λ | ν/2, ν/2) dλ [8]. Following a standard EM procedure [9], one can thus write a tractable lower bound on the log likelihood, l ≤ log Student(r|0, η², ν), which corresponds to an upper bound on the bottleneck term:

I(R; X) ≤ (1/N) Σ_{i,n} ⟨−log q(r_i) + log p(r_i|x_n)⟩_{p(r_i|x_n)}   (5)
       ≤ Σ_i [ (1/2) log η_i² + (1/(2N η_i²)) Σ_{n=1}^N λ_ni ⟨r_ni²⟩ + f(ν_i, η_i, a_i) ] − (1/2) log|Σ| + const.,

where λ_ni and a_i denote variational parameters for the i-th unit and n-th data instance. We used the shorthand notation ⟨r_ni²⟩ = w_i x_n x_n^T w_i^T + σ_i², where σ_i² is the i-th diagonal element of Σ and w_i is the i-th row of W. For notational simplicity, terms that do not depend on the encoding parameters were pushed into the function f(ν_i, η_i, a_i).1

Minimizing the upper bound on I(R; X) with respect to η_i², λ_ni and a_i (for fixed ν_i) gives:

η_i² = (1/N) Σ_{n=1}^N λ_ni ⟨r_ni²⟩,   λ_ni = (ν_i + 1) / (ν_i + ⟨r_ni²⟩/η_i²),   a_i = (ν_i + 1)/2.   (6)

The shape parameter, ν_i, is then found numerically on each iteration (for fixed λ_ni and a_i), by solving:

ψ(ν_i/2) − log(ν_i/2) = 1 + (1/N) Σ_{n=1}^N ( ψ(a_i) − log(a_i/λ_ni) − λ_ni ),   (7)

where ψ(·) is the digamma function [9].

Next we maximize the full variational objective function L̃ with respect to the encoding distribution, p(r|x) (for fixed q(y|r) and q(r)). Maximizing L̃ with respect to the encoding noise covariance, Σ, gives:

Σ^{−1} = (1/γ) U^T Λ^{−1} U + Φ^{−1} (1/N) Σ_{n=1}^N Φ_n.   (8)

1 f(ν_i, η_i, a_i) = log Γ(ν_i/2) − (ν_i/2) log(ν_i/2) − (1/N) Σ_n [ ((ν_i − 1)/2)(ψ(a_i) − ln(a_i/λ_ni)) − (ν_i/2) λ_ni + H_ni ], where H_ni is the entropy of a gamma distribution with shape and rate parameters a_i and a_i/λ_ni, respectively [9].
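Pulling the updates above together, here is a compact numpy/scipy sketch of one sweep of Eqs. (4) and (6) to (8), written for readability rather than efficiency and relying on the reconstructed forms of the damaged equations, so the details should be treated as an assumption. The ν update solves Eq. (7) by root finding; the W update (Eq. 9 below) is omitted here.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def sparse_ib_sweep(X, Y, W, Sigma, eta2, nu, gamma):
    """One sweep of the sparse-IB updates (Eqs. 4, 6-8), as reconstructed.

    X: (N, Nx) inputs, Y: (N, Ny) relevance variables, W: (Nr, Nx),
    Sigma: (Nr, Nr) encoding noise covariance, eta2/nu: (Nr,) student-t
    squared-scale and shape parameters, gamma: bottleneck multiplier.
    """
    N = X.shape[0]
    Cxx, Cxy, Cyy = X.T @ X / N, X.T @ Y / N, Y.T @ Y / N

    # Eq. (4): decoder update, U = Cxy^T W^T (W Cxx W^T + Sigma)^-1.
    U = np.linalg.solve(W @ Cxx @ W.T + Sigma, W @ Cxy).T
    Lam = Cyy - U @ W @ Cxy

    # <r_ni^2> = w_i x_n x_n^T w_i^T + sigma_i^2, shape (N, Nr).
    r2 = (X @ W.T) ** 2 + np.diag(Sigma)

    # Eq. (6): variational parameters of the student-t bound.
    lam = (nu + 1) / (nu + r2 / eta2)
    eta2 = np.mean(lam * r2, axis=0)
    a = (nu + 1) / 2

    # Eq. (7): per-unit shape update by root finding
    # (the bracket is assumed adequate for typical values).
    def nu_eq(v, i):
        rhs = 1 + np.mean(digamma(a[i]) - np.log(a[i] / lam[:, i]) - lam[:, i])
        return digamma(v / 2) - np.log(v / 2) - rhs
    nu = np.array([brentq(nu_eq, 1e-3, 1e3, args=(i,)) for i in range(len(nu))])

    # Eq. (8): encoding noise covariance.
    Phi_inv = np.diag(1 / eta2)
    Sigma = np.linalg.inv(U.T @ np.linalg.solve(Lam, U) / gamma
                          + Phi_inv @ np.diag(lam.mean(axis=0)))
    return U, Lam, Sigma, eta2, nu, lam
```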
[Figure 1 appears here; panel graphics omitted.] Figure 1: Behaviour of sparse IB and gaussian IB algorithms on the denoising task. (A) Artificial image patches were constructed from combinations of orientated edge-like features. Patches were corrupted with white noise to generate the input, X. The goal of the IB algorithm is to learn a linear code that maximizes information about the original patches, Y, constrained on information encoded about the input, X. (B) A selection of linear encoding (left) and decoding (right) filters obtained with the gaussian IB algorithm. (C) Same as B, but for the sparse IB algorithm. (D) Response histograms for the 10 units with highest variance, for the gaussian (red) and sparse (blue) IB algorithms. (E) Information curves for the gaussian (red) and sparse (blue) algorithms, alongside a "null" model, where responses were equal to the original input, plus white noise. (F) Fraction of response variance attributed to signal fluctuations, for each unit. Solid and dashed curves correspond to strong and weak bottlenecks, respectively (corresponding to the vertical dashed lines in panel E).

In Eq. (8), Φ and Φ_n are N_r × N_r diagonal covariance matrices with diagonal elements Φ_ii = η_i² and (Φ_n)_ii = λ_ni, respectively.

Finally, taking the derivative of L̃ with respect to the encoding weights, W, gives:

∂L̃/∂W = U^T Λ^{−1} C_xy^T − U^T Λ^{−1} U W C_xx − γ Φ^{−1} (1/N) Σ_n Φ_n W x_n x_n^T.   (9)

Setting the derivative to zero, we can solve for W directly (the stationarity condition is linear in W, and can be solved, for example, after vectorization). One may verify that, when the variational parameters λ_ni are unity, the above iterative updates are identical to the iterative gaussian IB algorithm described in [6].

3.2 Simulations

In our framework, the approximation of the response marginal, q(r), plays an analogous role to the prior distribution in a probabilistic generative model. Thus, we hypothesized that a sparse approximation for the response marginal, q(r), would permit the IB algorithm to recover sparsely occurring input features, analogous to the effect of using a sparse prior.
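As a reference point for the simulations described next, here is a small sketch of the artificial-patch generation process (9 × 9 patches, three random bars with a gaussian cross-section, plus white noise). The exact rendering details are assumptions, since the text specifies only the parameters quoted in the comments.

```python
import numpy as np

def make_patch(rng, size=9, n_bars=3, width=1.2, noise_var=0.005):
    """Sketch of the artificial data: a sum of oriented bars with gaussian
    cross-section (std `width` pixels) and amplitudes drawn from N(0, 1);
    Y is the clean patch, X adds white noise of variance `noise_var`."""
    grid = np.arange(size) - (size - 1) / 2
    coords = np.stack(np.meshgrid(grid, grid), axis=-1)     # (size, size, 2)
    y = np.zeros((size, size))
    for _ in range(n_bars):
        theta = rng.uniform(0, np.pi)                        # random orientation
        offset = rng.uniform(-size / 2, size / 2)            # random position
        normal = np.array([np.cos(theta), np.sin(theta)])
        d = coords @ normal - offset                         # distance to bar axis
        y += rng.standard_normal() * np.exp(-d ** 2 / (2 * width ** 2))
    x = y + np.sqrt(noise_var) * rng.standard_normal(y.shape)
    return x.ravel(), y.ravel()

# rng = np.random.default_rng(0)
# X, Y = zip(*(make_patch(rng) for _ in range(10000)))
```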
As predicted, only the sparse IB model was able to recover the original bar features. In addition, response histograms were considerably more heavy-tailed for the sparse IB model (fig. 1D). The relevant information, I(R; Y ), encoded by the sparse model was greater than for the gaussian model, over a range of bottleneck strengths (fig. 1E). While the difference may appear small, it is consistent with work showing that sparse coding models achieve only a small improvement in log-likelihood for natural image patches [10]. We also plotted the information curve for a ?null model?, with responses sampled from p(r|x) = N (r|x, ? 2 I). Interestingly, the performance of this null model was almost identical to the gaussian IB model. w C wT i Figure 1F plots the fraction of response variance due to the signal, for each unit ( wi Cixxxx ). Solid wiT +?i2 and dashed curves denote strong and weak bottlenecks, respectively. In both cases, the gaussian model gave a smooth spectrum of response magnitudes, while the sparse model was more ?all-or-nothing?. One way the sparse IB algorithm differs qualitatively from traditional sparse coding algorithms, is that the learned representation depends on the relation between X and Y , rather than just the input statistics. To illustrate this, we conducted simulations with patches corrupted by spatially correlated noise, aligned along the vertical direction (fig. 2A). The spatial covariance of the noise was described by a gaussian envelope, with standard deviation 3 pixels in the vertical direction and 1 pixel in horizontal direction. Figure 2B shows a selection of decoding filters obtained from the sparse IB model, with correlated input noise. The shape of individual filters was qualitatively similar to those obtained with uncorrelated noise (fig. 1C). However, with this stimulus, the IB model avoided ?wasting? bits by representing features co-orientated with the noise (fig. 2C). Consequently, it was not possible to reconstruct vertical bars from the responses, when bars were presented alone, even with zero noise (fig. 2D). 4 Kernel IB One way to improve the IB algorithm is to consider non-linear encoders. A general choice is: p (r|x) = N (r|W ?(x), ?), where ?(x) is an embedding to a high-dimensional non-linear feature space. 5 D f (X) R U sparse kIB B E 1 gaussian kIB sparse kIB 20 prob density Y? X Ilin (Y ; R) (nats) 15 10 gaussian kIB sparse kIB sparse IB 5 0 0 20 0.1 0.01 0.001 0.0001 40 ?10 I(X; R) (nats) F C gaussian kIB sparse IB gaussian IB stim 1 0 10 response (a.u.) stim 2 stim 3 recon. patches A G ?5 st im st 1 im st 2 im 3 0 st im st 1 im st 2 im 3 response (a.u.) 5 Figure 3: Behaviour of kernel IB algorithm on occlusion task. (A) Image patches were the same as for figure 1. However, the input, X, was restricted to 2 columns to either side of the patch. The target variable, Y , was the central region. (B) Subset of decoding filters, U , for the sparse kernel IB (?sparse kIB?) algorithm. (C) As for B, for other versions of the IB algorithm. (D) Information curves for the gaussian kIB (blue) sparse kIB (green) and sparse IB algorithms (red). The bottleneck strength for the other panels in this figure is indicated by a vertical dashed line. (E) Response histogram for the 10 units with highest variance, for the gaussian and sparse kIB models. (F) (above) Three test stimuli, used to demonstrate the non-linear properties of the sparse KIB code. (below) Reconstruction obtained from responses to test stimulus. 
(G) Responses of two units which showed strong responses to stimulus 3. The decoding filters for these units are shown above the bar plots. The variational objective functions for both gaussian and sparse IB algorithms are quadratic in the responses, and thus can be expressed in terms of dot products of the row vector, ?(x). Consequently, PN every solution for wi can be expressed as an expansion of mapped training data, wi = n=1 ain ?(xn ) [11]. It follows that the variational IB algorithm can be expressed in?dual space?, with responses to the nth input drawn from r ? N (r|Akn , ?), where A is an Nr ? N matrix of expansion coefficients, and kn is the nth column of the N ? N kernel-gram matrix, K, with elements Knm = ?(xn )?(xm )T . In this formulation, the problem of finding the linear encoding weights, W , is replaced by finding the expansion coefficients, A. The advantage of expressing the algorithm in the dual space is that we never have to deal with ?(x) directly, so are free to consider high- (or even infinite) dimensional feature spaces. However, without additional constraints on the expansion coefficients, A, the IB algorithm becomes degenerate (i.e. the solutions are independent of the input, X). A standard way to deal with this is to add an L2 regularization term that favours solutions with small expansion coefficients. Here, this is achieved here by replacing ?Tn ?n with ?Tn ?n + ?I, where ? is a fixed regularization parameter. Doing so, the ? with respect to A becomes: derivative of L X ?   ?L = U T ??1 Y K ? U T ??1 U + ???1 ?n A kn knT + ?K (10) ?A n Setting the derivative to zero and solving for A directly requires inverting an N Nr ? N Nr matrix, which is expensive. Instead, one can use an iterative solver (we used the conjugate gradients squared 6 Figure 5: handwritten digits Y? X f (X) C R B U sparse kIB gaussian kIB sparse kIB 1 prob density A 0.1 0.01 0.001 ?10 Dgaussian kIB 0 10 response (a.u.) sparse IB gaussian IB Figure 4: Behaviour of kernel IB algorithm on handwritten digit data. (A) As with figure 4, we considered an occlusion task. This time, units were provided with the left hand side of the image patch, and had to reconstruct the right hand side. (B) Response distribution for 10 neurons with highest variance, for the gaussian (blue) and sparse (green) kIB algorithms. (C) Decoding filters for a subset of units, obtained with the sparse kIB algorithm. Note that, for clearer visualization, we show here the decoding filter for the entire image patch, not just the occluded region. (D) A selection of decoding filters obtained with the alternative IB algorithms. method). In addition, the computational complexity can be reduced by restricting the solution to lie PM on a subspace of training instances, such that, wi = n=1 ain ?(xn ), where M < N . The derivation does not change, only now K has dimensions M ? N [11]. When q(r) is gaussian (equivalent to setting ?n = I), solving for A gives: ?1 T ?1 A = U T ??1 U + ???1 U ? AKRR (11) where AKRR = Y (K + ?I)?1 are the coefficients obtained from kernel ridge-regression (KRR). This suggests the following two stage algorithm: first, we learn the regularisation constant, ?, and parameters of the kernel matrix, K, to maximize KRR performance on hold-out data; next, we perform variational IB, with fixed K and ?. 4.1 Simulations To illustrate the capabilities of the kernel IB algorithm, we considered an ?occlusion? 
task, with the outer columns of each patch presented as input, X (2 columns to the far left and right), and the inner columns as the relevance variable Y , to be reconstructed. Image patches were as before. Note that performing the occlusion task optimally requires detecting combinations of features presented to either side of the occluded region, and is thus inherently nonlinear. We used gaussian kernels, with scale parameter, ?, and regularisation constant, ?, chosen to maximize KRR performance on test data. Both test and training data consisted of 10,000 images. However, A was restricted to lie on a subset of 1000 randomly chosen training patches (see earlier). Figure 3B shows a selection of decoding filters (U ) learned by the sparse kernel IB algorithm (?sparse kIB?). A large fraction of filters resembled near-horizontal bars, traversing the occluded region. This was not the case for the sparse linear IB algorithm, which recovered localized blobs either side of the occluded region, nor the gaussian linear or kernelized models, which recovered non-local features (fig. 3C). Figure 3D shows a small but significant improvement in performance for the sparse kIB versus the gaussian kIB model. Most noticeable, however, is the distribution of responses, which are much more heavy tailed for the sparse kIB algorithm (fig. 3E). To demonstrate the non-linear behaviour of the sparse kIB model, we presented bar segments: first to either side of the occluded patch, then to both sides simultaneously. When bar segments were presented to both sides simultaneously, the sparse KIB model ?filled in? the missing bar segment, 7 in contrast to the reconstruction obtained with single bar segments (fig. 3F). This behaviour was reflected in the non-linear responses of certain encoding units, which were large when two segments were presented together, but near zero when one segment was presented alone (fig. 3G). Finally, we repeated the occlusion task with handwritten digits, taken from the USPS dataset (www. gaussianprocess.org/gpml/data). We used 4649 training and 4649 test patches, of 16?16 pixels. However, expansion coeffecients were restricted to a lie on subset of 500 randomly patches. We set X and Y , to be the left and right side of each patch, respectively (fig. 4A). In common with the artificial data, the response distributions achieved with the sparse kIB algorithm were more heavy-tailed than for the gaussian kIB algorithm (fig. 4B). Likewise, recovered decoding filters closely resembled handwritten digits, and extended far into the occluded region (fig. 4C). This was not the case for the alternative IB algorithms (fig. 4D). 5 Discussion Previous work has shown close parallels between the IB framework and maximum-likelihood estimation in a latent variable model [12, 13]. For the sparse IB algorithm presented here, maximizing the IB objective function is closely related to maximizing the likelihood of a ?sparse coding? latent variable model, with student-t prior and linear gaussian likelihood function. However, unlike traditional sparse coding models, the encoding (or ?recognition?) model p(r|x) is conditioned on a seperate set of inputs, X, distinct from the image patches themselves. Thus, the solutions depend on the relation between X and Y , not just the image statistics (e.g. see fig. 2). Second, an additional parameter, ?, not present in sparse coding models, controls the trade-off between encoding and compression. 
Finally, in contrast to traditional sparse coding algorithms, IB gives an unambiguous ordering of features, which can be arranged according to the response variance of each unit (fig. 1F). Our work is also closely related to the IM algorithm, proposed by Barber et al. to solve the information maximization (?infomax?) problem [14]. However, a general issue with infomax problems is that they are usually ill-posed, necessitating additional ad hoc constraints on the encoding weights or responses [15]. In contrast, in the IB approach, such constraints emerge automatically from the bottleneck term. A related method to find low-dimensional projections of X/Y pairs is canonical correlation analysis (?CCA?), and its kernel analogue [16]. In fact, the features obtained with gaussian IB are identical to those obtained with CCA [6]. However, unlike CCA, the number and ?scale? of the features are not specified in advance, but determined by the bottleneck parameter, ?. Secondly, kernel CCA is symmetric in X and Y , and thus performs nonlinear embedding of both X and Y . In contrast, the IB problem is assymetric: we are interested in recovering Y from an input X. Thus, only X is kernelized, while the decoder remains linear. Finally, the features obtained from gaussian IB (and thus, CCA) differ qualitatively from the sparse IB algorithm, which recovers sparse features that account jointly for X and Y . Sparse IB can be extended to the nonlinear regime using a kernel expansion. For the gaussian model, the expansion coefficients, A, are a linear projection of the coefficients used for kernel-ridgeregression (?KRR?). A general disadvantage of KRR, is that it can be difficult to know which aspects of X are relied on to perform the regression. In contrast, the kernel IB framework provides an intermediate representation, allowing one to visualize the features that jointly account for both X and Y (figs. 3B & 4C). Furthermore, this learned representation permits generalisation across different tasks that rely on the same set of latent features; something not possible with KRR. Finally, the IB approach has important implications for models of early sensory processing [17, 18]. Notably, ?efficient coding? models typically consider the low-noise limit, where the goal is to reduce the neural response redundancy [7]. In contrast, the IB approach provides a natural way to explore the family of solutions that emerge as one varies internal coding constraints (by varying ?) and external constraints (by varying the input, X) [19, 20]. Further, our simulations suggest how the framework can be used to go beyond early sensory processing: for example to explain higher-level cognitive phenomena such as perceptual filling in (fig. 3G). In future, it would be interesting to explore how the IB framework can be used to extend the efficient coding theory, by accounting for modulations in sensory processing that occur due to changing task demands (i.e. via changes to the relevance variable, Y ), rather than just the input statistics (X). 8 References [1] Tishby, N. Pereira, F C. & Bialek, W. (1999) The information bottleneck method. The 37th annual Allerton Conference on Communication, Control and Computing. pp. 368?377 [2] Bialek, W. Nemenman, I. & Tishby, N. (2001) Predictability, complexity, and learning. Neural computation, 13(11) pp. 240- 63 [3] Slonim, N. (2003) Information bottleneck theory and applications. PhD thesis, Hebrew University of Jerusalem [4] Chechik, G. & Tishby, N. 
(2002) Extracting relevant structures with side information. In Advances in Neural Information Processing Systems 15 [5] Hofmann, T. & Gondek, D. (2003) Conditional information bottleneck clustering. In 3rd IEEE International conference in data mining, workshop on clustering large data sets [6] Chechik, G. Globerson, A., Tishby, N. & Weiss, Y. (2005) Information bottleneck for gaussian variables. Journal of Machine Learning Research, (6) pp. 165?188 [7] Simoncelli, E. P. & Olshausen, B. A. (2001) Natural image statistics and neural representation. Ann. Rev. Neurosci. 24:1194?216 [8] Andrews, D. F. & Mallows C. L. (1974). Scale mixtures of normal distributions. J. of the Royal Stat. Society. Series B 36(1) pp. 99?102 [9] Scheffler, C. (2008). A derivation of the EM updates for finding the maximum likelihood parameter estimates of the student-t distribution. Technical note. URL www.inference.phy.cam.ac.uk/cs482/publications/ scheffler2008derivation.pdf [10] Eichhorn, J. Sinz, F. & Bethge, M. (2009). Natural image coding in V1: how much use is orientation selectivity?. PLoS Comput Biol, 5(4), e1000336. [11] Mika, S. Ratsch, G. Weston, J. Scholkopf, B. Smola, A. J. & Muller, K. R. (1999). Invariant Feature Extraction and Classification in Kernel Spaces. In Advances in neural information processing systems 12 pp. 526?532. [12] Slonim, N., & Weiss, Y. (2002). Maximum likelihood and the information bottleneck. In Advances in neural information processing systems pp. 335?342 [13] Elidan, G., & Friedman, N. (2002). The information bottleneck EM algorithm. In Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence pp. 200?208 [14] Barber, D. & Agakov, F. (2004) The IM algorithm: a variational approach to information maximization. In Advances in Neural Information Processing Systems 16 pp. 201?208 [15] Doi, E., Gauthier, J. L. Field, G. D. Shlens, J. Sher, A. Greschner, M. (2012). Efficient Coding of Spatial Information in the Primate Retina. The Journal of neuroscience 32(46), pp. 16256?16264 [16] Hardoon, D. R., Szedmak, S. & Shawe-Taylor, J. (2004). Canonical correlation analysis: An overview with application to learning methods. Neural computation, 16(12), 2639-2664. [17] Bialek, W., de Ruyter Van Steveninck, R. R., & Tishby, N. (2008). Efficient representation as a design principle for neural coding and computation. In Information Theory, 2006 IEEE International Symposium pp. 659?663 [18] Palmer, S. E., Marre, O., Berry, M. J., & Bialek, W. (2015). Predictive information in a sensory population. Proceedings of the National Academy of Sciences 112(22) pp. 6908?6913. [19] Doi, Eizaburo. & Lewicki, M. S. (2005). Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in Neural Information Processing Systems 17 pp. 377?384 [20] Tkacik, G. Prentice, J. S. Balasubramanian, V. & Schneidman, E. (2010). Optimal population coding by noisy spiking neurons. Proceedings of the National Academy of Sciences 107(32), pp. 14419-14424. 9
Multistage Campaigning in Social Networks

Mehrdad Farajtabar¹, Xiaojing Ye², Sahar Harati³, Le Song¹, Hongyuan Zha¹
¹Georgia Institute of Technology  ²Georgia State University  ³Emory University
mehrdad@gatech.edu  xye@gsu.edu  sahar.harati@emory.edu  {lsong,zha}@cc.gatech.edu

Abstract

We consider the problem of how to optimize multi-stage campaigning over social networks. The dynamic programming framework is employed to balance the high present reward and large penalty on low future outcome in the presence of extensive uncertainties. In particular, we establish theoretical foundations of optimal campaigning over social networks where the user activities are modeled as a multivariate Hawkes process, and we derive a time-dependent linear relation between the intensity of exogenous events and several commonly used objective functions of campaigning. We further develop a convex dynamic programming framework for determining the optimal intervention policy that prescribes the required level of external drive at each stage for the desired campaigning result. Experiments on both synthetic data and the real-world MemeTracker dataset show that our algorithm can steer the user activities for optimal campaigning much more accurately than baselines.

1 Introduction

Obama was the first US president in history who successfully leveraged online social media in presidential campaigning, which has been popularized and has become a ubiquitous approach to electoral politics (such as in the ongoing 2016 US presidential election), in contrast to the decreasing relevance of traditional media such as TV and newspapers [1, 2]. The power of campaigning via social media in modern politics is a consequence of online social networking being an important part of people's regular daily social lives. It has become quite common for individuals to use social network sites to share their ideas and comment on other people's opinions. In recent years, large organizations, such as governments, public media, and business corporations, have also started to announce news, spread ideas, and/or post advertisements in order to steer public opinion through social media platforms. There has been extensive interest among these entities in influencing the public's views and manipulating trends by incentivizing influential users to endorse their ideas/merits/opinions at certain monetary expenses or credits. To obtain the most cost-effective trend manipulations, one needs to design an optimal campaigning strategy or policy such that quantities of interest, such as influence of opinions, exposure of a campaign, or adoption of new products, can be maximized or steered towards a target amount given realistic budget constraints.

The key factor differentiating social networks from traditional media is peer influence. In fact, events in an online social network can be categorized roughly into two types: endogenous events, where users simply respond to the actions of their neighbors within the network, and exogenous events, where users take actions due to drives external to the network. It is then natural to raise the following fundamental questions regarding optimal campaigning over social networks: can we model and exploit such event data to steer the online community to a desired exposure level? More specifically, can we drive the overall exposure to a campaign to a certain level (e.g., at least twice per week per user) by incentivizing a small number of users to take more initiatives? What about maximizing the overall exposure for a target group of people?
More importantly, these exposure shaping tasks are more effective when the interventions are implemented in multiple stages. Due to the inherent uncertainty in social behavior, the outcome of each intervention may not be fully predictable, but it can be anticipated to some extent before the next intervention happens. A key aspect of such situations is that interventions can't be viewed in isolation, since one must balance the desire for high present reward with the penalty of low future outcome.

In this paper, the dynamic programming framework [3] is employed to tackle the aforementioned issues. In particular, we first establish the fundamental theory of optimal campaigning over social networks where the user activities are modeled as a multivariate Hawkes process (MHP) [4, 5], since an MHP can capture both endogenous and exogenous event intensities. We also derive a time-dependent linear relation between the intensity of exogenous events and the overall exposure to the campaign. Exploiting this connection, we develop a convex dynamic programming framework for determining the optimal intervention policy that prescribes the required level of external drive at each stage in order for the campaign to reach a desired exposure profile. We propose several objective functions that are commonly considered as campaigning criteria in social networks. Experiments on both synthetic data and the real-world network of news websites in the MemeTracker dataset show that our algorithms can shape the exposure of campaigns much more accurately than baselines.

2 Basics and Background

An $n$-dimensional temporal point process is a random process whose realization consists of a list of discrete events in time and their associated dimensions, $\{(t_k, d_k)\}$ with $t_k \in \mathbb{R}_+$ and $d_k \in \{1, \dots, n\}$. Many different types of data produced in online social networks can be represented as temporal point processes, such as likes and tweets. A temporal point process can be equivalently represented as a counting process, $N(t) = (N^1(t), \dots, N^n(t))^\top$, associated with the $n$ users in the social network. Here, $N^i(t)$ records the number of events user $i$ performs before time $t$, for $1 \leq i \leq n$. Let the history $\mathcal{H}^i(t)$ be the list of times of events $\{t_1, t_2, \dots, t_k\}$ of the $i$-th user up to time $t$. Then the number of observed events in a small time window $[t, t+dt)$ of length $dt$ is
$$dN^i(t) = \sum_{t_k \in \mathcal{H}^i(t)} \delta(t - t_k)\, dt,$$
and hence $N^i(t) = \int_0^t dN^i(s)$, where $\delta(t)$ is a Dirac delta function. The point process representation of temporal data is fundamentally different from the discrete-time representation typically used in social network analysis. It directly models the time intervals between events as random variables, avoids the need to pick a time window to aggregate events, and allows temporal events to be modeled in a fine-grained fashion. Moreover, it has remarkably rich theoretical support [6].
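As a tiny illustration of the counting-process view (our own toy example, not from the paper), $N^i(t)$ can be materialized from a list of (time, user) events as follows:

```python
import numpy as np

# Made-up event stream: times t_k with associated user ids d_k.
event_times = np.array([0.5, 1.2, 1.9, 2.4])
event_users = np.array([0, 2, 0, 1])

def N(t, n=3):
    # N^i(t) counts the events of user i occurring strictly before time t.
    mask = event_times < t
    return np.bincount(event_users[mask], minlength=n)

print(N(2.0))   # -> [2 0 1]
```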
An important way to characterize temporal point processes is via the conditional intensity function, a stochastic model for the time of the next event given all the times of previous events. The conditional intensity function $\lambda^i(t)$ (intensity, for short) of user $i$ is the conditional probability of observing an event in a small window $[t, t+dt)$ given the history $\mathcal{H}(t) = \big(\mathcal{H}^1(t), \dots, \mathcal{H}^n(t)\big)$:
$$\lambda^i(t)\,dt := \mathbb{P}\{\text{user } i \text{ performs an event in } [t, t+dt) \mid \mathcal{H}(t)\} = \mathbb{E}[dN^i(t) \mid \mathcal{H}(t)], \qquad (1)$$
where one typically assumes that only one event can happen in a small window of size $dt$. The functional form of the intensity $\lambda^i(t)$ is often designed to capture the phenomena of interest. The Hawkes process [7] is a class of self- and mutually-exciting point process models,
$$\lambda^i(t) = \mu^i(t) + \sum_{k:\, t_k < t} \phi^i_{d_k}(t, t_k) = \mu^i(t) + \sum_{j=1}^{n} \int_0^t \phi^{ij}(t, s)\, dN^j(s), \qquad (2)$$
where the intensity is history dependent. Here $\phi^{ij}(t,s)$ is the impact function, capturing the temporal influence of an event by user $j$ at time $s$ on the future events of user $i$ at time $t \geq s$. The first term $\mu^i(t)$ is the exogenous event intensity, modeling drive from outside the network, independent of the history, and the second term $\sum_{k:\,t_k<t} \phi^i_{d_k}(t, t_k)$ is the endogenous event intensity, modeling interactions within the network [8]. Defining $\Phi(t,s) = [\phi^{ij}(t,s)]_{i,j=1,\dots,n}$, $\lambda(t) = (\lambda^1(t), \dots, \lambda^n(t))^\top$, and $\mu(t) = (\mu^1(t), \dots, \mu^n(t))^\top$, we can compactly rewrite Eq. (2) in matrix form:
$$\lambda(t) = \mu(t) + \int_0^t \Phi(t,s)\, dN(s). \qquad (3)$$
In practice it is standard to employ a shift-invariant impact function, i.e., $\Phi(t,s) = \Phi(t-s)$. Then, using the convolution notation $f(t) * g(t) = \int_0^t f(t-s)\,g(s)\,ds$, we have
$$\lambda(t) = \mu(t) + \Phi(t) * dN(t). \qquad (4)$$

3 From Intensity to Average Activity

In this section we develop a closed-form relation between the expected total intensity $\mathbb{E}[\lambda(t)]$ and the intensity $\mu(t)$ of exogenous events. This relation establishes the basis of our campaigning framework. First, define the mean function as $M(t) := \mathbb{E}[N(t)] = \mathbb{E}_{\mathcal{H}(t)}[\mathbb{E}(N(t)\mid\mathcal{H}(t))]$. Note that $M(t)$ is history independent, and it gives the average number of events up to time $t$ in each dimension. Similarly, the rate function $\eta(t)$ is given by $\eta(t)\,dt := dM(t)$. On the other hand,
$$dM(t) = d\,\mathbb{E}[N(t)] = \mathbb{E}_{\mathcal{H}(t)}[\mathbb{E}(dN(t)\mid\mathcal{H}(t))] = \mathbb{E}_{\mathcal{H}(t)}[\lambda(t)]\,dt = \mathbb{E}[\lambda(t)]\,dt. \qquad (5)$$
Therefore $\eta(t) = \mathbb{E}[\lambda(t)]$, which serves as a measure of activity in the network. In what follows we find an analytical form for the average activity. Proofs are presented in Appendix C.

Lemma 1. Suppose $\Phi : [0,T] \to \mathbb{R}^{n\times n}$ is a non-increasing matrix function; then for every fixed constant intensity $\mu(t) = c \in \mathbb{R}^n_+$, $\eta_c(t) := \Psi(t)c$ solves the semi-infinite integral equation
$$\eta(t) = c + \int_0^t \Phi(t-s)\,\eta(s)\,ds, \quad \forall t \in [0,T], \qquad (6)$$
if and only if $\Psi(t)$ satisfies
$$\Psi(t) = I + \int_0^t \Phi(t-s)\,\Psi(s)\,ds, \quad \forall t \in [0,T]. \qquad (7)$$
In particular, if $\Phi(t) = A e^{-\omega t}\,\mathbb{1}_{\geq 0}(t) = [a_{ij} e^{-\omega t}\,\mathbb{1}_{\geq 0}(t)]_{ij}$ where $0 \leq \omega \notin \mathrm{Spectrum}(A)$, then
$$\Psi(t) = e^{(A-\omega I)t} + \omega (A - \omega I)^{-1}\big(e^{(A-\omega I)t} - I\big) \qquad (8)$$
for $t \in [0,T]$, where $\mathbb{1}_{\geq 0}(t)$ is an indicator function for $t \geq 0$.

Let $\mu : [0,T] \to \mathbb{R}^n_+$ be a right-continuous piecewise constant function
$$\mu(t) = \sum_{m=1}^{M} c_m\, \mathbb{1}_{[\tau_{m-1},\tau_m)}(t), \qquad (9)$$
where $0 = \tau_0 < \tau_1 < \cdots < \tau_M = T$ is a finite partition of the time interval $[0,T]$ and the function $\mathbb{1}_{[\tau_{m-1},\tau_m)}(t)$ indicates $\tau_{m-1} \leq t < \tau_m$. The next theorem shows that if $\Psi(t)$ satisfies (7), then one can calculate $\eta(t)$ for a piecewise constant intensity $\mu : [0,T]$ of form (9).

Theorem 2. Let $\Psi(t)$ satisfy (7) and let $\mu(t)$ be a right-continuous piecewise constant intensity function of form (9); then the rate function $\eta(t)$ is given by
$$\eta(t) = \sum_{k=0}^{m} \Psi(t - \tau_k)(c_k - c_{k-1}), \qquad (10)$$
for all $t \in (\tau_{m-1}, \tau_m]$ and $m = 1, \dots, M$, where $c_{-1} := 0$ by convention.

Using the above lemma, we derive, for the first time, the average intensity for a general exogenous intensity. Appendix E includes a few experiments that investigate these results empirically.
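Eq. (8) and Theorem 2 are straightforward to evaluate numerically. The sketch below (our own illustration with arbitrary parameters, not the authors' code) builds $\Psi(t)$ from a matrix exponential and assembles the rate function $\eta(t)$ for a piecewise constant exogenous intensity; comparing its output to the empirical event rate of simulated runs reproduces the kind of check the text defers to Appendix E:

```python
import numpy as np
from scipy.linalg import expm

def Psi(t, A, omega):
    # Eq. (8): Psi(t) = e^{(A - w I) t} + w (A - w I)^{-1} (e^{(A - w I) t} - I)
    n = A.shape[0]
    B = A - omega * np.eye(n)
    E = expm(B * t)
    return E + omega * np.linalg.solve(B, E - np.eye(n))

def rate(t, taus, cs, A, omega):
    # Theorem 2: eta(t) = sum_k Psi(t - tau_k) (c_k - c_{k-1})
    eta = np.zeros(A.shape[0])
    prev = np.zeros(A.shape[0])
    for tau, c in zip(taus, cs):
        if t < tau:
            break
        eta += Psi(t - tau, A, omega) @ (c - prev)
        prev = c
    return eta

rng = np.random.default_rng(0)
n = 3
A = 0.1 * rng.uniform(size=(n, n))            # keep spectral radius below omega
taus = [0.0, 5.0, 10.0]                        # switching times of mu(t)
cs = [np.full(n, 0.2), np.full(n, 0.4), np.full(n, 0.1)]   # per-segment rates
print(rate(12.0, taus, cs, A, omega=1.0))      # expected intensity E[lambda(12)]
```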
Theorem 3. If $\Psi \in C^1([0,T])$ and satisfies (7), and the exogenous intensity $\mu$ is bounded and piecewise absolutely continuous on $[0,T]$, where $\mu(t^+) = \mu(t)$ at all discontinuity points $t$, then $\mu$ is differentiable almost everywhere, and the semi-infinite integral equation
$$\eta(t) = \mu(t) + \int_0^t \Phi(t-s)\,\eta(s)\,ds, \quad \forall t \in [0,T], \qquad (11)$$
yields a rate function $\eta : [0,T] \to \mathbb{R}^n_+$ given by
$$\eta(t) = \int_0^t \Psi(t-s)\, d\mu(s). \qquad (12)$$

Corollary 4. Suppose $\Phi$ and $\mu$ satisfy the same conditions as in Thm. 3, and define $\psi = \Psi'$; then the rate function is $\eta(t) = (\psi * \mu)(t)$. In particular, if $\Phi(t) = A e^{-\omega t}\,\mathbb{1}_{\geq 0}(t) = [a_{ij} e^{-\omega t}\,\mathbb{1}_{\geq 0}(t)]_{ij}$, then the rate function is $\eta(t) = \mu(t) + A \int_0^t e^{(A-\omega I)(t-s)}\,\mu(s)\,ds$.

4 Multi-stage Closed-loop Control Problem

Given the analytical relation between the exogenous intensity and the expected overall intensity (rate function), one can solve a single one-stage campaigning problem to find the optimal constant intervention intensity [8]. Alternatively, the time window can be partitioned into multiple stages and one can impose different levels of intervention in these stages. This yields an open-loop optimization of the cost function, where one selects all the intervention actions at the initial time 0. More effectively, we tackle the campaigning problem in a dynamic and adaptive manner, where we can postpone deciding on the intervention by observing the process until the next stage begins. This is called closed-loop optimization of the objective function. In this section, we establish the foundation to formulate the problem as a multi-stage closed-loop optimal control problem.

We assume that $n$ users are generating events according to a multi-dimensional Hawkes process with exogenous intensity $\mu(t) \in \mathbb{R}^n$ and impact function $\Phi(t,s) \in \mathbb{R}^{n\times n}$.

Event exposure. Event exposure is the quantity of major interest in campaigning. The exposure process is mathematically represented as a counting process, $E(t) = (E^1(t), \dots, E^n(t))^\top$: here, $E^i(t)$ records the number of times user $i$ is exposed to the campaign (she or one of her neighbors performs an activity) by time $t$. Let $B$ be the adjacency matrix of the user network, i.e., $b_{ij} = 1$ if user $i$ follows user $j$, or equivalently user $j$ influences user $i$. We assume $b_{ii} = 1$ for all $i$. Then the exposure process is given by $E(t) = B\,N(t)$.

Stages and interventions. Let $[0,T]$ be the time horizon and $0 = \tau_0 < \tau_1 < \dots < \tau_{M-1} < \tau_M = T$ a partition into $M$ stages. In order to steer the activities of the network towards a desired level (criteria given below) at these stages, we impose a constant intervention $u_m \in \mathbb{R}^n$ on top of the existing exogenous intensity $\mu$ during $[\tau_m, \tau_{m+1})$, for each stage $m = 0, 1, \dots, M-1$. The activity intensity at the $m$-th stage is $\lambda_m(t) = \mu + u_m + \int_0^t \Phi(t,s)\,dN(s)$ for $\tau_m \leq t < \tau_{m+1}$, where $N(t)$ tracks the counting process of activities since $t = 0$. Note that the intervention itself exhibits a stochastic nature: adding $u^i_m$ to $\mu^i$ is equivalent to incentivizing user $i$ to increase her activity rate, but it is still uncertain when she will perform an activity, which appropriately mimics the randomness of real-world campaigning.
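To see stages and interventions in action, here is a minimal simulation sketch using Ogata's thinning algorithm, with the exogenous rate switching to $\mu + u_m$ on each stage; this is our own illustration under the exponential-kernel assumption used below, with arbitrary parameter values:

```python
import numpy as np

def simulate_intervened_hawkes(mu, us, taus, T, A, omega, rng):
    """Ogata thinning for a Hawkes process whose exogenous rate is mu + u_m on
    stage [tau_m, tau_{m+1}); a sketch in our own notation, not the paper's code."""
    exo_max = mu + np.max(us, axis=0)        # dominates the exogenous part everywhere
    events, t = [], 0.0
    while True:
        endo = np.zeros_like(mu)
        for s, j in events:                  # endogenous part decays between events,
            endo += A[:, j] * np.exp(-omega * (t - s))
        lam_bar = (exo_max + endo).sum()     # so this is a valid bound until the next event
        t += rng.exponential(1.0 / lam_bar)  # candidate event time
        if t > T:
            return events
        m = min(np.searchsorted(taus, t, side="right") - 1, len(us) - 1)
        endo = np.zeros_like(mu)
        for s, j in events:
            endo += A[:, j] * np.exp(-omega * (t - s))
        lam = mu + us[m] + endo
        if rng.uniform() * lam_bar <= lam.sum():          # accept with prob sum(lam)/lam_bar
            events.append((t, rng.choice(len(mu), p=lam / lam.sum())))

rng = np.random.default_rng(0)
n, M, T = 3, 4, 40.0
taus = np.linspace(0.0, T, M + 1)            # stage boundaries tau_0, ..., tau_M
us = rng.uniform(0.0, 0.3, size=(M, n))      # per-stage interventions u_m
A = 0.1 * rng.uniform(size=(n, n))           # spectral radius kept below omega for stability
events = simulate_intervened_hawkes(np.full(n, 0.2), us, taus, T, A, omega=1.0, rng=rng)
print(len(events), "events on [0, T]")
```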
States and state evolution. Note that the Hawkes process is non-Markovian, and one needs complete knowledge of the history to characterize the entire process. However, the conditional intensity $\lambda(t)$ depends only on the state of the process at time $t$ when the standard exponential kernel $\Phi(t,s) = A e^{-\omega(t-s)}\,\mathbb{1}_{\geq 0}(t-s)$ is employed. In this case, the activity rate at stage $m$ is
$$\lambda_m(t) = \mu + u_m + \underbrace{\int_0^{\tau_m} A e^{-\omega(t-s)}\,dN(s)}_{\text{from previous stages}} + \underbrace{\int_{\tau_m}^{t} A e^{-\omega(t-s)}\,dN(s)}_{\text{current stage}}. \qquad (13)$$
Define $x_m := \lambda_{m-1}(\tau_m) - u_{m-1} - \mu$ (and $x_0 = 0$ by convention); then the intensity due to the events of all previous $m$ stages can be written as $\int_0^{\tau_m} A e^{-\omega(t-s)}\,dN(s) = x_m e^{-\omega(t-\tau_m)}$. In other words, $x_m$ is sufficient to encode the information about the activity in the past $m$ stages that is relevant to the future. This is in sharp contrast to the general case, where the state space grows with the number of events.

Objective function. For a sequence of controls $u(t) = \sum_{m=0}^{M-1} u_m\, \mathbb{1}_{[\tau_m,\tau_{m+1})}(t)$, the activity counting process $N(t)$ is generated by the intensity $\lambda(t) = \mu + u(t) + \int_0^t A e^{-\omega(t-s)}\,dN(s)$. For each stage $m$ from $0$ to $M-1$, $x_m$ encodes the effects of the previous $m$ stages as above and $u_m$ is the control imposed at the current stage. Let $E^i_m(t; x_m, u_m)$, the $i$-th entry of $B \int_{\tau_m}^{t} dN(s)$, be the number of times user $i$ is exposed to the campaign by time $t \in [\tau_m, \tau_{m+1})$ in stage $m$; the goal is then to steer the expected total number of exposures $\bar{E}^i_m(x_m, u_m) := \mathbb{E}[E^i_m(\tau_{m+1}; x_m, u_m)]$ to a desired level. In what follows, we introduce several instances of the stage objective $g_m(x_m, u_m)$, expressed in terms of $\{\bar{E}^i_m(x_m, u_m)\}_{i=1}^n$, that characterize different exposure shaping tasks (a small code sketch of all three follows this list). The overall control problem is then to find $u(t)$ that optimizes the total objective $\sum_{m=0}^{M-1} g_m(x_m, u_m)$.

• Capped Exposure Maximization (CEM): In real networks, there is a cap on the exposure each user can tolerate due to her limited attention. Suppose we know the upper bound $\alpha^i_m$ on user $i$'s exposure tolerance, over which extra exposure is not counted towards the objective. Then we can form the following capped exposure maximization:
$$g_m(x_m, u_m) = \frac{1}{n}\sum_{i=1}^{n} \min\big\{\bar{E}^i_m(x_m, u_m),\ \alpha^i_m\big\} \qquad (14)$$

• Minimum Exposure Maximization (MEM): Suppose our goal is instead to maintain the exposure of the campaign on each user above a certain minimum level at each stage, or alternatively, to make the least-exposed user as exposed as possible. We can then consider the cost function:
$$g_m(x_m, u_m) = \min_i\, \bar{E}^i_m(x_m, u_m) \qquad (15)$$

• Least-squares Exposure Shaping (LES): Sometimes we want to achieve pre-specified target exposure levels, $\ell_m \in \mathbb{R}^n$, for the users. For example, we may want to divide users into groups and desire a different level of exposure in each group. To this end, we can perform a least-squares campaigning task with the following cost function, where $D$ encodes potential additional constraints (e.g., group partitions):
$$g_m(x_m, u_m) = -\frac{1}{n}\,\big\|D\bar{E}_m(x_m, u_m) - \ell_m\big\|^2 \qquad (16)$$
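As promised above, here is a small sketch (ours; the argument names are illustrative, not the paper's API) of the three stage objectives of Eqs. (14)-(16), acting on a vector `E_bar` of expected exposures with one entry per user:

```python
import numpy as np

def g_cem(E_bar, alpha):          # Eq. (14): capped exposure maximization
    return np.minimum(E_bar, alpha).mean()

def g_mem(E_bar):                 # Eq. (15): minimum exposure maximization
    return E_bar.min()

def g_les(E_bar, D, ell):         # Eq. (16): least-squares shaping (negated error,
    return -np.sum((D @ E_bar - ell) ** 2) / E_bar.size   # so that all three are maximized)
```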
Policy and actions. By observing the counting process in previous stages (summarized in a sequence of $x_m$) and taking the future uncertainty into account, the control problem is to design a policy $\pi = \{\pi_m : \mathbb{R}^n \to \mathbb{R}^n,\ m = 0, \dots, M-1\}$ such that the controls $u_m = \pi_m(x_m)$ maximize the total objective $\sum_{m=0}^{M-1} g_m(x_m, u_m)$. In addition, we may have constraints on the amount of control, for example a budget constraint on the sum of all interventions to users at each stage, or a cap on the amount of intensity a user can handle. A feasible set, or action space, over which we find the best intervention is represented as $U_m := \{u_m \in \mathbb{R}^n \mid c_m^\top u_m \leq C_m,\ 0 \preceq u_m \preceq \beta_m\}$. Here, $c_m \in \mathbb{R}^n_+$ contains the price of each person per unit increase of exogenous intensity and $C_m \in \mathbb{R}_+$ is the total budget at stage $m$. Also, $\beta_m \in \mathbb{R}^n_+$ is the cap on the amount of activity of the users. To summarize, the following problem is formulated to find the optimal control policy $\pi$:
$$\underset{\pi}{\text{maximize}}\ \sum_{m=0}^{M-1} g_m(x_m, \pi_m(x_m)), \quad \text{subject to } \pi_m(x_m) \in U_m, \ \text{for } m = 0, \dots, M-1. \qquad (17)$$

5 Closed-loop Dynamic Programming Solution

We have formulated the control problem as an optimization in (17). However, when the control policy $\pi_m$ is to be implemented, only $x_m$ is observed and there are still uncertainties in the future $\{x_{m+1}, \dots, x_{M-1}\}$. For instance, when $\pi_m$ is implemented according to $x_m$ starting from time $\tau_m$, the intensity $x_{m+1} := f(x_m, \pi_m(x_m))$ at time $\tau_{m+1}$ depends on $x_m$ and the control $\pi_m(x_m)$, but is also random due to the stochasticity of the process during $[\tau_m, \tau_{m+1})$. Therefore, the design of $\pi$ needs to take future uncertainties into consideration. Suppose we have arrived at the final stage, at time $\tau_{M-1}$, with observation $x_{M-1}$; then the optimal policy $\pi_{M-1}$ satisfies $g_{M-1}(x_{M-1}, \pi_{M-1}(x_{M-1})) = \max_{u \in U_{M-1}} g_{M-1}(x_{M-1}, u) =: J_{M-1}(x_{M-1})$. We then repeat this procedure backward for $m$ from $M-1$ to $0$ to find the sequence of controls via dynamic programming, such that the control $\pi_m(x_m) \in U_m$ yields the optimal objective value
$$J_m(x_m) = \max_{u_m \in U_m} \mathbb{E}\big[g_m(x_m, u_m) + J_{m+1}(f(x_m, u_m))\big]. \qquad (18)$$

Approximate Dynamic Programming. Solving (18) for $J_m(x_m)$ analytically is intractable. Therefore, we adopt an approximate dynamic programming scheme. In fact, approximate control is as essential a part of dynamic programming as the optimization itself, which is usually intractable due to the curse of dimensionality except in a few special cases [3]. Here we adopt a suboptimal control scheme, certainty equivalent control (CEC), which applies at each stage the control that would be optimal if the uncertain quantities were fixed at some typical values, such as their average behavior. It results in an optimal control sequence, the first component of which is used at the current stage, while the remaining components are discarded. The procedure is repeated for the remaining stages. Algorithm 1 summarizes the dynamic programming steps.

Algorithm 1: Closed-loop Multi-stage Dynamic Programming
  Input: intervention constraints $c_0, \dots, c_{M-1}$; $C_0, \dots, C_{M-1}$; $\beta_0, \dots, \beta_{M-1}$
  Input: objective-specific constraints $\alpha_0, \dots, \alpha_{M-1}$ for CEM and $\ell_0, \dots, \ell_{M-1}$ for LES
  Input: time $T$; Hawkes parameters $A$, $\omega$
  Output: optimal intervention $u_0, \dots, u_{M-1}$; optimal cost Cost
  Set $x_0 \leftarrow 0$ and Cost $\leftarrow 0$
  for $l = 0 : M-1$ do
    $(v_l, \dots, v_{M-1}) = \text{open\_loop}(x_l)$  (problems (24), (25), (26) for CEM, MEM, LES respectively)
    Set $u_l \leftarrow v_l$ and drop $v_{l+1}, \dots, v_{M-1}$
    Update the next state $x_{l+1} \leftarrow f_l(x_l, u_l)$ and Cost $\leftarrow$ Cost $+\, g_l(x_l, u_l)$

This algorithm has two parts: (i) certainty equivalence, in which the random behavior is replaced by its average; and (ii) the open-loop optimization. Assume we are at the beginning of stage $l$ of Alg. 1, with state vector $x_l$ at time $\tau_l$.

Certainty equivalence. We use the machinery developed in Sec. 3 to compute the average exposure at any stage $m = l, l+1, \dots, M-1$:
$$\bar{E}_m(x_m, u_m) = B\,\mathbb{E}[N(\tau_{m+1}) - N(\tau_m)] = B\,\mathbb{E}\Big[\int_{\tau_m}^{\tau_{m+1}} dN(s)\Big] = B \int_{\tau_m}^{\tau_{m+1}} \eta_m(s)\,ds \qquad (19)$$
where $\eta_m(t) = \mathbb{E}[\lambda_m(t)]$ and $\lambda_m(t) = \mu + u_m + x_l e^{-\omega(t-\tau_l)} + \int_{\tau_l}^t A e^{-\omega(t-s)}\,dN(s)$ for $t \in [\tau_m, \tau_{m+1})$. Now, we use the superposition property of point processes [4] to decompose the process as $N(t) = N^c(t) + N^v(t)$, corresponding to $\lambda_m(t) = \lambda^c_m(t) + \lambda^v_m(t)$,
where the first component $\lambda^c_m(t) = \mu + u_m + \int_{\tau_l}^t A e^{-\omega(t-s)}\,dN^c(s)$ consists of events caused by the exogenous intensity at the current stage $m$, and the second, $\lambda^v_m(t) = x_l e^{-\omega(t-\tau_l)} + \int_{\tau_l}^t A e^{-\omega(t-s)}\,dN^v(s)$, is due to activities in previous stages. According to Thm. 2 we have
$$\eta^c_m(t) := \mathbb{E}[\lambda^c_m(t)] = \Psi(t-\tau_l)\mu + \Psi(t-\tau_l)u_l + \sum_{k=l+1}^{m} \Psi(t-\tau_k)(u_k - u_{k-1}), \qquad (20)$$
and according to Thm. 3 we have
$$\eta^v_m(t) := \mathbb{E}[\lambda^v_m(t)] = \int_{\tau_l}^t \Psi(t-s)\, d\big(x_l e^{-\omega(s-\tau_l)}\,\mathbb{1}_{[\tau_l,\infty)}(s)\big). \qquad (21)$$
From now on, for simplicity, we assume the stages form an equal partition of $[0,T]$ into $M$ segments, each of length $\Delta_M$. Combining Eq. (19) and $\eta_m(t) = \eta^c_m(t) + \eta^v_m(t)$ yields
$$\bar{E}_m(x_m, u_m) = \Gamma((m-l+1)\Delta_M)u_l + \Gamma((m-l)\Delta_M)(u_{l+1} - u_l) + \cdots + \Gamma(\Delta_M)(u_m - u_{m-1}) + \Gamma((m-l+1)\Delta_M)\mu + \Xi((m-l+1)\Delta_M)x_l \qquad (22)$$
where $\Gamma(t)$ and $\Xi(t)$ are matrices independent of the $u_m$'s, defined in Appendix D. Note the linear relation between the average exposure $\bar{E}_m(x_m, u_m)$ and the intervention values $u_l, \dots, u_m$.

Open-loop optimization. Having found the average exposures at stages $m = l, \dots, M-1$, we formulate an open-loop optimization to find the optimal $u_l, u_{l+1}, \dots, u_{M-1}$. Defining $\tilde{u}_l = (u_l; \dots; u_{M-1})$ and $\tilde{E}_l = (\bar{E}_l(x_l, u_l); \dots; \bar{E}_{M-1}(x_{M-1}, u_{M-1}))$, we can write
$$X_l \tilde{u}_l + Y_l \mu + W_l x_l = \tilde{E}_l, \quad \text{where } Z_l \tilde{u}_l \preceq z_l, \qquad (23)$$
and $X_l$, $Y_l$, $W_l$, $Z_l$, and $z_l$ are independent of $\tilde{u}_l$, $\mu$, and $x_l$, as defined in Appendix D. Defining the expanded form of the constraint variables as $\tilde{c}_l = (c_l; \dots; c_{M-1})$, $\tilde{C}_l = (C_l; \dots; C_{M-1})$, and $\tilde{\beta}_l = (\beta_l; \dots; \beta_{M-1})$, we provide the optimization form of the above exposure shaping tasks.

For CEM, consider $\tilde{\alpha}_l = (\alpha_l; \dots; \alpha_{M-1})$. Then the problem
$$\underset{h,\ \tilde{u}_l}{\text{maximize}}\ \frac{1}{n}\mathbf{1}^\top h \quad \text{subject to} \quad X_l \tilde{u}_l + Y_l \mu + W_l x_l \succeq h,\ \ \tilde{\alpha}_l \succeq h,\ \ Z_l \tilde{u}_l \preceq z_l \qquad (24)$$
solves CEM, where $h$ is an auxiliary vector of size $n(M-l)$.

For MEM, consider the auxiliary $h$ as a vector of size $M-l$ and $\tilde{h}$ a vector of size $n(M-l)$, with $\tilde{h} = (h(1); \dots; h(1);\ h(2); \dots; h(2);\ \dots;\ h(M-l); \dots; h(M-l))$, where each $h(k)$ is repeated $n$ times. Then MEM is equivalent to
$$\underset{h,\ \tilde{u}_l}{\text{maximize}}\ \mathbf{1}^\top h \quad \text{subject to} \quad X_l \tilde{u}_l + Y_l \mu + W_l x_l \succeq \tilde{h},\ \ Z_l \tilde{u}_l \preceq z_l. \qquad (25)$$

For LES, let $\tilde{\ell}_l = (\ell_l; \dots; \ell_{M-1})$ and $\tilde{D}_l = \mathrm{diag}(D, \dots, D)$; then
$$\underset{\tilde{u}_l}{\text{minimize}}\ \frac{1}{n}\big\|\tilde{D}_l(X_l \tilde{u}_l + Y_l \mu + W_l x_l) - \tilde{\ell}_l\big\|^2 \quad \text{subject to} \quad Z_l \tilde{u}_l \preceq z_l. \qquad (26)$$
All three tasks involve convex (and linear) objective functions with linear constraints, which impose a convex feasible set. Therefore, one can use the rich and well-developed literature on convex optimization and linear programming to find the optimal intervention.
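To make the structure of problems (24)-(26) concrete, here is a minimal CVXPY sketch of the LES problem (26) with $D = I$; the matrices below are random placeholders standing in for the Appendix D constructions, and only a single aggregate budget constraint is shown, so this illustrates the shape of the convex program rather than the paper's actual implementation:

```python
import numpy as np
import cvxpy as cp

# Placeholder problem data standing in for X_l, Y_l, W_l and the targets of Eq. (23);
# in the paper these are built from the Psi-based blocks defined in Appendix D.
rng = np.random.default_rng(0)
n, M = 5, 3                                  # users, remaining stages
dim = n * M
Xl = rng.uniform(size=(dim, dim))
Yl = rng.uniform(size=(dim, n))
Wl = rng.uniform(size=(dim, n))
mu, xl = rng.uniform(size=n), rng.uniform(size=n)
ell = rng.uniform(1.0, 2.0, size=dim)        # stacked LES targets
c, C = np.ones(dim), 10.0                    # one illustrative budget constraint

u = cp.Variable(dim, nonneg=True)            # stacked interventions u_l, ..., u_{M-1}
E_bar = Xl @ u + Yl @ mu + Wl @ xl           # stacked expected exposures, Eq. (23)
prob = cp.Problem(cp.Minimize(cp.sum_squares(E_bar - ell) / n),
                  [c @ u <= C])
prob.solve()
print(prob.status, u.value[:n])
```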
6 Experiments

We evaluate our campaigning framework using both simulated and real-world data and show that our approach significantly outperforms several baselines.¹

Campaigning results on synthetic networks. In this section, we experiment with a synthetic network of 300 nodes. Details of the experimental setup and parameter settings are found in Appendix F. We focus on three tasks: capped exposure maximization, minimax exposure shaping, and least-squares exposure shaping. To compare the methods, we simulate the network with the prescribed intervention intensity and compute the objective function based on the events that happened during the simulation. The mean and standard deviation of the objective function over 10 runs are reported.

[Figure 1: The objective on simulated events and the synthetic network; n = 300, M = 6, T = 40. Panels: (a) capped maximization (sum of exposure; CLL, OPL, RND, PRK, WEI), (b) minimum maximization (minimum exposure; CLL, OPL, RND, WFL, PRP), (c) least-squares shaping (average distance; CLL, OPL, RND, GRD, REL).]

Fig. 1 summarizes the performance of the proposed algorithm (CLL) and 4 other baselines on the different campaigning tasks. For CEM, our approach consistently outperforms the others by at least 10: it exposes each user to the campaign at least 10 more times than the rest, consuming the same budget and respecting the same constraints. The extra 20 units of exposure over OPL, the value of information, shows how much we gain by incorporating a dynamic closed-loop solution as opposed to a one-time open-loop optimization over all stages. For MEM, the proposed method outperforms the others by a smaller margin; however, the 0.1 exposure difference from the second best method is not trifling, as lifting the minimum exposure is a difficult task [8]. For LES, the results demonstrate the superiority of CLL by a large margin: the $10^3$ difference from the second best algorithm, aggregated over 6 stages, roughly translates to a $\sqrt{10^3/6} \approx 13$ difference in the number of exposures per user. Given the heterogeneity of the network activity and target shape, this is a significant improvement over the baselines. Appendix F includes further results with varying numbers of nodes, numbers of stages, and durations of each stage.

Campaigning results on real-world networks. We also evaluate the proposed framework on real-world data. To this end, we utilize the MemeTracker dataset [9], which contains the information flows captured by hyperlinks between different sites with timestamps over 9 months. This data has previously been used to validate Hawkes process models of social activity [5, 10]. For the real data, we utilize two evaluation procedures. First, similar to the synthetic case, we simulate the network, but now with parameters learned from the real data. The more interesting evaluation scheme would entail carrying out real interventions in a social media platform; since this is very challenging to do, our second evaluation scheme instead uses held-out data to mimic such a procedure. We form 10 pairs of clusters/cascades by selecting any 2 combinations of the 5 largest clusters in the MemeTracker data; each is a cascade of events around a common subject. For each of these 10 pairs, the methods face the question of predicting which cascade will better attain the objective function. They should be able to answer this by measuring how similar their prescription is to the real exogenous intensity. The key point here is that the real events that happened are used to evaluate the methods' objective functions. The results are then reported as the average prediction accuracy over all stages, over 10 runs of random constraint and parameter initialization, on the 10 pairs of cascades. The details of the experimental setup are further explained in Appendix F.

Fig. 2, left column, illustrates the performance with respect to increasing the number of users in the network. The performance drops slightly with the network size.
This means that prediction becomes more difficult as more random variables are involved.

¹Code is available at http://www.cc.gatech.edu/~mfarajta/

[Figure 2: Real-world dataset results; n = 300, M = 6, T = 40. Rows: capped maximization, minimum maximization, least-squares shaping. Columns: prediction accuracy vs. network size, prediction accuracy vs. number of intervention points, and the objective function (sum of exposure / minimum exposure / average distance) for methods CLL, OPL, RND, and PRK, WEI / WFL, PRP / GRD, REL respectively.]

The middle panel shows the performance with respect to increasing the number of intervention points. Here, a slight increase in performance is apparent: as the number of intervention points increases, the algorithm has more control over the outcome and can attain the objective better.

Fig. 2, top row, summarizes the results of CEM. The left panel demonstrates the predictive performance of the algorithms: CLL consistently outperforms the rest, with 65-70% accuracy in predicting the optimal cascade. The right panel shows the objective function simulated 10 times with the learned parameters for a network of n = 300 users and 6 intervention points. The extra 2.5 exposures per user compared to the second best method, with the same budget and constraints, would be a significant advertising achievement. Among the competitors, OPL and RND seem to perform well. If there were no cap on the resulting exposure, all methods would perform comparably because of the linearity of the sum of exposures; the successful method is thus the one that manages to maximize exposure while respecting the cap. The failure of PRK and WEI indicates that structural properties alone are not enough to capture influence. Compared to these two, RND performs better on average but, as expected, exhibits a larger variance.

Fig. 2, middle row, summarizes the results for MEM and shows that CLL consistently outperforms the others. CLL is still the best algorithm, and OPL and RND are the significant baselines. The failure of WFL and PRP shows that the network structure plays a significant role in the activity and exposure processes.

The bottom row of Fig. 2 demonstrates the results of LES. CLL is still the best method; OPL remains strong, but RND does not perform well. The objective function is the sum of squared gaps between the target and achieved exposures, which explains why GRD shows comparable success: it starts with the largest exposure gap and greedily allocates the budget.

Conclusion. In this paper, we introduced the optimal multi-stage campaigning problem, which is a generalization of the activity shaping and influence maximization problems and allows for more elaborate goal functions. Our model of social activity is based on the multivariate Hawkes process, and, for the first time, we derive a linear connection between a time-varying exogenous intensity and the overall network exposure of the campaign.

Acknowledgement.
The work is supported in part by NSF/NIH BIGDATA R01 GM108341, NSF IIS-1639792, NSF DMS-1620345, and NSF DMS-1620342.

References

[1] D. M. West. Air Wars: Television Advertising and Social Media in Election Campaigns, 1952-2012. Sage, 2013.
[2] M. Vergeer, L. Hermans, and S. Sams. Online social networks and micro-blogging in political campaigning: the exploration of a new campaign tool and a new campaign style. Party Politics, 2013.
[3] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1.
[4] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer Science & Business Media, 2007.
[5] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In AISTATS, 2013.
[6] O. Aalen, O. Borgan, and H. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer, 2008.
[7] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 1971.
[8] M. Farajtabar, N. Du, M. Gomez-Rodriguez, I. Valera, L. Song, and H. Zha. Shaping social activity by incentivizing users. NIPS, 2014.
[9] J. Leskovec, L. Backstrom, and J. Kleinberg. Meme-tracking and the dynamics of the news cycle. SIGKDD, 2009.
[10] S. H. Yang and H. Zha. Mixture of mutually exciting processes for viral diffusion. ICML, 2013.
[11] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. SIGKDD, 2003.
[12] F. B. Hanson. Applied Stochastic Processes and Control for Jump-Diffusions: Modeling, Analysis, and Computation, volume 13. SIAM, 2007.
[13] A. De, I. Valera, N. Ganguly, S. Bhattacharya, and M. Gomez-Rodriguez. Modeling opinion dynamics in diffusion networks. arXiv:1506.05474, 2015.
[14] Y. Wang, E. Theodorou, A. Verma, and L. Song. Steering opinion dynamics in information diffusion networks. arXiv:1603.09021, 2016.
[15] D. Bloembergen, B. Ranjbar Sahraei, H. Bou-Ammar, K. Tuyls, and G. Weiss. Influencing social networks: An optimal control study. In ECAI, 2014.
[16] K. Kandhway and J. Kuri. Campaigning in heterogeneous social networks: Optimal control of SI information epidemics. 2015.
[17] P.-Y. Chen, S.-M. Cheng, and K.-C. Chen. Optimal control of epidemic information dissemination over networks. IEEE Transactions on Cybernetics, 2014.
[18] W. Lian, R. Henao, V. Rao, J. Lucas, and L. Carin. A multitask point process predictive model. ICML, 2015.
[19] A. P. Parikh, A. Gunawardana, and C. Meek. Conjoint modeling of temporal dependencies in event streams. UAI, 2012.
[20] P. O. Perry and P. J. Wolfe. Point process modeling for directed interaction networks. Journal of the Royal Statistical Society, 2013.
[21] S. W. Linderman and R. P. Adams. Discovering latent network structure in point process data. ICML, 2014.
[22] C. Blundell, J. Beck, and K. A. Heller. Modelling reciprocating relationships with Hawkes processes. NIPS, 2012.
[23] T. Iwata, A. Shah, and Z. Ghahramani. Discovering latent influence in online social activities via shared cascade Poisson processes. SIGKDD, 2013.
[24] O. Hijab. Introduction to Calculus and Classical Analysis. Springer, 2007.
[25] G. B. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons, 2013.
[26] R. Bracewell. The Fourier Transform and Its Applications. New York, 1965.
[27] A. H. Al-Mohy and N. J. Higham. Computing the action of the matrix exponential, with an application to exponential integrators. SIAM Journal on Scientific Computing, 2011.
Coordinate-wise Power Method

Qi Lei¹, Kai Zhong¹, Inderjit S. Dhillon¹,²
¹Institute for Computational Engineering & Sciences, ²Department of Computer Science, University of Texas at Austin
{leiqi, zhongkai}@ices.utexas.edu, inderjit@cs.utexas.edu

Abstract

In this paper, we propose a coordinate-wise version of the power method from an optimization viewpoint. The vanilla power method simultaneously updates all the coordinates of the iterate, which is essential for its convergence analysis. However, different coordinates converge to the optimal value at different speeds. Our proposed algorithm, which we call the coordinate-wise power method, is able to select and update the most important $k$ coordinates in $O(kn)$ time at each iteration, where $n$ is the dimension of the matrix and $k \leq n$ is the size of the active set. Inspired by the "greedy" nature of our method, we further propose a greedy coordinate descent algorithm applied to a non-convex objective function specialized for symmetric matrices. We provide convergence analyses for both methods. Experimental results on both synthetic and real data show that our methods achieve up to 23 times speedup over the basic power method. Meanwhile, due to their coordinate-wise nature, our methods are very suitable for the important case when data cannot fit into memory. Finally, we introduce how the coordinate-wise mechanism can be applied to other iterative methods used in machine learning.

1 Introduction

Computing the dominant eigenvectors of matrices and graphs is one of the most fundamental tasks in various machine learning problems, including low-rank approximation, principal component analysis, spectral clustering, dimensionality reduction and matrix completion. Several algorithms are known for computing the dominant eigenvectors, such as the power method, the Lanczos algorithm [14], randomized SVD [2] and multi-scale methods [17]. Among them, the power method is the oldest and simplest one, where a matrix $A$ is multiplied by the normalized iterate $x^{(l)}$ at each iteration, namely, $x^{(l+1)} = \mathrm{normalize}(Ax^{(l)})$. The power method is popular in practice due to its simplicity, small memory footprint and robustness, and it is particularly suitable for computing the dominant eigenvector of large sparse matrices [14]. It has been applied to PageRank [7], sparse PCA [19, 9], private PCA [4] and spectral clustering [18]. However, its convergence rate depends on $|\lambda_2|/|\lambda_1|$, the ratio of the magnitudes of the top two dominant eigenvalues [14]. Note that when $|\lambda_2| \approx |\lambda_1|$, the power method converges slowly.

In this paper, we propose an improved power method, which we call the coordinate-wise power method, to accelerate the vanilla power method. The vanilla power method updates all $n$ coordinates of the iterate simultaneously, even if some have already converged to the optimal value. This motivates us to develop new algorithms that select and update a set of important coordinates at each iteration. As updating each coordinate costs only $\frac{1}{n}$ of one power iteration, significant running time can be saved when $n$ is very large. We raise two questions for designing such an algorithm.

The first question: how to select the coordinates? A natural idea is to select the coordinate that will change the most, namely,
Note that ci denotes the i-th element of the vector c. Instead of choosing only one coordinate to update, we can also choose k coordinates with the largest k changes in {|ci |}ni=1 . We will justify this selection criterion by connecting our method with greedy coordinate descent algorithm for minimizing a non-convex function in Section 3. With this selection rule, we are able to show that our method has global convergence guarantees and faster convergence rate compared to vanilla power method if k satisfies certain conditions. Another key question: how to choose these coordinates without too much overhead? How to efficiently select important elements to update is of great interest in the optimization community. For example, [1] leveraged nearest neighbor search for greedy coordinate selection, while [11] applied partially biased sampling for stochastic gradient descent. To calculate the changes in Eq (1) we need to know all coordinates of the next iterate. This violates our previous intention to calculate a small subset of the new coordinates. We show, by a simple trick, we can use only O(kn) operations to update the most important k coordinates. Experimental results on dense as well as sparse matrices show that our method is up to 8 times faster than vanilla power method. Relation to optimization. Our method reminds us of greedy coordinate descent method. Indeed, we show for symmetric matrices our coordinate-wise power method is similar to greedy coordinate descent for rank-1 matrix approximation, whose variants are widely used in matrix completion [8] and non-negative matrix factorization [6]. Based on this interpretation, we further propose a faster greedy coordinate descent method specialized for symmetric matrices. This method achieves up to 23 times speedup over the basic power method and 3 times speedup over the Lanczos method on large real graphs. For this non-convex problem, we also provide convergence guarantees when the initial iterate lies in the neighborhood of the optimal solution. Extensions. With the coordinate-wise nature, our methods are very suitable to deal with the case when data cannot fit into memory. We can choose a k such that k rows of A can fit in memory, and then fully process those k rows of data before loading the RAM (random access memory) with a new partition of the matrix. This strategy helps balance the data processing and data loading time. The experimental results show our method is 8 times faster than vanilla power method for this case. The paper is organized as follows. Section 2 introduces coordinate-wise power method for computing the dominant eigenvector. Section 3 interprets our strategy from an optimization perspective and proposes a faster algorithm. Section 4 provides theoretical convergence guarantee for both algorithms. Experimental results on synthetic or real data are shown in Section 5. Finally Section 6 presents the extensions of our methods: dealing with out-of-core cases and generalizing the coordinate-wise mechanism to other iterative methods that are useful for the machine learning community. argmaxi |ci |, where c = 2 Coordinate-wise Power Method The classical power method (PM) iteratively multiplies the iterate x 2 Rn by the matrix A 2 Rn?n , which is inefficient since some coordinates may converge faster than others. To illustrate this (a) The percentage of unconverged coordinates versus the number of operations (b) Number of updates of each coordinate Figure 1: Motivation for the Coordinate-wise Power Method. 
To illustrate this phenomenon, we conduct an experiment with the power method; we set the stopping criterion as $\|x - v_1\|_1 < \epsilon$, where $\epsilon$ is the threshold for error, and we let $v_i$ denote the $i$-th dominant eigenvector (associated with the eigenvalue of the $i$-th largest magnitude) of $A$ throughout this paper. During the iterative process, even if some coordinates meet the stopping criterion, they still have to be updated at every iteration until uniform convergence. In Figure 1(a), we count the number of unconverged coordinates, which we define as $\{i \in [n] : |x_i - v_{1,i}| > \epsilon\}$, and see that it gradually decreases with the iterations, which implies that the power method makes a large number of unnecessary updates. In this paper, for computing the dominant eigenvector, we exhibit a coordinate selection scheme that has the ability to select and update "important" coordinates with little overhead. We call our method the Coordinate-wise Power Method (CPM). As shown in Figures 1(a) and 1(b), by selecting important entries to update, the number of unconverged coordinates drops much faster, leading to fewer flops overall.

Algorithm 1 Coordinate-wise Power Method
1: Input: symmetric matrix $A \in \mathbb{R}^{n\times n}$, number of selected coordinates $k$, and number of iterations $L$.
2: Initialize $x^{(0)} \in \mathbb{R}^n$ and set $z^{(0)} = Ax^{(0)}$. Set the coordinate selecting criterion $c^{(0)} = x^{(0)} - \frac{z^{(0)}}{(x^{(0)})^\top z^{(0)}}$.
3: for $l = 1$ to $L$ do
4: Let $\Omega^{(l)}$ be a set containing the $k$ coordinates of $c^{(l-1)}$ with the largest magnitude. Execute the following updates:
$$y_j^{(l)} = \begin{cases} \dfrac{z_j^{(l-1)}}{(x^{(l-1)})^\top z^{(l-1)}}, & j \in \Omega^{(l)} \\[4pt] x_j^{(l-1)}, & j \notin \Omega^{(l)} \end{cases} \qquad (2)$$
$$z^{(l)} = \Big(z^{(l-1)} + A\big(y^{(l)}_{\Omega^{(l)}} - x^{(l-1)}_{\Omega^{(l)}}\big)\Big)\big/\|y^{(l)}\|, \quad x^{(l)} = y^{(l)}/\|y^{(l)}\|, \quad c^{(l)} = x^{(l)} - \frac{z^{(l)}}{(x^{(l)})^\top z^{(l)}} \qquad (3)$$
5: Output: approximate dominant eigenvector $x^{(L)}$

Algorithm 1 describes our coordinate-wise power method, which updates $k$ entries at a time to compute the dominant eigenvector of a symmetric input matrix; the generalization to asymmetric cases is straightforward. The algorithm starts from an initial vector $x^{(0)}$ and iteratively performs the updates $x_i \leftarrow a_i^\top x/(x^\top Ax)$ with $i$ in a selected set of coordinates $\Omega \subseteq [n]$ defined in step 4, where $a_i$ is the $i$-th row of $A$. The set of indices $\Omega$ is chosen to maximize the difference between the current coordinate value $x_i$ and the next coordinate value $a_i^\top x/(x^\top Ax)$. $z^{(l)}$ and $c^{(l)}$ are auxiliary vectors: maintaining $z^{(l)} \approx Ax^{(l)}$ saves much time, while the magnitude of $c$ represents the importance of each coordinate and is used to select $\Omega$. We use the Rayleigh quotient $x^\top Ax$ ($x$ is normalized) for scaling, unlike the $\|Ax\|$ used in the power method. Our intuition is as follows: on one hand, it is well known that the Rayleigh quotient is the best estimate for eigenvalues. On the other hand, the limit point under $x^\top Ax$ scaling satisfies $\bar{x} = A\bar{x}/(\bar{x}^\top A\bar{x})$, which allows both negative and positive dominant eigenvectors, while the scaling $\|Ax\|$ is always positive, so its limit points only lie among eigenvectors associated with positive eigenvalues, ruling out convergence to a negative dominant eigenvector.
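The following NumPy sketch is our own reading of Algorithm 1: the variable names mirror the pseudocode, and the top-$k$ selection uses `np.argpartition` instead of quickselect (the same $O(n)$ cost). It assumes $x^\top Ax$ stays away from zero along the iterates:

```python
import numpy as np

def cpm(A, k, num_iters, rng=None):
    """Coordinate-wise power method sketch: refresh the k coordinates with the
    largest |c_i| per iteration while z tracks A @ x in O(kn) time."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    z = A @ x
    for _ in range(num_iters):
        c = x - z / (x @ z)                           # selection criterion, Eq. (1)
        omega = np.argpartition(np.abs(c), -k)[-k:]   # top-k indices in O(n)
        y = x.copy()
        y[omega] = z[omega] / (x @ z)                 # Eq. (2)
        z = z + A[:, omega] @ (y[omega] - x[omega])   # maintain z = A @ y, O(kn)
        nrm = np.linalg.norm(y)
        x, z = y / nrm, z / nrm                       # Eq. (3)
    return x
```

On a symmetric `A`, `cpm(A, k=n // 10, num_iters=...)` returns an approximate dominant eigenvector while touching only `k` columns of `A` per iteration.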
2.1 Coordinate Selection Strategy

An initial way to understand our coordinate selection strategy is that we select the coordinates with the largest potential change. With a current iterate x and an arbitrary active set Ω, let y^Ω be the potential next iterate with only the coordinates in Ω updated, namely

(y^Ω)_i = a_i^T x/(x^T Ax)  if i ∈ Ω;   (y^Ω)_i = x_i  if i ∉ Ω.

According to our algorithm, we select the active set Ω to maximize the iterate change. Therefore

Ω = argmax_{I⊆[n],|I|=k} ||y^I - x||² = argmin_{I⊆[n],|I|=k} ||y^I - Ax/(x^T Ax)||²,

and we denote the minimized difference by g := y^Ω - Ax/(x^T Ax). That is, with our updating rule, the goal of maximizing the iteration gap is equivalent to minimizing the difference between the next iterate y^(l+1) and Ax^(l)/((x^(l))^T Ax^(l)), where this difference can be interpreted as noise g^(l). A good set Ω ensures a sufficiently small noise g^(l), thus achieving in O(kn) time a convergence rate similar (analyzed later) to what the power method achieves in O(n²) time. A more formal statement of the convergence analysis is given in Section 4.

Another reason for this selection rule is that it incurs little overhead. In each iteration, we maintain a vector z ≡ Ax with kn flops via the updating rule in Eq. (3). The overhead consists of calculating c and choosing Ω; both parts cost O(n) operations. Here Ω is chosen by Hoare's quickselect algorithm [5], which finds the k-th largest entry of |c|, so the overhead is negligible compared with O(kn). Hence CPM spends as much time per coordinate as PM does on average, while the k coordinates it updates are the most important ones. For sparse matrices, the time complexity is O(n + (k/n)·nnz(A)) per iteration, where nnz(A) is the number of nonzero elements of the matrix A.

Although the above analysis gives good intuition about how our method works, it does not directly show that our coordinate selection strategy has any optimality properties. In the next section, we give another interpretation of our coordinate-wise power method and establish its connection with an optimization problem for low-rank approximation.

3 Optimization Interpretation

The coordinate descent method [12, 6] was popularized due to its simplicity and good performance. With all but one coordinate fixed, the minimization of the objective function becomes a sequence of univariate subproblems. When such subproblems are quickly solvable, coordinate descent methods can be efficient. Moreover, in different problem settings, a specific coordinate selection rule in each iteration makes it possible to further improve the algorithm's efficiency. The power method is reminiscent of the rank-one matrix factorization

argmin_{x∈R^n, y∈R^d} f(x, y) = ||A - xy^T||_F².   (4)

With alternating minimization, the update for x becomes x ← Ay/||y||², and vice versa for y. Therefore, for a symmetric matrix, alternating minimization is exactly PM apart from the normalization constant. Meanwhile, this similarity between PM and alternating minimization extends to a similarity between CPM and greedy coordinate descent.
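The equivalence between alternating minimization for Eq. (4) and the power method is easy to check numerically. The following small script is our own illustration; the test matrix, seed, and iteration count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([3.0, 1.5, 1.0, 0.5, 0.2, 0.1]) @ Q.T   # symmetric, clear eigengap

x_alt = rng.standard_normal(6)
y = x_alt.copy()
x_pm = x_alt.copy()
for _ in range(40):
    x_alt = A @ y / (y @ y)            # argmin_x ||A - x y^T||_F^2
    y = A @ x_alt / (x_alt @ x_alt)    # for symmetric A, the same closed form
    x_pm = A @ x_pm                    # plain power iteration
    x_pm /= np.linalg.norm(x_pm)

u = x_alt / np.linalg.norm(x_alt)
# the two directions agree up to sign (the printed value is ~0)
print(min(np.linalg.norm(u - x_pm), np.linalg.norm(u + x_pm)))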
A more detailed interpretation is given in Appendix A.5, where we show the equivalence of the following coordinate selection rules for Eq. (4): (a) largest coordinate value change, denoted |Δx_i|; (b) largest partial gradient (Gauss-Southwell rule), |∇_i f(x)|; (c) largest function value decrease, |f(x + Δx_i e_i) - f(x)|. The coordinate selection rule is therefore formally justified from the optimization viewpoint.

3.1 Symmetric Greedy Coordinate Descent (SGCD)

We propose an even faster algorithm based on greedy coordinate descent. This method is designed for symmetric matrices and additionally requires knowing the sign of the most dominant eigenvalue. We also prove its convergence to the global optimum for a sufficiently close initial point. A natural alternative objective function, specifically for the symmetric case, is

argmin_{x∈R^n} f(x) = ||A - xx^T||_F².   (5)

Notice that the stationary points of f(x), which require ∇f(x) = 4(||x||² x - Ax) = 0, are obtained at eigenvectors: x* = ±sqrt(λ_i) v_i whenever the eigenvalue λ_i is positive. The global minimum of Eq. (5) is attained at the eigenvector corresponding to the largest positive eigenvalue, not the one with the largest magnitude. For most applications, like PageRank, we know λ₁ is positive; if instead we want to calculate the negative eigenvalue with the largest magnitude, we simply optimize f̃(x) = ||A + xx^T||_F² instead.

Now we introduce Algorithm 2, which optimizes Eq. (5). With coordinate descent, we update the i-th coordinate by x_i^(l+1) ← argmin_γ f(x^(l) + (γ - x_i^(l)) e_i), which requires the partial derivative of f(x) in the i-th coordinate to be zero, i.e.,

∇_i f(x) = 4(x_i ||x||₂² - a_i^T x) = 0  ⟺  x_i³ + p x_i + q = 0,   (6)
where p = ||x||² - x_i² - a_ii  and  q = -a_i^T x + a_ii x_i.   (7)

Similar to CPM, the most time-consuming part comes from maintaining z (≡ Ax), as the calculation of the selection criterion c and of the coefficient q requires it. Therefore the overall time complexity of one iteration is the same as for CPM.

Notice that c from Eq. (6) is the partial gradient of f, so we are using the Gauss-Southwell rule to choose the active set. It is in fact the only effective and computationally cheap selection rule among the previously analyzed rules (a), (b), and (c): to calculate the iterate changes |Δx_i|, one would need to obtain the roots of n cubic equations, and the function decreases |Δf_i| require even more work.

Remark: for an unbiased initializer, x^(0) should be scaled by a constant η such that

η = argmin_{a≥0} ||A - (a x^(0))(a x^(0))^T||_F = sqrt( ((x^(0))^T A x^(0)) / ||x^(0)||⁴ ).

Algorithm 2 Symmetric greedy coordinate descent (SGCD)
1: Input: symmetric matrix A ∈ R^{n×n}, number of selected coordinates k, and number of iterations L.
2: Initialize x^(0) ∈ R^n and set z^(0) = Ax^(0). Set the coordinate selecting criterion c^(0) = x^(0) - z^(0)/||x^(0)||².
3: for l = 0 to L - 1 do
4:   Let Ω^(l) be a set containing k coordinates of c^(l) with the largest magnitude. Execute the following updates:
       x_j^(l+1) = argmin_γ f(x^(l) + (γ - x_j^(l)) e_j)  if j ∈ Ω^(l);   x_j^(l+1) = x_j^(l)  if j ∉ Ω^(l),
       z^(l+1) = z^(l) + A(x^(l+1)_{Ω^(l)} - x^(l)_{Ω^(l)}),
       c^(l+1) = x^(l+1) - z^(l+1)/||x^(l+1)||².
5: Output: vector x^(L).
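The univariate subproblem in step 4 reduces to the cubic of Eqs. (6)-(7). A small NumPy sketch of one coordinate update follows; this is our own illustrative reading, with np.roots standing in for a closed-form cubic solver.

import numpy as np

def sgcd_coordinate_update(A, x, z, i):
    # Exact minimization of f(x) = ||A - x x^T||_F^2 along coordinate i,
    # assuming z == A @ x is maintained. Solves t^3 + p t + q = 0 (Eqs. (6)-(7)).
    p = x @ x - x[i] ** 2 - A[i, i]
    q = -z[i] + A[i, i] * x[i]                  # a_i^T x is read off z[i]
    roots = np.roots([1.0, 0.0, p, q])
    real = roots[np.abs(roots.imag) < 1e-10].real
    # restricted objective up to a constant: an antiderivative of t^3 + p t + q
    g = lambda t: t ** 4 / 4 + p * t ** 2 / 2 + q * t
    return min(real, key=g)

After the update, z can be refreshed in O(n) via z += A[:, i] * (gamma - x[i]) before setting x[i] = gamma, mirroring the z-update in Algorithm 2.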
4 Convergence Analysis

In the previous section, we proposed the coordinate-wise power method (CPM) and symmetric greedy coordinate descent (SGCD) on a non-convex function for computing the dominant eigenvector. However, it remains an open problem to prove convergence of coordinate descent methods for general non-convex functions. In this section, we show that both CPM and SGCD converge to the dominant eigenvector under some assumptions.

4.1 Convergence of the Coordinate-wise Power Method

Consider a positive semidefinite matrix A, and let v₁ be its leading eigenvector. For any sequence (x^(0), x^(1), ...) generated by Algorithm 1, let θ^(l) be the angle between the vector x^(l) and v₁, and define

γ^(l)(k) = min_{|Ω|=k} sqrt( Σ_{i∉Ω} (c_i^(l))² ) / ||c^(l)||₂ = ||g^(l)|| / ||c^(l)||.

The following lemma controls the tangent of θ^(l).

Lemma 4.1. Suppose k is large enough that

γ^(l)(k) < (λ₁ - λ₂) / ( 2λ₁ (1 + tan θ^(l)) ).   (8)

Then

tan θ^(l+1) ≤ tan θ^(l) ( λ₂/λ₁ + γ^(l)(k) ) / cos θ^(l) < tan θ^(l).   (9)

With the aid of Lemma 4.1, we obtain the following iteration complexity.

Theorem 4.2. For any sequence (x^(0), x^(1), ...) generated by Algorithm 1 with k satisfying γ^(l)(k) < (λ₁ - λ₂)/(2λ₁(1 + tan θ^(0))) for all l, if x^(0) is not orthogonal to v₁, then after T = O( λ₁/(λ₁ - λ₂) · log(tan θ^(0)/ε) ) iterations we have tan θ^(T) ≤ ε.

The iteration complexity shown is the same as for the power method, but since each iteration requires fewer operations (O(k·nnz(A)/n) instead of O(nnz(A))), we have:

Corollary 4.2.1. If the requirements of Theorem 4.2 apply and additionally k satisfies

k < n · log((λ₁ + λ₂)/(2λ₁)) / log(λ₂/λ₁),   (10)

then CPM has a better convergence rate than PM in terms of the number of equivalent passes over the coordinates.

The RHS of (10) ranges from 0.06n to 0.5n as λ₂/λ₁ goes from 10⁻⁵ to 1 - 10⁻⁵. Meanwhile, experiments show that the performance of our algorithms is not too sensitive to the choice of k; Figure 6 in Appendix A.6 illustrates that a sufficiently large range of k guarantees good performance. We therefore use the prescribed value k = n/20 throughout the experiments in this paper, which avoids the burden of parameter tuning and is a theoretically and experimentally favorable choice.

Part of the proof is inspired by the noisy power method [3], in that we treat the unchanged part g as noise. For the sake of a neat proof we require the target matrix to be positive semidefinite, although experimentally the algorithm also works for general matrices. Details can be found in Appendices A.1 and A.3.

4.2 Local Convergence for Optimization on ||A - xx^T||_F²

As the objective in Problem (5) is non-convex, it is hard to show global convergence. Clearly, with exact coordinate descent, Algorithm 2 converges to some stationary point. In the following, we show that Algorithm 2 converges to the global minimum from any starting point sufficiently close to it.

Theorem 4.3 (Local Linear Convergence). For any sequence of iterates (x^(0), x^(1), ...) generated by Algorithm 2, assume the starting point x^(0) lies in a ball centered at sqrt(λ₁) v₁ with radius r = O((λ₁ - λ₂)/sqrt(λ₁)); formally, x^(0) ∈ B_r(sqrt(λ₁) v₁). Then (x^(0), x^(1), ...) converges to the optimum linearly. Specifically, when k = 1, after

T = ( (14λ₁ - 2λ₂ + 4 max_i |a_ii|) / μ̄ ) · log( (f(x^(0)) - f*)/ε )

iterations we have f(x^(T)) - f* ≤ ε, where f* = f(sqrt(λ₁) v₁) is the global minimum of the objective function f, and

μ̄ = inf_{x,y ∈ B_r(sqrt(λ₁) v₁)} ||∇f(x) - ∇f(y)||₁ / ||x - y||₁ ∈ [ 3(λ₁ - λ₂)/n, 3(λ₁ - λ₂) ].

We prove this by showing that the objective (5) is strongly convex and coordinate-wise Lipschitz continuous in a neighborhood of the optimum. The proof is given in Appendix A.4.

Remark: for real-life graphs, the diagonal values a_ii = 0, and the coefficient in the iteration complexity simplifies to (14λ₁ - 2λ₂)/μ̄ when k = 1.
[Figure 2: Matrix properties affecting performance. (a) Convergence FLOPs vs. λ₂/λ₁; (b) convergence time vs. λ₂/λ₁; (c) convergence time vs. dimension. Figures 2(a) and 2(b) show the performance of the five methods (CPM, SGCD, PM, Lanczos, VRPCA) with λ₂/λ₁ ranging from 0.01 to 0.99 and fixed matrix size n = 5000; in Figure 2(a) the measurement is FLOPs, while in Figure 2(b) the Y-axis is CPU time. Figure 2(c) shows how the convergence time varies with the dimension for fixed λ₂/λ₁ = 2/3. In all figures the Y-axis is in log scale for better readability. Results are averaged over 20 runs.]

5 Experiments

In this section, we compare our algorithms with PM, the Lanczos method [14], and VRPCA [16] on dense as well as sparse datasets. All experiments were executed on an Intel(R) Xeon(R) E5430 machine with 16 GB of RAM running Linux. We implemented all five algorithms in C++ with the Eigen library.

5.1 Comparison on Dense, Simulated Datasets

We compare PM with our CPM and SGCD methods to show how the coordinate-wise mechanism improves the original method, and we further compare against the state-of-the-art Lanczos method. Besides, we also include a recently proposed stochastic SVD algorithm, VRPCA, which enjoys an exponential convergence rate and shares a similar insight of viewing the data in a separable way. With dense synthetic matrices, we are able to test the conditions under which our methods are preferable, and how properties of the matrix, such as λ₂/λ₁ or the dimension, affect performance. For each algorithm, we start from the same random vector, and set the stopping condition to cos θ ≥ 1 - ε with ε = 10⁻⁶, where θ is the angle between the current iterate and the dominant eigenvector.

First we compare the performance in terms of the number of FLOPs (floating point operations), which better illustrates how greediness affects each algorithm's efficiency. From Figure 2(a) we can see that our methods perform much better than PM, especially when λ₂/λ₁ → 1, where CPM and SGCD are respectively more than 2 and 3 times faster than PM. Figure 2(b) shows the running time of the five methods under different eigenvalue ratios λ₂/λ₁: only in some extreme cases, when PM converges in less than 0.1 second, is PM comparable to our methods. In Figure 2(c) the varying factor is the dimension, which shows that the relative performance is independent of the size n. Meanwhile, in most cases SGCD is better than the Lanczos method. And although VRPCA has a better convergence rate, it requires at least 10n² operations for one data pass, so in real applications it is not even comparable to PM.

5.2 Comparison on Sparse, Real Datasets

Table 1: Six datasets and the performance of the five methods on them (time in seconds).

Dataset          n      nnz(A)   nnz/n   λ₂/λ₁   CPM    SGCD   PM     Lanczos   VRPCA
com-Orkut        3.07M  234M     76.3    0.71    31.5   19.3   109.6  63.6      189.7
soc-LiveJournal  4.85M  86M      17.8    0.78    17.9   13.7   58.5   25.8      88.1
soc-Pokec        1.63M  44M      27.3    0.95    26.5   5.2    118    14.2      596.2
web-Stanford     282K   3.99M    14.1    0.95    1.05   0.54   8.15   0.69      7.55
ego-Gplus        108K   30.5M    283     0.51    0.57   0.61   0.99   1.01      5.06
ego-Twitter      81.3K  2.68M    33      0.65    0.15   0.11   0.31   0.19      0.98

To test the scalability of our methods, we further compare them on large, sparse datasets.
We use the following real datasets: 1) com-Orkut: the Orkut online social network; 2) soc-LiveJournal: an on-line community for maintaining journals, individual and group blogs; 3) soc-Pokec: Pokec, the most popular on-line social network in Slovakia; 4) web-Stanford: pages from Stanford University (stanford.edu) and the hyperlinks between them; 5) ego-Gplus (Google+): social circles from Google+; 6) ego-Twitter: social circles from Twitter.

The statistics of the datasets are summarized in Table 1, which includes the essential properties of the datasets that affect performance and the average CPU time for reaching cos θ_{x,v₁} ≥ 1 - 10⁻⁶. Figure 3 shows tan θ_{x,v₁} against the CPU time for the methods on the different datasets. From the statistics in Table 1 we can see that in all cases either CPM or SGCD performs best. CPM is roughly 2-8 times faster than PM, while SGCD is up to 23 times and 3 times faster than PM and the Lanczos method, respectively. Our methods show their advantage on soc-Pokec (3(c)) and web-Stanford (3(d)), the most ill-conditioned cases (λ₂/λ₁ ≈ 0.95), where SGCD achieves 15 or 23 times speedup over PM. Meanwhile, when the condition number of the dataset is not too extreme (see 3(a), 3(b), 3(e), 3(f)), both CPM and SGCD still outperform PM as well as the Lanczos method. And, similar to the reasoning in the dense case, although VRPCA requires fewer iterations for convergence, its overall CPU time is much longer than the others in practice. In summary, across both dense and sparse datasets, SGCD is the fastest method.

[Figure 3: Time comparison on the sparse datasets: (a) com-Orkut, (b) LiveJournal, (c) soc-Pokec, (d) web-Stanford, (e) Google+, (f) ego-Twitter. The X-axis shows the CPU time while the Y-axis is the log-scaled tan θ between x and v₁. The empirical performance shows linear convergence.]

6 Other Applications and Extensions

6.1 Comparison on an Out-of-core Real Dataset

An important application of the coordinate-wise power method is the case when the data cannot fit into memory. Existing methods cannot easily be applied to out-of-core datasets: most of them do not indicate how to update part of the coordinates multiple times and fully reuse the part of the matrix corresponding to those active coordinates, so the data loading and data processing times are highly unbalanced. A naive way of using PM would be to repetitively load part of the matrix from the disk and calculate that part of the matrix-vector multiplication. But from Figure 4 we can see that reading from the disk costs much more time than the computation, so we waste a lot of time if we cannot fully use the data before dumping it. For CPM, as we showed in Lemma 4.1, updating only k coordinates of the iterate x may still enhance the target direction, so we can perform the matrix-vector multiplication multiple times after one single load; see the sketch after this subsection. As for SGCD, optimizing over part of x several times also decreases the function value. We ran experiments on the dataset from Twitter [10] using out-of-core versions of the three algorithms, shown in Algorithm 3 in Appendix A.7. The data, which contains 41.7 million user profiles and 1.47 billion social relations, is 25.6 GB in its original form and was separated into 5 files. In Figure 4, we can see that after one data pass our methods already reach rather high precision, which compresses hours of processing time into 8 minutes.

[Figure 4: A pseudograph of the time comparison on the out-of-core dataset from Twitter. Each "staircase" illustrates one data pass: the flat part indicates the stage of loading data, while the downward part shows the phase of processing data. As we only update auxiliary vectors instead of the iterate each time we load part of the matrix, performance cannot be tested until a whole data pass; for the sake of clear observation, we therefore group together the loading phase and the processing phase of each data pass.]
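The following sketch is our own simplification of the out-of-core strategy (the paper's actual procedure is Algorithm 3 in its Appendix A.7): load a block of rows, reuse it for several CPM-style updates, then move on. The file layout and helper names are illustrative assumptions.

import numpy as np

def out_of_core_cpm_pass(blocks, x, z, inner_steps=5):
    # blocks: list of (row_indices, path) pairs; np.load(path) gives A[rows, :].
    # x is the current iterate and z == A @ x; A is assumed symmetric.
    for rows, path in blocks:
        A_block = np.load(path)                 # load k rows of A once
        for _ in range(inner_steps):            # reuse the block several times
            y = z[rows] / (x @ z)               # CPM-style update on loaded rows
            z += A_block.T @ (y - x[rows])      # keeps z = A x (symmetry of A)
            x[rows] = y
        nrm = np.linalg.norm(x)
        x /= nrm
        z /= nrm
    return x, z

Each block is processed inner_steps times before the next disk read, which is exactly the balancing of loading and processing time described above.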
6.2 Extension to other linear algebraic methods

With the optimization interpretation in hand, we applied a coordinate-wise mechanism to PM and obtained good performance. For other iterative methods in linear algebra, if the connection to optimization is valid, or if the update is separable across coordinates, the coordinate-wise mechanism may also be applicable; the Jacobi method is one such case. For diagonally dominant matrices, the Jacobi iteration [15] is a classical method for solving the linear system Ax = b with a linear convergence rate. The iteration procedure is:

Initialize: A → D + R, where D = Diag(A) and R = A - D.
Iterate: x⁺ ← D⁻¹(b - Rx).

This method is similar to the vanilla power method: it consists of a matrix-vector multiplication Rx with an extra translation by b and a normalization step D⁻¹. Therefore, a similar realization of the greedy coordinate-wise mechanism is also applicable here (see the sketch below). See Appendix A.8 for more experiments and analyses, where we also detail its relation to the Gauss-Seidel iteration [15].
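To illustrate the extension, here is a minimal sketch of one greedy coordinate-wise Jacobi step under the splitting above. The selection rule, which mirrors CPM's largest-potential-change criterion, is our own guess at a "similar realization" rather than the algorithm analyzed in Appendix A.8.

import numpy as np

def greedy_jacobi_step(A, b, x, k):
    # One greedy coordinate-wise Jacobi update for A x = b.
    d = np.diag(A)
    target = (b - A @ x + d * x) / d          # full Jacobi iterate D^{-1}(b - R x)
    c = target - x                            # potential per-coordinate change
    omega = np.argpartition(-np.abs(c), k - 1)[:k]
    x_new = x.copy()
    x_new[omega] = target[omega]              # update only the k largest changes
    return x_new

A practical implementation would maintain z = A x incrementally, as in Algorithm 1, so that each step costs O(kn) rather than the O(n²) of the full product above.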
7 Conclusion

In summary, we propose a new coordinate-wise power method and a greedy coordinate descent method for computing the most dominant eigenvector of a matrix. This problem is critical to many applications in machine learning. Our methods have convergence guarantees and achieve up to 23 times speedup on both real and synthetic data compared to the vanilla power method.

Acknowledgements

This research was supported by NSF grants CCF-1320746, IIS-1546452 and CCF-1564000.

References

[1] Inderjit S. Dhillon, Pradeep K. Ravikumar, and Ambuj Tewari. Nearest neighbor based greedy coordinate descent. In Advances in Neural Information Processing Systems, pages 2160-2168, 2011.
[2] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
[3] Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications. In Advances in Neural Information Processing Systems, pages 2861-2869, 2014.
[4] Moritz Hardt and Aaron Roth. Beyond worst-case analysis in private singular vector computation. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, pages 331-340. ACM, 2013.
[5] Charles A. R. Hoare. Algorithm 65: find. Communications of the ACM, 4(7):321-322, 1961.
[6] Cho-Jui Hsieh and Inderjit S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1064-1072. ACM, 2011.
[7] Ilse Ipsen and Rebecca M. Wills. Analysis and computation of Google's PageRank. In 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, volume 5, 2005.
[8] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, pages 665-674. ACM, 2013.
[9] Michel Journée, Yurii Nesterov, Peter Richtárik, and Rodolphe Sepulchre. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11:517-553, 2010.
[10] Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, pages 591-600, 2010.
[11] Deanna Needell, Rachel Ward, and Nati Srebro. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. In Advances in Neural Information Processing Systems, pages 1017-1025, 2014.
[12] Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[13] Julie Nutini, Mark Schmidt, Issam H. Laradji, Michael Friedlander, and Hoyt Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1632-1641, 2015.
[14] Beresford N. Parlett. The Symmetric Eigenvalue Problem, volume 20. SIAM, 1998.
[15] Yousef Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[16] Ohad Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 144-152, 2015.
[17] Si Si, Donghyuk Shin, Inderjit S. Dhillon, and Beresford N. Parlett. Multi-scale spectral decomposition of massive graphs. In Advances in Neural Information Processing Systems, pages 2798-2806, 2014.
[18] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416, 2007.
[19] Xiao-Tong Yuan and Tong Zhang. Truncated power method for sparse eigenvalue problems. The Journal of Machine Learning Research, 14(1):899-925, 2013.
Fast learning rates with heavy-tailed losses

Vu Dinh¹  Lam Si Tung Ho²  Duy Nguyen³  Binh T. Nguyen⁴
¹ Program in Computational Biology, Fred Hutchinson Cancer Research Center
² Department of Biostatistics, University of California, Los Angeles
³ Department of Statistics, University of Wisconsin-Madison
⁴ Department of Computer Science, University of Science, Vietnam

Abstract

We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function sup_{f∈F} |ℓ∘f|, where ℓ is the loss function and F is the hypothesis class, exists and is L_r-integrable, and (ii) ℓ satisfies the multi-scale Bernstein's condition on F. Under these assumptions, we prove that a learning rate faster than O(n^{-1/2}) can be obtained and, depending on r and the multi-scale Bernstein's powers, can be arbitrarily close to O(n^{-1}). We then verify these assumptions and derive fast learning rates for the problem of vector quantization by k-means clustering with heavy-tailed distributions. The analyses enable us to obtain novel learning rates that extend and complement existing results in the literature from both theoretical and practical viewpoints.

1 Introduction

The rate with which a learning algorithm converges as more data come in plays a central role in machine learning. Recent progress has refined our theoretical understanding of the settings under which fast learning rates are possible, leading to the development of robust algorithms that can automatically adapt to data with hidden structures and achieve faster rates whenever possible. The literature, however, has mainly focused on bounded losses, and little is known about rates of learning in the unbounded case, especially when the distribution of the loss has heavy tails [van Erven et al., 2015].

Most previous work on learning rates for unbounded losses was done in the context of density estimation [van Erven et al., 2015, Zhang, 2006a,b], in which the proofs of fast rates implicitly employ the central condition [Grünwald, 2012] and cannot be extended to address losses with polynomial tails [van Erven et al., 2015]. Efforts to resolve this issue include Brownlees et al. [2015], which proposes using robust mean estimators to replace empirical means, and Cortes et al. [2013], which derives relative deviation and generalization bounds for unbounded losses under the assumption that the L_r-diameter of the hypothesis class is bounded. However, fast learning rates were not obtained in either approach. Fast learning rates are derived in Lecué and Mendelson [2013] for sub-Gaussian losses and in Lecué and Mendelson [2012] for hypothesis classes that have sub-exponential envelope functions. To the best of our knowledge, no previous work on fast learning rates for heavy-tailed losses exists in the literature.

The goal of this research is to study fast learning rates for the empirical risk minimizer when the losses are not necessarily bounded and may have a distribution with heavy tails. We recall that heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution. To enable the analyses of fast rates with heavy-tailed losses, two new assumptions are introduced.
First, we assume the existence and the L_r-integrability of the envelope function F = sup_{f∈F} |f| of the hypothesis class F for some value of r ≥ 2, which enables us to use the results of Lederer and van de Geer [2014] on concentration inequalities for suprema of unbounded empirical processes. Second, we assume that the loss function satisfies the multi-scale Bernstein's condition, a generalization of the standard Bernstein's condition to unbounded losses, which enables the derivation of fast learning rates. Building upon this framework, we prove that if the loss has finite moments up to a sufficiently large order r and if the hypothesis class satisfies the regularity conditions described above, then a learning rate faster than O(n^{-1/2}) can be obtained. Moreover, depending on r and the multi-scale Bernstein's powers, the learning rate can be arbitrarily close to the optimal rate O(n^{-1}). We then verify these assumptions and derive fast learning rates for the k-means clustering algorithm, proving that if the distribution of the observations has finite moments up to order r and satisfies Pollard's regularity conditions, then a fast learning rate can be derived. The result can be viewed as an extension of the results of Antos et al. [2005] and Levrard [2013] to the case when the source distribution has unbounded support, and it produces a more favorable convergence rate than that of Telgarsky and Dasgupta [2013] under similar settings.

2 Mathematical framework

Let the hypothesis class F be a class of functions defined on some measurable space X with values in R. Let Z = (X, Y) be a random variable taking values in Z = X × Y with probability distribution P, where Y ⊆ R. The loss ℓ : Z × F → R⁺ is a non-negative function. For a hypothesis f ∈ F and n iid samples {Z₁, Z₂, ..., Z_n} of Z, we define

Pℓ(f) = E_{Z∼P}[ℓ(Z, f)]   and   P_nℓ(f) = (1/n) Σ_{i=1}^n ℓ(Z_i, f).

For unsupervised learning frameworks there is no output (Y = ∅) and the loss has the form ℓ(X, f), depending on the application; nevertheless, Pℓ(f) and P_nℓ(f) can be defined in a similar manner. We abuse notation and denote the loss ℓ(Z, f) by ℓ(f). We also denote by f* an optimal hypothesis, that is, any function for which Pℓ(f*) = inf_{f∈F} Pℓ(f) := P*, and consider the empirical risk minimizer (ERM) f̂_n = argmin_{f∈F} P_nℓ(f).

We recall that heavy-tailed distributions are probability distributions whose tails are not exponentially bounded. Rigorously, the distribution of a random variable V is said to have a heavy right tail if lim_{v→∞} e^{λv} P[V > v] = ∞ for all λ > 0, and the definition is similar for a heavy left tail. A learning problem is said to have heavy-tailed loss if the distribution of ℓ(f) has heavy tails for some or all hypotheses f ∈ F.

For a pseudo-metric space (G, d) and ε > 0, we denote by N(ε, G, d) the covering number of (G, d); that is, N(ε, G, d) is the minimal number of balls of radius ε needed to cover G. The universal metric entropy of G is defined by H(ε, G) = sup_Q log N(ε, G, L₂(Q)), where the supremum is taken over the set of all probability measures Q concentrated on some finite subset of G. For convenience, we define G = ℓ∘F, the class of all functions g such that g = ℓ(f) for some f ∈ F, and denote by F_ε a finite subset of F such that G is contained in the union of balls of radius ε with centers in G_ε = ℓ∘F_ε. We refer to F_ε and G_ε as ε-nets of F and G, respectively.
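As a toy illustration of the objects just defined, the following script (our own, not from the paper) computes the empirical risk P_nℓ(f) over a small finite hypothesis class and returns the ERM estimator f̂_n for heavy-tailed data; the constant-predictor class and Student-t sampling are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Heavy-tailed samples Z_1, ..., Z_n: Student-t with 3 degrees of freedom,
# so the squared loss below has a polynomial (heavy) tail.
z = rng.standard_t(df=3, size=500)
hypotheses = np.linspace(-2.0, 2.0, 81)          # a finite class F

def empirical_risk(f):
    return np.mean((z - f) ** 2)                 # P_n l(f) with squared loss

f_hat = min(hypotheses, key=empirical_risk)      # ERM estimator \hat{f}_n
print(f"ERM pick: {f_hat:.2f} (population minimizer is E[Z] = 0)")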
To enable the analyses of fast rates for learning problems with heavy-tailed losses, we impose the following regularity conditions on F and ℓ throughout the paper.

Assumption 2.1 (Multi-scale Bernstein's condition). Define F* = argmin_F Pℓ(f). There exist a finite partition F = ∪_{i∈I} F_i, positive constants B = {B_i}_{i∈I}, constants β = {β_i}_{i∈I} in (0, 1], and f* = {f_i*}_{i∈I} ⊆ F* such that E[(ℓ(f) - ℓ(f_i*))²] ≤ B_i (E[ℓ(f) - ℓ(f_i*)])^{β_i} for all i ∈ I and f ∈ F_i.

Assumption 2.2 (Entropy bounds). The hypothesis class F is separable, and there exist C ≥ 1, K ≥ 1 such that for all ε ∈ (0, K], the L₂(P)-covering numbers and the universal metric entropies of G are bounded as log N(ε, G, L₂(P)) ≤ C log(K/ε) and H(ε, G) ≤ C log(K/ε).

Assumption 2.3 (Integrability of the envelope function). There exist W > 0 and r ≥ C + 1 such that (E sup_{g∈G} |g|^r)^{1/r} ≤ W.

The multi-scale Bernstein's condition is more general than the Bernstein's condition: it holds whenever the Bernstein's condition does, which allows us to consider a larger class of problems. In other words, our results are also valid under the Bernstein's condition. The multi-scale Bernstein's condition is better suited to the study of unbounded losses, since it can separately consider the behavior of the risk function on microscopic and macroscopic scales, a distinction that can only be observed in an unbounded setting. We also recall that if G has finite VC-dimension, then Assumption 2.2 is satisfied [Boucheron et al., 2013, Bousquet et al., 2004]. Both the Bernstein's condition and the assumption of a separable parametric hypothesis class are standard assumptions frequently used to obtain faster learning rates in agnostic settings. A review of the Bernstein's condition and its applications is Mendelson [2008], while fast learning rates for bounded losses on hypothesis classes satisfying Assumption 2.2 were previously studied by Mehta and Williamson [2014] under the stochastic mixability condition. Fast learning rates for hypothesis classes with envelope functions were studied in Lecué and Mendelson [2012], but under the much stronger assumption that the envelope function is sub-exponential.

Under these assumptions, we show that fast rates for heavy-tailed losses can be obtained. Throughout the analyses, two recurrent analytical techniques are worth mentioning. The first comes from the simple observation that in the standard derivation of fast learning rates for bounded losses, the boundedness assumption is used in multiple places only to provide reverse-Hölder-type inequalities, in which the L₂-norm is upper bounded via the L₁-norm. This use of the boundedness assumption can be relieved simply by assuming that the L_r-norm of the loss is bounded, which implies

||u||_{L₂} ≤ ||u||_{L₁}^{(r-2)/(2r-2)} ||u||_{L_r}^{r/(2r-2)}.

The second technique relies on the following result of Lederer and van de Geer [2014] on concentration inequalities for suprema of unbounded empirical processes.

Lemma 2.1. If {V_k : k ∈ K} is a countable family of non-negative functions such that

E sup_{k∈K} |V_k|^r ≤ M^r,   σ² = sup_{k∈K} E V_k²,   and   V := sup_{k∈K} P_n V_k,

then for all η, x > 0 we have

P[V ≥ (1 + η) E V + x] ≤ min_{1≤l≤r} (1/x)^l ( (64/η + η + 7)(l/n)^{1-l/r} M + 4σ sqrt(l/n) )^l.

An important observation about this result is that the failure probability is polynomial in the deviation x. As we will see later, for a given level of confidence δ, this makes the constant in the convergence rate a polynomial function of 1/δ instead of log(1/δ), as in sub-exponential cases. Thus, more careful examination of the order of the failure probability is required to derive any generalization bound with heavy-tailed losses.
3 Fast learning rates with heavy-tailed losses

The derivation of fast learning rates with heavy-tailed losses proceeds as follows. First, we use the assumption of an integrable envelope function to prove a localization-based result that allows us to reduce the analysis from the separable parametric class F to its finite ε-net F_ε. The multi-scale Bernstein's condition is then employed to derive a fast-rate inequality that helps distinguish the optimal hypothesis from the alternative hypotheses in F_ε. The two results are then combined to obtain fast learning rates.

3.1 Preliminaries

Throughout this section, let G_ε be an ε-net for G in the L₂(P)-norm, with ε = n^{-ρ} for some 1 ≥ ρ > 0. Denote by π : G → G_ε an L₂(P)-metric projection from G to G_ε. For any g₀ ∈ G_ε, we denote K(g₀) = {|g₀ - g| : g ∈ π^{-1}(g₀)}. We have: (i) the constant zero function is an element of K(g₀); (ii) E[sup_{u∈K(g₀)} |u|^r] ≤ (2W)^r and sup_{u∈K(g₀)} ||u||_{L₂(P)} ≤ ε; (iii) N(t, K(g₀), L₂(P)) ≤ (K/t)^C for all t > 0.

Given a sample Z = (Z₁, ..., Z_n), we denote by K_Z the projection of K(g₀) onto the sample Z and by D(K_Z) half of the radius of (K_Z, ||·||₂), that is, D(K_Z) = sup_{u,v∈K_Z} ||u - v||/4. We have the following preliminary lemmas, whose proofs are provided in the Appendix.

Lemma 3.1. (2n)^{-1/2} E[D(K_Z)] ≤ ( ε + E sup_{u∈K(g₀)} (P_n - P)u )^{(r-2)/(2(r-1))} (2W)^{r/(2(r-1))}.

Lemma 3.2. Given 0 < θ < 1, there exist constants C₁, C₂ > 0 depending only on θ such that for all x > 0, if x ≤ a x^θ + b, then x ≤ C₁ a^{1/(1-θ)} + C₂ b.

Lemma 3.3. Define

A(l, r, ρ, C, β) = max{ l²/r - (1 - ρ)l + ρC,  [ρ(1 - β/2) - 1/2] l + ρC }.   (3.1)

Assuming that r ≥ 4C and β ≤ 1, if we choose l = r(1 - ρ)/2 and

0 < ρ < (1 - 2 sqrt(C/r))/(2 - β),   (3.2)

then 1 ≤ l ≤ r and A(l, r, ρ, C, β) < 0. This also holds if β ≥ 1 and 0 < ρ < 1 - 2 sqrt(C/r).

3.2 Local analysis of the empirical loss

The preliminary lemmas enable us to locally bound E sup_{u∈K(g₀)} (P_n - P)u as follows.

Lemma 3.4. If ρ < (r - 1)/r, there exists c₁ > 0 such that E sup_{u∈K(g₀)} (P_n - P)u ≤ c₁ n^{-ρ} for all n.

Proof. Without loss of generality, we assume that K(g₀) is countable; the arguments to extend the bound from countable classes to separable classes are standard (see, for example, Lemma 12 of Mehta and Williamson [2014]). Denote Z̄ = sup_{u∈K(g₀)} (P_n - P)u, let ε = 1/n^ρ, and let R = (R₁, R₂, ..., R_n) be iid Rademacher random variables. Using standard results about symmetrization and chaining of the Rademacher process (see, for example, Corollary 13.2 in Boucheron et al. [2013]), we have

n E sup_{u∈K(g₀)} (P_n - P)u ≤ 2 E[ E_R sup_{u∈K(g₀)} Σ_{j=1}^n R_j u(X_j) ]
  ≤ 24 E ∫₀^{D(K_X)} sqrt( log N(t, K_X, ||·||₂) ) dt ≤ 24 E ∫₀^{D(K_X)} sqrt( H(t/sqrt(n), K(g₀)) ) dt,

where E_R denotes the expectation with respect to the random variables R₁, R₂, ..., R_n. By Assumption 2.2, we deduce that sqrt(n) E Z̄ ≤ C₀(K, n, ρ, C)(ε + E D(K_X)), where C₀ = O(sqrt(log n)). If we define

x = ε + E Z̄,   a = C₀ n^{-1/2} (2W)^{r/(2(r-1))} = O(sqrt(log n)/sqrt(n)),   b = C₀ ε/n = O(sqrt(log n)/n^{ρ+1}),

then by Lemma 3.1 we have x ≤ a x^{(r-2)/(2r-2)} + b + ε. Using Lemma 3.2, we obtain x ≤ C₁ a^{2(r-1)/r} + C₂(b + ε) ≤ C₃ n^{-ρ}, which completes the proof.

Lemma 3.5. Assuming that r ≥ 4C, if ρ < 1 - 2 sqrt(C/r), then there exist c₁, c₂ > 0 such that for all n and δ > 0,

sup_{u∈K(g₀)} P_n u ≤ ( 9c₁ + (c₂/δ)^{2/[r(1-ρ)]} ) n^{-ρ}   for all g₀ ∈ G_ε

with probability at least 1 - δ.
Proof. Denote Z = sup_{u∈K(g₀)} P_n u and Z̄ = sup_{u∈K(g₀)} (P_n - P)u. We have

Z = sup_{u∈K(g₀)} P_n u ≤ Z̄ + sup_{u∈K(g₀)} P u ≤ Z̄ + sup_{u∈K(g₀)} ||u||_{L₂(P)} = Z̄ + ε.

Applying Lemma 2.1 to Z̄ with η = 8 and x = y/n^ρ, and using the facts that

σ = sup_{u∈K(g₀)} sqrt(E[u(X)²]) ≤ ε = 1/n^ρ   and   E[ sup_{u∈K(g₀)} |u|^r ] ≤ (2W)^r,

we have

P[ Z̄ ≥ 9 E Z̄ + y/n^ρ ] ≤ min_{1≤l≤r} y^{-l} ( 46 (l/n)^{1-l/r} n^ρ W + 4 sqrt(l/n) )^l := φ(y, n).

To provide a union bound over all g₀ ∈ G_ε, we want the total failure probability to satisfy φ(y, n)(n^ρ K)^C ≤ δ. This failure probability, as a function of n, is of order A(l, r, ρ, C, β) (as defined in Lemma 3.3) with β = 2. Choosing l = r(1 - ρ)/2 and ρ < 1 - 2 sqrt(C/r), we deduce that there exist c₂, c₃ > 0 such that φ(y, n)(n^ρ K)^C ≤ c₂/(n^{c₃} y^l) ≤ c₂/y^{r(1-ρ)/2}. The proof is completed by choosing y = (c₂/δ)^{2/[r(1-ρ)]} and using the fact that E Z̄ ≤ c₁/n^ρ (note that 1 - 2 sqrt(C/r) ≤ (r - 1)/r, so we can apply Lemma 3.4 to obtain this bound).

A direct consequence of this lemma is the following localization-based result.

Theorem 3.1 (Local analysis). Under Assumptions 2.1, 2.2 and 2.3, let G_ε be a minimal ε-net for G in the L₂(P)-norm, with ε = n^{-ρ} where ρ < 1 - 2 sqrt(C/r). Then there exist c₁, c₂ > 0 such that for all δ > 0,

P_n g - P_n(π(g)) ≤ ( 9c₁ + (c₂/δ)^{2/[r(1-ρ)]} ) n^{-ρ}   for all g ∈ G

with probability at least 1 - δ.

3.3 Fast learning rates with heavy-tailed losses

Theorem 3.2. Given a₀, δ > 0, under the multi-scale (B, β, I)-Bernstein's condition and the assumption that r ≥ 4C, consider

0 < ρ < (1 - 2 sqrt(C/r))/(2 - β_i)   for all i ∈ I.   (3.3)

Then there exists N_{a₀,δ,r,B,β} > 0 such that for all f ∈ F_ε and n ≥ N_{a₀,δ,r,B,β}, we have that

Pℓ(f) - P* ≥ a₀/n^ρ   implies   P_nℓ(f) - P_nℓ(f*) ≥ a₀/(4n^ρ) for some f* ∈ F*

with probability at least 1 - δ.

Proof. Define a = [Pℓ(f) - P*] n^ρ. Assuming that f ∈ F_i and applying Lemma 2.1 with η = 1/2 and x = a/(4n^ρ) for a single hypothesis f, we have

P[ P_nℓ(f) - P_nℓ(f_i*) ≤ (Pℓ(f) - Pℓ(f_i*))/4 ] ≤ h(a, n, i),

where

h(a, n, i) = min_{1≤l≤r} (4/a)^l ( 50 n^ρ (l/n)^{1-l/r} W + 4 n^ρ sqrt(B_i) a^{β_i/2} n^{-ρβ_i/2} sqrt(l/n) )^l,

using the fact that σ² = E[ℓ(f) - ℓ(f_i*)]² ≤ B_i [E(ℓ(f) - ℓ(f_i*))]^{β_i} = B_i a^{β_i}/n^{ρβ_i} if f ∈ F_i. Since β_i ≤ 1, h(a, n, i) is non-increasing in a. Thus

P[ P_nℓ(f) - P_nℓ(f_i*) ≤ (Pℓ(f) - Pℓ(f_i*))/4 ] ≤ h(a₀, n, i).

To provide a union bound over all f ∈ F_ε such that Pℓ(f) - Pℓ(f_i*) ≥ a₀/n^ρ, we want the total failure probability to be small. This is guaranteed if h(a₀, n, i)(n^ρ K)^C ≤ δ. This failure probability, as a function of n, is of order A(l, r, ρ, C, β_i) as defined in equation (3.1). Choosing r, l as in Lemma 3.3 and ρ as in equation (3.3), we have 1 ≤ l ≤ r and A(l, r, ρ, C, β_i) < 0 for all i. Thus, there exist c₄, c₅, c₆ > 0 such that

h(a₀, n, i)(n^ρ K)^C ≤ c₆ a₀^{-c₅(1-β_i/2)} n^{-c₄}   for all n, i.

Hence, when n ≥ N_{a₀,δ,r,B,β} = ( c₆ a₀^{-c₅(1-β̄/2)}/δ )^{1/c₄}, where β̄ = max{β}·1_{a₀≥1} + min{β}·1_{a₀<1}, we have, for all f ∈ F_ε: Pℓ(f) - P* ≥ a₀/n^ρ implies P_nℓ(f) - P_nℓ(f*) ≥ a₀/(4n^ρ) for some f* ∈ F*, with probability at least 1 - δ.

Theorem 3.3. Under Assumptions 2.1, 2.2 and 2.3, consider ρ as in equation (3.3) and c₁, c₂ as in the previous theorems. For all δ > 0, there exists N_{δ,r,B,β} such that if n ≥ N_{δ,r,B,β}, then

Pℓ(f̂_z) ≤ Pℓ(f*) + ( 36c₁ + 1 + 4(2c₂/δ)^{2/[r(1-ρ)]} ) n^{-ρ}

with probability at least 1 - δ.

Proof of Theorem 3.3. Let F_ε be an ε-net of F with ε = 1/n^ρ such that f* ∈ F_ε. Denote the projection of f̂_z onto F_ε by f₁ = π(f̂_z). For a given δ > 0, define

A₁ = { ∃f ∈ F : P_n f - P_n(π(f)) ≥ ( 9c₁ + (c₃/δ)^{2/[r(1-ρ)]} ) n^{-ρ} },
A₂ = { ∃f ∈ F : P_nℓ(π(f)) - P_nℓ(f*) ≤ a₀/(4n^ρ) and Pℓ(π(f)) - Pℓ(f*) ≥ a₀/n^ρ },
where c₁ and c₃ are defined as in the previous theorems, a₀/4 = 9c₁ + (c₃/δ)^{2/[r(1-ρ)]}, and n ≥ N_{a₀,δ,r,β}. We deduce that A₁ and A₂ each happen with probability at most δ. On the other hand, on the event that neither A₁ nor A₂ happens, we have

P_nℓ(f₁) ≤ P_nℓ(f̂_z) + ( 9c₁ + (c₃/δ)^{2/[r(1-ρ)]} ) n^{-ρ} ≤ P_nℓ(f*) + a₀/(4n^ρ).

By the definition of F_ε, we have Pℓ(f̂_z) ≤ Pℓ(f₁) + ε ≤ Pℓ(f*) + (a₀ + 1)/n^ρ.

3.4 Verifying the multi-scale Bernstein's condition

In practice, the most difficult condition to verify for fast learning rates is the multi-scale Bernstein's condition. In this section we derive some approaches to verifying it. We first extend a result of Mendelson [2008] to prove that the (standard) Bernstein's condition is automatically satisfied, under the integrability condition on the envelope function, for functions that are relatively far away from f* (proof in the Appendix). We recall that R(f) = Eℓ(f) is referred to as the risk function.

Lemma 3.6. Under Assumption 2.3, define M = W^{r/(r-2)} and β = (r - 2)/(r - 1). Then, if γ > M and R(f) ≥ γ/(γ - M) · R(f*), we have E(ℓ(f) - ℓ(f*))² ≤ 2γ^β E(ℓ(f) - ℓ(f*))^β.

This allows us to derive the following result, whose proof is provided in the Appendix.

Lemma 3.7. Suppose F is a subset of a vector space with metric d, and the risk function R(f) = Eℓ(f) has a unique minimizer f* on F, lying in the interior of F, such that
(i) there exists L > 0 such that E(ℓ(f) - ℓ(g))² ≤ L d(f, g)² for all f, g ∈ F;
(ii) there exist m ≥ 2, c > 0 and a neighborhood U of f* such that R(f) - R(f*) ≥ c d(f, f*)^m for all f ∈ U.
Then the multi-scale Bernstein's condition holds with β = ((r - 2)/(r - 1), 2/m).

Corollary 3.1. Suppose (F, d) is a pseudo-metric space, ℓ satisfies condition (i) of Lemma 3.7, and the risk function is strongly convex with respect to d; then the Bernstein's condition holds with β = 1.

Remark 3.1. If the risk function is analytic at f*, then condition (ii) of Lemma 3.7 holds. Similarly, if the risk function is continuously differentiable up to order 2 and the Hessian of R(f) is positive definite at f*, then condition (ii) is valid with m = 2.

Corollary 3.2. If the risk function R(f) = Eℓ(f) has a finite number of global minimizers f₁, f₂, ..., f_k, ℓ satisfies condition (i) of Lemma 3.7, and there exist m_i ≥ 2, c_i > 0 and neighborhoods U_i of the f_i such that R(f) - R(f_i) ≥ c_i d(f, f_i)^{m_i} for all f ∈ U_i, i = 1, ..., k, then the multi-scale Bernstein's condition holds with β = ((r - 2)/(r - 1), 2/m₁, ..., 2/m_k).

3.5 Comparison to related work

Theorem 3.3 dictates that, in our setting, the problem of learning with heavy-tailed losses can attain convergence rates up to order

O( n^{ -(1 - 2 sqrt(C/r)) / (2 - min{β}) } ),   (3.4)

where β is the multi-scale Bernstein's order and r is the degree of integrability of the loss. We recall that a convergence rate of O(n^{-1/(2-β)}) is obtained in Mehta and Williamson [2014] in the same setting but for bounded losses. (The analysis there was done under the β-weakly stochastic mixability condition, which is equivalent to the standard β-Bernstein's condition for bounded losses [van Erven et al., 2015].) We note that if the loss is bounded, then r = ∞ and (3.4) reduces to the convergence rate obtained in Mehta and Williamson [2014].
Fast learning rates for unbounded losses were previously derived in Lecué and Mendelson [2013] for sub-Gaussian losses and in Lecué and Mendelson [2012] for hypothesis classes that have sub-exponential envelope functions. In Lecué and Mendelson [2013], the Bernstein's condition is not imposed directly but is replaced by condition (ii) of Lemma 3.7 with m = 2 on the whole hypothesis class, while the assumption of a sub-Gaussian hypothesis class validates condition (i). This implies the standard Bernstein's condition with β = 1 and makes their convergence rate O(n^{-1}) consistent with our result (note that for sub-Gaussian losses, r can be chosen arbitrarily large). The analysis of Lecué and Mendelson [2012] concerns non-exact oracle inequalities (rather than the sharp oracle inequalities we investigate in this paper) and cannot be directly compared with our results.

4 Application: k-means clustering with heavy-tailed source distributions

k-means clustering is a method of vector quantization that aims to partition n observations into k ≥ 2 clusters in which each observation belongs to the cluster with the nearest mean. Formally, let X be a random vector taking values in R^d with distribution P. Given a codebook (a set of k cluster centers) C = {y_i} ∈ (R^d)^k, the distortion (loss) on an instance x is defined as ℓ(C, x) = min_{y_i∈C} ||x - y_i||², and the k-means clustering method aims at finding a minimizer C* of R(C) = Pℓ(C) by minimizing the empirical distortion P_nℓ(C). The rate of convergence of k-means clustering has drawn considerable attention in the statistics and machine learning literature [Pollard, 1982, Bartlett et al., 1998, Linder et al., 1994, Ben-David, 2007]. Fast learning rates for k-means clustering (O(1/n)) have been derived by Antos et al. [2005] when the source distribution is supported on a finite set of points, and by Levrard [2013] under the assumptions that the source distribution has bounded support and satisfies the so-called Pollard regularity condition, which requires that P have a continuous density with respect to the Lebesgue measure and that the Hessian matrix of the mapping C ↦ R(C) be positive definite at C*. Little is known about the finite-sample performance of empirically designed quantizers under possibly heavy-tailed distributions. In Telgarsky and Dasgupta [2013], a convergence rate of O(n^{-1/2+2/r}) is derived, where r is the number of moments of X that are assumed to be finite. Brownlees et al. [2015] use robust mean estimators to replace empirical means and derive a convergence rate of O(n^{-1/2}), assuming only that the variance of X is finite.

The results from the previous sections enable us to prove that, with a proper setting, the convergence rate of k-means clustering for heavy-tailed source distributions can be arbitrarily close to O(1/n). Following the framework of Brownlees et al. [2015], we consider

G = { ℓ(C, x) = min_{y_i∈C} ||x - y_i||² : C ∈ F = (-η, η)^{d×k} }

for some η > 0 with the regular Euclidean metric, and we let C* and Ĉ_n be defined as in the previous sections.
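To make the distortion loss and its empirical minimization concrete, here is a small NumPy sketch of our own (the paper itself contains no experiments): heavy-tailed two-cluster data, the empirical distortion P_nℓ(C), and Lloyd's algorithm standing in as an approximate empirical risk minimizer.

import numpy as np

rng = np.random.default_rng(2)

# Heavy-tailed data in R^2: two clusters with Student-t(4) noise, so only
# the first few moments of ||X||^2 are finite (an illustrative choice).
X = np.vstack([c + rng.standard_t(df=4, size=(300, 2))
               for c in ([-4.0, 0.0], [4.0, 0.0])])

def distortion(C, X):
    # Empirical distortion P_n l(C) with l(C, x) = min_i ||x - y_i||^2.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # n x k squared dists
    return d2.min(axis=1).mean()

# Lloyd's algorithm as an approximate empirical risk minimizer \hat{C}_n
C = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(50):
    assign = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    C = np.array([X[assign == j].mean(axis=0) if np.any(assign == j) else C[j]
                  for j in range(2)])
print("empirical distortion:", distortion(C, X))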
Theorem 4.1. If X has finite moments up to order r ≥ 4k(d + 1), P has a continuous density with respect to the Lebesgue measure, the risk function has a finite number of global minimizers, and the Hessian matrix of C ↦ R(C) is positive definite at every optimal C* in the interior of F, then for all ρ satisfying

0 < ρ < ((r - 1)/r) (1 - 2 sqrt(k(d + 1)/r)),

there exist c₁, c₂ > 0 such that for all δ > 0, with probability at least 1 - δ, we have

R(Ĉ_n) - R(C*) ≤ ( c₁ + 4 (c₂/δ)^{2/r} ) n^{-ρ}.

Moreover, when r → ∞, ρ can be chosen arbitrarily close to 1.

Proof. We have

( E sup_{C∈F} ℓ(C, X)^r )^{1/r} ≤ ( E[(||X|| + η)^{2r}] )^{1/r} ≤ 4 ( (1/2) E||X||^{2r} + (1/2) η^{2r} )^{1/r} =: W < ∞,

while standard results on the VC-dimension of the k-means clustering hypothesis class guarantee that C ≤ k(d + 1) [Linder et al., 1994]. On the other hand, we can verify that E[ℓ(C, X) - ℓ(C′, X)]² ≤ L_η ||C - C′||₂², which validates condition (i) of Lemma 3.7. The fact that the Hessian matrix of C ↦ R(C) is positive definite at C* implies R(Ĉ_n) - R(C*) ≥ c ||Ĉ_n - C*||² for some c > 0 in a neighborhood U of any optimal codebook C*. Thus, Lemma 3.6 confirms the multi-scale Bernstein's condition with β = ((r - 2)/(r - 1), 1, ..., 1). The inequality is then obtained from Theorem 3.3.

5 Discussion and future work

We have shown that fast learning rates for heavy-tailed losses can be obtained for hypothesis classes with an integrable envelope when the loss satisfies the multi-scale Bernstein's condition. We then verified those conditions and obtained new convergence rates for k-means clustering with heavy-tailed losses. The analyses extend and complement existing results in the literature from both theoretical and practical points of view. We also introduced a new fast-rate assumption, the multi-scale Bernstein's condition, and provided a clear path to verifying it in practice. We believe that the multi-scale Bernstein's condition is the proper assumption for studying fast rates with unbounded losses, for its ability to separate the behaviors of the risk function on microscopic and macroscopic scales, a distinction that can only be observed in an unbounded setting.

There are several avenues for improvement. First, we would like to consider hypothesis classes with polynomial entropy bounds. Similarly, the condition of independent and identically distributed observations could be replaced with mixing conditions [Steinwart and Christmann, 2009, Hang and Steinwart, 2014, Dinh et al., 2015]. While the condition of an integrable envelope is an improvement over the condition of a sub-exponential envelope previously investigated in the literature, it would be interesting to see whether the rates persist under weaker conditions, for example, the assumption that the L_r-diameter of the hypothesis class is bounded [Cortes et al., 2013]. Finally, the recent work of Brownlees et al. [2015] and Hsu and Sabato [2016] on robust estimators as alternatives to ERM for heavy-tailed losses has yielded more favorable learning rates under weaker conditions, and we would like to extend the results of this paper to such estimators.

Acknowledgement

Vu Dinh was supported by DMS-1223057 and CISE-1564137 from the National Science Foundation and U54GM111274 from the National Institutes of Health. Lam Si Tung Ho was supported by NSF grant IIS 1251151.

References

András Antos, László Györfi, and András György. Individual convergence rates in empirical vector quantizer design. IEEE Transactions on Information Theory, 51(11):4013-4022, 2005.
Peter L. Bartlett, Tamás Linder, and Gábor Lugosi. The minimax distortion redundancy in empirical quantizer design. IEEE Transactions on Information Theory, 44(5):1802-1813, 1998.
Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for k-median and k-means clustering. Machine Learning, 66(2):243-257, 2007.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013.
OUP Oxford, 2013.

Olivier Bousquet, Stéphane Boucheron, and Gábor Lugosi. Introduction to statistical learning theory. In Advanced Lectures on Machine Learning, pages 169–207. Springer, 2004.

Christian Brownlees, Emilien Joly, and Gábor Lugosi. Empirical risk minimization for heavy-tailed losses. The Annals of Statistics, 43(6):2507–2536, 2015.

Corinna Cortes, Spencer Greenberg, and Mehryar Mohri. Relative deviation learning bounds and generalization with unbounded loss functions. arXiv:1310.5796, 2013.

Vu Dinh, Lam Si Tung Ho, Nguyen Viet Cuong, Duy Nguyen, and Binh T. Nguyen. Learning from non-iid data: Fast rates for the one-vs-all multiclass plug-in classifiers. In Theory and Applications of Models of Computation, pages 375–387. Springer, 2015.

Peter Grünwald. The safe Bayesian: learning the learning rate via the mixability gap. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory, pages 169–183. Springer-Verlag, 2012.

Hanyuan Hang and Ingo Steinwart. Fast learning from α-mixing observations. Journal of Multivariate Analysis, 127:184–199, 2014.

Daniel Hsu and Sivan Sabato. Loss minimization and parameter estimation with heavy tails. Journal of Machine Learning Research, 17(18):1–40, 2016.

Guillaume Lecué and Shahar Mendelson. General nonexact oracle inequalities for classes with a subexponential envelope. The Annals of Statistics, 40(2):832–860, 2012.

Guillaume Lecué and Shahar Mendelson. Learning sub-Gaussian classes: Upper and minimax bounds. arXiv:1305.4825, 2013.

Johannes Lederer and Sara van de Geer. New concentration inequalities for suprema of empirical processes. Bernoulli, 20(4):2020–2038, 2014.

Clément Levrard. Fast rates for empirical vector quantization. Electronic Journal of Statistics, 7:1716–1746, 2013.

Tamás Linder, Gábor Lugosi, and Kenneth Zeger. Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding. IEEE Transactions on Information Theory, 40(6):1728–1740, 1994.

Nishant A. Mehta and Robert C. Williamson. From stochastic mixability to fast rates. In Advances in Neural Information Processing Systems, pages 1197–1205, 2014.

Shahar Mendelson. Obtaining fast error rates in nonconvex situations. Journal of Complexity, 24(3):380–397, 2008.

David Pollard. A central limit theorem for k-means clustering. The Annals of Probability, pages 919–926, 1982.

Ingo Steinwart and Andreas Christmann. Fast learning from non-iid observations. In Advances in Neural Information Processing Systems, pages 1768–1776, 2009.

Matus J. Telgarsky and Sanjoy Dasgupta. Moment-based uniform deviation bounds for k-means and friends. In Advances in Neural Information Processing Systems, pages 2940–2948, 2013.

Tim van Erven, Peter D. Grünwald, Nishant A. Mehta, Mark D. Reid, and Robert C. Williamson. Fast rates in statistical and online learning. Journal of Machine Learning Research, 16:1793–1861, 2015.

Tong Zhang. From ε-entropy to KL-entropy: Analysis of minimum information complexity density estimation. The Annals of Statistics, 34(5):2180–2210, 2006a.

Tong Zhang. Information-theoretic upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 52(4):1307–1321, 2006b.
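To make the objects in Section 4 above concrete, the following minimal sketch, which is our own illustration and not part of the paper, draws samples from a heavy-tailed Student-t source (which has finite moments only up to order r < ν, matching the setting of Theorem 4.1), runs plain Lloyd iterations to approximately minimize the empirical distortion P_n ℓ(C), and reports that distortion. All function names are ours.

```python
import numpy as np

def empirical_distortion(X, C):
    """P_n l(C) = (1/n) * sum_i min_j ||x_i - y_j||^2."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # (n, k) squared distances
    return d2.min(axis=1).mean()

def lloyd(X, k, n_iter=50, seed=0):
    """Plain Lloyd iterations: an approximate empirical risk minimizer C_hat."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        assign = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(axis=0)
    return C

# Heavy-tailed source: Student-t with nu = 5 degrees of freedom in d = 2.
rng = np.random.default_rng(1)
nu, d, k = 5.0, 2, 3
X = rng.standard_t(nu, size=(2000, d))
C_hat = lloyd(X, k)
print("empirical distortion:", empirical_distortion(X, C_hat))
```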
Guided Policy Search via Approximate Mirror Descent

William Montgomery
Dept. of Computer Science and Engineering
University of Washington
wmonty@cs.washington.edu

Sergey Levine
Dept. of Computer Science and Engineering
University of Washington
svlevine@cs.washington.edu

Abstract

Guided policy search algorithms can be used to optimize complex nonlinear policies, such as deep neural networks, without directly computing policy gradients in the high-dimensional parameter space. Instead, these methods use supervised learning to train the policy to mimic a "teacher" algorithm, such as a trajectory optimizer or a trajectory-centric reinforcement learning method. Guided policy search methods provide asymptotic local convergence guarantees by construction, but it is not clear how much the policy improves within a small, finite number of iterations. We show that guided policy search algorithms can be interpreted as an approximate variant of mirror descent, where the projection onto the constraint manifold is not exact. We derive a new guided policy search algorithm that is simpler and provides appealing improvement and convergence guarantees in simplified convex and linear settings, and show that in the more general nonlinear setting, the error in the projection step can be bounded. We provide empirical results on several simulated robotic navigation and manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters.

1 Introduction

Policy search algorithms based on supervised learning from a computational or human "teacher" have gained prominence in recent years due to their ability to optimize complex policies for autonomous flight [16], video game playing [15, 4], and bipedal locomotion [11]. Among these methods, guided policy search algorithms [6] are particularly appealing due to their ability to adapt the teacher to produce data that is best suited for training the final policy with supervised learning. Such algorithms have been used to train complex deep neural network policies for vision-based robotic manipulation [6], as well as a variety of other tasks [19, 11]. However, convergence results for these methods typically follow by construction from their formulation as a constrained optimization, where the teacher is gradually constrained to match the learned policy, and guarantees on the performance of the final policy only hold at convergence if the constraint is enforced exactly. This is problematic in practical applications, where such algorithms are typically executed for a small number of iterations.

In this paper, we show that guided policy search algorithms can be interpreted as approximate variants of mirror descent under constraints imposed by the policy parameterization, with supervised learning corresponding to a projection onto the constraint manifold. Based on this interpretation, we can derive a new, simplified variant of guided policy search, which corresponds exactly to mirror descent under linear dynamics and convex policy spaces. When these convexity and linearity assumptions do not hold, we can show that the projection step is approximate, up to a bound that depends on the step size of the algorithm, which suggests that for a small enough step size, we can achieve continuous improvement.
The form of this bound provides us with intuition about how to adjust the step size in practice, so as to obtain a simple algorithm with a small number of hyperparameters.

Algorithm 1 Generic guided policy search method
1: for iteration k ∈ {1, . . . , K} do
2:   C-step: improve each p_i(u_t|x_t) based on the surrogate cost ℓ̃_i(x_t, u_t), return samples D_i
3:   S-step: train π_θ(u_t|x_t) with supervised learning on the dataset D = ∪_i D_i
4:   Modify ℓ̃_i(x_t, u_t) to enforce agreement between π_θ(u_t|x_t) and each p_i(u_t|x_t)
5: end for

The main contribution of this paper is a simple new guided policy search algorithm that can train complex, high-dimensional policies by alternating between trajectory-centric reinforcement learning and supervised learning, as well as a connection between guided policy search methods and mirror descent. We also extend previous work on bounding policy cost in terms of KL divergence [15, 17] to derive a bound on the cost of the policy at each iteration, which provides guidance on how to adjust the step size of the method. We provide empirical results on several simulated robotic navigation and manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters.

2 Guided Policy Search Algorithms

We first review guided policy search methods and background. Policy search algorithms aim to optimize a parameterized policy π_θ(u_t|x_t) over actions u_t conditioned on the state x_t. Given stochastic dynamics p(x_{t+1}|x_t, u_t) and cost ℓ(x_t, u_t), the goal is to minimize the expected cost under the policy's trajectory distribution, given by $J(\theta) = \sum_{t=1}^{T} E_{\pi_\theta(x_t, u_t)}[\ell(x_t, u_t)]$, where we overload notation to use π_θ(x_t, u_t) to denote the marginals of $\pi_\theta(\tau) = p(x_1)\prod_{t=1}^{T} p(x_{t+1}|x_t, u_t)\,\pi_\theta(u_t|x_t)$, where τ = {x_1, u_1, . . . , x_T, u_T} denotes a trajectory. A standard reinforcement learning (RL) approach to policy search is to compute the gradient ∇_θ J(θ) and use it to improve J(θ) [18, 14]. The gradient is typically estimated using samples obtained from the real physical system being controlled, and recent work has shown that such methods can be applied to very complex, high-dimensional policies such as deep neural networks [17, 10]. However, for complex, high-dimensional policies, such methods tend to be inefficient, and practical real-world applications of such model-free policy search techniques are typically limited to policies with about one hundred parameters [3].

Instead of directly optimizing J(θ), guided policy search algorithms split the optimization into a "control phase" (which we'll call the C-step) that finds multiple simple local policies p_i(u_t|x_t) that can solve the task from different initial states $x_1^i \sim p(x_1)$, and a "supervised phase" (S-step) that optimizes the global policy π_θ(u_t|x_t) to match all of these local policies using standard supervised learning. In fact, a variational formulation of guided policy search [7] corresponds to the EM algorithm, where the C-step is actually the E-step, and the S-step is the M-step. The benefit of this approach is that the local policies p_i(u_t|x_t) can be optimized separately using domain-specific local methods.
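To make the alternation in Algorithm 1 concrete, here is a minimal Python skeleton of the outer loop, a sketch under the assumption that the domain-specific pieces are supplied as callables. The names (improve_local, fit_global, update_surrogate) are ours, not part of any released implementation.

```python
def guided_policy_search(local_policies, global_policy, cost,
                         improve_local, fit_global, update_surrogate, K=10):
    """Skeleton of Algorithm 1. The callables stand in for the
    domain-specific components:
      improve_local(p_i, surrogate)          -> (new p_i, samples D_i)  # C-step
      fit_global(global_policy, D)           -> new global policy       # S-step
      update_surrogate(surrogate, p_i, pi)   -> new surrogate cost      # line 4
    """
    surrogates = [cost] * len(local_policies)      # start with the true cost
    for _ in range(K):
        datasets = []
        for i, p_i in enumerate(local_policies):
            local_policies[i], D_i = improve_local(p_i, surrogates[i])  # C-step
            datasets.append(D_i)
        D = [s for D_i in datasets for s in D_i]                        # D = U_i D_i
        global_policy = fit_global(global_policy, D)                    # S-step
        # Pull each local policy toward the global policy, e.g. via a KL term.
        surrogates = [update_surrogate(surrogates[i], local_policies[i],
                                       global_policy)
                      for i in range(len(local_policies))]
    return global_policy
```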
Trajectory optimization might be used when the dynamics are known [19, 11], while local RL methods might be used with unknown dynamics [5, 6], which still requires samples from the real system, though substantially fewer than the direct approach, due to the simplicity of the local policies. This sample efficiency is the main advantage of guided policy search, which can train policies with nearly a hundred thousand parameters for vision-based control using under 200 episodes [6], in contrast to direct deep RL methods that might require orders of magnitude more experience [17, 10].

A generic guided policy search method is shown in Algorithm 1. The C-step invokes a local policy optimizer (trajectory optimization or local RL) for each p_i(u_t|x_t) on line 2, and the S-step uses supervised learning to optimize the global policy π_θ(u_t|x_t) on line 3 using samples from each p_i(u_t|x_t), which are generated during the C-step. On line 4, the surrogate cost ℓ̃_i(x_t, u_t) for each p_i(u_t|x_t) is adjusted to ensure convergence. This step is crucial, because supervised learning does not in general guarantee that π_θ(u_t|x_t) will achieve similar long-horizon performance to p_i(u_t|x_t) [15]. The local policies might not even be reproducible by a single global policy in general. To address this issue, most guided policy search methods have some mechanism to force the local policies to agree with the global policy, typically by framing the entire algorithm as a constrained optimization that seeks at convergence to enforce equality between π_θ(u_t|x_t) and each p_i(u_t|x_t). The form of the overall optimization problem resembles dual decomposition, and usually looks something like this:
$$\min_{\theta,\, p_1,\ldots,p_N}\; \sum_{i=1}^{N}\sum_{t=1}^{T} E_{p_i(x_t, u_t)}[\ell(x_t, u_t)] \quad \text{such that}\quad p_i(u_t|x_t) = \pi_\theta(u_t|x_t)\;\; \forall\, x_t, u_t, t, i. \qquad (1)$$
Since $x_1^i \sim p(x_1)$, we have $J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T} E_{p_i(x_t, u_t)}[\ell(x_t, u_t)]$ when the constraints are enforced exactly. The particular form of the constraint varies depending on the method: prior works have used dual gradient descent [8], penalty methods [11], ADMM [12], and Bregman ADMM [6]. We omit the derivation of these prior variants due to space constraints.

2.1 Efficiently Optimizing Local Policies

A common and simple choice for the local policies p_i(u_t|x_t) is to use time-varying linear-Gaussian controllers of the form p_i(u_t|x_t) = N(K_t x_t + k_t, C_t), though other options are also possible [12, 11, 19]. Linear-Gaussian controllers represent individual trajectories with linear stabilization and Gaussian noise, and are convenient in domains where each local policy can be trained from a different (but consistent) initial state $x_1^i \sim p(x_1)$. This represents an additional assumption beyond standard RL, but allows for an extremely efficient and convenient local model-based RL algorithm based on iterative LQR [9]. The algorithm proceeds by generating N samples on the real physical system from each local policy p_i(u_t|x_t) during the C-step, using these samples to fit local linear-Gaussian dynamics for each local policy of the form p_i(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t) using linear regression, and then using these fitted dynamics to improve the linear-Gaussian controller via a modified LQR algorithm [5]. This modified LQR method solves the following optimization problem:
$$\min_{K_t, k_t, C_t}\; \sum_{t=1}^{T} E_{p_i(x_t, u_t)}[\tilde\ell_i(x_t, u_t)] \quad \text{such that}\quad D_{KL}\big(p_i(\tau)\,\|\,\bar p_i(\tau)\big) \le \epsilon, \qquad (2)$$
where we again use p_i(τ)
to denote the trajectory distribution induced by p_i(u_t|x_t) and the fitted dynamics p_i(x_{t+1}|x_t, u_t). Here, p̄_i(u_t|x_t) denotes the previous local policy, and the constraint ensures that the change in the local policy is bounded, as proposed also in prior works [1, 14, 13]. This is particularly important when using linearized dynamics fitted to local samples, since these dynamics are not valid outside of a small region around the current controller. In the case of linear-Gaussian dynamics and policies, the KL-divergence constraint $D_{KL}(p_i(\tau)\|\bar p_i(\tau)) \le \epsilon$ can be shown to simplify, as shown in prior work [5] and Appendix A:
$$D_{KL}\big(p_i(\tau)\|\bar p_i(\tau)\big) = \sum_{t=1}^{T} D_{KL}\big(p_i(u_t|x_t)\|\bar p_i(u_t|x_t)\big) = \sum_{t=1}^{T} -E_{p_i(x_t,u_t)}\big[\log \bar p_i(u_t|x_t)\big] - \mathcal{H}\big(p_i(u_t|x_t)\big),$$
and the resulting Lagrangian of the problem in Equation (2) can be optimized with respect to the primal variables using the standard LQR algorithm, which suggests a simple method for solving the problem in Equation (2) using dual gradient descent [5]. The surrogate objective ℓ̃_i(x_t, u_t) = ℓ(x_t, u_t) + φ_i(θ) typically includes some term φ_i(θ) that encourages the local policy p_i(u_t|x_t) to stay close to the global policy π_θ(u_t|x_t), such as a KL-divergence of the form D_KL(p_i(u_t|x_t) ‖ π_θ(u_t|x_t)).

2.2 Prior Convergence Results

Prior work on guided policy search typically shows convergence by construction, by framing the C-step and S-step as block coordinate ascent on the (augmented) Lagrangian of the problem in Equation (1), with the surrogate cost ℓ̃_i(x_t, u_t) for the local policies corresponding to the (augmented) Lagrangian, and the overall algorithm being an instance of dual gradient descent [8], ADMM [12], or Bregman ADMM [6]. Since these methods enforce the constraint p_i(u_t|x_t) = π_θ(u_t|x_t) at convergence (up to linearization or sampling error, depending on the method), we know that $\frac{1}{N}\sum_{i=1}^{N} E_{p_i(x_t,u_t)}[\ell(x_t,u_t)] \approx E_{\pi_\theta(x_t,u_t)}[\ell(x_t,u_t)]$ at convergence.¹ However, prior work does not say anything about π_θ(u_t|x_t) at intermediate iterations, and the constraints of policy search in the real world might often preclude running the method to full convergence. We propose a simplified variant of guided policy search, and present an analysis that sheds light on the performance of both the new algorithm and prior guided policy search methods.

¹As mentioned previously, the initial state $x_1^i$ of each local policy p_i(u_t|x_t) is assumed to be drawn from p(x_1); hence the outer sum corresponds to Monte Carlo integration of the expectation under p(x_1).

Algorithm 2 Mirror descent guided policy search (MDGPS): convex linear variant
1: for iteration k ∈ {1, . . . , K} do
2:   C-step: $p_i \leftarrow \arg\min_{p_i} E_{p_i(\tau)}\big[\sum_{t=1}^{T} \ell(x_t, u_t)\big]$ such that $D_{KL}(p_i(\tau)\|\pi_\theta(\tau)) \le \epsilon$
3:   S-step: $\pi_\theta \leftarrow \arg\min_{\theta} \sum_i D_{KL}(p_i(\tau)\|\pi_\theta(\tau))$ (via supervised learning)
4: end for

3 Mirror Descent Guided Policy Search

In this section, we propose our new simplified guided policy search, which we term mirror descent guided policy search (MDGPS). This algorithm uses the constrained LQR optimization in Equation (2) to optimize each of the local policies, but instead of constraining each local policy p_i(u_t|x_t) against the previous local policy p̄_i(u_t|x_t), we instead constrain it directly against the global policy π_θ(u_t|x_t), and simply set the surrogate cost to be the true cost, such that ℓ̃_i(x_t, u_t) = ℓ(x_t, u_t). The method is summarized in Algorithm 2. In the case of linear dynamics and a quadratic cost (i.e.,
the LQR setting), and assuming that supervised learning can globally solve a convex optimization problem, we can show that this method corresponds to an instance of mirror descent [2] on the objective J(θ). In this formulation, the optimization is performed on the space of trajectory distributions, with a constraint that the policy must lie on the manifold of policies with the chosen parameterization. Let Π_θ be the set of all possible policies π_θ for a given parameterization, where we overload notation to also let Π_θ denote the set of trajectory distributions that are possible under the chosen parameterization. The return J(θ) can be optimized according to $\pi_\theta \leftarrow \arg\min_{\pi \in \Pi_\theta} E_{\pi(\tau)}\big[\sum_{t=1}^{T}\ell(x_t,u_t)\big]$. Mirror descent solves this optimization by alternating between two steps at each iteration k:
$$p^k \leftarrow \arg\min_{p}\; E_{p(\tau)}\Big[\sum_{t=1}^{T}\ell(x_t,u_t)\Big]\ \ \text{s.t.}\ \ D\big(p, \pi^k\big) \le \epsilon, \qquad \pi^{k+1} \leftarrow \arg\min_{\pi \in \Pi_\theta} D\big(p^k, \pi\big).$$
The first step finds a new distribution p^k that minimizes the cost and is close to the previous policy π^k in terms of the divergence D(p, π^k), while the second step projects this distribution onto the constraint set Π_θ, with respect to the divergence D(p^k, π). In the linear-quadratic case with a convex supervised learning phase, this corresponds exactly to Algorithm 2: the C-step optimizes p^k, while the S-step is the projection. Monotonic improvement of the global policy π_θ follows from the monotonic improvement of mirror descent [2]. In the case of linear-Gaussian dynamics and policies, the S-step, which minimizes KL-divergence between trajectory distributions, in fact only requires minimizing the KL-divergence between policies. Using the identity in Appendix A, we know that
$$D_{KL}\big(p_i(\tau)\|\pi_\theta(\tau)\big) = \sum_{t=1}^{T} E_{p_i(x_t)}\big[D_{KL}\big(p_i(u_t|x_t)\|\pi_\theta(u_t|x_t)\big)\big]. \qquad (3)$$

3.1 Implementation for Nonlinear Global Policies and Unknown Dynamics

In practice, we aim to optimize complex policies for nonlinear systems with unknown dynamics. This requires a few practical considerations. The C-step requires a local quadratic cost function, which can be obtained via Taylor expansion, as well as local linear-Gaussian dynamics p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t), which we can fit to samples as in prior work [5]. We also need a local time-varying linear-Gaussian approximation to the global policy π_θ(u_t|x_t), denoted π̄_θi(u_t|x_t). This can be obtained either by analytically differentiating the policy, or by using the same linear regression method that we use to estimate p(x_{t+1}|x_t, u_t), which is the approach in our implementation. In both cases, we get a different global policy linearization around each local policy. Following prior work [5], we use a Gaussian mixture model prior for both the dynamics and global policy fit.

The S-step can be performed approximately in the nonlinear case by using the samples collected for dynamics fitting to also train the global policy. Following prior work [6], our S-step minimizes²
$$\sum_{i,t} E_{p_i(x_t)}\big[D_{KL}\big(\pi_\theta(u_t|x_t)\|p_i(u_t|x_t)\big)\big] \;\approx\; \frac{1}{|D_i|}\sum_{i,t,j} D_{KL}\big(\pi_\theta(u_t|x_{t,i,j})\|p_i(u_t|x_{t,i,j})\big),$$

²Note that we flip the KL-divergence inside the expectation, following [6]. We found that this produced better results. The intuition behind this is that, because log p_i(u_t|x_t) is proportional to the Q-function of p_i(u_t|x_t) (see Appendix B.1), D_KL(π_θ(u_t|x_{t,i,j}) ‖ p_i(u_t|x_{t,i,j})) minimizes the cost-to-go under p_i(u_t|x_t) with respect to π_θ(u_t|x_t), which provides for a more informative objective than the unweighted likelihood in Equation (3).
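For conditionally Gaussian policies, each KL term in the S-step objective above has a closed form. The sketch below, our illustration rather than code from the authors' implementation, evaluates one such per-sample term using the standard KL divergence between multivariate Gaussians.

```python
import numpy as np

def kl_gaussians(mu0, sig0, mu1, sig1):
    """KL( N(mu0, sig0) || N(mu1, sig1) ) for d-dimensional Gaussians.

    In the S-step, N(mu0, sig0) plays the role of the global policy's action
    distribution at a sampled state, and N(mu1, sig1) the linear-Gaussian
    local policy p_i(u_t | x_t) evaluated at the same state.
    """
    d = mu0.shape[0]
    sig1_inv = np.linalg.inv(sig1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(sig1_inv @ sig0)
                  + diff @ sig1_inv @ diff
                  - d
                  + np.log(np.linalg.det(sig1) / np.linalg.det(sig0)))

# Example with a 2-D action space.
mu_pi, sig_pi = np.zeros(2), 0.5 * np.eye(2)   # global policy at state x
K, k, C = np.eye(2), np.ones(2), np.eye(2)     # local policy p(u|x) = N(Kx + k, C)
x = np.array([0.3, -0.1])
print(kl_gaussians(mu_pi, sig_pi, K @ x + k, C))
```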
Algorithm 3 Mirror descent guided policy search (MDGPS): unknown nonlinear dynamics
1: for iteration k ∈ {1, . . . , K} do
2:   Generate samples D_i = {τ_{i,j}} by running either p_i or π_θ
3:   Fit linear-Gaussian dynamics p_i(x_{t+1}|x_t, u_t) using samples in D_i
4:   Fit the linearized global policy π̄_θi(u_t|x_t) using samples in D_i
5:   C-step: $p_i \leftarrow \arg\min_{p_i} E_{p_i(\tau)}\big[\sum_{t=1}^{T} \ell(x_t, u_t)\big]$ such that $D_{KL}(p_i(\tau)\|\bar\pi_{\theta i}(\tau)) \le \epsilon$
6:   S-step: $\pi_\theta \leftarrow \arg\min_{\theta} \sum_{t,i,j} D_{KL}(\pi_\theta(u_t|x_{t,i,j})\|p_i(u_t|x_{t,i,j}))$ (via supervised learning)
7:   Adjust ε (see Section 4.2)
8: end for

where x_{t,i,j} is the j-th sample from p_i(x_t) obtained by running p_i(u_t|x_t) on the real system. For linear-Gaussian p_i(u_t|x_t) and (nonlinear) conditionally Gaussian π_θ(u_t|x_t) = N(μ_π(x_t), Σ_π(x_t)), where μ_π and Σ_π can be any function (such as a deep neural network), the KL-divergence D_KL(π_θ(u_t|x_{t,i,j}) ‖ p_i(u_t|x_{t,i,j})) can easily be evaluated and differentiated in closed form [6]. However, in the nonlinear setting, minimizing this objective no longer minimizes the KL-divergence between trajectory distributions D_KL(π_θ(τ) ‖ p_i(τ)) exactly, which means that MDGPS does not correspond exactly to mirror descent: although the C-step can still be evaluated exactly, the S-step now corresponds to an approximate projection onto the constraint manifold. In the next section, we discuss how we can bound the error in this projection. A summary of the nonlinear MDGPS method is provided in Algorithm 3, and additional details are in Appendix B. The samples for linearizing the dynamics and policy can be obtained by running either the last local policy p_i(u_t|x_t), or the last global policy π_θ(u_t|x_t). Both variants produce good results, and we compare them in Section 6.

3.2 Analysis of Prior Guided Policy Search Methods as Approximate Mirror Descent

The main distinction between the proposed method and prior guided policy search methods is that the constraint $D_{KL}(p_i(\tau)\|\bar\pi_{\theta i}(\tau)) \le \epsilon$ is enforced on the local policies at each iteration, while in prior methods, this constraint is iteratively enforced via a dual descent procedure over multiple iterations. This means that the prior methods perform approximate mirror descent with step sizes that are adapted (by adjusting the Lagrange multipliers) but not constrained exactly. In our empirical evaluation, we show that our approach is somewhat more stable, though sometimes slower than these prior methods. This empirical observation agrees with our intuition: prior methods can sometimes be faster, because they do not exactly constrain the step size, but our method is simpler, requires less tuning, and always takes bounded steps on the global policy in trajectory space.

4 Analysis in the Nonlinear Case

Although the S-step under nonlinear dynamics is not an optimal projection onto the constraint manifold, we can bound the additional cost incurred by this projection in terms of the KL-divergence between p_i(u_t|x_t) and π_θ(u_t|x_t). This analysis also reveals why prior guided policy search algorithms, which only have asymptotic convergence guarantees, still attain good performance in practice even after a small number of iterations. We will drop the subscript i from p_i(u_t|x_t) in this section for conciseness, though the same analysis can be repeated for multiple local policies p_i(u_t|x_t).

4.1 Bounding the Global Policy Cost

The analysis in this section is based on the following lemma, which we prove in Appendix C.1, building off of earlier results by Ross et al. [15] and Schulman et al.
[17]:

Lemma 4.1 Let $\epsilon_t = \max_{x_t} D_{KL}(p(u_t|x_t)\|\pi_\theta(u_t|x_t))$. Then $D_{TV}(p(x_t)\|\pi_\theta(x_t)) \le 2\sum_{t=1}^{T}\sqrt{2\epsilon_t}$.

This means that if we can bound the KL-divergence between the policies, then the total variation divergence between their state marginals (given by $D_{TV}(p(x_t)\|\pi_\theta(x_t)) = \frac{1}{2}\|p(x_t) - \pi_\theta(x_t)\|_1$) will also be bounded. This bound allows us in turn to relate the total expected costs of the two policies to each other according to the following lemma, which we prove in Appendix C.2:

Lemma 4.2 If $D_{TV}(p(x_t)\|\pi_\theta(x_t)) \le 2\sum_{t=1}^{T}\sqrt{2\epsilon_t}$, then we can bound the total cost of π_θ as
$$\sum_{t=1}^{T} E_{\pi_\theta(x_t,u_t)}[\ell(x_t,u_t)] \le \sum_{t=1}^{T}\Big[E_{p(x_t,u_t)}[\ell(x_t,u_t)] + \sqrt{2\epsilon_t}\,\max_{x_t,u_t}\ell(x_t,u_t) + 2\sqrt{2\epsilon_t}\,Q_{\max,t}\Big],$$
where $Q_{\max,t} = \sum_{t'=t}^{T}\max_{x_{t'},u_{t'}}\ell(x_{t'},u_{t'})$, the maximum total cost from time t to T.

This bound on the cost of π_θ(u_t|x_t) tells us that if we update p(u_t|x_t) so as to decrease its total cost or decrease its KL-divergence against π_θ(u_t|x_t), we will eventually reduce the cost of π_θ(u_t|x_t). For the MDGPS algorithm, this bound suggests that we can ensure improvement of the global policy within a small number of iterations by appropriately choosing the constraint ε during the C-step. Recall that the C-step constrains $\sum_{t=1}^{T}\epsilon_t \le \epsilon$, so if we choose ε to be small enough, we can close the gap between the local and global policies. Optimizing the bound directly turns out to produce very slow learning in practice, because the bound is very loose. However, it tells us that we can decrease ε either toward the end of the optimization process or when we observe the global policy performing much worse than the local policies. We discuss how this idea can be put into action in the next section.

4.2 Step Size Selection

Setting the local policy step size ε is important for proper convergence of guided policy search methods. Since we are approximating the true unknown dynamics with time-varying linear dynamics, setting ε too large can produce unstable local policies which cause the method to fail. However, setting ε too small will prevent the local policies from improving significantly between iterations, leading to slower learning rates. In prior work [8], the step size ε in the local policy optimization is dynamically adjusted by considering the difference between the predicted change in the cost of the local policy p(u_t|x_t) under the fitted dynamics, and the actual cost obtained when sampling from that policy. The intuition is that, because the linearized dynamics are local, we incur a larger cost the further we deviate from the previous policy. We can adjust the step size by estimating the rate at which the additional cost is incurred and choosing the optimal tradeoff. In Appendix B.3 we describe the step size adjustment rule used for BADMM in prior work, and use it to derive two step size adjustment rules for MDGPS: "classic" and "global." The classic step size adjustment is a direct reinterpretation of the BADMM step rule for MDGPS, while the global step rule is a more conservative rule that takes the difference between the global and local policies into account.

5 Relation to Prior Work

While we've discussed the connections between MDGPS and prior guided policy search methods, in this section we'll also discuss the connections between our method and other policy search methods. One popular supervised policy learning method is DAGGER [15], which also trains the policy using supervised learning, but does not attempt to adapt the teacher to provide better training data.
MDGPS removes the assumption in DAGGER that the supervised learning stage has bounded error against an arbitrary teacher policy. MDGPS does not need to make this assumption, since the teacher can be adapted to the limitations of the global policy learning. This is particularly important when the global policy has computational or observational limitations, such as when learning to use camera images for partially observed control tasks or, as shown in our evaluation, blind peg insertion.

When we sample from the global policy π_θ(u_t|x_t), our method resembles policy gradient methods with KL-divergence constraints [14, 13, 17]. However, policy gradient methods update the policy π_θ(u_t|x_t) at each iteration by linearizing with respect to the policy parameters, which often requires small steps for complex, nonlinear policies, such as neural networks. In contrast, we linearize in the space of time-varying linear dynamics, while the policy is optimized at each iteration with many steps of supervised learning (e.g., stochastic gradient descent). This makes MDGPS much better suited for quickly and efficiently training highly nonlinear, high-dimensional policies.

Figure 1: Results for MDGPS variants and BADMM on each task. MDGPS is tested with local policy ("off policy") and global policy ("on policy") sampling (see Section 3.1), and both the "classic" and "global" step sizes (see Section 4.2). The vertical axis for the obstacle task shows the average distance between the point mass and the target. The vertical axis for the peg tasks shows the average distance between the bottom of the peg and the hole. Distances above 0.1, which is the depth of the hole (shown as a dotted line), indicate failure. All experiments are repeated ten times, with the average performance and standard deviation shown in the plots.

6 Experimental Evaluation

We compare several variants of MDGPS and a prior guided policy search method based on Bregman ADMM (BADMM) [6]. We evaluate all methods on one simulated robotic navigation task and two manipulation tasks. For MDGPS, during training we sample from either the local policies ("off-policy" sampling) or the global policy ("on-policy" sampling), and we use both forms of the step rule described in Section 4.2 ("classic" and "global").³

Obstacle Navigation. In this task, a 2D point mass (grey) must navigate around obstacles to reach a target (shown in green), using velocities and positions relative to the target. We use N = 5 initial states, with 5 samples per initial state per iteration. The target and obstacles are fixed, but the starting position varies.

Peg Insertion. This task, which is more complex, requires controlling a 7 DoF 3D arm to insert a tight-fitting peg into a hole. The hole can be in different positions, and the state consists of joint angles, velocities, and end-effector positions relative to the target. This task is substantially more challenging physically. We use N = 9 different hole positions, with 5 samples per initial state per iteration.

Blind Peg Insertion. The last task is a blind variant of the peg insertion task, where the target-relative end-effector positions are provided to the local policies, but not to the global policy π_θ(u_t|x_t). This requires the global policy to search for the hole, since no input to the global policy can distinguish between the different initial states $x_1^i$.
This makes it much more challenging to adapt the global and local policies to each other, and makes it impossible for the global learner to succeed without adaptation of the local policies. We use N = 4 different hole positions, with 5 samples per initial state per iteration.

The global policy for each task consists of a fully connected neural network with two hidden layers with 40 rectified linear units. The same settings are used for MDGPS and the prior BADMM-based method, except for the difference in surrogate costs, constraints, and step size adjustment methods discussed in the paper. Results are presented in Figure 1 and Table 1. On the easier point mass navigation task all methods achieve similar performance, but the on-policy variants of MDGPS outperform the off-policy variants. This suggests that we can benefit from directly sampling from the global policy during training, which is not possible in the BADMM formulation. Although performance is similar among all methods, the MDGPS methods are all substantially easier to apply to these tasks, since they have very few free hyperparameters. An initial step size must be selected, but the adaptive step size adjustment rules make this choice less important. In contrast, the BADMM method requires choosing an initial weight on the augmented Lagrangian term, an adjustment schedule for this term, a step size on the dual variables, and a step size for local policies, all of which have a substantial impact on the final performance of the method (the reported results are for the best setting of these parameters, identified with a hyperparameter sweep). On the peg insertion tasks, all variants of MDGPS consistently outperform BADMM, as shown by the success rates in Table 1, which shows that the MDGPS policies succeed at actually inserting the peg into the hole more often and on more conditions. This suggests that our method is better able to improve global policies, particularly in situations where informational or representational constraints make naïve imitation of the local policies insufficient to solve the task. On both tasks, we see faster learning from the on-policy variants, although this is less noticeable on the harder blind peg insertion task, where the best final policy is the off-policy variant with classic step size adjustment. Sampling from the global policies may be desirable in practice, since the global policies can directly use observations at runtime instead of requiring access to the state [6].

Table 1: Success rates of each method on each peg insertion task. Success is defined as inserting the peg into the hole with a final distance of less than 0.06. Results are averaged over ten runs.

             Itr.   BADMM           Off/Classic      Off/Global       On/Classic       On/Global
Peg           3     1.1% ± 3.3%     11.1% ± 9.9%     6.7% ± 7.4%      6.7% ± 7.4%      6.7% ± 7.4%
Peg           6     51.1% ± 10.2%   62.2% ± 17.4%    64.4% ± 19.1%    68.9% ± 18.5%    63.3% ± 20.0%
Peg           9     72.2% ± 14.3%   82.2% ± 11.3%    71.1% ± 24.0%    90.0% ± 10.5%    85.6% ± 8.7%
Peg          12     74.4% ± 19.3%   83.3% ± 11.4%    84.4% ± 15.1%    90.0% ± 11.6%    87.8% ± 13.6%
Blind Peg     3     20.0% ± 31.2%   2.5% ± 7.5%      7.5% ± 16.0%     2.5% ± 7.5%      15.0% ± 30.0%
Blind Peg     6     65.0% ± 22.9%   62.5% ± 32.1%    70.0% ± 21.8%    72.5% ± 28.4%    70.0% ± 35.0%
Blind Peg     9     82.5% ± 25.1%   80.0% ± 24.5%    60.0% ± 32.0%    80.0% ± 35.0%    82.5% ± 19.5%
Blind Peg    12     82.5% ± 16.1%   95.0% ± 10.0%    85.0% ± 22.9%    85.0% ± 20.0%    85.0% ± 12.2%

³Guided policy search code, including BADMM and MDGPS methods, is available at https://www.github.com/cbfinn/gps.
The global step size also tends to be more conservative than the classic step size, but produces more consistent and monotonic improvement.

7 Discussion and Future Work

We presented a new guided policy search method that corresponds to mirror descent under linearity and convexity assumptions, and showed how prior guided policy search methods can be seen as approximating mirror descent. We provide a bound on the return of the global policy in the nonlinear case, and argue that an appropriate step size can provide improvement of the global policy in this case also. Our analysis provides us with the intuition to design an automated step size adjustment rule, and we illustrate empirically that our method achieves good results on a complex simulated robotic manipulation task while requiring substantially less tuning and hyperparameter optimization than prior guided policy search methods. Manual tuning and hyperparameter searches are a major challenge across a range of deep reinforcement learning algorithms, and developing scalable policy search methods that are simple and reliable is vital to enable further progress.

As discussed in Section 5, MDGPS has interesting connections to other policy search methods. Like DAGGER [15], MDGPS uses supervised learning to train the policy, but unlike DAGGER, MDGPS does not assume that the learner is able to reproduce an arbitrary teacher's behavior with bounded error, which makes it very appealing for tasks with partial observability or other limits on information, such as learning to use camera images for robotic manipulation [6]. When sampling directly from the global policy, MDGPS also has close connections to policy gradient methods that take steps of fixed KL-divergence [14, 17], but with the steps taken in the space of trajectories rather than policy parameters, followed by a projection step. In future work, it would be interesting to explore this connection further, so as to develop new model-free policy gradient methods.

Acknowledgments

We thank the anonymous reviewers for their helpful and constructive feedback. This research was supported in part by an ONR Young Investigator Program award.

References

[1] J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, May 2003.
[3] M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1–2):1–142, 2013.
[4] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (NIPS), 2014.
[5] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014.
[6] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 17, 2016.
[7] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[8] S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In International Conference on Robotics and Automation (ICRA), 2015.
[9] W. Li and E. Todorov.
Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004.
[10] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.
[11] I. Mordatch, K. Lowrey, G. Andrew, Z. Popovic, and E. Todorov. Interactive control of diverse complex characters with neural networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[12] I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014.
[13] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.
[14] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[15] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.
[16] S. Ross, N. Melik-Barkhudarov, K. Shaurya Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert. Learning monocular reactive UAV control in cluttered natural environments. In International Conference on Robotics and Automation (ICRA), 2013.
[17] J. Schulman, S. Levine, P. Moritz, M. Jordan, and P. Abbeel. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.
[18] R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256, May 1992.
[19] T. Zhang, G. Kahn, S. Levine, and P. Abbeel. Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search. In International Conference on Robotics and Automation (ICRA), 2016.
Learning Additive Exponential Family Graphical Models via ℓ2,1-norm Regularized M-Estimation

Xiao-Tong Yuan, Ping Li, Tong Zhang, Qingshan Liu, Guangcan Liu
B-DAT Lab, Nanjing University of Information Science & Technology, Nanjing, Jiangsu, 210044, China
Department of Statistics and Department of Computer Science, Rutgers University, Piscataway, NJ, 08854, USA
{xtyuan,qsliu,gcliu}@nuist.edu.cn, {pingli,tzhang}@stat.rutgers.edu

Abstract

We investigate a subclass of exponential family graphical models of which the sufficient statistics are defined by arbitrary additive forms. We propose two ℓ2,1-norm regularized maximum likelihood estimators to learn the model parameters from i.i.d. samples. The first one is a joint MLE estimator which estimates all the parameters simultaneously. The second one is a node-wise conditional MLE estimator which estimates the parameters for each node individually. For both estimators, statistical analysis shows that under mild conditions the extra flexibility gained by the additive exponential family models comes at almost no cost of statistical efficiency. A Monte-Carlo approximation method is developed to efficiently optimize the proposed estimators. The advantages of our estimators over Gaussian graphical models and Nonparanormal estimators are demonstrated on synthetic and real data sets.

1 Introduction

As an important class of statistical models for exploring the interrelationship among a large number of random variables, undirected graphical models (UGMs) have enjoyed popularity in a wide range of scientific and engineering domains, including statistical physics, computer vision, data mining, and computational biology. Let X = [X₁, ..., X_p]ᵀ be a p-dimensional random vector with each variable X_i taking values in a set 𝒳. Suppose G = (V, E) is an undirected graph consisting of a set of vertices V = {1, ..., p} and a set of unordered pairs E representing edges between the vertices. The pairwise UGM over X corresponding to G can be written as the following exponential family distribution:
$$P(X; \theta) \propto \exp\Big\{\sum_{s \in V} \theta_s \phi_s(X_s) + \sum_{(s,t) \in E} \theta_{st}\, \phi_{st}(X_s, X_t)\Big\}. \qquad (1)$$
In such a pairwise model, (X_s, X_t) are conditionally independent (given the rest of the variables) if and only if the weight θ_st is zero. The most popular instances of pairwise UGMs are Gaussian graphical models (GGMs) [19, 2] for real-valued random variables and Ising (or Potts) models [15] for binary or finite nominal discrete random variables. More broadly, in order to derive multivariate graphical models from univariate exponential family distributions (such as the Gaussian, binomial/multinomial, Poisson, exponential distributions, etc.), the exponential family graphical models (EFGMs) [27, 21] were proposed as a unified framework to learn UGMs with node-wise conditional distributions arising from generalized linear models (GLMs).

1.1 Overview of contribution

A fundamental issue that arises in UGMs is to specify the sufficient statistics, i.e., {φ_s(X_s), φ_st(X_s, X_t)}, for modeling the interactions among variables. It is noteworthy that most prior pairwise UGMs use the pairwise product of variables (or properly transformed variables) as the pairwise sufficient statistics [16, 11, 27]. This is clearly restrictive in modern data analysis tasks, where the underlying pairwise interactions among variables are more often than not highly complex and unknown a priori.
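As a concrete reading of the density in (1), the sketch below evaluates the unnormalized log-density of a pairwise UGM for user-supplied sufficient statistics. It is our generic illustration of the formula, not code from the paper; the function and variable names are ours, and the Gaussian case φ_st(X_s, X_t) = X_s X_t is used only as an example.

```python
import numpy as np

def unnormalized_log_density(x, theta_node, theta_edge, edges,
                             phi_node, phi_edge):
    """sum_s theta_s * phi_s(x_s) + sum_(s,t) theta_st * phi_st(x_s, x_t),
    i.e. the exponent of Eq. (1); the log-partition function is omitted."""
    val = sum(theta_node[s] * phi_node(x[s]) for s in range(len(x)))
    val += sum(theta_edge[(s, t)] * phi_edge(x[s], x[t]) for (s, t) in edges)
    return val

# A Gaussian graphical model as a special case: phi_s(a) = a^2, phi_st(a, b) = a*b.
edges = [(0, 1), (1, 2)]
theta_node = {0: -0.5, 1: -0.5, 2: -0.5}
theta_edge = {(0, 1): 0.2, (1, 2): -0.3}
x = np.array([0.1, -0.4, 0.7])
print(unnormalized_log_density(x, theta_node, theta_edge, edges,
                               lambda a: a * a, lambda a, b: a * b))
```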
The goal of this work is to remove such a restriction and explore the feasibility (in theory and practice) of defining sufficient statistics in an additive form to approximate the underlying unknown sufficient statistics. To this end, we consider the following Additive Exponential Family Graphical Model (AdEFGM) distribution with joint density function:
$$P(X; f) = \exp\Big\{\sum_{s \in V} f_s(X_s) + \sum_{(s,t) \in E} f_{st}(X_s, X_t) - A(f)\Big\}, \qquad (2)$$
where $f_s : \mathcal{X} \to \mathbb{R}$ and $f_{st}(\cdot,\cdot) : \mathcal{X}^2 \to \mathbb{R}$ are respectively node-wise and pairwise statistics, and $A(f) := \log \int_{\mathcal{X}^p} \exp\big\{\sum_{s \in V} f_s(X_s) + \sum_{(s,t) \in E} f_{st}(X_s, X_t)\big\}\, dX$ is the log-partition function. We require that the condition A(f) < ∞ holds so that the definition of the probability is valid.

In this paper, we assume the forms of the sufficient statistics f_s and f_st are unknown, but that they admit linear representations over two sets of pre-fixed basis functions {φ_k(·), k = 1, 2, ..., q} and {ψ_l(·,·), l = 1, 2, ..., r}, respectively. That is,
$$f_s(X_s) = \sum_{k=1}^{q} \theta_{s,k}\, \phi_k(X_s), \qquad f_{st}(X_s, X_t) = \sum_{l=1}^{r} \theta_{st,l}\, \psi_l(X_s, X_t), \qquad (3)$$
where q and r are the truncation order parameters. In the formulation (3), the choice of basis functions and their sizes is flexible and task-dependent. For instance, if the mapping functions f_s and f_st are periodic, then we can choose {φ_k(·)} as the 1-D Fourier basis and {ψ_l(·,·)} as the 2-D Fourier basis. As another instance, the basis {ψ_l} can be chosen as multiple kernels, which are commonly used in computer vision tasks. In particular, when q = r = 1, ψ_l(X_s, X_t) = X_s X_t, and φ_k(X_s) is fixed as a certain parametric function, AdEFGM reduces to the standard EFGM [27, 21]. In general cases, by imposing an additive structure on the sufficient statistics f_s and f_st, AdEFGM is expected to be able to capture more complex interactions among variables beyond the pairwise product.

As the core contribution of this paper, we propose two ℓ2,1-norm regularized maximum likelihood estimation (MLE) estimators to learn the weights of AdEFGM in high-dimensional settings. The first estimator is formulated as an ℓ2,1-norm regularized MLE to jointly estimate all the parameters in the model. The second estimator is formulated as an ℓ2,1-norm regularized node-wise conditional MLE to estimate the parameters associated with each node individually. Theoretically, we prove that under mild conditions the joint MLE estimator achieves the convergence rate $O(\sqrt{(2|\bar{E}| + p)\ln p / n})$, where $|\bar{E}|$ denotes the number of edges of the true underlying graph, while the node-wise conditional estimator achieves the convergence rate $O(\sqrt{(d + 1)\ln p / n})$, in which d is the degree of the underlying graph G. Computationally, we propose a Monte-Carlo approximation scheme to efficiently optimize the estimators via proximal gradient descent methods. We conduct numerical studies on simulated and real data to support our claims. The simulation results confirm that, when the data are drawn from underlying UGMs with highly nonlinear sufficient statistics, our estimators significantly outperform GGMs and Nonparanormal [10] estimators in most cases. The experimental results on stock price data show that our estimators are able to recover more accurate category links among stocks than GGMs and Nonparanormal estimators.

1.2 Related work

In order to model random variables beyond parametric UGMs such as GGMs and Ising models, researchers recently investigated semi-parametric/nonparametric extensions of these parametric models.
1.2 Related work
In order to model random variables beyond parametric UGMs such as GGMs and Ising models, researchers have recently investigated semi-parametric/nonparametric extensions of these parametric models. The Nonparanormal [11] and copula-based methods [5] are semi-parametric graphical models which assume that the data are Gaussian after applying a monotone transformation. More broadly, one could learn transformations of the variables and then fit any parametric UGM (such as an EFGM) over the transformed variables. In [10, 26], two rank-based estimators were used to estimate the correlation matrix and then fit a GGM. In [24], a semi-parametric method was proposed to fit the conditional means of the features with an arbitrary additive formulation. The Semi-EFGM proposed in [28] is a semi-parametric rank-based conditional estimator for exponential family graphical models. In [1], a kernel method was proposed for learning the structure of graphical models by treating variables as Gaussians in a mapped high-dimensional feature space. In [7], Gu proposed a functional minimization framework to estimate the nonparametric model (1) over a Reproducing Kernel Hilbert Space (RKHS). Nonparametric exponential family graphical models based on the score matching loss were investigated in [9, 20]. Forest density estimation [8] is a fully nonparametric method for estimating UGMs with structure restricted to be a forest. In contrast to all these existing semi-parametric/nonparametric models, our approach is novel in model definition and computation: we impose a simple additive structure on the sufficient statistics to describe complex interactions between variables, and we use a Monte-Carlo approximation to estimate the intractable normalization constant for efficient optimization.

1.3 Notation and organization
Notation. Let $\theta = \{\theta_{s,k}, \theta_{st,l} : s \in V,\ k = 1, \ldots, q;\ (s,t) \in V^2,\ s \neq t,\ l = 1, \ldots, r\}$ be the vector of parameters associated with AdEFGM, and let $\mathcal{G} = \{\{(s,k)\}_k, \{(st,l)\}_l : s \in V,\ (s,t) \in V^2,\ s \neq t\}$ be the group structure induced by the additive statistics of the nodes and edges. We define the following group-norm notation: $\|\theta\|_{2,1} = \sum_{g \in \mathcal{G}} \|\theta_g\|$, $\|\theta\|_{2,\infty} = \max_{g \in \mathcal{G}} \|\theta_g\|$, $\mathrm{supp}(\theta, \mathcal{G}) = \{g \in \mathcal{G} : \|\theta_g\| \neq 0\}$, and $\|\theta\|_{2,0} = |\mathrm{supp}(\theta, \mathcal{G})|$. For any $S \subseteq \mathcal{G}$, these notations can be defined restrictively over $\theta_S$. We denote by $\bar{S} = \mathcal{G} \setminus S$ the complement of $S$ in $\mathcal{G}$.

Organization. The remainder of this paper is organized as follows. In §2 we present two maximum likelihood estimators for learning the model parameters of AdEFGM. The statistical guarantees of the proposed estimators are analyzed in §3. Monte-Carlo simulations and experimental results on real stock price data are presented in §4. Finally, we conclude the paper in §5. Due to space limits, all technical proofs of the theoretical results are deferred to an appendix included in the supplementary material.

2 $\ell_{2,1}$-norm Regularized MLE for AdEFGM
In this section, we investigate the problem of estimating the parameters of AdEFGM in high-dimensional settings. By substituting (3) into (2), the distribution of an AdEFGM can be written in the form

$$\mathbb{P}(X; \theta) = \exp\{B(X; \theta) - A(\theta)\}, \tag{4}$$

where $\theta = \{\theta_{s,k}, \theta_{st,l}\}$ and

$$B(X; \theta) := \sum_{s \in V,\, k} \theta_{s,k}\, \phi_k(X_s) + \sum_{(s,t) \in E,\, l} \theta_{st,l}\, \psi_l(X_s, X_t), \qquad A(\theta) := \log \int_{\mathcal{X}^p} \exp\{B(X; \theta)\}\, dX.$$

Suppose we have $n$ i.i.d. samples $\mathcal{X}_n = \{X^{(i)}\}_{i=1}^n$ drawn from the AdEFGM with true parameters $\theta^*$:

$$\mathbb{P}(X; \theta^*) = \exp\{B(X; \theta^*) - A(\theta^*)\}. \tag{5}$$

An important goal of graphical model learning is to estimate the true parameters $\theta^*$ from the observed data $\mathcal{X}_n$. The more accurate the parameter estimate is, the more accurately we are able to recover the underlying true graph structure.
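Building on the basis helpers sketched in Section 1.1, a direct (if naive) evaluation of the energy $B(X;\theta)$ in (4) can be written as follows. The container types `theta_node`, `theta_edge` and the edge-list representation are our own illustrative choices, not the authors' implementation:

```python
def energy(x, theta_node, theta_edge, edges, q=8, r=8):
    """Unnormalized log-density B(x; theta) of the AdEFGM in (4).

    x          : length-p NumPy array holding one configuration of X.
    theta_node : dict mapping node s -> NumPy weight vector of length q.
    theta_edge : dict mapping edge (s, t) -> NumPy weight vector of length r.
    edges      : list of pairs (s, t) with s < t defining E.
    """
    val = 0.0
    for s, w in theta_node.items():           # node-wise terms f_s(x_s)
        val += w @ fourier_basis_1d(x[s], q)
    for (s, t) in edges:                      # pairwise terms f_st(x_s, x_t)
        val += theta_edge[(s, t)] @ fourier_basis_2d(x[s], x[t], r)
    return val
```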
We next propose two $\ell_{2,1}$-norm regularized maximum likelihood estimation (MLE) methods for joint and node-conditional learning of the parameters, respectively.

2.1 Joint MLE estimation
Given the sample set $\mathcal{X}_n = \{X^{(i)}\}_{i=1}^n$, the negative log-likelihood of the joint distribution (5) is

$$\mathcal{L}(\theta; \mathcal{X}_n) = -\frac{1}{n}\sum_{i=1}^n B(X^{(i)}; \theta) + A(\theta).$$

It is straightforward to verify that $\mathcal{L}(\theta; \mathcal{X}_n)$ has the following first-order derivatives (see, e.g., [25]):

$$\frac{\partial \mathcal{L}}{\partial \theta_{s,k}} = \mathbb{E}_\theta[\phi_k(X_s)] - \frac{1}{n}\sum_{i=1}^n \phi_k(X_s^{(i)}), \qquad \frac{\partial \mathcal{L}}{\partial \theta_{st,l}} = \mathbb{E}_\theta[\psi_l(X_s, X_t)] - \frac{1}{n}\sum_{i=1}^n \psi_l(X_s^{(i)}, X_t^{(i)}), \tag{6}$$

where the expectation $\mathbb{E}_\theta[\cdot]$ is taken over the joint distribution (2). Also, it is well known that $\mathcal{L}(\theta; \mathcal{X}_n)$ is convex in $\theta$. In order to estimate the parameters, which are expected to be sparse at the edge level due to the potentially sparse structure of the graph, we consider the following $\ell_{2,1}$-norm regularized MLE estimator:

$$\hat\theta_n = \arg\min_\theta\ \{\mathcal{L}(\theta; \mathcal{X}_n) + \lambda_n \|\theta\|_{2,1}\}, \tag{7}$$

where $\|\theta\|_{2,1} = \sum_{s \in V}\big(\sum_{k=1}^q \theta_{s,k}^2\big)^{1/2} + \sum_{(s,t) \in V^2,\, s \neq t}\big(\sum_{l=1}^r \theta_{st,l}^2\big)^{1/2}$ is the $\ell_{2,1}$-norm with respect to the basis statistics, and $\lambda_n > 0$ is a regularization strength parameter dependent on $n$. The $\ell_{2,1}$-norm penalty is used to promote edge-wise sparsity, as the graph structure is expected to be sparse in high-dimensional settings.

2.2 Node-conditional MLE estimation
Recent state-of-the-art methods for learning UGMs suggest a natural procedure for deriving multivariate graphical models from univariate distributions [12, 15, 27]. The common idea in these methods is to learn the graph structure by estimating node neighborhoods, i.e., by fitting the conditional distribution of each individual node given the rest of the nodes. Indeed, these node-wise fitting methods have been shown to have strong statistical guarantees and attractive computational performance. Inspired by these approaches, we propose an alternative estimator that estimates the weights of the sufficient statistics associated with each individual node. With a slight abuse of notation, we denote by $\theta_s$ the subvector of $\theta$ associated with node $s$, i.e., $\theta_s := \{\theta_{s,k} \mid k = 1, \ldots, q\} \cup \{\theta_{st,l} \mid t \in N(s),\ l = 1, \ldots, r\}$, where $N(s)$ is the neighborhood of $s$. Given the joint distribution (4), it is easy to show that the conditional distribution of $X_s$ given the remaining variables $X_{\setminus s}$ is

$$\mathbb{P}(X_s \mid X_{\setminus s}; \theta_s) = \exp\{C(X_s \mid X_{\setminus s}; \theta_s) - D(X_{\setminus s}; \theta_s)\}, \tag{8}$$

where $C(X_s \mid X_{\setminus s}; \theta_s) := \sum_k \theta_{s,k}\phi_k(X_s) + \sum_{t \in N(s),\, l} \theta_{st,l}\psi_l(X_s, X_t)$, and $D(X_{\setminus s}; \theta_s) := \log \int_{\mathcal{X}} \exp\{C(X_s \mid X_{\setminus s}; \theta_s)\}\, dX_s$ is the log-partition function that ensures normalization. We note that the condition $A(\theta) < \infty$ for the joint log-partition function implies $D(X_{\setminus s}; \theta_s) < \infty$. In order to estimate the parameters associated with a node, we use sparsity-regularized conditional maximum likelihood estimation. Given $n$ independent samples $\mathcal{X}_n$ drawn from (5), the negative log-likelihood of the conditional distribution is

$$\bar{\mathcal{L}}(\theta_s; \mathcal{X}_n) = \frac{1}{n}\sum_{i=1}^n \big\{-C(X_s^{(i)} \mid X_{\setminus s}^{(i)}; \theta_s) + D(X_{\setminus s}^{(i)}; \theta_s)\big\}.$$

It is standard that $\bar{\mathcal{L}}(\theta_s; \mathcal{X}_n)$ is convex with respect to $\theta_s$, and it has the following first-order derivatives:

$$\frac{\partial \bar{\mathcal{L}}(\theta_s; \mathcal{X}_n)}{\partial \theta_{s,k}} = \frac{1}{n}\sum_{i=1}^n \big\{-\phi_k(X_s^{(i)}) + \mathbb{E}_{\theta_s}[\phi_k(X_s) \mid X_{\setminus s}^{(i)}]\big\}, \qquad \frac{\partial \bar{\mathcal{L}}(\theta_s; \mathcal{X}_n)}{\partial \theta_{st,l}} = \frac{1}{n}\sum_{i=1}^n \big\{-\psi_l(X_s^{(i)}, X_t^{(i)}) + \mathbb{E}_{\theta_s}[\psi_l(X_s, X_t) \mid X_{\setminus s}^{(i)}]\big\}, \tag{9}$$

where the expectation $\mathbb{E}_{\theta_s}[\cdot \mid X_{\setminus s}]$ is taken over the node-wise conditional distribution (8). We then consider the following $\ell_{2,1}$-norm regularized conditional MLE formulation associated with variable $X_s$:
$$\hat\theta_s^n = \arg\min_{\theta_s}\ \{\bar{\mathcal{L}}(\theta_s; \mathcal{X}_n) + \lambda_n \|\theta_s\|_{2,1}\}, \tag{10}$$

where $\|\theta_s\|_{2,1} = \big(\sum_{k=1}^q \theta_{s,k}^2\big)^{1/2} + \sum_{t \neq s}\big(\sum_{l=1}^r \theta_{st,l}^2\big)^{1/2}$ is the grouped $\ell_{2,1}$-norm with respect to the node-wise and pairwise bases associated with $s$, and $\lambda_n > 0$ controls the regularization strength.

2.3 Computation via Monte-Carlo approximation
We consider using proximal gradient descent methods [22] to solve the composite optimization problems in (7) and (10). For both estimators, the major computational overhead is the iterative calculation of the expectation terms involved in the gradients $\nabla\mathcal{L}(\theta; \mathcal{X}_n)$ and $\nabla\bar{\mathcal{L}}(\theta_s; \mathcal{X}_n)$. In general, these expectation terms have no closed form, and sampling methods such as importance sampling and MCMC are needed for approximate estimation. There are, however, two challenging issues with such a sampling-based optimization procedure: (1) multivariate sampling methods typically suffer from high computational cost even when the dimensionality $p$ is only moderately large; and (2) the non-vanishing sampling error of the gradient accumulates over the iterations, which, according to the results in [18], deteriorates the overall convergence performance. The main source of these challenges is the intractable log-partition terms appearing in the estimators. To apply first-order methods without suffering from iterative sampling and error accumulation, it is natural to replace the log-partition terms by a Monte-Carlo approximation and minimize the resulting approximated formulation. Taking the joint estimator (7) as an example, we resort to basic importance sampling to approximately estimate the log-partition term $A(\theta) = \log \int_{\mathcal{X}^p} \exp\{B(X; \theta)\}\, dX$. Assume we have $m$ i.i.d. samples $\mathcal{Y}_m = \{Y^{(j)}\}_{j=1}^m$ drawn from a random vector $Y \in \mathcal{X}^p$ with known probability density $\mathbb{P}(Y)$. Given $\theta$, an importance sampling estimate of $\exp\{A(\theta)\}$ is

$$\exp\{\tilde A(\theta; \mathcal{Y}_m)\} = \frac{1}{m}\sum_{j=1}^m \frac{\exp\{B(Y^{(j)}; \theta)\}}{\mathbb{P}(Y^{(j)})}.$$

We then consider the following Monte-Carlo approximation to the estimator (7):

$$\tilde\theta_n = \arg\min_\theta\ \{\tilde{\mathcal{L}}(\theta; \mathcal{X}_n, \mathcal{Y}_m) + \lambda_n \|\theta\|_{2,1}\}, \tag{11}$$

where $\tilde{\mathcal{L}}(\theta; \mathcal{X}_n, \mathcal{Y}_m) = -\frac{1}{n}\sum_{i=1}^n B(X^{(i)}; \theta) + \tilde A(\theta; \mathcal{Y}_m)$. Since the random samples $\mathcal{Y}_m$ are fixed in (11), the sampling operation can be avoided when computing $\nabla\tilde{\mathcal{L}}(\theta; \mathcal{X}_n, \mathcal{Y}_m)$. Concerning the accuracy of the approximate estimator (11), the following result guarantees that, with high probability, the minimizer of the approximate estimator (11) is suboptimal to the population estimator (7) with suboptimality $O(1/\sqrt{m})$. A proof of this proposition is provided in Appendix A.1 (see the supplementary material).

Proposition 1. Assume that $\mathbb{P}(Y) > 0$. Then the following inequality holds with high probability:

$$\mathcal{L}(\tilde\theta_n; \mathcal{X}_n) + \lambda_n\|\tilde\theta_n\|_{2,1} \le \mathcal{L}(\hat\theta_n; \mathcal{X}_n) + \lambda_n\|\hat\theta_n\|_{2,1} + \frac{2.58\,\tilde\sigma_n\big(\exp\{-A(\hat\theta_n)\} + \exp\{-\tilde A(\tilde\theta_n; \mathcal{Y}_m)\}\big)}{\sqrt{m}},$$

where $\tilde\sigma_n = \frac{1}{m}\sum_{j=1}^m\Big(\frac{\exp\{B(Y^{(j)}; \tilde\theta_n)\}}{\mathbb{P}(Y^{(j)})} - \exp\{\tilde A(\tilde\theta_n; \mathcal{Y}_m)\}\Big)^2$.

A similar Monte-Carlo approximation strategy can be applied to the node-wise MLE estimator (10).
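The two computational ingredients of this scheme, the fixed-sample importance-sampling estimate of the log-partition function and the proximal operator of the $\ell_{2,1}$-norm (group-wise soft-thresholding), can be sketched as follows. This is our own illustrative Python code, written under the assumption that $B(x;\theta) = \theta^\top \mathrm{feature\_fn}(x)$ for some flattened feature map; it is not the authors' implementation.

```python
import numpy as np

def approx_log_partition(theta, feature_fn, Y, log_p_Y):
    """Importance-sampling estimate of A(theta) from fixed proposal samples.

    Y        : (m, p) array of i.i.d. draws from a known proposal density.
    log_p_Y  : length-m array with the log proposal density of each draw.
    """
    log_w = np.array([theta @ feature_fn(y) for y in Y]) - log_p_Y
    c = log_w.max()                               # log-sum-exp for stability
    return c + np.log(np.mean(np.exp(log_w - c)))

def prox_group_l21(theta, groups, tau):
    """Proximal operator of tau * ||.||_{2,1}: group soft-thresholding."""
    out = theta.copy()
    for g in groups:                              # g: index array of one group
        norm_g = np.linalg.norm(theta[g])
        out[g] = 0.0 if norm_g <= tau else (1.0 - tau / norm_g) * theta[g]
    return out
```

A proximal gradient iteration on (11) then takes a gradient step on the smooth part, whose log-partition gradient is the self-normalized importance-weighted average of the features over $\mathcal{Y}_m$, followed by `prox_group_l21` with `tau` equal to the step size times $\lambda_n$.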
3 Statistical Analysis
In this section, we provide statistical guarantees on the parameter estimation error of the joint MLE estimator (7) and the node-conditional estimator (10). At a high level, our analysis follows the techniques presented in [13, 30], specifying the conditions under which those techniques apply to our setting.

3.1 Analysis of the joint estimator
We are interested in concentration bounds for the random variables

$$Z_{s,k} := \phi_k(X_s) - \mathbb{E}_{\theta^*}[\phi_k(X_s)], \qquad Z_{st,l} := \psi_l(X_s, X_t) - \mathbb{E}_{\theta^*}[\psi_l(X_s, X_t)],$$

where the expectation $\mathbb{E}_{\theta^*}[\cdot]$ is taken over the underlying true distribution (5). By the "law of the unconscious statistician" we have $\mathbb{E}[Z_{s,k}] = \mathbb{E}[Z_{st,l}] = 0$; that is, $\{Z_{s,k}\}$ and $\{Z_{st,l}\}$ are zero-mean random variables. We introduce the following technical condition on $\{Z_{s,k}, Z_{st,l}\}$, which we will show guarantees that the gradient $\nabla\mathcal{L}(\theta^*; \mathcal{X}_n)$ vanishes exponentially fast, with high probability, as the sample size increases.

Assumption 1. For all $(s,k)$ and all $(s,t,l)$, we assume there exist constants $\sigma > 0$ and $\eta > 0$ such that for all $|\alpha| \le \eta$,
$$\mathbb{E}[\exp\{\alpha Z_{s,k}\}] \le \exp\{\sigma^2\alpha^2/2\}, \qquad \mathbb{E}[\exp\{\alpha Z_{st,l}\}] \le \exp\{\sigma^2\alpha^2/2\}.$$

This assumption essentially imposes an exponential-type bound on the moment generating functions of the random variables $Z_{s,k}, Z_{st,l}$. It is well known that the Hessian $\nabla^2\mathcal{L}(\theta; \mathcal{X}_n)$ is positive semidefinite at any $\theta$ and is independent of the sample set $\mathcal{X}_n$. We also need the following condition, which guarantees the restricted positive definiteness of $\nabla^2\mathcal{L}(\theta; \mathcal{X}_n)$ over a certain low-dimensional subspace when $\theta$ is in the vicinity of $\theta^*$.

Assumption 2 (Locally Restricted Positive Definite Hessian). Let $S = \mathrm{supp}(\theta^*; \mathcal{G})$. There exist constants $\beta > 0$ and $\delta > 0$ such that for any $\theta \in \{\|\theta - \theta^*\| \le \delta\}$, the inequality $\Delta^\top \nabla^2\mathcal{L}(\theta; \mathcal{X}_n)\Delta \ge \beta\|\Delta\|^2$ holds for any $\Delta \in \mathcal{C}_S := \{\|\Delta_{\bar S}\|_{2,1} \le 3\|\Delta_S\|_{2,1}\}$.

Assumption 2 requires that the Hessian $\nabla^2\mathcal{L}(\theta; \mathcal{X}_n)$ be positive definite in the cone $\mathcal{C}_S$ when $\theta$ lies in a local ball centered at $\theta^*$. This condition is a specialization of the concept of restricted strong convexity [30] to AdEFGM.

Remark 1 (Minimal Representation). We say an AdEFGM has minimal representation if there is a unique parameter vector $\theta$ associated with the distribution (4). This condition equivalently requires that there exists no non-zero $\theta$ such that $B(X; \theta)$ equals an absolute constant. This implies that for any $\theta$ and all non-zero $\Delta$, $\mathrm{Var}_\theta[B(X; \Delta)] = \Delta^\top\nabla^2\mathcal{L}(\theta; \mathcal{X}_n)\Delta > 0$. If AdEFGM has minimal representation at $\theta^*$, then there must exist sufficiently small constants $\beta > 0$ and $\delta > 0$ such that for any $\theta \in \{\|\theta - \theta^*\| \le \delta\}$, $\Delta^\top\nabla^2\mathcal{L}(\theta; \mathcal{X}_n)\Delta \ge \beta\|\Delta\|^2$. Therefore, Assumption 2 holds when AdEFGM has minimal representation at $\theta^*$.

The following theorem is our main result on the estimation error of the joint MLE estimator (7). A proof is provided in Appendix A.2 in the supplementary material.

Theorem 1. Assume the conditions in Assumption 1 and Assumption 2 hold. If the sample size $n$ satisfies
$$n > \max\Big\{\frac{6\max\{q,r\}\ln p}{\eta^2\sigma^2},\ \frac{54\, c_0^2\sigma^2\max\{q,r\}\|\theta^*\|_{2,0}\ln p}{\beta^2\delta^2}\Big\},$$
then with probability at least $1 - 2\max\{q,r\}\,p^{-1}$, the following inequality holds:
$$\|\hat\theta_n - \theta^*\| \le 3\, c_0\,\sigma\,\beta^{-1}\sqrt{6\max\{q,r\}\|\theta^*\|_{2,0}\ln p / n}.$$

Remark 2. The main message of Theorem 1 is that when $n$ is sufficiently large, the estimation error $\|\hat\theta_n - \theta^*\|$ vanishes at the order $O(\sqrt{\max\{q,r\}(2|E| + p)\ln p / n})$ with high probability. This convergence rate matches the results obtained in [17, 16] for GGMs and the results in [10, 26] for the Nonparanormal.

3.2 Analysis of the node-conditional estimator
For the node-conditional estimator (10), we study the rate of convergence of the parameter estimation error $\|\hat\theta_s^n - \theta_s^*\|$ as a function of the sample size $n$. We need Assumption 1 and the following assumption in our analysis.

Assumption 3. For any node $s$, let $S = \mathrm{supp}(\theta_s^*; \mathcal{G})$. There exist constants $\bar\beta > 0$ and $\bar\delta > 0$ such that for any $\theta_s \in \{\|\theta_s - \theta_s^*\| < \bar\delta\}$, the inequality $\Delta_s^\top\nabla^2\bar{\mathcal{L}}(\theta_s; \mathcal{X}_n)\Delta_s \ge \bar\beta\|\Delta_s\|^2$ holds for any $\Delta_s \in \bar{\mathcal{C}}_S := \{\|(\Delta_s)_{\bar S}\|_{2,1} \le 3\|(\Delta_s)_S\|_{2,1}\}$.
The following is our main result on the convergence rate of the node-conditional estimation error $\|\hat\theta_s^n - \theta_s^*\|$. A proof is provided in Appendix A.3 in the supplementary material.

Theorem 2. Assume the conditions in Assumption 1 and Assumption 3 hold. If the sample size $n$ satisfies
$$n > \max\Big\{\frac{6\max\{q,r\}\ln p}{\eta^2\sigma^2},\ \frac{216\, c_0^2\sigma^2\max\{q,r\}\|\theta_s^*\|_{2,0}\ln p}{\bar\beta^2\bar\delta^2}\Big\},$$
then with probability at least $1 - 4\max\{q,r\}\,p^{-2}$, the following inequality holds:
$$\|\hat\theta_s^n - \theta_s^*\| \le 6\, c_0\,\sigma\,\bar\beta^{-1}\sqrt{6\max\{q,r\}\|\theta_s^*\|_{2,0}\ln p / n}.$$

Remark 3. Theorem 2 indicates that with overwhelming probability, the estimation error satisfies $\|\hat\theta_s^n - \theta_s^*\| = O(\sqrt{(d+1)\ln p / n})$, where $d$ is the degree of the underlying graph, i.e., $d = \max_{s \in V}\|\theta_s^*\|_{2,0} - 1$. We may combine the parameter estimation errors from all nodes into a global measure of accuracy. Indeed, by Theorem 2 and a union bound, $\max_{s \in V}\|\hat\theta_s^n - \theta_s^*\| = O(\sqrt{(d+1)\ln p / n})$ holds with probability at least $1 - 4\max\{q,r\}\,p^{-1}$. This estimation error bound matches those for GGMs with neighborhood-selection-type estimators [29].

4 Experiments
This section evaluates the learning performance of AdEFGM. We first investigate graph structure recovery accuracy using simulated data (for which the ground truth is known), and then apply our method to stock price data for inferring statistical dependencies among stocks.

4.1 Monte-Carlo simulation
This is a proof-of-concept experiment. Its purpose is to confirm that when the pairwise interactions of the underlying graphical model are highly nonlinear and unknown a priori, our additive estimator is significantly superior to existing parametric/semi-parametric graphical models for inferring the structure of graphs. The numerical results of AdEFGM reported in this experiment are obtained with the joint MLE estimator (7).

Simulated data. Our simulation study employs a graphical model whose edges are generated independently with probability $P$. We consider the model under different levels of sparsity by adjusting the probability $P$. For simplicity, we assume $f_s(X_s) \equiv 1$ and consider the nonlinear pairwise interaction function $f_{st}(X_s, X_t) = \cos(\pi(X_s - X_t)/5)$. We fit the data to the additive model (4) with a 2-D Fourier basis of size 8. Using Gibbs sampling, we generate a training sample of size $n$ from the true graphical model, and an independent sample of the same size from the same distribution for tuning the strength parameter $\lambda_n$. We compare performance for $n = 200$, varying $p \in \{50, 100, 150, 200, 250, 300\}$ and sparsity levels $P \in \{0.02, 0.05, 0.1\}$, with 10 replications for each configuration.

Baselines. We compare the performance of our estimator to Graphical Lasso [6] as a GGM estimator and SKEPTIC [10] as a Nonparanormal estimator. In our implementation, we use a version of SKEPTIC with Kendall's tau to infer the correlation.

Evaluation metric. To evaluate support recovery performance, we use the standard F-score from the information retrieval literature: the larger the F-score, the better the support recovery. Estimated values over $10^{-3}$ in magnitude are considered nonzero. A minimal sketch of this metric is given below.
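The following is our own illustrative Python code for the support-recovery F-score; the representation of the estimate as per-edge weight groups is an assumption for illustration:

```python
import numpy as np

def support_fscore(theta_hat_edges, true_edges, tol=1e-3):
    """F-score of edge support recovery.

    theta_hat_edges : dict mapping edge (s, t) -> estimated weight vector.
    true_edges      : set of edges (s, t) in the ground-truth graph.
    An edge is declared present if its group norm exceeds tol.
    """
    selected = {e for e, w in theta_hat_edges.items()
                if np.linalg.norm(w) > tol}
    tp = len(selected & set(true_edges))
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    return (0.0 if precision + recall == 0
            else 2 * precision * recall / (precision + recall))
```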
Results. Figure 1 shows the support recovery F-scores of the considered methods on the synthetic data. From this group of results we observe that, by using a 2-D Fourier basis to approximate the unknown cosine distance function, AdEFGM recovers the underlying graph structure more accurately than the other two methods. This advantage is as expected, because AdEFGM is designed to automatically learn the unknown complex pairwise interactions, while GGM and Nonparanormal are restricted to certain UGMs with known sufficient statistics.

[Figure 1: Simulated data: support recovery F-score curves (AdEFGM vs. GGM vs. Nonparanormal, cosine distance interaction) as a function of dimension p. Left panels: P = 0.02; middle panels: P = 0.05; right panels: P = 0.1.]

4.2 Stock price data
We further study the performance of AdEFGM on stock price data. This data set contains the historical prices of S&P500 stocks over 5 years, from January 1, 2008 to January 1, 2013. After removing the stocks with less than 5 years of history, we end up with 465 stocks, each having daily closing prices over 1,260 trading days. The prices are first adjusted for dividends and splits and then used to calculate daily log returns. Each day's return can be represented as a point in $\mathbb{R}^{465}$. To apply AdEFGM to this data, we consider the general model (4) with a 2-D Fourier basis used to approximate the pairwise interaction between stocks $X_s$ and $X_t$. Since the category information of the S&P500 is available, we measure performance by the precision, recall and F-score of the top $k$ links (edges) of the constructed graph. A link is regarded as true if and only if it connects two nodes belonging to the same category. We use the joint MLE estimator for this experiment.

[Figure 2: Stock price data (S&P500): category link precision, recall and F-score curves as functions of the number of links.]

Figure 2 shows the curves of precision, recall and F-score as functions of $k$ over the wide range $[10^3, 10^5]$. It is apparent that AdEFGM significantly outperforms GGM and Nonparanormal at identifying correct category links. This result suggests that the interactions among the S&P500 stocks are highly nonlinear.

5 Conclusions
In this paper, we proposed and analyzed AdEFGM as a generic class of additive undirected graphical models. By expressing node-wise and pairwise sufficient statistics as linear representations over a set of basis statistics, AdEFGM is able to capture complex interactions among variables, which are not uncommon in modern engineering applications. We investigated two types of $\ell_{2,1}$-norm regularized MLE estimators for joint and node-conditional high-dimensional estimation. Based on our theoretical justification and empirical observations, we draw two conclusions: 1) $\ell_{2,1}$-norm regularized AdEFGM learning is a powerful tool for inferring pairwise exponential family graphical models with unknown, arbitrary sufficient statistics; and 2) the extra flexibility gained by AdEFGM comes at almost no cost in statistical and computational efficiency.
Acknowledgments
Xiao-Tong Yuan and Ping Li were partially supported by NSF-Bigdata-1419210, NSF-III-1360971, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137. Xiao-Tong Yuan is also partially supported by NSFC-61402232, NSFC-61522308, and NSFJP-BK20141003. Tong Zhang is supported by NSF-IIS-1407939 and NSF-IIS-1250985. Qingshan Liu is supported by NSFC-61532009. Guangcan Liu is supported by NSFC-61622305, NSFC-61502238 and NSFJP-BK20160040.

References
[1] F. Bach and M. Jordan. Learning graphical models with Mercer kernels. In Proceedings of the 16th Annual Conference on Neural Information Processing Systems (NIPS'02), 2002.
[2] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:485-516, 2008.
[3] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, 2008.
[4] E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59-73, 2011.
[5] A. Dobra and A. Lenkoski. Copula Gaussian graphical models and their application to modeling functional disability data. The Annals of Applied Statistics, 5:969-993, 2011.
[6] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[7] C. Gu, Y. Jeon, and Y. Lin. Nonparametric density estimation in high-dimensions. Statistica Sinica, 23:1131-1153, 2013.
[8] J. Lafferty, H. Liu, and L. Wasserman. Sparse nonparametric graphical models. Statistical Science, 27(4):519-537, 2012.
[9] L. Lin, M. Drton, A. Shojaie, et al. Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806-854, 2016.
[10] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High dimensional semiparametric Gaussian copula graphical models. Annals of Statistics, 40(4):2293-2326, 2012.
[11] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2009.
[12] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436-1462, 2006.
[13] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538-557, 2012.
[14] A. B. Owen. Monte Carlo Theory, Methods and Examples. 2013.
[15] P. Ravikumar, M. Wainwright, and J. Lafferty. High-dimensional Ising model selection using l1-regularized logistic regression. Annals of Statistics, 38(3):1287-1319, 2010.
[16] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935-980, 2011.
[17] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[18] M. Schmidt, N. L. Roux, and F. R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS'11), pages 1458-1466, 2011.
[19] T. P. Speed and H. T. Kiiveri. Gaussian Markov distributions over finite graphs. Annals of Statistics, 14:138-150, 1986.
[20] S. Sun, M. Kolar, and J. Xu. Learning structured densities via infinite dimensional exponential families. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS'15), 2015.
[21] W. Tansey, O. H. M. Padilla, A. S. Suggala, and P. Ravikumar. Vector-space Markov random fields via exponential families. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), pages 684-692, 2015.
[22] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[23] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. CoRR, arXiv:1011.3027, 2011.
[24] A. Voorman, A. Shojaie, and D. Witten. Graph estimation with joint additive models. Biometrika, 101(1):85-101, 2014.
[25] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[26] L. Xue and H. Zou. Regularized rank-based estimation of high-dimensional nonparanormal graphical models. Annals of Statistics, 40(5):2541-2571, 2012.
[27] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via univariate exponential family distributions. Journal of Machine Learning Research, 16:3813-3847, 2015.
[28] Z. Yang, Y. Ning, and H. Liu. On semiparametric exponential family graphical models. arXiv preprint arXiv:1412.8697, 2014.
[29] M. Yuan. High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 11:2261-2286, 2010.
[30] C.-H. Zhang and T. Zhang. A general framework of dual certificate analysis for structured sparse recovery problems. CoRR, arXiv:1201.3302, 2012.
Observational-Interventional Priors for Dose-Response Learning

Ricardo Silva
Department of Statistical Science and Centre for Computational Statistics and Machine Learning, University College London
ricardo@stats.ucl.ac.uk

Abstract
Controlled interventions provide the most direct source of information for learning causal effects. In particular, a dose-response curve can be learned by varying the treatment level and observing the corresponding outcomes. However, interventions can be expensive and time-consuming. Observational data, where the treatment is not controlled by a known mechanism, is sometimes available. Under some strong assumptions, observational data allows for the estimation of dose-response curves. Estimating such curves nonparametrically is hard: sample sizes for controlled interventions may be small, while in the observational case a large number of measured confounders may need to be marginalized. In this paper, we introduce a hierarchical Gaussian process prior that constructs a distribution over the dose-response curve by learning from observational data, and reshapes the distribution with a nonparametric affine transform learned from controlled interventions. This function composition from different sources is shown to speed up learning, which we demonstrate with a thorough sensitivity analysis and an application to modeling the effect of therapy on the cognitive skills of premature infants.

1 Contribution
We introduce a new solution to the problem of learning how an outcome variable Y varies under different levels of a control variable X that is manipulated. This is done by coupling different Gaussian process priors that combine observational and interventional data. The method outperforms estimates given by using only observational or only interventional data in a variety of scenarios, and provides an alternative way of interpreting related methods in the design of computer experiments.

Many problems in causal inference [14] consist of having a treatment variable X and an outcome Y, and estimating how Y varies as we control X at different levels. If we have data from a randomized controlled trial, where X and Y are not confounded, many standard modeling approaches can be used to learn the relationship between X and Y. If X and Y are measured in an observational study, the corresponding data can be used to estimate the association between X and Y, but this may not be the same as the causal relationship of these two variables, because of possible confounders. To distinguish between the observational regime (where X is not controlled) and the interventional regime (where X is controlled), we adopt the causal graphical framework of [16] and [19]. In Figure 1 we illustrate the different regimes using causal graphical models. We use $p(\cdot \mid \cdot)$ to denote (conditional) density or probability mass functions. In Figure 1(a) we have the observational, or "natural," regime, where common causes Z generate both the treatment variable X and the outcome variable Y. While the conditional distribution $p(Y = y \mid X = x)$ can be learned from this data, this quantity is not the same as $p(Y = y \mid do(X = x))$: the latter notation, due to Pearl [16], denotes a regime where X is not random, but a quantity set by an intervention performed by an external agent.
[Figure 1: Graphs representing causal graphical models. Circles represent random variables, squares represent fixed constants. (a) A system where Z is a set of common causes (confounders), the common parents of X and Y, here represented as a single vertex. (b) An intervention overrides the value of X, setting it to some constant; the rest of the system remains invariant. (c) $Z_O$ is not a common cause of X and Y, but blocks the influence of confounder $Z_H$.]

The relation between these regimes comes from fundamental invariance assumptions: when X is intervened upon, "all other things are equal," and this invariance is reflected by the fact that the models in Figure 1(a) and Figure 1(b) share the same conditional distribution $p(Y = y \mid X = x, Z = z)$ and marginal distribution $p(Z = z)$. If we observe Z, $p(Y = y \mid do(X = x))$ can be learned from observational data, as we explain in the next section. Our goal is to learn the relationship

$$f(x) \equiv \mathbb{E}[Y \mid do(X = x)], \qquad x \in \mathcal{X}, \tag{1}$$

where $\mathcal{X} \equiv \{x_1, x_2, \ldots, x_T\}$ is a pre-defined set of treatment levels. We call the vector $f(\mathcal{X}) \equiv [f(x_1); \ldots; f(x_T)]^\top$ the response curve for the "doses" $\mathcal{X}$. Although the term "dose" is typically associated with the medical domain, we adopt the term dose-response learning here in its more general sense: estimating the causal effect of a treatment on an outcome across different (quantitative) levels of treatment. We assume that the causal structure information is known, complementing approaches for structure learning [19, 9] by tackling the quantitative side of causal prediction. In Section 2, we provide the basic notation of our setup. Section 3 describes our model family. Section 4 provides a thorough set of experiments assessing our approach, including its sensitivity to model misspecification. We provide final conclusions in Section 5.

2 Background
The target estimand $p(Y = y \mid do(X = x))$ can be derived from the structural assumptions of Figure 1(b) by standard conditioning and marginalization operations:

$$p(Y = y \mid do(X = x)) = \int p(Y = y \mid X = x, Z = z)\, p(Z = z)\, dz. \tag{2}$$

Notice the important difference between the above and $p(Y = y \mid X = x)$, which can be derived from the assumptions in Figure 1(a) by marginalizing over $p(Z = z \mid X = x)$ instead. The observational and interventional distributions can be very different. The above formula is sometimes known as the back-door adjustment [16], and it does not require measuring all common causes of treatment and outcome: it suffices that we measure variables Z that block all "back-door paths" between X and Y, a role played by $Z_O$ in Figure 1(c). A formal description of which variables Z validate (2) is given by [20, 16, 19]. We assume that the selection of which variables Z to adjust for has been decided prior to our analysis, although in the experiments of Section 4 we assess the behavior of our method under model misspecification. Our task is to estimate (1) nonparametrically given observational and experimental data, assuming that Z satisfies the back-door criterion. One possibility for estimating (1) from observational data $\mathcal{D}_{obs} \equiv \{(Y^{(i)}, X^{(i)}, Z^{(i)})\}$, $1 \le i \le N$, is to first estimate $g(x, z) \equiv \mathbb{E}[Y \mid X = x, Z = z]$. The resulting estimator,

$$\hat f(x) \equiv \frac{1}{N}\sum_{i=1}^{N} \hat g(x, z^{(i)}), \tag{3}$$

is consistent under some general assumptions on $f(\cdot)$ and $g(\cdot,\cdot)$. Estimating $g(\cdot,\cdot)$ nonparametrically seems daunting, since Z can in principle be high-dimensional. However, as shown by [5], under some conditions the problem of estimating $\hat f(\cdot)$ nonparametrically via (3) is no harder than a one-dimensional nonparametric regression problem.
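As a concrete illustration, the plug-in estimator (3) is essentially a one-liner once a regression fit $\hat g$ is available. The sketch below is our own simplification rather than the paper's implementation; `g_hat` stands for any fitted regression, e.g. the posterior mean of a Gaussian process:

```python
import numpy as np

def backdoor_dose_response(g_hat, doses, Z_obs):
    """Back-door plug-in estimate f_hat(x) = (1/N) sum_i g_hat(x, z_i).

    g_hat : callable (x, z) -> estimate of E[Y | X = x, Z = z].
    doses : iterable of treatment levels x in the design set.
    Z_obs : (N, d) array of observed confounder values.
    """
    return np.array([np.mean([g_hat(x, z) for z in Z_obs]) for x in doses])
```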
There is, however, one main catch: while observational data can be used to choose the level of regularization for $\hat g(\cdot)$, this is not likely to be an optimal choice for $\hat f(\cdot)$ itself. Nevertheless, even with suboptimal smoothing, the use of nonparametric methods for estimating causal effects by back-door adjustment has been successful. For instance, [7] uses Bayesian classification and regression trees for this task. Although of practical use, there are shortcomings in this idea even under the assumption that Z provides a correct back-door adjustment. In particular, Bayesian measures of uncertainty should be interpreted with care: a fully Bayesian equivalent of (3) would require integrating over a model for p(Z) instead of the empirical distribution of Z in $\mathcal{D}_{obs}$; and evaluating a dose x might require combining many $g(x, z^{(i)})$ whose corresponding training measurements $x^{(i)}$ are far from x, resulting in possibly unreliable extrapolations with poorly calibrated credible intervals. While there are well-established approaches for dealing with this "lack of overlap" problem for binary treatments or linear responses [18, 8], it is less clear what to do in the continuous case with nonlinear responses.

In this paper, we focus on a setup where it is possible to collect interventional data such that treatments are controlled, but where sample sizes might be limited due to financial and time costs. This is related to the design of computer experiments, where (cheap, but biased) computer simulations are combined with field experiments [2, 6]. The key idea of combining two sources of data is very generic; the value of new methods lies in the design of adequate prior families. For instance, if computer simulations are noisy, it may not be clear how uncertainty at that level should be modeled. We leverage knowledge of adjustment techniques for causal inference, so that they provide a partially automated recipe for transforming observational data into informed priors. We also leverage knowledge of the practical shortcomings of the nonparametric adjustment (3) so that, unlike the biased but low-variance setup of computer experiments, we try to improve the (theoretically) unbiased but possibly oversmoothed structure of such estimators by introducing a layer of pointwise affine transformations.

Heterogeneous effects and stratification. One might ask why we marginalize Z in (2), as it might be of greater interest to understand effects at finer subpopulation levels conditioned on Z. In fact, (2) should be seen as the most general case, where conditioning on a subset of covariates (for instance, gender) will provide the possibly different average causal effect for each given stratum (different levels of gender), marginalized over the remaining covariates. Randomized fine-grained effects might be hard to estimate and require stronger smoothing and extrapolation assumptions, but in principle they could be integrated with the approaches discussed here. In practice, in causal inference we are generally interested in marginal effects for some subpopulations where many covariates might not be practically measurable at decision time, and in the scientific goal of understanding total effects [5] at different levels of granularity under weaker assumptions.

3 Hierarchical Priors via Inherited Smoothing and Local Affine Changes
The main idea is to first learn, from observational data, a Gaussian process over dose-response curves, and then compose it with a nonlinear transformation biased toward the identity function.
The fundamental innovation is the construction of a nonstationary covariance function from observational data.

3.1 Two-layered Priors for Dose-responses
Given an observational dataset $\mathcal{D}_{obs}$ of size N, we fit a Gaussian process to learn a regression model of the outcome Y on the (uncontrolled) treatment X and covariates Z. A Gaussian likelihood for Y given X and Z is adopted, with conditional mean $g(x, z)$ and variance $\sigma_g^2$. A Matérn 3/2 covariance function with automatic relevance determination priors is given to $g(\cdot,\cdot)$, followed by marginal maximum likelihood to estimate $\sigma_g^2$ and the covariance hyperparameters [12, 17]. This provides a posterior distribution over functions $g(\cdot,\cdot)$ in the input space of X and Z. We then define $f_{obs}(\mathcal{X})$, $x \in \mathcal{X}$, as

$$f_{obs}(x) \equiv \frac{1}{N}\sum_{i=1}^{N} g(x, z^{(i)}), \tag{4}$$

where the set $\{g(x, z^{(i)})\}$ is unknown.
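Since (4) is a linear functional of the Gaussian process values, its first two moments follow by applying an averaging matrix to the joint GP predictive. A minimal sketch (our own code; the ordering convention of the $TN$ prediction points is an assumption for illustration):

```python
import numpy as np

def fobs_moments(mu_g, K_g, T, N):
    """Mean and covariance of f_obs(X) in (4) from the GP joint predictive.

    mu_g : length T*N vector of predictive means of g at all (x_t, z_i)
           pairs, ordered so that entry t*N + i corresponds to (x_t, z_i).
    K_g  : (T*N, T*N) predictive covariance over the same points.
    Averaging over i is the linear map A with A[t, t*N:(t+1)*N] = 1/N,
    so f_obs(X) ~ N(A mu_g, A K_g A^T).
    """
    A = np.zeros((T, T * N))
    for t in range(T):
        A[t, t * N:(t + 1) * N] = 1.0 / N
    return A @ mu_g, A @ K_g @ A.T
```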
min(?obs (X )) maxx0 Kobs (x0 , x0 ) (7) Equation (6) is designed to borrow information from the (estimated) smoothness of f (X ), by decreasing the correlation of the distortion factors a(x) and a(x0 ) as a function of the Euclidean distance between the 2D points (x, ?obs (x)) and (x0 , ?obs (x0 )), properly scaled. Hyperparameter ?a controls how this distance is weighted. (6) also captures information about the amplitude of the distortion signal, making it proportional to the ratios of the diagonal entries of Kobs (X ). Hyperparameter ?a controls how this amplitude is globally adjusted. Nugget 10?5 brings stability to the sampling of a(X ) within Markov chain Monte Carlo (MCMC) inference. Hyper-hyperpriors on ?a and ?a are set as log(?a ) ? N (0, 0.5), log(?a ) ? N (0, 0.1). (8) That is, ?a follows a log-Normal distribution with median 1, approximately 90% of the mass below 2.5, and a long tail to the right. The implied distribution for a(x) where sx = 1 will have most of its mass within a factor of 10 from its median. The prior on ?a follows a similar shape, but with a narrower allocation of mass. Covariance matrix Kb is defined in the same way, with its own 2 hyperparameters ?b and ?b . Finally, the usual Jeffrey?s prior for error variances is given to ?int . Figure 2 shows an example of inference obtained from synthetic data, generated according to the protocol of Section 4. In this example, the observational relationship between X and Y has the opposite association of the true causal one, but after adjusting for 15 of the 25 confounders that 4 Observational data (N = 1000) Interventional data (M = 200) 4 4 3 3 2 2 K obs 2 Outcome Y Outcome Y 4 1 0 ?1 6 1 8 10 0 12 ?1 14 ?2 ?2 ?3 ?3 16 ?4 ?4 ?3 ?2 ?1 0 1 2 ?4 3 18 20 ?4 ?3 ?2 Treatment X 0 1 2 3 2 Prior: distortion only 3 8 2.5 2 6 1.5 1 0.5 4 2 0 0 ?4 ?0.5 0 14 16 18 20 1 ?2 ?1 12 0.5 0 ?2 10 1.5 ?0.5 ?3 8 2 Outcome Y Distortion H 10 ?4 6 Prior on dose?response 3 2.5 ?1 4 Treatment X Prior: observational only Outcome Y ?1 1 2 ?6 3 ?4 ?3 ?2 Treatment X ?1 0 1 2 ?1 3 ?4 ?3 ?2 Treatment X Posterior: observational only Posterior: distortion only 3 ?1 0 1 2 3 Treatment X Posterior on dose?response 4 3 2.5 2.5 3 1.5 1 0.5 Outcome Y 2 Outcome Y Outcome Y 2 2 1 0 0 1.5 1 0.5 0 ?1 ?0.5 ?1 ?0.5 ?4 ?3 ?2 ?1 0 Treatment X 1 2 3 ?2 ?4 ?3 ?2 ?1 0 1 Treatment X 2 3 ?1 ?4 ?3 ?2 ?1 0 1 2 3 Treatment X Figure 2: An example with synthetic data (|Z| = 25), from priors to posteriors. Figure best seen in color. Top row: scatterplot of observational data, with true dose-response function in solid green, adjusted ?obs in dashed red, and the unadjusted Gaussian process regression of Y on X in dashedand-circle magenta (which is a very badly biased estimate in this example); scatterplot in the middle shows interventional data, 20 dose levels uniformly spread in the support of the observational data and 10 outputs per level ? notice that the sign of the association is the opposite of the observational regime; matrix Kobs is depicted at the end, where the nonstationarity of the process is evident. Middle row: priors constructed on fobs (X ) and a(X ) with respective means; plot at the end corresponds to the implied prior on a fobs + b. Bottom row: the respective posteriors obtained by Gibbs sampling. generated the data (10 confounders are randomly ignored to mimic imperfect prior knowledge), a reasonable initial estimate for f (X ) is obtained. 
Figure 2 shows an example of inference obtained from synthetic data, generated according to the protocol of Section 4. In this example, the observational relationship between X and Y has the opposite association to the true causal one, but after adjusting for 15 of the 25 confounders that generated the data (10 confounders are randomly ignored to mimic imperfect prior knowledge), a reasonable initial estimate for $f(\mathcal{X})$ is obtained.

[Figure 2: An example with synthetic data (|Z| = 25), from priors to posteriors. Top row: a scatterplot of observational data, with the true dose-response function, the adjusted $\mu_{obs}$, and the unadjusted Gaussian process regression of Y on X (a very badly biased estimate in this example); a scatterplot of interventional data, with 20 dose levels uniformly spread over the support of the observational data and 10 outputs per level (notice that the sign of the association is the opposite of the observational regime); and the matrix $K_{obs}$, where the nonstationarity of the process is evident. Middle row: priors constructed on $f_{obs}(\mathcal{X})$ and $a(\mathcal{X})$ with their respective means, and the implied prior on $a \odot f_{obs} + b$. Bottom row: the respective posteriors obtained by Gibbs sampling.]

The combination with interventional data results in a much better fit, but imperfections still remain at the strongest levels of treatment: the true curve drops at x > 2 more strongly than the expected posterior mean. This is due to having both a prior derived from observational data that got the wrong direction of the dose-response curve at x > 1.5, and being unlucky in drawing several higher-than-expected values in the interventional regime at x = 3. The model thus shows its strength in capturing much of the structure of the true dose-response curve even under misspecified adjustments, but the example is a warning that only so much can be done given unlucky draws from a small interventional dataset.

3.3 Inference, Stratified Learning and Active Learning
In our experiments, we infer posterior distributions by Gibbs sampling, alternating between sampling the latent variables $f(\mathcal{X})$, $a(\mathcal{X})$, $b(\mathcal{X})$ and the hyperparameters $\eta_a, \lambda_a, \eta_b, \lambda_b, \sigma_{int}^2$, using slice sampling [15] for the hyperparameters.
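For concreteness, a single joint draw from the generative side of model (5), the object that these Gibbs sweeps alternate over, can be sketched as follows (our own illustrative code, not the paper's implementation); draws of this kind are what the prior panels of Figure 2 summarize:

```python
import numpy as np

def sample_prior_dose_response(mu_obs, K_obs, K_a, K_b, rng=None):
    """One prior draw of f(X) = a(X) * f_obs(X) + b(X) under model (5)."""
    rng = np.random.default_rng() if rng is None else rng
    f_obs = rng.multivariate_normal(mu_obs, K_obs)
    a = rng.multivariate_normal(np.ones_like(mu_obs), K_a)   # distortion
    b = rng.multivariate_normal(np.zeros_like(mu_obs), K_b)  # translation
    return a * f_obs + b
```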
Interventional data is generated at three different levels of sample size, M = 40, 100 and 200 where the intervention space X is evenly distributed within the range shown by the observational data, with |X | = 20. Covariates Z are generated from a zero-mean, unit variance Gaussian with correlation of 0.5 for all pairs. Treatment X is generated by first sampling a function fi (zi ) for every covariate from a Gaussian process, summing over 1 ? i ? 25 and adding Gaussian noise. Outcome Y is generated by first sampling linear coefficients and one intercept to weight the contribution of confounders Z, and then passing the linear combination through a quadratic function. The dose-response function of X on Y is generated as a polynomial, which is added to the contribution of Z and a Gaussian error. In this way, it is easy to obtain the dose-response function analytically. Besides varying M , we vary the setup in three other aspects: first, the dose-response is either a quadratic or cubic polynomial; second, the contribution of X is scaled to have its minimum and maximum value spam either 50% or 80% of the range of all other causes of Y , including the Gaussian noise (a spam of 50% already generates functions of modest impact to the total variability of Y ); third, the actual data given to the algorithm contains only 15 of the 25 confounders. We either discard 10 confounders uniformly at random (the R ANDOM setup), or remove the ?top 10 strongest? confounders, as measured by how little confounding remains after adjusting for that single covariate alone (the A DVERSARIAL setup). In the interest of space, we provide a fully detailed description of the experimental setup in the Supplementary Material. Code is also provided to regenerate our data and re-run all of these experiments. Evaluation is done in two ways. First, by the normalized absolute difference between an estimate f?(x) and the true f (x), averaged over X . The normalization is done by dividing the difference by the gap between the maximum and minimum true values of f (X ) within each simulated problem1 . The second measure is the log density of each true f (x), averaged over x ? X , according to the inferred posterior distribution approximated as a Gaussian distribution, with mean and variance estimated by MCMC. We compare our method against: I. a variation of it where a and b are fixed at 1 and 0, so the only randomness is in fobs ; II. instead of an affine transformation, we set f (X ) = fobs (X ) + r(X ), 1 Data is also normalized to a zero mean, unit variance according to the empirical mean and variance of the observational data, in order to reduce variability across studies. 6 Table 1: For each experiment, we have either quadratic (Q) or cubic (C) ground truth, with a signal range of 50% or 80%, and an interventional sample size of M = 40, 100 and 200. Ei denotes the difference between competitor i and our method regarding mean error, see text for a description of competitors. The mean absolute error for our method is approximately 0.20 for M = 40 and 0.10 for M = 200 across scenarios. Li denotes the difference between our method and competitor i regarding log-likelihood (differences > 10 are ignored, see text). That is, positive values indicate our method is better according to the corresponding criterion. All results are averages over 50 independent simulations, italics indicate statistically significant differences by a two-tailed t-test at level ? = 0.05. 
Table 1: For each experiment, we have either quadratic (Q) or cubic (C) ground truth, with a signal range of 50% or 80%, and an interventional sample size of M = 40, 100 and 200. E_i denotes the difference between competitor i and our method regarding mean error, see text for a description of competitors. The mean absolute error for our method is approximately 0.20 for M = 40 and 0.10 for M = 200 across scenarios. L_i denotes the difference between our method and competitor i regarding log-likelihood (differences > 10 are ignored, see text). That is, positive values indicate our method is better according to the corresponding criterion. All results are averages over 50 independent simulations; italics indicate statistically significant differences by a two-tailed t-test at level α = 0.05.

            Q50% RANDOM         C50% RANDOM         Q50% ADV            Q80% RANDOM
      M =   40    100   200     40    100   200     40    100   200     40    100   200
      E_I   0.00  0.02  0.01    0.01  0.02  0.03    0.07  0.07  0.05    0.00  0.00  0.01
      E_II  0.05  0.02  0.01    0.05  0.03  0.02    0.04  0.00  0.00    0.04  0.03  0.02
      E_III 0.11  0.07  0.03    0.08  0.04  0.04    0.05  0.01  0.01    0.11  0.06  0.03
      L_I   2.33  2.31  2.18    >10   >10   >10     7.16  6.68  6.23    0.62  0.53  0.45
      L_II  0.78  0.28  0.17    3.49  0.83  0.41    0.44  -0.17 -0.16   0.53  0.42  0.20
      L_III >10   >10   0.43    >10   >10   >10     >10   >10   -0.06   0.74  0.44  0.36

            C50% ADV            C80% RANDOM         Q80% ADV            C80% ADV
      M =   40    100   200     40    100   200     40    100   200     40    100   200
      E_I   0.08  0.08  0.07    0.03  0.05  0.05    0.05  0.04  0.03    0.09  0.09  0.08
      E_II  0.05  0.02  0.01    0.05  0.03  0.02    0.04  0.02  0.00    0.07  0.03  0.02
      E_III 0.03  0.04  0.02    0.11  0.06  0.03    0.08  0.03  0.01    0.09  0.05  0.02
      L_I   9.62  9.05  8.68    >10   >10   >10     2.16  1.79  1.50    >10   >10   >10
      L_II  4.45  0.43  -0.10   1.07  0.64  -0.04   0.25  0.07  -0.09   0.96  0.30  0.14
      L_III >10   >10   >10     >10   0.79  0.03    0.33  -0.01 -0.10   0.45  0.18  -0.03

Results are shown in Table 1, according to the two assessment criteria, using E for average absolute error and L for average log-likelihood. Our method demonstrated robustness to varying degrees of unmeasured confounding. Compared to Competitor I, the mean obtained without any further affine transformation already provides a competitive estimator of f(X), but this suffers when unmeasured confounding is stronger (ADVERSARIAL setup). Moreover, uncertainty estimates given by Competitor I tend to be overconfident. Competitor II does not make use of our special covariance function for the correction, and tends to be particularly weak against our method at lower interventional sample sizes. Along the same lines, our advantage over Competitor III starts stronger at M = 40 and diminishes as expected when M increases. Competitor III is particularly bad at lower signal-to-noise-ratio problems, where sometimes it is overly confident that f(X) is zero everywhere (hence, we ignore large likelihood discrepancies in our evaluation). This suggests that in order to learn specialized curves for particular subpopulations, where M will invariably be small, an end-to-end model for observational and interventional data might be essential.
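Both assessment criteria are easy to state in code; here is a sketch with our own names, where `post_mean` and `post_var` stand for MCMC estimates of the posterior mean and variance of f at each treatment level:

```python
import numpy as np
from scipy.stats import norm

def normalized_abs_error(f_hat, f_true):
    """Mean |f_hat - f_true| over the treatment grid, divided by the spread
    of the true curve within the simulated problem (the E criterion)."""
    gap = f_true.max() - f_true.min()
    return np.mean(np.abs(f_hat - f_true)) / gap

def avg_log_density(f_true, post_mean, post_var):
    """Average log density of the true f(x) under a Gaussian approximation
    of the posterior, moments estimated by MCMC (the L criterion)."""
    return np.mean(norm.logpdf(f_true, loc=post_mean, scale=np.sqrt(post_var)))
```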
4.2 Case Study

We consider an adaptation of the study analyzed by [7]. Targeted at premature infants with low birth weight, the Infant Health and Development Program (IHDP) was a study of the efficacy of "educational and family support services and pediatric follow-up offered during the first 3 years of life" [3]. The study originally randomized infants into those that received treatment and those that did not. The outcome variable was an IQ test applied when infants reached 3 years. Within those which received treatment, there was a range of number of days of treatment. That dose level was not randomized, and again we do not have ground truth for the dose-response curve.

For our assessment, we fit a dose-response curve using Gaussian processes with a Gaussian likelihood function and the back-door adjustment (3) on the available covariates (a sketch of this construction appears at the end of this subsection). We then use the model to generate independent synthetic "interventional data." Measured covariates include birth weight, sex, whether the mother smoked during pregnancy, among other factors detailed by [7, 3]. The Supplementary Material goes into detail about the preprocessing, including R/MATLAB scripts to generate the data.

Figure 3: An illustration of a problem generated from a model fitted to real data. That is, we generated data from "interventions" simulated from a model that was fitted to an actual study on premature infant development [3], where the dose is the number of days that an infant is assigned to follow a development program and the outcome is an IQ test at age 3. (a) Posterior distribution for the stratum of infants whose mothers had up to some high school education, but no college. The red curve is the posterior mean of our method, and the blue curve the result of a Gaussian process fit with interventional data only. (b) Posterior distributions for the infants whose mothers had (some) college education. (c) The combined strata. [Panels (a)-(c) plot Outcome Y against Treatment X, 0 to 450 days.]

The observational sample contained 347 individuals (corresponding only to those which were eligible for treatment and had no missing outcome variable) and 21 covariates. This sample included 243 infants whose mother attended (some) high school but not college, and 104 with at least some college. We generated 100 synthetic interventional datasets stratified by mother's education, (some) high school vs. (some) college. 19 treatment levels were pre-selected (0 to 450 days, increments of 25 days). All variables were standardized to zero mean and unit standard deviation according to the observational distribution per stratum. Two representative simulated studies are shown in Figure 3, depicting dose-response curves which have modest evidence of non-linearity, and differ in range per stratum.²

² We do not claim that these curves represent the true dose-response curves: confounders are very likely to exist, as the dose level was not decided at the beginning of the trial and is likely to have been changed "on the fly" as the infant responded. It is plausible that our covariates cannot reliably account for this feedback effect.

On average, our method improved over the fitting of a Gaussian process with squared exponential covariance function that was given interventional data only. According to the average normalized absolute differences, the improvement was 0.06, 0.07 and 0.08 for the high school, college and combined data, respectively (where error was reduced in 82%, 89% and 91% of the runs, respectively), in each of which 10 interventional samples were simulated per treatment level per stratum.
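The observational fit behind these synthetic interventions follows the back-door adjustment quoted in Section 3.3, f(x) = ∫ g(x, z)p(z) dz, with p(z) replaced by the empirical distribution of the covariates. A minimal Monte Carlo sketch, with a generic GP regression standing in for the fitted ĝ (function and variable names are ours, not the paper's R/MATLAB code):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def backdoor_curve(x_grid, X_obs, Z_obs, Y_obs):
    """Back-door estimate f(x) ~ (1/N) sum_i g_hat(x, z_i), where g_hat is a
    GP regression of Y on (X, Z) fit to the observational sample."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    gp.fit(np.column_stack([X_obs, Z_obs]), Y_obs)
    curve = []
    for x in x_grid:
        inputs = np.column_stack([np.full(len(Z_obs), x), Z_obs])
        curve.append(gp.predict(inputs).mean())  # average over empirical p(z)
    return np.array(curve)
```

A per-stratum version simply restricts (X_obs, Z_obs, Y_obs) to the rows of each education stratum before fitting.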
5 Conclusion

We introduced a simple, principled way of combining observational and interventional measurements and assessed its accuracy and robustness. In particular, we emphasized robustness to model misspecification, and we performed sensitivity analysis to assess the importance of each individual component of our prior, contrasted to off-the-shelf solutions that can be found in related domains [2]. We are aware that many practical problems remain. For instance, we have not discussed at all the important issue of sample selection bias, where volunteers for an interventional study might not come from the same p(Z) distribution as in the observational study. Worse, neither the observational nor the interventional data might come from the population in which we want to enforce a policy learned from the combined data. While these essential issues were ignored, our method can in principle be combined with ways of assessing and correcting for sample selection bias [1]. Moreover, if unmeasured confounding is too strong, one cannot expect to do well. Methods for sensitivity analysis of confounding assumptions [13] can be integrated with our framework. A more thorough analysis of active learning using our approach, particularly in the light of possible model misspecification, is needed, as our results in the Supplementary Material only superficially cover this aspect.

Acknowledgments

Thanks to Jennifer Hill for clarifications about the IHDP data, and Robert Gramacy for several useful discussions.

References

[1] E. Bareinboim and J. Pearl. Causal inference from Big Data: Theoretical foundations and the data-fusion problem. Proceedings of the National Academy of Sciences, in press, 2016.
[2] M. J. Bayarri, J. O. Berger, R. Paulo, J. Sacks, J. A. Cafeo, J. Cavendish, C.-H. Lin, and J. Tu. A framework for validation of computer models. Technometrics, 49:138-154, 2007.
[3] J. Brooks-Gunn, F. Liaw, and P. Klebanov. Effects of early intervention on cognitive function of low birth weight preterm infants. Journal of Pediatrics, 120:350-359, 1991.
[4] A. Damianou and N. D. Lawrence. Deep Gaussian processes. Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 207-215, 2013.
[5] J. Ernest and P. Bühlmann. Marginal integration for nonparametric causal inference. Electronic Journal of Statistics, 9:3155-3194, 2015.
[6] R. Gramacy and H. K. Lee. Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103:1119-1130, 2008.
[7] J. Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20:217-240, 2011.
[8] J. Hill and Y.-S. Su. Assessing lack of common support in causal inference using Bayesian nonparametrics: Implications for evaluating the effect of breastfeeding on children's cognitive outcomes. The Annals of Applied Statistics, 7:1386-1420, 2013.
[9] A. Hyttinen, F. Eberhardt, and P. O. Hoyer. Experiment selection for causal discovery. Journal of Machine Learning Research, 14:3041-3071, 2013.
[10] F. Liu, M. J. Bayarri, and J. O. Berger. Modularization in Bayesian analysis, with emphasis on analysis of computer models. Bayesian Analysis, 4:119-150, 2009.
[11] D. J. C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4:590-604, 1992.
[12] D. J. C. MacKay. Bayesian non-linear modelling for the prediction competition. ASHRAE Transactions, 100:1053-1062, 1994.
[13] L. C. McCandless, P. Gustafson, and A. R. Levy. Bayesian sensitivity analysis for unmeasured confounding in observational studies.
Statistics in Medicine, 26:2331-2347, 2007.
[14] S. L. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press, 2014.
[15] R. Neal. Slice sampling. The Annals of Statistics, 31:705-767, 2003.
[16] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
[17] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] J. Robins, M. Sued, Q. Lei-Gomez, and A. Rotnitzky. Comment: Performance of double-robust estimators when "inverse probability" weights are highly variable. Statistical Science, 22:544-559, 2007.
[19] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. Cambridge University Press, 2000.
[20] T. VanderWeele and I. Shpitser. A new criterion for confounder selection. Biometrics, 64:1406-1413, 2011.
Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering

Christina E. Lee, Yihua Li, Devavrat Shah, Dogyoon Song
Laboratory for Information and Decision Systems
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
{celee, liyihua, devavrat, dgsong}@mit.edu

Abstract

We introduce the framework of blind regression motivated by matrix completion for recommendation systems: given m users, n movies, and a subset of user-movie ratings, the goal is to predict the unobserved user-movie ratings given the data, i.e., to complete the partially observed matrix. Following the framework of nonparametric statistics, we posit that user u and movie i have features x1(u) and x2(i) respectively, and their corresponding rating y(u, i) is a noisy measurement of f(x1(u), x2(i)) for some unknown function f. In contrast with classical regression, the features x = (x1(u), x2(i)) are not observed, making it challenging to apply standard regression methods to predict the unobserved ratings. Inspired by the classical Taylor's expansion for differentiable functions, we provide a prediction algorithm that is consistent for all Lipschitz functions. In fact, the analysis through our framework naturally leads to a variant of collaborative filtering, shedding insight into the widespread success of collaborative filtering in practice. Assuming each entry is sampled independently with probability at least max(m^{-1+δ}, n^{-1/2+δ}) with δ > 0, we prove that the expected fraction of our estimates with error greater than ε is less than σ²/ε² plus a polynomially decaying term, where σ² is the variance of the additive entry-wise noise term. Experiments with the MovieLens and Netflix datasets suggest that our algorithm provides principled improvements over basic collaborative filtering and is competitive with matrix factorization methods.

1 Introduction

In this paper, we provide a statistical framework for performing nonparametric regression over latent variable models. We are initially motivated by the problem of matrix completion arising in the context of designing recommendation systems. In the popularized setting of Netflix, there are m users, indexed by u ∈ [m], and n movies, indexed by i ∈ [n]. Each user u has a rating for each movie i, denoted as y(u, i). The system observes ratings for only a small fraction of user-movie pairs. The goal is to predict ratings for the rest of the unknown user-movie pairs, i.e., to complete the partially observed m × n rating matrix.

To be able to obtain meaningful predictions from the partially observed matrix, it is essential to impose a structure on the data. We assume each user u and movie i is associated to features x1(u) ∈ X1 and x2(i) ∈ X2 for some compact metric spaces X1, X2 equipped with Borel probability measures. Following the philosophy of non-parametric statistics, we assume that there exists some function f : X1 × X2 → R such that the rating of user u for movie i is given by

    y(u, i) = f(x1(u), x2(i)) + η_{ui},    (1)

where η_{ui} is some independent bounded noise. We observe ratings for a subset of the user-movie pairs, and the goal is to use the given data to predict f(x1(u), x2(i)) for all (u, i) ∈ [m] × [n] whose rating is unknown.
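To make the model concrete, here is a toy instantiation of (1); the particular f, feature distributions, and noise scale are placeholders chosen only so the snippet runs, since the whole point of blind regression is that none of them are observed.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 500, 400, 0.2
x1 = rng.uniform(size=m)                 # latent user features in X1 = [0, 1]
x2 = rng.uniform(size=n)                 # latent movie features in X2 = [0, 1]
f = lambda a, b: np.sin(3 * a) + a * b   # some Lipschitz f, never revealed
Y = f(x1[:, None], x2[None, :]) + 0.1 * rng.normal(size=(m, n))
M = rng.uniform(size=(m, n)) < p         # mask of observed entries
```

An algorithm only ever sees Y where M is true, together with the indices (u, i).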
In classical nonparametric regression, we observe input features x1(u), x2(i) along with the rating y(u, i) for each datapoint, and thus we can approximate the function f well using local approximation techniques as long as f satisfies mild regularity conditions. However, in our setting, we do not observe the latent features x1(u), x2(i), but instead we only observe the indices (u, i). Therefore, we use blind regression to refer to the challenge of performing regression with unobserved latent input variables. This paper addresses the question, does there exist a meaningful prediction algorithm for general nonparametric regression when the input features are unobserved?

Related Literature. Matrix completion has received enormous attention in the past decade. Matrix factorization based approaches, such as low-rank approximation, and neighborhood based approaches, such as collaborative filtering, have been the primary ways to address the problem. In recent years, there has been exciting intellectual development in the context of matrix factorization based approaches. Since any matrix can be factorized, its entries can be described by a function f in (1) with the form f(x1, x2) = x1ᵀx2, and the goal of factorization is to recover the latent features for each row and column. [25] was one of the earlier works to suggest the use of low-rank matrix approximation, observing that a low-rank matrix has a comparatively small number of free parameters. Subsequently, statistically efficient approaches were suggested using optimization based estimators, proving that matrix factorization can fill in the missing entries with sample complexity as low as rn log n, where r is the rank of the matrix [5, 23, 11, 21, 10]. There has been an exciting line of ongoing work to make the resulting algorithms faster and scalable [7, 17, 4, 15, 24, 20].

Many of these approaches are based on the structural assumption that the underlying matrix is low-rank and the matrix entries are reasonably "incoherent." Unfortunately, the low-rank assumption may not hold in practice. The recent work [8] makes precisely this observation, showing that a simple non-linear, monotonic transformation of a low-rank matrix could easily produce an effectively high-rank matrix, despite few free model parameters. They provide an algorithm and analysis specific to the form of their model, which achieves sample complexity of O((mn)^{2/3}). However, their algorithm only applies to functions f which are a nonlinear monotonic transformation of the inner product of the latent features. [6] proposes the universal singular value thresholding estimator (USVT), and they provide an analysis under a similar model in which they assume f to be a bounded Lipschitz function. They achieve a sample complexity, or required fraction of measurements over the total mn entries, which scales with the latent space dimension q as m^{-2/(q+2)} for a square matrix, whereas we achieve a sample complexity of O(m^{-1/2+δ}) (which is independent of q) as long as the latent dimension scales as o(log n).

The term collaborative filtering was coined in [9], and this technique is widely used in practice due to its simplicity and ability to scale. There are two main paradigms in neighborhood-based collaborative filtering: the user-user paradigm and the item-item paradigm. To recommend items to a user in the user-user paradigm, one first looks for similar users, and then recommends items liked by those similar users.
In the item-item paradigm, in contrast, items similar to those liked by the user are found and subsequently recommended. Much empirical evidence exists that the item-item paradigm performs well in many cases [16, 14, 22]; however, the theoretical understanding of the method has been limited. In recent works, latent mixture models or cluster models have been introduced to explain the collaborative filtering algorithm as well as the empirically observed superior performance of item-item paradigms, cf. [12, 13, 1, 2, 3]. However, these results assume a specific parametric model, such as a mixture distribution model for preferences across users and movies. We hope that by providing an analysis for collaborative filtering within our broader nonparametric model, we can provide a more complete understanding of the potentials and limitations of collaborative filtering.

The algorithm that we propose in this work is inspired by local functional approximations, specifically Taylor's approximation and classical kernel regression, which also relies on local smoothed approximations, cf. [18, 26]. However, since kernel regression and other similar methods use explicit knowledge of the input features, their analysis and proof techniques do not extend to our context of blind regression, in which the features are latent. Although our estimator takes a similar form of computing a convex combination of nearby datapoints weighted according to a function of the latent distance, the analysis required is entirely different.

Contributions. The key contribution of our work is in providing a statistical framework for nonparametric regression over latent variable models. We refrain from any specific modeling assumptions on f, keeping mild regularity conditions aligned with the philosophy of non-parametric statistics. We assume that the latent features are drawn independently from an identical distribution (IID) over bounded metric spaces; the function f is Lipschitz with respect to the latent spaces; entries are observed independently with some probability p; and the additive noise in observations is independently distributed with zero mean and bounded support. In spite of the minimal assumptions of our model, we provide a consistent matrix completion algorithm with finite sample error bounds. Furthermore, as a coincidental by-product, we find that our framework provides an explanation of the practical mystery of "why collaborative filtering algorithms work well in practice."

There are two conceptual parts to our algorithm. First, we derive an estimate of f(x1(u), x2(i)) for an unobserved index pair (u, i) by using a first-order local Taylor approximation expanded around the points corresponding to (u, i′), (u′, i), and (u′, i′). This leads to the estimate

    ŷ(u, i) = y(u′, i) + y(u, i′) − y(u′, i′) ≈ f(x1(u), x2(i)),    (2)

as long as x1(u′) is close to x1(u) or x2(i′) is close to x2(i). In kernel regression, distances between input features are used to upper bound the error of individual estimates, but since the latent features are not observed, we need another method to determine which of these estimates are reliable. Secondly, under mild regularity conditions, we upper bound the squared error of the estimate in (2) by the variance of the squared difference between commonly observed entries in rows (u, v) or columns (i, j). We empirically estimate this quantity and use it, similarly to distance in the latent space, in order to appropriately weight individual estimates into a final prediction.
If we choose only the datapoints with minimum empirical row variance, we recover user-user nearest neighbor collaborative filtering. Inspired by kernel regression, we also propose computing the weights according to a Gaussian kernel applied to the minimum of the row or column sample variances. As the main technical result, we show that the user-user nearest neighbor variant of the collaborative filtering method with our similarity metric yields a consistent estimator for any Lipschitz function as long as we observe a max(m^{-1+δ}, n^{-1/2+δ}) fraction of the matrix with δ > 0. In the process, we obtain finite sample error bounds, whose details are stated in Theorem 1.

We compared the Gaussian kernel variant of our algorithm to classic collaborative filtering algorithms and a matrix factorization based approach (softImpute) on predicting user-movie ratings for the Netflix and MovieLens datasets. Experiments suggest that our method improves over existing collaborative filtering methods, and sometimes outperforms matrix-factorization-based approaches depending on the dataset.

2 Setup

Operating assumptions. There are m users and n movies. The rating of user u ∈ [m] for movie i ∈ [n] is given by (1), taking the form y(u, i) = f(x1(u), x2(i)) + η_{u,i}. We make the following assumptions.

(a) X1 and X2 are compact metric spaces endowed with metrics dX1 and dX2 respectively:

    dX1(x1, x1′) ≤ B_X for all x1, x1′ ∈ X1, and dX2(x2, x2′) ≤ B_X for all x2, x2′ ∈ X2.    (3)

(b) f : X1 × X2 → R is L-Lipschitz with respect to the ∞-product metric:

    |f(x1, x2) − f(x1′, x2′)| ≤ L max{dX1(x1, x1′), dX2(x2, x2′)} for all x1, x1′ ∈ X1, x2, x2′ ∈ X2.    (4)

(c) The latent features of each user u and movie i, x1(u) and x2(i), are sampled independently according to Borel probability measures P_X1 and P_X2 on (X1, T_X1) and (X2, T_X2), where T_X denotes the Borel σ-algebra of a metric space X.

(d) The additive noise for all data points is independent and bounded with mean zero and variance σ²: for all u ∈ [m], i ∈ [n], η_{u,i} ∈ [−B_η, B_η], E[η_{u,i}] = 0, Var[η_{u,i}] = σ².

(e) The rating of each entry is revealed (observed) with probability p, independently.

Notation. Let the random variable M_{ui} = 1 if the rating of user u and movie i is revealed and 0 otherwise. M_{ui} is an independent Bernoulli random variable with parameter p. Let N1(u) denote the set of column indices of observed entries in row u. Similarly, let N2(i) denote the set of row indices of observed entries in column i. That is,

    N1(u) := {i : M(u, i) = 1} and N2(i) := {u : M(u, i) = 1}.    (5)

For rows v ≠ u, N1(u, v) := N1(u) ∩ N1(v) denotes the column indices of commonly observed entries of rows (u, v). For columns i ≠ j, N2(i, j) := N2(i) ∩ N2(j) denotes the row indices of commonly observed entries of columns (i, j). We refer to this as the overlap between two rows or columns.

3 Algorithm Intuition

Local Taylor Approximation. We propose a prediction algorithm for unknown ratings based on insights from the classical Taylor approximation of a function. Suppose X1 = X2 = R, and we wish to predict the unknown rating, f(x1(u), x2(i)), of user u ∈ [m] for movie i ∈ [n]. Using the first order Taylor expansion of f around (x1(v), x2(j)) for some u ≠ v ∈ [m], i ≠ j ∈ [n], it follows that

    f(x1(u), x2(i)) ≈ f(x1(v), x2(j)) + (x1(u) − x1(v)) ∂f(x1(v), x2(j))/∂x1 + (x2(i) − x2(j)) ∂f(x1(v), x2(j))/∂x2.

We are not able to directly compute this expression, as we do not know the latent features, the function f, or the partial derivatives of f.
However, we can again apply the Taylor expansion for f(x1(v), x2(i)) and f(x1(u), x2(j)) around (x1(v), x2(j)), which results in a set of equations with the same unknown terms. It follows from rearranging terms and substitution that

    f(x1(u), x2(i)) ≈ f(x1(v), x2(i)) + f(x1(u), x2(j)) − f(x1(v), x2(j)),

as long as the first order Taylor approximation is accurate. Thus if the noise term in (1) is small, we can approximate f(x1(u), x2(i)) by using the observed ratings y(v, j), y(u, j) and y(v, i) according to

    ŷ(u, i) = y(u, j) + y(v, i) − y(v, j).    (6)

Reliability of Local Estimates. We will show that the variance of the difference between two rows or columns upper bounds the estimation error. Therefore, in order to ensure the accuracy of the above estimate, we use empirical observations to estimate the variance of the difference between two rows or columns, which directly relates to an error bound. By expanding (6) according to (1), the error f(x1(u), x2(i)) − ŷ(u, i) is equal to

    (f(x1(u), x2(i)) − f(x1(v), x2(i))) − (f(x1(u), x2(j)) − f(x1(v), x2(j))) − η_{vi} + η_{vj} − η_{uj}.

If we condition on x1(u) and x1(v),

    E[(Error)² | x1(u), x1(v)] = 2 Var_{x∼P_X2}[f(x1(u), x) − f(x1(v), x) | x1(u), x1(v)] + 3σ².

Similarly, if we condition on x2(i) and x2(j), it follows that the expected squared error is bounded by the variance of the difference between the ratings of columns i and j. This theoretically motivates weighting the estimates according to the variance of the difference between the rows or columns.

4 Algorithm Description

We provide the algorithm for predicting an unknown entry in position (u, i) using available data. Given a parameter β ≥ 2, define the β-overlapping neighbors of u and i respectively as

    S_u^β(i) = {v : v ∈ N2(i), v ≠ u, |N1(u, v)| ≥ β},    S_i^β(u) = {j : j ∈ N1(u), j ≠ i, |N2(i, j)| ≥ β}.

For each v ∈ S_u^β(i), compute the empirical row variance between u and v,

    s²_{uv} = (1 / (2|N1(u, v)|(|N1(u, v)| − 1))) Σ_{i,j ∈ N1(u,v)} ((y(u, i) − y(v, i)) − (y(u, j) − y(v, j)))².    (7)

Similarly, compute the empirical column variances between i and j, for all j ∈ S_i^β(u),

    s²_{ij} = (1 / (2|N2(i, j)|(|N2(i, j)| − 1))) Σ_{u,v ∈ N2(i,j)} ((y(u, i) − y(u, j)) − (y(v, i) − y(v, j)))².    (8)

Let B^β(u, i) denote the set of positions (v, j) such that the entries y(v, j), y(u, j) and y(v, i) are observed, and the commonly observed ratings between (u, v) and between (i, j) are at least β:

    B^β(u, i) = {(v, j) ∈ S_u^β(i) × S_i^β(u) s.t. M(v, j) = 1}.

Compute the final estimate as a convex combination of the estimates derived in (6) for (v, j) ∈ B^β(u, i),

    ŷ(u, i) = Σ_{(v,j) ∈ B^β(u,i)} w_{ui}(v, j) (y(u, j) + y(v, i) − y(v, j)) / Σ_{(v,j) ∈ B^β(u,i)} w_{ui}(v, j),    (9)

where the weights w_{ui}(v, j) are defined as a function of (7) and (8). We proceed to discuss a few choices for the weight function, each of which results in a different algorithm.

User-User or Item-Item Nearest Neighbor Weights. We can evenly distribute the weights only among entries in the nearest neighbor row, i.e., the row with minimal empirical variance,

    w_{vj} = I(v = u*), for u* ∈ argmin_{v ∈ S_u^β(i)} s²_{uv}.

If we substitute these weights in (9), we recover an estimate which is asymptotically equivalent to the mean-adjusted variant of the classical user-user nearest neighbor (collaborative filtering) algorithm, ŷ(u, i) = y(u*, i) + m_{uu*}, where m_{uu*} is the empirical mean of the difference of ratings between rows u and u*. For any u, v,

    m_{uv} = (1 / |N1(u, v)|) Σ_{j ∈ N1(u,v)} (y(u, j) − y(v, j)).
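A dense-matrix sketch of this user-user nearest-neighbor rule follows (our code, simplified and unoptimized; note that s²_{uv} in (7) is exactly the unbiased sample variance of the per-column differences y(u, ·) − y(v, ·) over the overlap):

```python
import numpy as np

def predict_user_user(Y, M, u, i, beta=2):
    """Nearest-neighbor estimate y(u*, i) + m_{uu*} for entry (u, i).
    Y: rating matrix; M: boolean mask of observed entries."""
    best_v, best_var = None, np.inf
    for v in range(Y.shape[0]):
        if v == u or not M[v, i]:
            continue                      # v must have rated movie i
        overlap = np.where(M[u] & M[v])[0]
        if len(overlap) < beta:
            continue                      # require |N1(u, v)| >= beta
        diff = Y[u, overlap] - Y[v, overlap]
        s2_uv = diff.var(ddof=1)          # empirical row variance (7)
        if s2_uv < best_var:
            best_v, best_var = v, s2_uv
    if best_v is None:
        return None                       # no beta-overlapping neighbor
    overlap = np.where(M[u] & M[best_v])[0]
    m_uv = (Y[u, overlap] - Y[best_v, overlap]).mean()  # mean adjustment
    return Y[best_v, i] + m_uv
```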
Equivalently, we can evenly distribute the weights among entries in the nearest neighbor columns, i.e., the column with minimal empirical variance, recovering the classical mean-adjusted item-item nearest neighbor collaborative filtering algorithm. Theorem 1 proves that this simple algorithm produces a consistent estimator, and we provide the finite sample error analysis. Due to the similarities, our analysis also directly implies the proof of correctness and consistency for the classic user-user and item-item collaborative filtering method.

User-Item Gaussian Kernel Weights. Inspired by kernel regression, we introduce a variant of the algorithm which computes the weights according to a Gaussian kernel function with bandwidth parameter λ, substituting the minimum row or column sample variance as a proxy for the distance,

    w_{vj} = exp(−λ min{s²_{uv}, s²_{ij}}).

When λ = ∞, the estimate only depends on the basic estimates whose row or column has the minimum sample variance. When λ = 0, the algorithm equally averages all basic estimates. We applied this variant of our algorithm to both movie recommendation and image inpainting data, which show that our algorithm improves upon user-user and item-item classical collaborative filtering.

Connections to Cosine Similarity Weights. In our algorithm, we determine the reliability of estimates as a function of the sample variance, which is equivalent to the squared distance of the mean-adjusted values. In classical collaborative filtering, cosine similarity is commonly used, which can be approximated as a different choice of the weight kernel over the squared difference.

5 Main Theorem

Let E ⊆ [m] × [n] denote the set of user-movie pairs for which the algorithm predicts a rating. For ε > 0, the overall ε-risk of the algorithm is the fraction of estimates whose error is larger than ε,

    Risk_ε = (1/|E|) Σ_{(u,i) ∈ E} I(|f(x1(u), x2(i)) − ŷ(u, i)| > ε).    (10)

In Theorem 1, we upper bound the expected ε-risk, proving that the user-user nearest neighbor estimator is consistent, i.e., in the presence of no noise, estimates converge to the true values as m, n go to infinity. We may assume m ≤ n without loss of generality.

Theorem 1. For a fixed ε > 0, as long as p ≥ max{m^{-1+δ}, n^{-1/2+δ}} (where δ > 0), for any λ = Θ(n^{-2δ/3}), the user-user nearest-neighbor variant of our method with β = np²/2 achieves

    E[Risk_ε] ≤ (1 + n^{-2δ/3}) · ((3 + 2^{1/3})σ² + 3λ + λ²) / ε² + O( exp(−C m^{2δ/3}) + m exp(−(ε⁴ λ² p² n^{2δ/3}) / (45 B⁴)) ),

where B = 2(L B_X + B_η), and C = h(√ε / L)/6 for h(r) := inf_{x′ ∈ X1} P_{x∼P_X1}(dX1(x, x′) ≤ r).

For a generic λ, we can also provide precise error bounds of a similar form, with modified rates of convergence. Choosing β to grow with np² ensures that as n goes to infinity, the required overlap between rows also goes to infinity; thus the empirical mean and variance computed in the algorithm converge precisely to the true mean and variance. The parameter λ in Theorem 1 is introduced purely for the purpose of analysis, and is not used within the implementation of the algorithm. The function h behaves as a lower bound of the cumulative distribution function of P_X1, and it always exists under our assumption that X1 is compact. It is used to ensure that for any u ∈ [m], with high probability, there exists another row v ∈ S_u^β(i) such that dX1(x1(u), x1(v)) is small, implying by the Lipschitz condition that we can use the values of row v to approximate the values of row u well.
For example, if P_X1 is a uniform distribution over a unit cube in q-dimensional Euclidean space, then h(r) = min(1, r)^q, and our error bound becomes meaningful for n ≥ (L²/ε)^{q/2δ}. On the other hand, if P_X1 is supported over finitely many points, then h(r) = min_{x ∈ supp(P_X1)} P_X1(x) is a positive constant, and the role of the latent dimension becomes irrelevant. Intuitively, the "geometry" of P_X1 through h near 0 determines the impact of the latent space dimension on the sample complexity, and our results hold as long as the latent dimension q = o(log n).

6 Proof Sketch

For any evaluation set of unobserved entries E, the expectation of the ε-risk is

    E[Risk_ε] = (1/|E|) Σ_{(u,i) ∈ E} P(|f(x1(u), x2(i)) − ŷ(u, i)| > ε) = P(|f(x1(u), x2(i)) − ŷ(u, i)| > ε),

because the indexing of the entries is exchangeable and identically distributed. To bound the expected risk, it is sufficient to provide a tail bound for the probability of the error. For any fixed a, b ∈ X1, and a random variable x ∼ P_X2, we denote the mean and variance of the difference f(a, x) − f(b, x) by

    μ_{ab} := E_x[f(a, x) − f(b, x)] = E[m_{uv} | x1(u) = a, x1(v) = b],
    σ²_{ab} := Var_x[f(a, x) − f(b, x)] = E[s²_{uv} | x1(u) = a, x1(v) = b] − 2σ²,

which we point out is also equivalent to the expectation of the empirical means and variances computed by the algorithm when we condition on the latent representations of the users. The computation of ŷ(u, i) involves two steps: first the algorithm determines the neighboring row with the minimum sample variance, u* = argmin_{v ∈ S_u^β(i)} s²_{uv}, and then it computes the estimate by adjusting according to the empirical mean, ŷ(u, i) := y(u*, i) + m_{uu*}.

The proof involves three key steps, each stated within a lemma. Lemma 1 proves that with high probability the observations are dense enough such that there is a sufficient number of rows with entry overlap larger than β, i.e., the number of candidate rows, |S_u^β(i)|, concentrates around (m − 1)p. This relies on the concentration of Binomial random variables via Chernoff's bound.

Lemma 1. Given p > 0, 2 ≤ β ≤ np²/2 and α > 0, for any (u, i) ∈ [m] × [n],

    P( |S_u^β(i)| ∉ (1 ± α)(m − 1)p ) ≤ 2 exp(−α²(m − 1)p / 3) + (m − 1) exp(−np²/8).

Lemma 2 proves that since the latent features are sampled i.i.d. from a bounded metric space, for any index pair (u, i), there exists a "good" neighboring row v ∈ S_u^β(i) whose σ²_{x1(u)x1(v)} is small.

Lemma 2. Consider u ∈ [n] and a set S ⊆ [n] \ {u}. Then for any ε > 0,

    P( min_{v ∈ S} σ²_{x1(u)x1(v)} > ε ) ≤ (1 − h(√ε / L))^{|S|},

where h(r) := inf_{x′ ∈ X1} P_{x∼P_X1}(dX1(x, x′) ≤ r).

Subsequently, conditioned on the event that |S_u^β(i)| ≥ (m − 1)p, Lemmas 3 and 4 prove that the sample mean and sample variance of the differences between two rows concentrate around the true mean and true variance with high probability. This involves using the Lipschitz and boundedness assumptions on f and X1, as well as the Bernstein and Maurer-Pontil inequalities.

Lemma 3. Given u, v ∈ [m], i ∈ [n] and β ≥ 2, for any θ > 0,

    P( |μ_{x1(u)x1(v)} − m_{uv}| > θ | v ∈ S_u^β(i) ) ≤ exp( −3βθ² / (6B² + 2Bθ) ),

where recall that B = 2(L B_X + B_η).

Lemma 4. Given u ∈ [m], i ∈ [n], and β ≥ 2, for any θ > 0,

    P( |s²_{uv} − (σ²_{x1(u)x1(v)} + 2σ²)| > θ | v ∈ S_u^β(i) ) ≤ 2 exp( −βθ² / (4B²(2L²B_X² + 4σ² + θ)) ),

where recall that B = 2(L B_X + B_η).
Given that there exists a neighbor v ∈ S_u^β(i) whose true variance σ²_{x1(u)x1(v)} is small, and conditioned on the event that all the sample variances concentrate around the true variance, it follows that the true variance between u and its nearest neighbor u* is small with high probability. Finally, conditioned on the event that |S_u^β(i)| ≥ (m − 1)p and the true variance between the target row and the nearest neighbor row is small, we provide a bound on the tail probability of the estimation error by using Chebyshev inequalities. The only term in the error probability which does not decay to zero is the error from Chebyshev's inequality, which dominates the final expression, leading to the final result.

7 Experiments

We evaluated the performance of our algorithm at predicting user-movie ratings on the MovieLens 1M and Netflix datasets. For the implementation of our method, we used user-item Gaussian kernel weights for the final estimator. We chose the overlap parameter β = 2 to ensure the algorithm is able to compute an estimate for all missing entries. When β is larger, the algorithm enforces rows (or columns) to have more commonly rated movies (or users). Although this increases the reliability of the estimates, it also reduces the fraction of entries for which the estimate is defined. We optimized the λ bandwidth parameter of the Gaussian kernel by evaluating the method with multiple values of λ and choosing the value which minimizes the error.

We compared our method with user-user collaborative filtering, item-item collaborative filtering, and softImpute from [20]. We chose the classic mean-adjusted collaborative filtering method, in which the weights are proportional to the cosine similarity of pairs of users or items (i.e., movies). SoftImpute is a matrix-factorization-based method which iteratively replaces missing elements in the matrix with those obtained from a soft-thresholded SVD.

For both the MovieLens and Netflix data sets, the ratings are integers from 1 to 5. From each dataset, we generated 100 smaller user-movie rating matrices, in which we randomly subsampled 2000 users and 2000 movies. For each rating matrix, we randomly select and withhold a percentage of the known ratings for the test set, while the remaining portion of the data set is revealed to the algorithm for computing the estimates. After the algorithm computes its predictions for unrevealed movie-user pairs, we evaluate the Root Mean Squared Error (RMSE) of the predictions compared with the withheld test set, where RMSE is defined as the square root of the mean of squared prediction error over the evaluation set. Figure 1 plots the RMSE of our method along with classic collaborative filtering and softImpute evaluated against 10%, 30%, 50%, and 70% withheld test sets. The RMSE is averaged over 100 subsampled rating matrices, and 95% confidence intervals are provided.
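The estimator actually run in these experiments is the Gaussian-kernel-weighted combination (9); a brute-force dense sketch follows (our code; quadratic time per entry and meant only to make the weighting explicit):

```python
import numpy as np

def sample_var_diff(a, b):
    """Unbiased sample variance of the entrywise difference a - b; this
    equals the empirical variances in (7) and (8)."""
    d = a - b
    return d.var(ddof=1)

def predict_kernel(Y, M, u, i, lam=1.0, beta=2):
    """Gaussian-kernel variant of estimate (9) for entry (u, i)."""
    m, n = Y.shape
    num = den = 0.0
    for v in range(m):
        if v == u or not M[v, i]:
            continue
        row_ov = np.where(M[u] & M[v])[0]
        if len(row_ov) < beta:
            continue
        s2_row = sample_var_diff(Y[u, row_ov], Y[v, row_ov])
        for j in range(n):
            if j == i or not (M[u, j] and M[v, j]):
                continue                  # need y(u, j) and y(v, j) observed
            col_ov = np.where(M[:, i] & M[:, j])[0]
            if len(col_ov) < beta:
                continue
            s2_col = sample_var_diff(Y[col_ov, i], Y[col_ov, j])
            w = np.exp(-lam * min(s2_row, s2_col))
            num += w * (Y[u, j] + Y[v, i] - Y[v, j])
            den += w
    return num / den if den > 0 else None
```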
Figure 1: Performance of algorithms on the Netflix and MovieLens datasets with 95% confidence intervals. The λ values used by our algorithm are 2.8 (10%), 2.3 (30%), 1.7 (50%), 1 (70%) for MovieLens, and 1.8 (10%), 1.7 (30%), 1.6 (50%), 1.5 (70%) for Netflix.

Figure 1 suggests that our algorithm achieves a systematic improvement over classical user-user and item-item collaborative filtering. SoftImpute performs the worst on the MovieLens dataset, but it performs the best on the Netflix dataset. This behavior could be due to the different underlying assumptions of low rank for matrix factorization methods as opposed to Lipschitz smoothness for collaborative filtering methods, which could lead to dataset-dependent performance outcomes.

8 Discussion

We introduced a generic framework of blind regression, i.e., nonparametric regression over latent variable models. We allow the model to be any Lipschitz function f over any bounded feature spaces X1, X2, while imposing the limitation that the input features are latent. This is applicable to a wide variety of problems, including recommendation systems, but also social network analysis, community detection, crowdsourcing, and product demand prediction. Many parametric models (e.g., low rank assumptions) can be framed as a specific case of our model. Despite the generality and limited assumptions of our model, we present a simple similarity-based estimator, and we provide theoretical guarantees bounding its error within the noise level σ². The analysis provides theoretical grounds for the popularity of similarity-based methods. To the best of our knowledge, this is the first provable guarantee on the performance of neighbor-based collaborative filtering within a fully nonparametric model.

Our algorithm and analysis follow from local Taylor approximation, along with the observation that the sample variance between rows or columns is a good indicator of "closeness," or the similarity of their function values. The algorithm essentially estimates the local metric information between the latent features from observed data, and then performs local smoothing in a similar manner as classical kernel regression. Due to the local nature of our algorithm, our sample complexity does not depend on the latent dimension, whereas Chatterjee's USVT estimator [6] requires sampling almost every entry when the latent dimension is large. This difference is due to the fact that Chatterjee's result stems from showing that a Lipschitz function can be approximated by a piecewise constant function, which upper bounds the rank of the target matrix. This discretization results in a large penalty with regards to the dimension of the latent space. Since our method follows from local approximations, we only require sufficient sampling such that locally there are enough close neighbor points.

The connection of our framework to regression implies many natural future directions. We can extend model (1) to multivariate functions f, which translates to the problem of higher order tensor completion. Variations of the algorithm and analysis that we provide for matrix completion can extend to tensor completion, due to the flexible and generic assumptions of our model. It would also be useful to extend the results to capture general noise models, sparser sampling regimes, or mixed models with both parametric and nonparametric or both latent and observed variables.

Acknowledgements: This work is supported in parts by ARO under MURI award 133668-5079809, by NSF under grants CMMI-1462158 and CMMI-1634259, and additionally by a Samsung Scholarship, Siebel Scholarship, NSF Graduate Fellowship, and Claude E. Shannon Research Assistantship.

References

[1] S. Aditya, O. Dabeer, and B. K. Dey. A channel coding perspective of collaborative filtering. IEEE Transactions on Information Theory, 57(4):2327-2341, 2011.
[2] G. Bresler, G. H. Chen, and D. Shah. A latent source model for online collaborative filtering. In Advances in Neural Information Processing Systems, pages 3347-3355, 2014.
[3] G. Bresler, D. Shah, and L. F. Voloch. Collaborative filtering with low regret. arXiv preprint arXiv:1507.05371, 2015.
[4] D. Cai, X. He, X. Wu, and J. Han. Non-negative matrix factorization on manifold. In Data Mining, 2008 (ICDM '08), Eighth IEEE International Conference on, pages 63-72. IEEE, 2008.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[6] S. Chatterjee et al. Matrix estimation by universal singular value thresholding. The Annals of Statistics, 43(1):177-214, 2015.
[7] M. Fazel, H. Hindi, and S. P. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of ACC, volume 3, pages 2156-2162. IEEE, 2003.
[8] R. S. Ganti, L. Balzano, and R. Willett. Matrix completion under monotonic single index models. In Advances in Neural Information Processing Systems, pages 1864-1872, 2015.
[9] D. Goldberg, D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to weave an information tapestry. Commun. ACM, 1992.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pages 665-674. ACM, 2013.
[11] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inf. Theory, 56(6), 2009.
[12] J. Kleinberg and M. Sandler. Convergent algorithms for collaborative filtering. In Proceedings of the 4th ACM Conference on Electronic Commerce, pages 1-10. ACM, 2003.
[13] J. Kleinberg and M. Sandler. Using mixture models for collaborative filtering. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 569-578. ACM, 2004.
[14] Y. Koren and R. Bell. Advances in collaborative filtering. In Recommender Systems Handbook, pages 145-186. Springer US, 2011.
[15] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. CAMSAP, 61, 2009.
[16] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76-80, 2003.
[17] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM Journal on Matrix Analysis and Applications, 31(3):1235-1256, 2010.
[18] Y. Mack and B. W. Silverman. Weak and strong uniform consistency of kernel regression estimates. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 61(3):405-415, 1982.
[19] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. ArXiv e-prints, July 2009.
[20] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research, 11:2287-2322, 2010.
[21] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665-1697, 2012.
[22] X. Ning, C. Desrosiers, and G. Karypis. Recommender Systems Handbook, chapter A Comprehensive Survey of Neighborhood-Based Recommendation Methods, pages 37-76. Springer US, 2015.
[23] A. Rohde, A. B. Tsybakov, et al. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887-930, 2011.
[24] B.-H. Shen, S. Ji, and J. Ye. Mining discrete patterns via binary matrix factorization.
In Proceedings of the 15th ACM SIGKDD International Conference, pages 757-766. ACM, 2009.
[25] N. Srebro, N. Alon, and T. S. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances in Neural Information Processing Systems, pages 1321-1328, 2004.
[26] M. P. Wand and M. C. Jones. Kernel Smoothing. CRC Press, 1994.
5,646
6,109
SEBOOST - Boosting Stochastic Learning Using Subspace Optimization Techniques

Elad Richardson*1, Rom Herskovitz*1, Boris Ginsburg2, Michael Zibulevsky1
1 Technion, Israel Institute of Technology  2 Nvidia INC
{eladrich,mzib}@cs.technion.ac.il  {fornoch,boris.ginsburg}@gmail.com
*Equal contribution

Abstract

We present SEBOOST, a technique for boosting the performance of existing stochastic optimization methods. SEBOOST applies a secondary optimization process in the subspace spanned by the last steps and descent directions. The method was inspired by the SESOP optimization method, and has been adapted for stochastic learning. It can be applied on top of any existing optimization method with no need to tweak the internal algorithm. We show that the method is able to boost the performance of different algorithms, and to make them more robust to changes in their hyper-parameters. As the boosting steps of SEBOOST are applied between large sets of descent steps, the additional subspace optimization hardly increases the overall computational burden. We introduce hyper-parameters that control the balance between the baseline method and the secondary optimization process. The method was evaluated on several deep learning tasks, demonstrating significant improvement in performance. A video presentation is given in [15].

1 Introduction

Stochastic Gradient Descent (SGD) based optimization methods are widely used for many different learning problems. Given some objective function that we want to optimize, a vanilla gradient descent method would simply take some fixed step in the direction of the current gradient. In many learning problems the objective, or loss, function is averaged over the set of given training examples. In that scenario, calculating the loss over the entire training set would be expensive, and it is therefore approximated on a small batch, resulting in a stochastic algorithm that requires relatively few calculations per step. The simplicity and efficiency of SGD algorithms have made them a standard choice for many learning tasks, and specifically for deep learning [9, 6, 5, 10]. Although vanilla SGD has no memory of previous steps, they are usually utilized in some way, for example using momentum [13]. Alternatively, the AdaGrad method uses the previous gradients in order to normalize each component in the new gradient adaptively [3], while the ADAM method uses them to estimate an adaptive moment [8]. In this work we utilize the knowledge of previous steps in the spirit of the Sequential Subspace Optimization (SESOP) framework [11]. The nature of SESOP allows it to be easily merged with existing algorithms. Several such extensions were introduced over the years in different fields, such as PCD-SESOP and SSF-SESOP, showing state-of-the-art results in their respective fields [4, 17, 16].

The core idea of our method is as follows. At every outer iteration we first perform several steps of a baseline stochastic optimization algorithm, which are then summed up as an inner cumulative stochastic step. Afterwards, we minimize the objective function over the affine subspace spanned by the cumulative stochastic step, several previous outer steps and optional other directions. The subspace optimization boosts the performance of the baseline algorithm; therefore our method is called the Sequential Subspace Optimization Boosting method (SEBOOST).
2 The algorithm

As our algorithm tries to find the balance between SGD and SESOP, we start with a brief review of the original algorithms, and then move to the SEBOOST algorithm.

2.1 Vanilla SGD

In many different large-scale optimization problems, applying complex optimization methods is not practical. Thus, popular optimization methods for those problems are usually based on a stochastic estimation of the gradient. Let $\min_{x \in \mathbb{R}^n} f(x)$ be some minimization problem, and let $g(x)$ be the gradient of $f(x)$. The general stochastic approach applies the following optimization rule:
$$x_{k+1} = x_k - \eta\, \hat{g}(x_k)$$
where $x_i$ is the result of the $i$-th iteration, $\eta$ is the learning rate and $\hat{g}(x_k)$ is an approximation of $g(x_k)$ obtained using only a small subset (mini-batch) of the training data. These stochastic descent methods have proved themselves in many different problems, specifically in the context of deep learning algorithms, providing a combination of simplicity and speed. Notice that the vanilla SGD algorithm has no memory of previous iterations. Different optimization methods which are based on SGD usually utilize the previous iterations in order to make a more informed descent process.

2.2 Vanilla SESOP

The SEquential Subspace OPtimization method [11, 16] is an optimization technique used for large-scale optimization problems. The core idea of SESOP is to perform the optimization of the objective function in the subspace spanned by the current gradient direction and a set of directions obtained from the previous optimization steps. Following the notation of Section 2.1, a subspace structure for SESOP is usually defined based on the following directions:

1. Gradients: the current gradient and [optionally] older ones, $\{g(x_i) : i = k, k-1, \ldots, k-s_1\}$
2. Previous directions: $\{p_i = x_i - x_{i-1} : i = k, k-1, \ldots, k-s_2\}$

In the SESOP formulation the current gradient and the last step are mandatory, and any other set can be used to enrich the subspace. From a theoretical point of view, one can enrich the subspace by two Nemirovski directions: a weighted average of the previous gradients and the direction to the starting point. This provides optimal worst-case complexity of the method (see also [12]). Denoting by $P_k$ the set of directions at iteration $k$, the SESOP algorithm solves the minimization problem
$$\alpha_k = \arg\min_{\alpha} f(x_k + P_k \alpha), \qquad x_{k+1} = x_k + P_k \alpha_k.$$
Thus SESOP reduces the optimization problem to the subspace spanned by $P_k$ at each iteration. This means that instead of solving an optimization problem in $\mathbb{R}^n$, the dimensionality of the subspace is governed by the size of $P_k$ and can be controlled.
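To make the reduced problem concrete, here is a minimal sketch of one SESOP-style subspace step in Python with NumPy/SciPy. The paper's own implementation is in Torch7, and the function and variable names here are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def subspace_step(f, x, P):
    """Minimize f over the affine subspace {x + P @ alpha}.

    f : callable loss, R^n -> R (evaluated on a large batch in practice)
    x : current iterate, shape (n,)
    P : direction matrix, shape (n, d) with d << n
    """
    d = P.shape[1]
    # The reduced problem lives in R^d, so even a relatively expensive
    # inner solver (CG is used in the paper) stays cheap.
    res = minimize(lambda alpha: f(x + P @ alpha), np.zeros(d), method="CG")
    return x + P @ res.x
```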
2.3 The SEBOOST algorithm

As explained in Section 2.1, when dealing with large-scale optimization problems, stochastic learning methods are usually better fitted to the task than many more involved optimization methods. However, when applied correctly, those more involved methods can still be used to boost the optimization process and achieve faster convergence rates. We propose to start with some SGD algorithm as a baseline, and then apply a SESOP-like optimization method over it in an alternating manner. The subspace for the SESOP algorithm arises from the descent directions of the baseline, utilizing the previous iterations. A description of the method is given in Algorithm 1. Note that the subset of the training data used for the secondary optimization in step 7 isn't necessarily the same as that of the baseline in step 2, as will be shown in Section 3. Also, note that in step 8 the last added direction is changed; this is done in order to incorporate the step performed by the secondary optimization into the subspace.

Algorithm 1 The SEBOOST algorithm
1: for k = 1, 2, ... do
2:   Perform ℓ steps of the baseline stochastic optimization method to get from x_0^k to x_ℓ^k
3:   Add the direction of the cumulative step x_ℓ^k - x_0^k to the optimization subspace P
4:   if the subspace dimension exceeds the limit, dim(P) > M, then
5:     Remove the oldest direction from the optimization subspace P
6:   end if
7:   Perform optimization over the subspace P to get from x_ℓ^k to x_0^{k+1}
8:   Change the last added direction to p = x_0^{k+1} - x_0^k
9: end for

It is clear that SEBOOST offers an attractive balance between the baseline stochastic steps and the more costly subspace optimizations. Firstly, as the number ℓ of stochastic steps grows, the effect of the subspace optimization over the result subsides, and taking ℓ → ∞ reduces the algorithm back to the baseline method. Secondly, the dimensionality of the subspace optimization problem is governed by the size of P and can be reduced to as few parameters as desired. Notice also that, since SEBOOST is added on top of a baseline stochastic optimization method, it does not require any internal changes to be made to the original algorithm. Thus, it can be applied on top of any such method with minimal implementation cost, while potentially boosting the base method.

2.4 Enriching the subspace

Although the core elements of our optimization subspace are the directions of the last M - 1 outer steps and the new cumulative stochastic direction, many more elements can be added to enrich the subspace.

Anchor points. As only the last M - 1 directions are saved in our subspace, the subspace has knowledge only of the recent history of the optimization process. The subspace might benefit from directions dependent on preceding directions as well. For example, one could think of the overall descent achieved by the algorithm, p = x_0^k - x_0^0, as a possible direction, or the descent over the second half of the optimization process, p = x_0^k - x_0^{k/2}. We formalize this idea by defining anchor points. Anchor points are locations chosen throughout the descent process which we fix and update only rarely. For each anchor point a_i the direction p = x_0^k - a_i is added to the subspace. Different techniques can be chosen for setting and changing the anchors. In our formulation each point is associated with a parameter r_i which describes the number of boosting steps between each update of the point. After every r_i steps the corresponding point a_i is re-initialized to the current x. In this way we can control the number of iterations before an anchor point becomes irrelevant and is initialized again. Algorithm 2 shows how the anchor points can be added to Algorithm 1, by incorporating it before step 7.

Current gradient. As in the SESOP formulation, the gradient at the current point can be added to the subspace.

Momentum. Similarly to the idea of momentum in SGD methods, one can save a weighted average of the previous updates and add it to the optimization subspace. Denoting the current momentum as m_k and the last step as p = x_0^{k+1} - x_0^k, the momentum is updated as m_{k+1} = μ·m_k + p, where μ is some hyper-parameter, as in regular SGD momentum.

Algorithm 2 Controlling anchors in SEBOOST
1: for i = 1, ..., #anchors do
2:   if k mod r_i == 0 then
3:     Change the anchor a_i to x_ℓ^k
4:   end if
5:   Normalize the direction p = x_ℓ^k - a_i and add it to the subspace
6: end for
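The outer loop of Algorithm 1 is short enough to sketch in full. The following Python fragment is our own illustrative rendering (the paper's implementation is in Torch7); `baseline_step` and `subspace_minimize` stand in for the chosen base method and the CG-based subspace solver:

```python
import collections
import numpy as np

def seboost(x, baseline_step, subspace_minimize, n_outer, ell, M):
    """Illustrative sketch of Algorithm 1 (names are ours, not the paper's).

    baseline_step(x)        -> x after one stochastic step (SGD/NAG/AdaGrad)
    subspace_minimize(x, P) -> argmin of f over {x + P @ alpha}, e.g. via CG
    ell                     -> baseline steps between subspace optimizations
    M                       -> maximum number of stored directions
    """
    directions = collections.deque(maxlen=M)   # steps 4-6: oldest direction drops out
    for k in range(n_outer):
        x0 = x.copy()
        for _ in range(ell):                   # step 2: inner stochastic phase
            x = baseline_step(x)
        directions.append(x - x0)              # step 3: cumulative stochastic direction
        P = np.stack(directions, axis=1)
        x = subspace_minimize(x, P)            # step 7: secondary optimization
        directions[-1] = x - x0                # step 8: fold the boosted step back in
    return x
```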
[Figure 1: plots of logarithmic test error against train time in seconds (left) and number of epochs (right), for SGD, NAG and AdaGrad with and without SEBOOST.]
Figure 1: Results for experiment 3.1. The baseline learning rates were set as lr_SGD = 0.5, lr_NAG = 0.1, lr_AdaGrad = 0.05, which provided good convergence. SEBOOST's parameters were fixed at M = 50 and ℓ = 100, with 50 function evaluations for the secondary optimization.

3 Experiments

Following the recent rise of interest in deep learning tasks, we focus our evaluation on different neural network problems. We start with a small, yet challenging, regression problem and then proceed to the known problems of the MNIST autoencoder and CIFAR-10 classifier. For each problem we compare the results of baseline stochastic methods with our boosted variants, showing that SEBOOST can give a significant improvement over the base method. Note that the purpose of our work is not to directly compete with existing methods, but rather to show that SEBOOST can improve each learning method compared to its original variant, while preserving the original qualities of these algorithms. The chosen baselines were SGD with momentum, Nesterov's Accelerated Gradient (NAG) [13] and AdaGrad [3]. The Conjugate Gradient (CG) method [7] was used for the subspace optimization. Our algorithm was implemented and evaluated using the Torch7 framework [1], and is publicly available at https://github.com/eladrich/seboost. The main hyper-parameters that were altered during the experiments were:

- lr_method - the learning rate of a baseline method.
- M - the maximal number of old directions.
- ℓ - the number of baseline steps between each subspace optimization.

For all experiments the weight decay was set at 0.0001 and the momentum was fixed at 0.9 for SGD and NAG. Unless stated otherwise, the number of function evaluations for CG was set at 20. The baseline method used a mini-batch of size 100, while the subspace optimization was applied with a mini-batch of size 1000. Note that the subspace optimization is applied over a significantly larger batch. That is because, while a "bad" stochastic step will be canceled by the next ones, a single secondary step has a bigger effect on the overall result and therefore requires a better approximation of the gradient. As the boosting step is applied only between large sets of steps of the base method, the added cost does not hinder the algorithm. For each experiment a different architecture will be defined. We use the notation a →L b to denote a classic linear layer with a inputs and b outputs followed by a non-linear Tanh function. Notice that when presenting our results we show two different graphs. The right one always shows the error as a function of the number of passes of the baseline algorithms over the data (i.e., epochs), while the left one shows the error as a function of the actual processor time, taking into account the additional work required by the boosted algorithms.

3.1 Simple regression

We start by evaluating our method on a small regression problem. The dataset in question is a set of 20,000 values simulating some continuous function f : R^6 → R. The dataset was divided into 18,000 training examples and 2,000 test examples. The problem was solved using a tiny neural network with the architecture 6 →L 12 →L 8 →L 4 →L 1.
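As an illustration of the a →L b notation, the following PyTorch sketch (our own; the paper itself uses Torch7) builds the 6 →L 12 →L 8 →L 4 →L 1 regressor:

```python
import torch.nn as nn

def mlp(sizes):
    # "a ->L b" above means Linear(a, b) followed by a Tanh non-linearity.
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers)

net = mlp([6, 12, 8, 4, 1])  # the tiny regression network of Section 3.1
```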
Although the network size is very small, the resulting optimization problem remains challenging and gives a clear indication of SEBOOST's behavior. Figure 1 shows the optimization process for the different methods. In all examples the boosted variant converged faster. Note that the different variants of SEBOOST behave differently, governed by the corresponding baseline.

3.2 MNIST autoencoder

One of the classic neural network formulations is that of an autoencoder, a network that tries to learn an efficient representation for a given set of data. An autoencoder is usually composed of two parts: the encoder, which takes the input and produces the compact representation, and the decoder, which takes the representation and tries to reconstruct the original input. In our experiment the MNIST dataset was used, with 60,000 training images of size 28 × 28 and 10,000 test images. The encoder was defined as a three-layer network with an architecture of the form 784 →L 200 →L 100 →L 64, with a matching decoder 64 →L 100 →L 200 →L 784. Figure 3 shows the optimization process for the autoencoder problem. A trend similar to that of experiment 3.1 can be seen: SEBOOST is able to significantly improve SGD and NAG, and shows some improvement over AdaGrad, although not as noticeable. A nice byproduct of working with an autoencoding problem is that one can visualize the quality of the reconstructions as a function of the iterations. Figure 2 shows the change in reconstruction quality for SGD and SESOP-SGD, and shows that the boosting achieved is significant in terms of the actual results.

[Figure 2: grids of original MNIST digits and their reconstructions after 10, 30, 100 and 200 passes.]
Figure 2: Reconstruction results. The first row shows results of the SGD algorithm, while the second row shows results of SESOP-SGD. The last row gives the number of passes over the data.

3.3 CIFAR-10 classifier

For classification purposes a standard benchmark is the CIFAR-10 dataset. The dataset is composed of 60,000 images of size 32 × 32 from 10 different classes, where each class has 6,000 different images. 50,000 images are used for training and 10,000 for testing. In order to check SEBOOST's ability to deal with large and modern networks, the ResNet [6] architecture, winner of the ILSVRC 2015 classification task, is used.

[Figure 3: plots of MSE test error for the MNIST autoencoder against train time in seconds (left) and number of epochs (right), for SGD, NAG and AdaGrad with and without SEBOOST.]
Figure 3: Results for experiment 3.2. The baseline learning rates were set at lr_SGD = 0.1, lr_NAG = 0.01, lr_AdaGrad = 0.01. SEBOOST's parameters were fixed at M = 10 and ℓ = 200.

[Figure 4: plots of CIFAR-10 test error (%) against train time in seconds (left) and number of epochs (right), for SGD, NAG and AdaGrad with and without SEBOOST.]
Figure 4: Results for experiment 3.3. All baselines were set with lr = 0.1 and a mini-batch of size 128. SEBOOST's parameters were fixed at M = 10 and ℓ = 391, with a mini-batch of size 1024.

Figure 4 shows the optimization process and the achieved accuracy for a ResNet of depth 32. Note that we did not manually tweak the learning rate as was done in the original paper.
While AdaGrad is not boosted for this experiment, SGD and NAG achieve significant boosting and reach a better minimum. The boosting step was applied only once every epoch; applying boosting steps too frequently resulted in a less stable optimization and higher minima, while applying them too infrequently also led to higher minima. Experiment 3.4 shows similar results for MNIST and discusses them.

3.4 Understanding the hyper-parameters

SEBOOST introduces two hyper-parameters: ℓ, the number of baseline steps between each subspace optimization, and M, the number of old directions to use. The purpose of the following two experiments is to measure the effect of those parameters on the achieved result and to give some intuition as to their meaning. All experiments are based on the MNIST autoencoder problem defined in Section 3.2.

First, let us consider the parameter ℓ, which controls the balance between the baseline SGD algorithm and the more involved optimization process. Taking small values of ℓ results in more steps of the secondary optimization process; however, each direction in the subspace is then composed of fewer steps of the stochastic algorithm, making it less stable. Furthermore, recalling that our secondary optimization is more costly than regular optimization steps, applying it too often would hinder the algorithm's performance. On the other hand, taking large values of ℓ weakens the effect of SEBOOST over the baseline algorithm. Figure 5a shows how ℓ affects the optimization process. One can see that applying the subspace optimization too frequently increases the algorithm's runtime and reaches a higher minimum than the other variants, as expected. Although taking a large value of ℓ reaches a better minimum, taking a value which is too large slows the algorithm. We can see that for this experiment taking ℓ = 200 correctly balances the trade-offs.

[Figure 5: MSE test error against train time for the MNIST autoencoder; (a) varying the number of baseline steps ℓ (SEBOOST-NAG with ℓ = 50, 200, 800), (b) varying the history size M (SEBOOST-NAG with M = 5, 10, 20, 50).]
Figure 5: Experiment 3.4, analyzing different changes in SEBOOST's hyper-parameters.

[Figure 6: MSE test error against train time for the MNIST autoencoder; (a) varying the learning rate (NAG and SEBOOST-NAG at 0.05, 0.01, 0.005), (b) adding extra subspace directions (basic, momentum, anchors, both).]
Figure 6: Experiment 3.5, analyzing different changes in SEBOOST's subspace.

Let us now consider the effect of M, which governs the size of the subspace in which the secondary optimization is applied. Although taking large values of M allows us to hold more directions and apply the optimization in a larger subspace, it also makes the optimization process more involved. Figure 5b shows how M affects the optimization process. Interestingly, the lower M is, the faster the algorithm starts descending. However, larger values of M tend to reach better minima. For M = 20 the algorithm reaches the same minimum as M = 50, but starts the descent process faster, making it a good choice for this experiment. To conclude, the introduced hyper-parameters M and ℓ affect the overall boosting effect achieved by SEBOOST. Both parameters incorporate different trade-offs of the optimization problem and should be considered when using the algorithm. Our own experiments show that a good initialization would be to set ℓ so that the algorithm runs about once or twice per epoch, and to set M between 10 and 20.
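The once-per-epoch rule of thumb matches the settings reported above: for CIFAR-10 with 50,000 training images and a mini-batch of 128, one epoch is ceil(50000/128) = 391 steps, exactly the ℓ used in Figure 4. A trivial helper (ours, for illustration only):

```python
import math

def steps_per_epoch(n_train, batch_size):
    # e.g. steps_per_epoch(50000, 128) == 391, the ell used for CIFAR-10 above
    return math.ceil(n_train / batch_size)
```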
3.5 Investigating the subspace

One of the key components of SEBOOST is the structure of the subspace in which the optimization is applied. The purpose of the following two experiments is to see how changes in the baseline algorithm, or the addition of more directions, affect the algorithm. All experiments are based on the MNIST autoencoder problem defined in Section 3.2.

In the basic formulation of SEBOOST the subspace is composed only of the directions of the baseline algorithm. In Section 3.2 we saw how choosing different baselines affects the algorithm. Another experiment of interest is to see how our algorithm is influenced by changes in the hyper-parameters of the baseline algorithm. Figure 6a shows the effect of the learning rate on the baseline algorithms and their boosted variants. It can be seen that a change in the original baseline affects our algorithm; however, the impact is noticeably smaller, showing that the algorithm has some robustness to the original learning rate.

In Section 2.4 a set of additional directions which can be added to the subspace was defined; these directions can possibly enrich the subspace and improve the optimization process. Figure 6b shows the influence of those directions on the overall result. In SEBOOST-anchors a set of anchor points was added with r values of 500, 250, 100, 50 and 20. In SEBOOST-momentum a momentum vector with μ = 0.9 was used. It can be seen that using the proposed anchor directions can significantly boost the algorithm. The momentum direction is less useful, giving a small boost on its own and actually slightly hindering performance when used in conjunction with the anchor directions.

4 Conclusion

In this paper we presented SEBOOST, a technique for boosting stochastic learning algorithms via a secondary optimization process. The secondary optimization is applied in the subspace spanned by the preceding descent steps, which can be further extended with additional directions. We evaluated SEBOOST on different deep learning tasks, showing the results achieved by our methods compared to their original baselines. We believe that the flexibility of SEBOOST could make it useful for different learning tasks. One can easily change the frequency of the secondary optimization step, ranging from frequent and more risky steps to the more stable one step per epoch. Changing the baseline algorithm and the structure of the subspace allows us to further alter SEBOOST's behavior. Although this is not the focus of our work, an interesting research direction for SEBOOST is that of parallel computing. Similarly to [2, 14], one can look at a framework composed of a single master and a set of workers, where each worker optimizes a local model and the master saves a global set of parameters which is based on the workers. Inspired by SEBOOST, one can take the descent directions from each of the workers and apply a subspace optimization in the spanned subspace, allowing the master to take a more efficient step based on information from each of its workers. Another interesting direction for future work is the investigation of pruning techniques. In our work, when the subspace is fully occupied, the oldest direction is simply removed.
One might consider more advanced pruning techniques, such as eliminating the direction which contributed the least to the secondary optimization step, or even randomly removing one of the subspace directions. A good pruning technique can potentially have a significant effect on the overall result. These two ideas will be further researched in future work. Overall, we believe SEBOOST provides a promising balance between popular stochastic descent methods and more involved optimization techniques.

Acknowledgements

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme, ERC Grant agreement no. 320649, and was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).

References

[1] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[2] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223-1231, 2012.
[3] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.
[4] Michael Elad, Boaz Matalon, and Michael Zibulevsky. Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization. Applied and Computational Harmonic Analysis, 23(3):346-367, 2007.
[5] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580-587, 2014.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[7] Magnus Rudolph Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems, volume 49. NBS, 1952.
[8] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[9] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[10] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[11] Guy Narkiss and Michael Zibulevsky. Sequential subspace optimization method for large-scale unconstrained problems. Technion-IIT, Department of Electrical Engineering, 2005.
[12] Arkadi Nemirovski. Orth-method for smooth convex optimization. Izvestia AN SSSR, Transl.: Eng. Cybern. Soviet J. Comput. Syst. Sci, 2:937-947, 1982.
[13] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1139-1147, 2013.
[14] Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685-693, 2015.
[15] Michael Zibulevsky. SESOP - Sequential Subspace Optimization framework.
Video presentations, https://www.youtube.com/playlist?list=PLH39kM3nuavf2Hkr-gBAMBX7EPMB2kUqw.
[16] Michael Zibulevsky. Speeding-up convergence via sequential subspace optimization: Current state and future directions. arXiv preprint arXiv:1401.0159, 2013.
[17] Michael Zibulevsky and Michael Elad. L1-L2 optimization in signal and image processing. Signal Processing Magazine, IEEE, 27(3):76-88, 2010.
5,647
611
Learning to See Where and What: Training a Net to Make Saccades and Recognize Handwritten Characters

Gale Martin, Mosfeq Rashid, David Chapman, and James Pittman
MCC, 3500 Balcones Center Drive, Austin, Texas 78759

ABSTRACT

This paper describes an approach to integrated segmentation and recognition of hand-printed characters. The approach, called Saccade, integrates ballistic and corrective saccades (eye movements) with character recognition. A single backpropagation net is trained to make a classification decision on a character centered in its input window, as well as to estimate the distance of the current and next character from the center of the input window. The net learns to accurately estimate these distances regardless of variations in character width, spacing between characters, writing style and other factors. During testing, the system uses the net-extracted classification and distance information, along with a set of jumping rules, to jump from character to character.

The ability to read rests on multiple foundation skills. In learning how to read, people learn how to recognize individual characters centered in the visual field. They also learn how to move their eyes along a line of text, sequentially centering the visual field on successive characters. We believe that the key to developing optical character recognition (OCR) systems that can mimic human reading capabilities is to develop systems that can learn these and other skills in an integrated fashion. In this paper, we demonstrate that a backpropagation net can learn to navigate along a line of handwritten characters, as well as to recognize the characters centered in its visual field. The system, called Saccade, extends the current state of the art in OCR technology by using a single classifier to accurately and efficiently locate and recognize characters, regardless of whether they touch each other or are separate. The Saccade system was described briefly at the last NIPS conference (Martin & Rashid, 1992). In this paper, we describe it more fully and report on results demonstrating its accuracy and efficiency in recognizing handwritten digits.

The Saccade system takes a cue from the ballistic and corrective saccades (eye movements) of natural vision systems. Natural saccades make it possible to efficiently move from one informative area to another by jumping. The eye typically initiates a ballistic saccade to move the center of focus to the general area of interest, followed, if necessary, by one or more corrective saccades for fine-grained position corrections. Recognition processes are applied only at these multiple fixation points. We have copied some of these aspects in the artificial Saccade system by training a neural network to know about the locations of characters in its input window, as well as to know about the identity of the character centered in its input window. During run-time, the Saccade system accesses this information computed by the net for successive input windows, along with a set of simple jumping rules, to yield an OCR system that jumps from character to character, classifying each character in a sequence.

1 TRAINING DETAILS

As shown in Figure 1, the Saccade system has a wide input window, large enough to contain several characters.

[Figure 1 schematic: a wide input window feeding a backprop net with a 4-part output vector: no-centered-character, current character, distance to current character, distance to next character.]
Figure 1. The Saccade system uses an enlarged input window and a 4-part output vector.
Prior to training, each field image of a line of characters is labeled with the horizontal center position of each character in the field, as well as with the category of each character. During training, the input window slides horizontally across a field of characters, and at each position the contents of the input window are paired with a four-part target output vector, the values of which are computed from the labeled information. The target values answer the following four questions about the contents of the input window:

1. Is a character centered in the input window?
2. What character is closest to the center of the window?
3. How far off-center (horizontally) is the centermost character?
4. How far is the next character to the right from the center of the window?

The first node in the output vector represents the no-centered-character state. Its target value is set high (e.g., 1.0) when the center of the input window falls between characters, and set low (e.g., 0.0) when the center of the input window falls on the center of a character. When the net is trained, the value of the no-centered-character node indicates whether the input window is centered over a character, or whether a corrective saccade is needed to better center the character.

The second part of the output vector contains a node for each character category. When the center of the input window falls on a character, the target value for its corresponding node is set high; otherwise it is set low. When the net is trained, the values in these nodes are used to classify the centered character. The target values for both the no-centered-character and the character-category nodes are defined continuously across the horizontal dimension as trapezoidal functions, such that there are plateaus surrounding the off and on positions, with linearly increasing and decreasing values connecting the plateaus.

The third and fourth components of the output vector represent distance values, each encoded in a distributed fashion across multiple nodes using localized receptive fields (Moody & Darken, 1988). The first of these two parts represents the distance by which the character closest to the center of the window is off-center. The target value can be positive, indicating that the center of the window is to the left of the center of the character, or it can be negative, indicating that the center of the window has passed over the character, to the right of its center. When trained, the value of the current-character-distance set of nodes is accessed to determine the magnitude of a corrective saccade, to make a fine-grained position adjustment. The fourth component represents the distance from the center of the window to the center of the next character to the right. The target value can only be positive. When trained, the value of this set of nodes is accessed to determine the magnitude of a ballistic saccade, to jump to the next character to the right.

It is important to note that for both distance components, the maximum target value cannot exceed half the window width. The net is never trained to make a distance judgment that extends beyond its field of view, since it is not given any information about what exists outside of its input window. For example, when the center of the next character to the right is positioned outside of the current input window, the distance value is set to the maximum value of half the window width. Since the distance values vary with different characters, different writers, and, of course, at different positions with respect to a character, the net is forced to learn to use the visual characteristics particular to each window to estimate the distance values. In other words, the net does NOT simply learn average values for each of the two distance metrics. Moreover, as the results will show, the trained net does not seem to use simple density-histogram cues to estimate the distance values. It is able to reliably estimate the distance values even when characters overlap, and hence would appear as a single clump in a density histogram.
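To make the target construction concrete, here is a minimal Python/NumPy sketch of how the four-part target vector could be assembled for one window position. The plateau and ramp widths, the Gaussian receptive-field width and centers, and all function names are our own illustrative choices, not values from the paper:

```python
import numpy as np

def trapezoid(offset, plateau=2.0, ramp=6.0):
    """Trapezoidal target: 1.0 on a plateau around zero offset, ramping
    linearly down to 0.0 (plateau/ramp widths in pixels are illustrative)."""
    return float(np.clip((ramp - (abs(offset) - plateau)) / ramp, 0.0, 1.0))

def distance_code(d, centers, width=2.0):
    """Distributed encoding of a scalar distance d over localized receptive
    fields (Gaussian bumps at fixed centers), after Moody & Darken (1988)."""
    return np.exp(-0.5 * ((d - np.asarray(centers)) / width) ** 2)

def make_target(offset_curr, dist_next, char_idx, n_classes, centers, half_window):
    centered = trapezoid(offset_curr)
    no_centered = 1.0 - centered                       # part 1: no-centered-character
    category = centered * np.eye(n_classes)[char_idx]  # part 2: character categories
    # Parts 3-4: distances are clipped so the net is never asked to judge
    # anything beyond half the window width (its field of view).
    d_curr = np.clip(offset_curr, -half_window, half_window)
    d_next = min(dist_next, half_window)
    return np.concatenate(([no_centered], category,
                           distance_code(d_curr, centers),
                           distance_code(d_next, centers)))
```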
2 RUN-TIME SACCADE RULES

During run-time, the labeled values are, of course, not available. The system uses the computed values in the character-classification and distance components of the output vector, and some heuristics, to navigate horizontally along a character field, jumping from one character to the next, and occasionally making a corrective saccade to improve its ability to classify a character. When the net recognizes a character, it executes a ballistic saccade to the next character, obtaining the distance to jump by reading the next-character-distance component of the output vector. When this action fails to center a character, as indicated by a high value in the no-centered-character output node, the system executes a corrective saccade to better center the character. It obtains the distance and direction to jump by reading the current-character-distance component of the output vector. Multiple corrective saccades can be executed.
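The jumping rules lend themselves to a compact control loop. The sketch below is our own reading of them in Python: net(field, pos) is an assumed helper returning the four decoded output parts for the window centered at pos, the 0.5 threshold and corrective-saccade cap are illustrative, and since the paper does not spell out a termination rule, the one used here is a guess:

```python
def read_field(net, field, field_width, half_window,
               start=0.0, center_thresh=0.5, max_corrective=3):
    """Sketch of the run-time saccade rules (names and constants are ours)."""
    pos, labels = float(start), []
    while pos < field_width:
        no_centered, class_scores, d_curr, d_next = net(field, pos)
        # Corrective saccades: signed fine-grained adjustments until the
        # no-centered-character output says a character is centered.
        for _ in range(max_corrective):
            if no_centered < center_thresh:
                break
            pos += d_curr
            no_centered, class_scores, d_curr, d_next = net(field, pos)
        labels.append(int(class_scores.argmax()))  # classify centered character
        if d_next >= half_window:  # distance saturated: assume the field has ended
            break
        pos += d_next              # ballistic saccade to the next character
    return labels
```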
3 TESTING ON NIST HANDWRITTEN DIGIT FIELDS

We tested the performance of the system on a set of hand-printed digits collected and distributed by the National Institute of Standards and Technology (NIST). This is a database containing 273,000 samples of handwritten numerals. Each of 2100 Census workers filled in a form with 33 fields, 28 fields of which contain only handwritten digits. The scanning resolution of the samples was 300 pixels/inch. The neural net was trained on about 80,000 characters from 20,000 fields, written by 800 different individuals. The fields varied in length from 2 characters per field to 6 characters per field. The horizontal positions of each of the characters in these training-data fields were extracted by a person. The test data contained about 20,000 digits from 5,000 fields, written by a different group of 200 individuals. The test set was chosen to be this large because use of smaller test sets (e.g., 5,000 digits, 1,250 fields) yielded significant between-set variations in reported accuracy.

Each field image was preprocessed to remove the box around the field of characters, and any surrounding white space. Each field image was size-normalized, with respect to the vertical axis, to a height of 20 pixels. Aspect ratio was maintained. An input pattern generator was then passed over the field to create input windows for training the net. The input window size was 36 pixels wide and 20 pixels high. The input window scanned the field at 2-pixel increments during training. Subsequent experiments have shown that training can be sped up considerably by training on the character centers and at random points between the character centers, without causing decreased accuracy.

The backpropagation network architecture is described more fully in Martin & Rashid (1992). It has 2 hidden layers, with local, shared connections in the first hidden layer and local connections in the second hidden layer. Shared weights are not used in the second hidden layer because early experiments showed that this retards learning, presumably because extending the position invariance to second-hidden-layer nodes inhibits the net in learning the position-specific information regarding what is centered in its input window. The learning rate of the net was initially set at .05, and then successively lowered as training reached an asymptote. The momentum term was set at .9 throughout training. All nodes in the net used logistic activation functions.

Table 1 reports the test results in terms of field-based reject rates, at field error rates of 1% and 0.5%. The error rates are field-based in the sense that if the net misclassifies one character in the field, the entire field is considered mis-classified. Error rates pertain to the fields remaining after rejection. Rejections are based on placing a threshold on the acceptable distance between the highest and next-highest running activation totals. In this way, by varying the threshold, the error rate can be traded off against the percentage of rejections. In addition, recognized fields were also rejected if the number of recognized digits differed from the expected number of digits.

[Table 1: field-based reject rates for the Saccade system at field error rates of 1% and 0.5%, for field sizes of 2 to 6 digits; reject rates grow with field length, from roughly 6-9% for 2-digit fields to roughly 26-35% for 6-digit fields.]

Figure 2 presents some of the fields of connected characters that the system correctly recognized. Conventional OCR systems typically fail on connected characters because they employ an independent character segmentation stage, in which the character is isolated from its surround using features such as intervening white spaces. This character segmentation stage typically fails when characters are connected. The Saccade system goes beyond conventional OCR systems by integrating segmentation and recognition, and is thereby able to recognize touching characters. The Saccade system is also efficient in the sense that it typically jumps from one character to the next without making a corrective saccade. Corrective saccades tend to be more likely when characters are touching. In addition, there is almost always a corrective saccade for the first character in the field, since the system starts at the beginning of the field with no knowledge of the location of the character.

[Figure 2 image: sample handwritten digit fields with touching and broken strokes that were correctly read.]
Figure 2. Examples of connected and broken characters that the Saccade system correctly recognizes.
The net architecture was very similar to that of the Saccade system, except that it did not have the two distance components in the output vector. The accuracy rates of the two systems are essentially equivalent. However, the exhaustive scan version was considerably less efficient, requiring a forward pass of the netwock at every 2-pixel incremental scan position. On average it required about 5.5 forward passes per character, rather than the 1.3 focward passes per character required by the Saccade system. Over the past two years, an approach similar to the exhaustive scan method has been advanced by a number of researchers (Keeler & Rumelhart, 1992; Matan. Burges. Le CuD. & Denker. 1992). This approacb also involves convolving a network across a field image. but uses a time-delay-neural-net (1DNN). or completely local. shared weight. architecture. a smaller input window, and no explicit position labeling of characters. The IDNN approacb has algorithmic advantages over the exhaustive scan version described in the previous paragraph. because the completely shared -weight architecture enables the number of forward passes of the net to be reduced considerably. 5 CONCLUSIONS AND FUTURE WORK As stated at the beginning of this paper, we believe that the key to developing optical character recognition (OCR) systems that can mimic human reading capabilities is to develop systems that can learn the multiple foundation skills underlying human reading. This paper has repocted some progress in this regard. We have demonstrated that a relatively simple back propagation network can integrate its learning of position and category information. thereby enabling efficient navigatioo along a field of text through ballistic and corrective saccades, and accurate recognition of touching or broken characters. There is however, a long way to go before we can claim a system with capabilities similar to human reading. The present StJccade system only moves horizontally, in one dimension. Human reading operates in two-<iimensions. and in a sense. it operates in three-dimensions because it automatically operates across different scales. Human vision also employs auter matic contrast adjusunent; the Saccade system does not. Human vision has a wider field of view and employs a foveal transform. such that objects centered in the field of vision are represented at a higher resolution than objects in the periphery. This effectively expands the field of vision beyond what would be estimated simply by the size of the receptive area on the retina. As a resUlt. saccades enable very effective means of scanning a large visual area. The present artificial Saccade system has only a small field of vision. and no foveal transform. so it's saccades must necessarily be limited in size. The present system is also only oriented toward recognizing a single character centered in its input window at Training a Net to Make Saccades and Recognize Handwritten Characters a time. Human reading typically only makes one or two saccades per woo1. Finally, human reading capabilities clearly integrate recognition processes with higher-level processes, to enable the redundancies of natural language to constrain the recognition decisions. References Keeler, J, & Rumelhart, D. E. (1992) A self-aganizing integrated segmentation and recognition neural network.. In Moody, J.E., Hansoo, S.1., and Lippmann, R.P., (eds.) A.dvances in Neural Information Processing Systems 4. San Mateo. CA: Mocgan Kaufmann Publishers. Malan, 0., Burges, J. 
Matan, O., Burges, C. J. C., Le Cun, Y., and Denker, J. S. (1992) Multi-digit recognition using a space displacement neural network. In Moody, J. E., Hanson, S. J., and Lippmann, R. P. (eds.) Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers. 488-495.
Martin, G. L. & Rashid, M. (1992) Recognizing overlapping hand-printed characters by centered-object integrated segmentation and recognition. In Moody, J. E., Hanson, S. J., and Lippmann, R. P. (eds.) Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers.
Moody, J. & Darken, C. (1988) Learning with localized receptive fields. Technical Report YALEU/DCS/RR-649.
Unsupervised Domain Adaptation with Residual Transfer Networks

Mingsheng Long†, Han Zhu†, Jianmin Wang†, and Michael I. Jordan‡
†KLiss, MOE; TNList; School of Software, Tsinghua University, China
‡University of California, Berkeley, Berkeley, USA
{mingsheng,jimwang}@tsinghua.edu.cn, zhuhan10@gmail.com, jordan@berkeley.edu

Abstract

The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.

1 Introduction

Deep networks have significantly improved the state of the art for a wide variety of machine-learning problems and applications. Unfortunately, these impressive gains in performance come only when massive amounts of labeled data are available for supervised training. Since manual labeling of sufficient training data for diverse application domains on-the-fly is often prohibitive, for problems short of labeled data there is a strong incentive to develop effective algorithms that reduce the labeling burden, typically by leveraging off-the-shelf labeled data from a different but related source domain. However, this learning paradigm suffers from the shift in data distributions across different domains, which poses a major obstacle in adapting predictive models for the target task [1].

Domain adaptation [1] is machine learning under the shift between training and test distributions. A rich line of approaches to domain adaptation aim to bridge the source and target domains by learning domain-invariant feature representations without using target labels, so that the classifier learned from the source domain can be applied to the target domain. Recent studies have shown that deep networks can learn more transferable features for domain adaptation [2, 3], by disentangling explanatory factors of variations behind domains. The latest advances have been achieved by embedding domain adaptation in the pipeline of deep feature learning, which can extract domain-invariant representations [4, 5, 6, 7]. The previous deep domain adaptation methods work under the assumption that the source classifier can be directly transferred to the target domain upon the learned domain-invariant feature representations. This assumption is rather strong, as in practical applications it is often infeasible to check whether the source classifier and target classifier can be shared or not.
Hence we focus in this paper on a more general, and safe, domain adaptation scenario in which the source classifier and target classifier differ by a small perturbation function. The goal of this paper is to simultaneously learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain by embedding the adaptations of both classifiers and features in a unified deep architecture.

Motivated by the state of the art deep residual learning [8], winner of the ImageNet ILSVRC 2015 challenge, we propose a new Residual Transfer Network (RTN) approach to domain adaptation in deep networks which can simultaneously learn adaptive classifiers and transferable features. We relax the shared-classifier assumption made by previous methods and assume that the source and target classifiers differ by a small residual function. We enable classifier adaptation by plugging several layers into deep networks to explicitly learn the residual function with reference to the target classifier. In this way, the source classifier and target classifier can be bridged tightly in the back-propagation procedure. The target classifier is tailored to the target data better by exploiting the low-density separation criterion. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, and can be trained efficiently using standard back-propagation. Extensive evidence suggests that the RTN approach outperforms several state of the art methods on standard domain adaptation benchmarks.

2 Related Work

Domain adaptation [1] builds models that can bridge different domains or tasks, which mitigates the burden of manual labeling for machine learning [9, 10, 11, 12], computer vision [13, 14, 15] and natural language processing [16]. The main technical problem of domain adaptation is that the domain discrepancy in probability distributions of different domains should be formally reduced. Deep neural networks can learn abstract representations that disentangle different explanatory factors of variations behind data samples [17] and manifest invariant factors underlying different populations that transfer well from original tasks to similar novel tasks [3]. Thus deep neural networks have been explored for domain adaptation [18, 19, 15], multimodal and multi-task learning [16, 20], where significant performance gains have been witnessed relative to prior shallow transfer learning methods. However, recent advances show that deep networks can learn abstract feature representations that can only reduce, but not remove, the cross-domain discrepancy [18, 4]. Dataset shift has posed a bottleneck to the transferability of deep features, resulting in statistically unbounded risk for target tasks [21, 22]. Some recent work addresses the aforementioned problem by deep domain adaptation, which bridges the two worlds of deep learning and domain adaptation [4, 5, 6, 7].
They extend deep convolutional networks (CNNs) to domain adaptation either by adding one or multiple adaptation layers through which the mean embeddings of distributions are matched [4, 5], or by adding a fully connected subnetwork as a domain discriminator whilst the deep features are learned to confuse the domain discriminator in a domain-adversarial training paradigm [6, 7]. While performance was significantly improved, these state of the art methods may be restricted by the assumption that, under the learned domain-invariant feature representations, the source classifier can be directly transferred to the target domain. In particular, this assumption may not hold when the source classifier and target classifier cannot be shared. As theoretically studied in [22], when the combined error of the ideal joint hypothesis is large, there is no single classifier that performs well on both source and target domains, so we cannot find a good target classifier by directly transferring from the source domain.

This work is primarily motivated by He et al. [8], the winner of the ImageNet ILSVRC 2015 challenge. They present deep residual learning to ease the training of very deep networks (hundreds of layers), termed residual nets. The residual nets explicitly reformulate the layers as learning residual functions ∆F(x) with reference to the layer inputs x, instead of directly learning the unreferenced functions F(x) = ∆F(x) + x. The method focuses on standard deep learning, in which training data and test data are drawn from identical distributions, hence it cannot be directly applied to domain adaptation. In this paper, we propose to bridge the source classifier f_S(x) and target classifier f_T(x) by the residual layers such that the classifier mismatch across domains can be explicitly modeled by the residual functions ∆F(x) in a deep learning architecture. Although the idea of adapting a source classifier to the target domain by adding a perturbation function has been studied by [23, 24, 25], these methods require target labeled data to learn the perturbation function, and hence cannot be applied to unsupervised domain adaptation, the focus of this study. Another distinction is that their perturbation function is defined in the input space x, while the input to our residual function is the target classifier f_T(x), which can capture the connection between the source and target classifiers more effectively.

3 Residual Transfer Networks

In the unsupervised domain adaptation problem, we are given a source domain D_s = {(x_i^s, y_i^s)}_{i=1}^{n_s} of n_s labeled examples and a target domain D_t = {x_j^t}_{j=1}^{n_t} of n_t unlabeled examples. The source domain and target domain are sampled from different probability distributions p and q respectively, and p ≠ q. The goal of this paper is to design a deep neural network that enables learning of transfer classifiers y = f_s(x) and y = f_t(x) to close the source-target discrepancy, such that the expected target risk R_t(f_t) = E_{(x,y)∼q}[f_t(x) ≠ y] can be bounded by leveraging the source domain supervised data. The challenge of unsupervised domain adaptation arises in that the target domain has no labeled data, while the source classifier f_s trained on source domain D_s cannot be directly applied to the target domain D_t due to the distribution discrepancy p(x, y) ≠ q(x, y). The distribution discrepancy may give rise to mismatches in both features and classifiers, i.e. p(x) ≠ q(x) and f_s(x) ≠ f_t(x).
Both mismatches should be fixed by joint adaptation of features and classifiers to enable effective domain adaptation. Classifier adaptation is more difficult than feature adaptation because it is directly related to the labels, while the target domain is fully unlabeled. Note that the state of the art deep feature adaptation methods [5, 6, 7] generally assume classifiers can be shared on adapted deep features. This paper assumes f_s ≠ f_t and presents an end-to-end deep learning framework for classifier adaptation.

Deep networks [17] can learn distributed, compositional, and abstract representations for natural data such as image and text. This paper addresses unsupervised domain adaptation within deep networks for jointly learning transferable features and adaptive classifiers. We extend deep convolutional networks (CNNs), i.e. AlexNet [26], to novel residual transfer networks (RTNs) as shown in Figure 1. Denote by f_s(x) the source classifier; the empirical error of the CNN on source domain data D_s is

min_{f_s} (1/n_s) ∑_{i=1}^{n_s} L(f_s(x_i^s), y_i^s),    (1)

where L(·, ·) is the cross-entropy loss function. Based on the quantification study of feature transferability in deep convolutional networks [3], convolutional layers can learn generic features that are transferable across domains. Hence we opt to fine-tune, instead of directly adapt, the features of the convolutional layers when transferring pre-trained deep models from the source domain to the target domain.

3.1 Feature Adaptation

Deep features learned by CNNs can disentangle explanatory factors of variations behind data distributions to boost knowledge transfer [19, 17]. However, the latest literature findings reveal that deep features can reduce, but not remove, the cross-domain distribution discrepancy [3], which motivates the state of the art deep feature adaptation methods [5, 6, 7]. Deep features in standard CNNs must eventually transition from general to specific along the network, and the transferability of features and classifiers will decrease when the cross-domain discrepancy increases [3]. In other words, the shifts in the data distributions linger even after multilayer feature abstractions.

In this paper, we perform feature adaptation by matching the feature distributions of multiple layers ℓ ∈ L across domains. We reduce feature dimensions by adding a bottleneck layer fcb on top of the last feature layer of the CNN, and then fine-tune the CNN on source labeled examples such that the feature distributions of the source and target are made similar under the new feature representations in multiple layers L = {fcb, fcc}, as shown in Figure 1. To adapt multiple feature layers effectively, we propose the tensor product between features of multiple layers to perform lossless multi-layer feature fusion, i.e. z_i^s ≜ ⊗_{ℓ∈L} x_i^{sℓ} and z_j^t ≜ ⊗_{ℓ∈L} x_j^{tℓ}. We then perform feature adaptation by minimizing the Maximum Mean Discrepancy (MMD) [27] between the source and target domains using the fusion features (dubbed tensor MMD) as

min_{f_s, f_t} D_L(D_s, D_t) = ∑_{i=1}^{n_s} ∑_{j=1}^{n_s} k(z_i^s, z_j^s)/n_s² + ∑_{i=1}^{n_t} ∑_{j=1}^{n_t} k(z_i^t, z_j^t)/n_t² − 2 ∑_{i=1}^{n_s} ∑_{j=1}^{n_t} k(z_i^s, z_j^t)/(n_s n_t),    (2)

where the characteristic kernel k(z, z′) = e^{−‖vec(z) − vec(z′)‖²/b} is the Gaussian kernel function defined on the vectorization of tensors z and z′ with bandwidth parameter b. Different from DAN [5], which adapts multiple feature layers using multiple MMD penalties, this paper adapts multiple feature layers by first fusing them and then adapting the fused features.
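To make Eq. (2) concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation, which uses a linear-time MMD estimate inside Caffe): features from the layers in L are fused by an outer product and a quadratic-time Gaussian-kernel MMD is computed between the source and target fusions. The shapes and the seed are hypothetical.

```python
import numpy as np

def fuse(layers):
    """Outer-product fusion of per-example features from multiple layers.
    layers: list of arrays, each of shape (n, d_l). Returns (n, prod(d_l))."""
    z = layers[0]
    for x in layers[1:]:
        z = np.einsum('ni,nj->nij', z, x).reshape(z.shape[0], -1)
    return z

def gaussian_gram(a, b, bandwidth):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / bandwidth)

def tensor_mmd(src_layers, tgt_layers):
    zs, zt = fuse(src_layers), fuse(tgt_layers)
    # Median heuristic for the kernel bandwidth b, as described in the paper.
    pair = np.concatenate([zs, zt])
    b = np.median(((pair[:, None] - pair[None, :]) ** 2).sum(-1))
    return (gaussian_gram(zs, zs, b).mean()
            + gaussian_gram(zt, zt, b).mean()
            - 2 * gaussian_gram(zs, zt, b).mean())

# Toy check: two small feature layers per domain; the target is shifted.
rng = np.random.default_rng(0)
src = [rng.normal(size=(8, 4)), rng.normal(size=(8, 3))]
tgt = [rng.normal(size=(8, 4)) + 1.0, rng.normal(size=(8, 3))]
print(tensor_mmd(src, tgt))  # larger when the fused distributions differ
```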
The advantage of our method over DAN [5] is that it can capture full interactions across multilayer features and facilitates easier model selection, while DAN [5] needs |L| independent MMD penalties for adapting |L| layers.

Figure 1: (left) Residual Transfer Network (RTN) for domain adaptation, based on well-established architectures. Due to dataset shift, (1) the last-layer features are tailored to domain-specific structures that are not safely transferable, hence we add a bottleneck layer fcb that is adapted jointly with the classifier layer fcc by the tensor MMD module; (2) supervised classifiers are not safely transferable, hence we bridge them by the residual layers fc1–fc2 so that f_S(x) = f_T(x) + ∆f(x). (middle) The tensor MMD module for multi-layer feature adaptation. (right) The building block for deep residual learning; instead of using the residual block to model feature mappings, we use it to bridge the source classifier f_S(x) and target classifier f_T(x) with x ≜ f_T(x), F(x) ≜ f_S(x), and ∆F(x) ≜ ∆f(x).

3.2 Classifier Adaptation

As feature adaptation cannot remove the mismatch in classification models, we further perform classifier adaptation to learn transfer classifiers that make domain adaptation more effective. Although the source classifier f_s(x) and target classifier f_t(x) are different, f_s(x) ≠ f_t(x), they should be related to ensure the feasibility of domain adaptation. It is reasonable to assume that f_s(x) and f_t(x) differ only by a small perturbation function ∆f(x). Prior work on classifier adaptation [23, 24, 25] assumes that f_t(x) = f_s(x) + ∆f(x), where the perturbation ∆f(x) is a function of the input feature x. However, these methods require target labeled data to learn the perturbation function, and hence cannot be applied to unsupervised domain adaptation, where the target domain has no labeled data. How to bridge f_s(x) and f_t(x) in a single framework is a key challenge of unsupervised domain adaptation. We postulate that the perturbation function ∆f(x) can be learned jointly from the source labeled data and target unlabeled data, given that the source classifier and target classifier are properly connected.

To enable classifier adaptation, consider fitting F(x) as an original mapping by a few stacked layers (convolutional or fully connected layers) in Figure 1 (right), where x denotes the inputs to the first of these layers [8]. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e. F(x) − x. Rather than expecting stacked layers to approximate F(x), one explicitly lets these layers approximate a residual function ∆F(x) ≜ F(x) − x, with the original function being ∆F(x) + x. The operation ∆F(x) + x is performed by a shortcut connection and an element-wise addition, while the residual function is parameterized by the residual layers within each residual block. Although both forms are able to asymptotically approximate the desired functions, the ease of learning is different.
In reality, it is unlikely that identity mappings are optimal, but it should be easier to find the perturbations with reference to an identity mapping than to learn the function anew. Residual learning is the key to the successful training of very deep networks. The deep residual network (ResNet) framework [8] bridges the inputs and outputs of the residual layers by the shortcut connection (identity mapping) such that F(x) = ∆F(x) + x, which eases the learning of the residual function ∆F(x) (similar to the perturbation function across the source and target classifiers). Based on this key observation, we extend the CNN architecture (Figure 1, left) by plugging in the residual block (Figure 1, right). We reformulate the residual block to bridge the source classifier f_S(x) and target classifier f_T(x) by letting x ≜ f_T(x), F(x) ≜ f_S(x), and ∆F(x) ≜ ∆f(x). Note that f_S(x) is the output of the element-wise addition operator and f_T(x) is the output of the target-classifier layer fcc, both before the softmax activation σ(·): f_s(x) ≜ σ(f_S(x)), f_t(x) ≜ σ(f_T(x)). We can connect the source classifier and target classifier (before activation) by the residual block as

f_S(x) = f_T(x) + ∆f(x),    (3)

where we use the functions f_S and f_T before softmax for the residual block to ensure that the final classifiers f_s and f_t will output probabilities. Residual layers fc1–fc2 are fully-connected layers with c × c units, where c is the number of classes. We set the source classifier f_S as the output of the residual block to make it better trainable from the source-labeled data by deep residual learning [8]. In other words, if we set f_T as the output of the residual block, then we may be unable to learn it successfully, as we do not have target labeled data and thus standard back-propagation will not work. Deep residual learning [8] ensures valid classifier outputs, |∆f(x)| ≪ |f_T(x)| ≈ |f_S(x)|, and, more importantly, makes the perturbation function ∆f(x) dependent on both the target classifier f_T(x) (due to the functional dependency) and the source classifier f_S(x) (due to the back-propagation pipeline).

Although we successfully cast classifier adaptation into the residual learning framework, which tends to keep the target classifier f_t(x) from deviating much from the source classifier f_s(x), we still cannot guarantee that f_t(x) will fit the target-specific structures well. To address this problem, we further exploit the entropy minimization principle [28] to refine the classifier adaptation, which encourages low-density separation between classes by minimizing the entropy of the class-conditional distribution f_j^t(x_i^t) = p(y_i^t = j | x_i^t; f_t) on the target domain data D_t as

min_{f_t} (1/n_t) ∑_{i=1}^{n_t} H(f_t(x_i^t)),    (4)

where H(·) is the entropy function of the class-conditional distribution f_t(x_i^t), defined as H(f_t(x_i^t)) = −∑_{j=1}^{c} f_j^t(x_i^t) log f_j^t(x_i^t), c is the number of classes, and f_j^t(x_i^t) is the probability of predicting point x_i^t to class j. By minimizing the entropy penalty (4), the target classifier f_t(x) is made directly accessible to target-unlabeled data and will amend itself to pass through the target low-density regions.
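A minimal PyTorch sketch (an illustration under assumed layer sizes, not the released Caffe code) of the residual classifier bridge of Eq. (3) and the entropy penalty of Eq. (4) could look as follows; the feature dimension is hypothetical, and 31 classes echoes Office-31.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualClassifierBridge(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fcc = nn.Linear(feat_dim, num_classes)   # target logits f_T(x)
        self.res = nn.Sequential(                     # residual layers fc1-fc2
            nn.Linear(num_classes, num_classes), nn.ReLU(),
            nn.Linear(num_classes, num_classes))

    def forward(self, x):
        f_t = self.fcc(x)           # target classifier (pre-softmax)
        f_s = f_t + self.res(f_t)   # source classifier: f_S = f_T + delta_f
        return f_s, f_t

def entropy_penalty(target_logits):
    """Mean prediction entropy, Eq. (4); pushes the target classifier
    toward low-density separation on unlabeled target data."""
    p = F.softmax(target_logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()

# Toy forward pass: cross-entropy on source logits, entropy on target logits.
net = ResidualClassifierBridge(feat_dim=256, num_classes=31)
xs, ys = torch.randn(4, 256), torch.randint(0, 31, (4,))
xt = torch.randn(4, 256)
f_s, _ = net(xs)
_, f_t = net(xt)
loss = F.cross_entropy(f_s, ys) + 0.3 * entropy_penalty(f_t)  # gamma = 0.3
```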
3.3 Residual Transfer Network

To enable effective unsupervised domain adaptation, we propose the Residual Transfer Network (RTN), which jointly learns transferable features and adaptive classifiers by integrating deep feature learning (1), feature adaptation (2), and classifier adaptation (3)–(4) in an end-to-end deep learning framework:

min_{f_S = f_T + ∆f} (1/n_s) ∑_{i=1}^{n_s} L(f_s(x_i^s), y_i^s) + (γ/n_t) ∑_{i=1}^{n_t} H(f_t(x_i^t)) + λ D_L(D_s, D_t),    (5)

where λ and γ are the tradeoff parameters for the tensor MMD penalty (2) and the entropy penalty (4), respectively. The proposed RTN model (5) learns both transferable features and adaptive classifiers. As the classifier adaptation proposed in this paper and the feature adaptation studied in [5, 6] are tailored to adapt different layers of deep networks, they can complement each other to establish better performance. Since training deep CNNs requires a large amount of labeled data, which is prohibitive for many domain adaptation applications, we start with CNN models pre-trained on ImageNet 2012 data and fine-tune them as in [5]. The training of RTN mainly follows standard back-propagation, with the residual transfer layers for classifier adaptation as in [8]. Note that the optimization of the tensor MMD penalty (2) requires a carefully designed algorithm to establish linear-time training, as detailed in [5]. We also adopt bilinear pooling [29] to reduce the dimensions of the fusion features in tensor MMD (2).

4 Experiments

We evaluate the residual transfer network against state of the art transfer learning and deep learning methods. Code and datasets will be available at https://github.com/thuml/transfer-caffe.

4.1 Setup

Office-31 [13] is a benchmark for domain adaptation, comprising 4,110 images in 31 classes collected from three distinct domains: Amazon (A), which contains images downloaded from amazon.com, and Webcam (W) and DSLR (D), which contain images taken by a web camera and a digital SLR camera with different photographic settings, respectively. To enable unbiased evaluation, we evaluate all methods on all six transfer tasks A → W, D → W, W → D, A → D, D → A and W → A, as in [5, 7].

Office-Caltech [14] is built by selecting the 10 common categories shared by Office-31 and Caltech-256 (C), and is widely used by previous methods [14, 30]. We can build 12 transfer tasks: A → W, D → W, W → D, A → D, D → A, W → A, A → C, W → C, D → C, C → A, C → W, and C → D. While Office-31 has more categories and is more difficult for domain adaptation algorithms, Office-Caltech provides more transfer tasks to enable an unbiased look at dataset bias [31]. We adopt DeCAF7 [2] features for shallow transfer methods and original images for deep adaptation methods.

We compare with both conventional and state of the art transfer learning and deep learning methods: Transfer Component Analysis (TCA) [9], Geodesic Flow Kernel (GFK) [14], Deep Convolutional Neural Network (AlexNet [26]), Deep Domain Confusion (DDC) [4], Deep Adaptation Network (DAN) [5], and Reverse Gradient (RevGrad) [6]. TCA is a conventional transfer learning method based on MMD-regularized Kernel PCA. GFK is a manifold learning method that interpolates across an infinite number of intermediate subspaces to bridge domains. DDC is the first method that maximizes domain invariance by adding to AlexNet an adaptation layer using linear-kernel MMD [27].
DAN learns more transferable features by embedding deep features of multiple task-specific layers into reproducing kernel Hilbert spaces (RKHSs) and matching different distributions optimally using multi-kernel MMD. RevGrad improves domain adaptation by making the source and target domains indistinguishable to a discriminative domain classifier via an adversarial training paradigm.

To go deeper with the efficacy of classifier adaptation (the residual transfer block) and feature adaptation (the tensor MMD module), we perform an ablation study by evaluating several variants of RTN: (1) RTN (mmd), which adds the tensor MMD module to AlexNet; (2) RTN (mmd+ent), which further adds the entropy penalty; (3) RTN (mmd+ent+res), which further adds the residual module. Note that RTN (mmd) improves on DAN [5] by replacing the multiple MMD penalties in DAN with a single tensor MMD penalty, which facilitates much easier parameter selection.

We follow standard protocols and use all labeled source data and all unlabeled target data for domain adaptation [5]. We compare the average classification accuracy of each transfer task over three random experiments. For MMD-based methods (TCA, DDC, DAN, and RTN), we use a Gaussian kernel with bandwidth b set to the median of the pairwise squared distances on the training data, i.e. the median heuristic [27]. As there are no target labeled data in unsupervised domain adaptation, model selection proves difficult. For all methods, we perform cross-validation on labeled source data to select candidate parameters, then conduct validation on transfer task A → W by requiring one labeled example per category from target domain W as the validation set, and fix the selected parameters throughout all transfer tasks.

We implement all deep methods based on the Caffe deep-learning framework, and fine-tune from Caffe-provided models of AlexNet [26] pre-trained on ImageNet. For RTN, we fine-tune all the feature layers, and train the bottleneck layer fcb, the classifier layer fcc and the residual layers fc1–fc2 through standard back-propagation. Since these new layers are trained from scratch, we set their learning rate to be 10 times that of the other layers. We use mini-batch stochastic gradient descent (SGD) with momentum 0.9 and the learning rate annealing strategy implemented in RevGrad [6]: the learning rate is not selected through a grid search due to high computational cost; it is adjusted during SGD using the formula η_p = η_0 / (1 + αp)^β, where p is the training progress changing linearly from 0 to 1, η_0 = 0.01, α = 10 and β = 0.75, which is optimized for low error on the source domain. As RTN works stably across different transfer tasks, the MMD penalty parameter λ and the entropy penalty parameter γ are first selected on A → W and then fixed as λ = 0.3, γ = 0.3 for all other transfer tasks.
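For reference, a short sketch of the annealing schedule quoted above (the constants follow the text; the helper function itself is illustrative):

```python
# Learning-rate annealing from RevGrad [6]: eta_p = eta_0 / (1 + alpha*p)^beta.
def annealed_lr(p, eta0=0.01, alpha=10.0, beta=0.75):
    """p is the training progress, moving linearly from 0 to 1."""
    return eta0 / (1.0 + alpha * p) ** beta

for p in (0.0, 0.5, 1.0):
    print(f"progress {p:.1f}: lr = {annealed_lr(p):.5f}")
```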
4.2 Results

The classification accuracy results on the six transfer tasks of Office-31 are shown in Table 1, and the results on the twelve transfer tasks of Office-Caltech are shown in Table 2. The RTN model based on AlexNet (Figure 1) outperforms all comparison methods on most transfer tasks. In particular, RTN substantially improves the accuracy on hard transfer tasks, e.g. A → W and C → W, where the source and target domains are very different, and achieves comparable accuracy on easy transfer tasks, D → W and W → D, where the source and target domains are similar [13]. These results suggest that RTN is able to learn more adaptive classifiers and transferable features for safer domain adaptation.

Table 1: Accuracy on the Office-31 dataset using the standard protocol [5] for unsupervised adaptation ("–" marks tasks not reported for RevGrad).

Method            | A→W      | D→W      | W→D      | A→D      | D→A      | W→A      | Avg
TCA [9]           | 59.0±0.0 | 90.2±0.0 | 88.2±0.0 | 57.8±0.0 | 51.6±0.0 | 47.9±0.0 | 65.8
GFK [14]          | 58.4±0.0 | 93.6±0.0 | 91.0±0.0 | 58.6±0.0 | 52.4±0.0 | 46.1±0.0 | 66.7
AlexNet [26]      | 60.6±0.4 | 95.4±0.2 | 99.0±0.1 | 64.2±0.3 | 45.5±0.5 | 48.3±0.5 | 68.8
DDC [4]           | 61.0±0.5 | 95.0±0.3 | 98.5±0.3 | 64.9±0.4 | 47.2±0.5 | 49.4±0.6 | 69.3
DAN [5]           | 68.5±0.3 | 96.0±0.1 | 99.0±0.1 | 66.8±0.2 | 50.0±0.4 | 49.8±0.3 | 71.7
RevGrad [6]       | 73.0±0.6 | 96.4±0.4 | 99.2±0.3 | –        | –        | –        | –
RTN (mmd)         | 70.0±0.4 | 96.1±0.3 | 99.2±0.3 | 67.6±0.4 | 49.8±0.4 | 50.0±0.3 | 72.1
RTN (mmd+ent)     | 71.2±0.3 | 96.4±0.2 | 99.2±0.1 | 69.8±0.2 | 50.2±0.3 | 50.7±0.2 | 72.9
RTN (mmd+ent+res) | 73.3±0.3 | 96.8±0.2 | 99.6±0.1 | 71.0±0.2 | 50.5±0.3 | 51.0±0.1 | 73.7

Table 2: Accuracy on the Office-Caltech dataset using the standard protocol [5] for unsupervised adaptation.

Method            | A→W  | D→W  | W→D   | A→D  | D→A  | W→A  | A→C  | W→C  | D→C  | C→A  | C→W  | C→D  | Avg
TCA [9]           | 84.4 | 96.9 | 99.4  | 82.8 | 90.4 | 85.6 | 81.2 | 75.5 | 79.6 | 92.1 | 88.1 | 87.9 | 87.0
GFK [14]          | 89.5 | 97.0 | 98.1  | 86.0 | 89.8 | 88.5 | 76.2 | 77.1 | 77.9 | 90.7 | 78.0 | 77.1 | 85.5
AlexNet [26]      | 79.5 | 97.7 | 100.0 | 87.4 | 87.1 | 83.8 | 83.0 | 73.0 | 79.0 | 91.9 | 83.7 | 87.1 | 86.1
DDC [4]           | 83.1 | 98.1 | 100.0 | 88.4 | 89.0 | 84.9 | 83.5 | 73.4 | 79.2 | 91.9 | 85.4 | 88.8 | 87.1
DAN [5]           | 91.8 | 98.5 | 100.0 | 91.7 | 90.0 | 92.1 | 84.1 | 81.2 | 80.3 | 92.0 | 90.6 | 89.3 | 90.1
RTN (mmd)         | 93.2 | 98.5 | 100.0 | 91.7 | 88.0 | 90.7 | 84.0 | 81.3 | 80.4 | 91.0 | 89.8 | 90.4 | 90.0
RTN (mmd+ent)     | 93.8 | 98.6 | 100.0 | 92.9 | 93.6 | 92.7 | 87.8 | 84.8 | 83.4 | 93.2 | 96.6 | 93.9 | 92.6
RTN (mmd+ent+res) | 95.2 | 99.2 | 100.0 | 95.5 | 93.8 | 92.5 | 88.1 | 86.6 | 84.6 | 93.7 | 96.9 | 94.2 | 93.4

From the results, we can make interesting observations. (1) Standard deep-learning methods (AlexNet) perform comparably with traditional shallow transfer-learning methods given deep DeCAF7 features as input (TCA and GFK). The only difference between these two sets of methods is that AlexNet can take advantage of supervised fine-tuning on the source-labeled data, while TCA and GFK can take advantage of their domain adaptation procedures. This result confirms the current practice that supervised fine-tuning is important for transferring the source classifier to the target domain [19], and sustains the recent discovery that deep neural networks learn abstract feature representations, which can only reduce, but not remove, the cross-domain discrepancy [3]. This reveals that the two worlds of deep learning and domain adaptation cannot reinforce each other substantially in a two-step pipeline, which motivates carefully designed deep adaptation architectures to unify them. (2) Deep-transfer learning methods that reduce the domain discrepancy with domain-adaptive deep networks (DDC, DAN and RevGrad) substantially outperform standard deep learning methods (AlexNet) and traditional shallow transfer-learning methods with deep features as input (TCA and GFK). This confirms that incorporating domain-adaptation modules into deep networks can improve domain adaptation performance. By adapting source-target distributions in multiple task-specific layers using optimal multi-kernel two-sample matching, DAN performs the best in general among the prior deep-transfer learning methods. (3) The proposed residual transfer network (RTN) performs the best and sets up a new state of the art result on these benchmark datasets.
Different from all the previous deep-transfer learning methods, which only adapt the feature layers of deep neural networks to learn more transferable features, RTN further adapts the classifier layers to bridge the source and target classifiers in an end-to-end residual learning framework, which can correct the classifier mismatch more effectively.

To go deeper into the different modules of RTN, we show the results of the RTN variants in Tables 1 and 2. (1) RTN (mmd) slightly outperforms DAN, but RTN (mmd) has only one MMD penalty parameter while DAN has two or three. Thus the proposed tensor MMD module is effective for adapting multiple feature layers using a single MMD penalty, which is important for easy model selection. (2) RTN (mmd+ent) performs substantially better than RTN (mmd). This highlights the importance of entropy minimization for low-density separation, which exploits the cluster structure of target-unlabeled data such that the target classifier can be better adapted to the target data. (3) RTN (mmd+ent+res) performs the best across all variants. This highlights the importance of residual transfer of the classifier layers for learning more adaptive classifiers. This is critical, as in practical applications there is no guarantee that the source classifier and target classifier can be safely shared. It is worth noting that the entropy penalty and the residual module should be used together; otherwise the residual function tends to learn a useless zero mapping such that the source and target classifiers are nearly identical [8].

4.3 Discussion

Predictions Visualization: We visualize in Figures 2(a)–2(d) the t-SNE embeddings [2] of the predictions by DAN and RTN on transfer task A → W. We can make the following observations. (1) The predictions made by DAN in Figures 2(a)–2(b) show that the target categories are not well discriminated by the source classifier, which implies that the target data is not well compatible with the source classifier. Hence the source and target classifiers should not be assumed to be identical, which has been a common assumption made by all prior deep domain adaptation methods [4, 5, 6, 7]. (2) The predictions made by RTN in Figures 2(c)–2(d) show that the target categories are discriminated better by the target classifier (larger class-to-class distances), which suggests that residual transfer of classifiers is a reasonable extension to previous deep feature adaptation methods. RTN simultaneously learns more adaptive classifiers and more transferable features to enable effective domain adaptation.

Figure 2: Visualization: (a)-(b) t-SNE of DAN predictions; (c)-(d) t-SNE of RTN predictions.

Figure 3: (a) layer responses; (b) classifier shift; (c) sensitivity of γ (dashed lines show best baselines).

Layer Responses: We show in Figure 3(a) the means and standard deviations of the layer responses [8], which are the outputs of f_T(x) (the fcc layer), ∆f(x) (the fc2 layer), and f_S(x) (after the element-wise sum operator), respectively. This exposes the response strength of the residual functions. The results show that the residual function ∆f(x) has generally much smaller responses than the shortcut function f_T(x).
These results support our motivation that the residual functions are generally smaller than the non-residual functions, as they characterize the small gap between the source classifier and target classifier. The small residual function can be learned effectively via deep residual learning [8].

Classifier Shift: To justify that there exists a classifier shift between the source classifier f_s and the target classifier f_t, we train f_s on the source domain and f_t on the target domain, both provided with labeled data. Taking A as the source domain and W as the target domain, the weight parameters of the classifiers (e.g. softmax regression) are shown in Figure 3(b), which shows that f_s and f_t are substantially different.

Parameter Sensitivity: We check the sensitivity of the entropy parameter γ on transfer tasks A → W (31 classes) and C → W (10 classes) by varying the parameter in {0.01, 0.04, 0.07, 0.1, 0.4, 0.7, 1.0}. The results are shown in Figure 3(c), with the best results of the baselines shown as dashed lines. The accuracy of RTN first increases and then decreases as γ varies, demonstrating a desirable bell-shaped curve. This justifies our motivation of jointly learning transferable features and adaptive classifiers with the RTN model, as a good trade-off between them can promote transfer performance.

5 Conclusion

This paper presented a novel approach to unsupervised domain adaptation in deep networks, which enables end-to-end learning of adaptive classifiers and transferable features. Similar to many prior domain adaptation techniques, feature adaptation is achieved by matching the distributions of features across domains. However, unlike previous work, the proposed approach also supports classifier adaptation, which is implemented through a new residual transfer module that bridges the source classifier and target classifier. This makes the approach a good complement to existing techniques. The approach can be trained by standard back-propagation, which is scalable and can be implemented in most deep learning packages. Future work includes semi-supervised domain adaptation extensions.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61502265, 61325008), the National Key R&D Program of China (2016YFB1000701, 2015BAF32B01), and the TNList Key Project.

References

[1] S. J. Pan and Q. Yang. A survey on transfer learning. TKDE, 22(10):1345–1359, 2010.
[2] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[3] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
[4] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. 2014.
[5] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.
[6] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
[7] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Simultaneous deep transfer across domains and tasks. In ICCV, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[9] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. TNNLS, 22(2):199–210, 2011.
[10] L. Duan, I. W. Tsang, and D. Xu. Domain transfer multiple kernel learning. TPAMI, 34(3):465–479, 2012.
[11] K. Zhang, B.
Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In ICML, 2013.
[12] X. Wang and J. Schneider. Flexible transfer learning under support and model shift. In NIPS, 2014.
[13] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
[14] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[15] J. Hoffman, S. Guadarrama, E. Tzeng, R. Hu, J. Donahue, R. Girshick, T. Darrell, and K. Saenko. LSDA: Large scale detection through adaptation. In NIPS, 2014.
[16] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
[17] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. TPAMI, 35(8):1798–1828, 2013.
[18] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, 2011.
[19] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, June 2013.
[20] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning. In ICML, 2011.
[21] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[22] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. MLJ, 79(1-2):151–175, 2010.
[23] J. Yang, R. Yan, and A. G. Hauptmann. Cross-domain video concept detection using adaptive SVMs. In MM, pages 188–197. ACM, 2007.
[24] L. Duan, I. W. Tsang, D. Xu, and T.-S. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, pages 289–296. ACM, 2009.
[25] L. Duan, D. Xu, I. W. Tsang, and J. Luo. Visual event recognition in videos by learning from web data. TPAMI, 34(9):1667–1680, 2012.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[27] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 13:723–773, March 2012.
[28] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS, 2004.
[29] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In CVPR, pages 1449–1457, 2015.
[30] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[31] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
Learning What and Where to Draw

Scott Reed1,* reedscot@google.com
Zeynep Akata2 akata@mpi-inf.mpg.de
Santosh Mohan1 santoshm@umich.edu
Samuel Tenka1 samtenka@umich.edu
Bernt Schiele2 schiele@mpi-inf.mpg.de
Honglak Lee1 honglak@umich.edu
1 University of Michigan, Ann Arbor, USA
2 Max Planck Institute for Informatics, Saarbrücken, Germany
* The majority of this work was done while the first author was at U. Michigan, but completed while at DeepMind.

Abstract

Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.

1 Introduction

Generating realistic images from informal descriptions would have a wide range of applications. Modern computer graphics can already generate remarkably realistic scenes, but it still requires the substantial effort of human designers and developers to bridge the gap between high-level concepts and the end product of pixel-level details. Fully automating this creative process is currently out of reach, but deep networks have shown a rapidly-improving ability for controllable image synthesis.

In order for the image-generating system to be useful, it should support high-level control over the contents of the scene to be generated. For example, a user might provide the category of image to be generated, e.g. "bird". In the more general case, the user could provide a textual description like "a yellow bird with a black head". Compelling image synthesis with this level of control has already been demonstrated using convolutional Generative Adversarial Networks (GANs) [Goodfellow et al., 2014, Radford et al., 2016]. Variational Autoencoders also show some promise for conditional image synthesis, in particular recurrent versions such as DRAW [Gregor et al., 2015, Mansimov et al., 2016]. However, current approaches have so far only used simple conditioning variables such as a class label or a non-localized caption [Reed et al., 2016b], and did not allow for controlling where objects appear in the scene.

To generate more realistic and complex scenes, image synthesis models can benefit from incorporating a notion of localizable objects. The same types of objects can appear in many locations in different scales, poses and configurations. This fact can be exploited by separating the questions of "what" and "where" to modify the image at each step of computation. In addition to parameter efficiency, this yields the benefit of more interpretable image samples, in the sense that we can track what the network was meant to depict at each location.
For many image datasets, we have not only global annotations such as a class label but also localized annotations, such as bird part keypoints in Caltech-UCSD Birds (CUB) [Wah et al., 2011] and human joint locations in the MPII Human Pose dataset (MHP) [Andriluka et al., 2014]. For CUB, there are associated text captions, and for MHP we collected a new dataset of 3 captions per image.

Figure 1: Text-to-image examples. Locations can be specified by keypoint or bounding box. (Sample captions: "This bird is completely black."; "This bird is bright blue."; "a man in an orange jacket, black pants and a black cap wearing sunglasses skiing". Example keypoints: Beak, Belly, Right leg, Head.)

Our proposed model learns to perform location- and content-controllable image synthesis on the above datasets. We demonstrate two ways to encode spatial constraints (though there could be many more). First, we show how to condition on the coarse location of a bird by incorporating spatial masking and cropping modules into a text-conditional GAN, implemented using spatial transformers. Second, we can condition on part locations of birds and humans in the form of a set of normalized (x,y) coordinates, e.g. beak@(0.23,0.15). In the second case, the generator and discriminator use a multiplicative gating mechanism to attend to the relevant part locations.

The main contributions are as follows: (1) a novel architecture for text- and location-controllable image synthesis, yielding more realistic and higher-resolution CUB samples, (2) a text-conditional object part completion model enabling a streamlined user interface for specifying part locations, and (3) exploratory results and a new dataset for pose-conditional text-to-human image synthesis.

2 Related Work

In addition to recognizing patterns within images, deep convolutional networks have shown remarkable capability to generate images. Dosovitskiy et al. [2015] trained a deconvolutional network to generate 3D chair renderings conditioned on a set of graphics codes indicating shape, position and lighting. Yang et al. [2015] followed with a recurrent convolutional encoder-decoder that learned to apply incremental 3D rotations to generate sequences of rotated chair and face images. Oh et al. [2015] used a similar approach in order to predict action-conditional future frames of Atari games. Reed et al. [2015] trained a network to generate images that solved visual analogy problems.

The above models were all deterministic (i.e. conventional feed-forward and recurrent neural networks), trained to learn one-to-one mappings from the latent space to pixel space. Other recent works take the approach of learning probabilistic models with variational autoencoders [Kingma and Welling, 2014, Rezende et al., 2014]. Kulkarni et al. [2015] developed a convolutional variational autoencoder in which the latent space was "disentangled" into separate blocks of units corresponding to graphics codes. Gregor et al. [2015] created a recurrent variational autoencoder with attention mechanisms for reading and writing portions of the image canvas at each time step (DRAW).

In addition to VAE-based image generation models, simple and effective Generative Adversarial Networks [Goodfellow et al., 2014] have been increasingly popular. In general, GAN image samples are notable for their relative sharpness compared to samples from the contemporary VAE models. Later, class-conditional GAN [Denton et al., 2015] incorporated a Laplacian pyramid of residual images into the generator network to achieve a significant qualitative improvement.
Radford et al. [2016] proposed ways to stabilize deep convolutional GAN training and synthesize compelling images of faces and room interiors.

Spatial Transformer Networks (STN) [Jaderberg et al., 2015] have proven to be an effective visual attention mechanism, and have already been incorporated into the latest deep generative models. Eslami et al. [2016] incorporate STNs into a form of recurrent VAE called Attend, Infer, Repeat (AIR), that uses an image-dependent number of inference steps, learning to generate simple multi-object 2D and 3D scenes. Rezende et al. [2016] build STNs into a DRAW-like recurrent network with impressive sample complexity and visual generalization properties.

Larochelle and Murray [2011] proposed the Neural Autoregressive Density Estimator (NADE) to tractably model distributions over image pixels as a product of conditionals. Recently proposed spatial grid-structured recurrent networks [Theis and Bethge, 2015, van den Oord et al., 2016] have shown encouraging image synthesis results. We use GANs in our approach, but the same principle of separating "what" and "where" conditioning variables can be applied to these types of models.

3 Preliminaries

3.1 Generative Adversarial Networks

Generative adversarial networks (GANs) consist of a generator G and a discriminator D that compete in a two-player minimax game. The discriminator's objective is to correctly classify its inputs as either real or synthetic. The generator's objective is to synthesize images that the discriminator will classify as real. D and G play the following game with value function V(D, G):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where z is a noise vector drawn from e.g. a Gaussian or uniform distribution. Goodfellow et al. [2014] showed that this minimax game has a global optimum precisely when p_g = p_data, and that when G and D have enough capacity, p_g converges to p_data.

To train a conditional GAN, one can simply provide both the generator and discriminator with the additional input c as in [Denton et al., 2015, Radford et al., 2016], yielding G(z, c) and D(x, c). For an input tuple (x, c) to be interpreted as "real", the image x must not only look realistic but also match its context c. In practice G is trained to maximize log D(G(z, c)).

3.2 Structured joint embedding of visual descriptions and images

To encode visual content from text descriptions, we use a convolutional and recurrent text encoder to learn a correspondence function between images and text features, following the approach of Reed et al. [2016a] (and closely related to Kiros et al. [2014]). Sentence embeddings are learned by optimizing the following structured loss:

$$\frac{1}{N} \sum_{n=1}^{N} \Delta(y_n, f_v(v_n)) + \Delta(y_n, f_t(t_n)) \quad (1)$$

where {(v_n, t_n, y_n), n = 1, ..., N} is the training data set, Δ is the 0-1 loss, v_n are the images, t_n are the corresponding text descriptions, and y_n are the class labels. f_v and f_t are defined as

$$f_v(v) = \arg\max_{y \in \mathcal{Y}} \mathbb{E}_{t \sim \mathcal{T}(y)}[\phi(v)^T \varphi(t)], \qquad f_t(t) = \arg\max_{y \in \mathcal{Y}} \mathbb{E}_{v \sim \mathcal{V}(y)}[\phi(v)^T \varphi(t)] \quad (2)$$

where φ is the image encoder (e.g. a deep convolutional network), ϕ is the text encoder, T(y) is the set of text descriptions of class y and likewise V(y) for images. Intuitively, the text encoder learns to produce a higher compatibility score with images of the corresponding class compared to any other class, and vice versa. To train the text encoder we minimize a surrogate loss related to Equation 1 (see Akata et al. [2015] for details).
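To make the two-player objective concrete, the following is a minimal sketch of one conditional-GAN update in the sense of Section 3.1, assuming PyTorch; the tiny fully-connected G and D, the layer widths, and the optimizer settings are illustrative placeholders, not the GAWWN architecture.

```python
# Minimal sketch of one conditional-GAN update (Section 3.1), assuming PyTorch.
# G, D and all sizes below are hypothetical stand-ins, not the GAWWN networks.
import torch
import torch.nn as nn

Z, T, X_DIM = 64, 128, 256  # noise, text-embedding and flattened-image sizes

G = nn.Sequential(nn.Linear(Z + T, 256), nn.ReLU(), nn.Linear(256, X_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(X_DIM + T, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(x_real, c):
    """One minimax step: D learns to score (x_real, c) high and (G(z,c), c) low;
    G maximizes log D(G(z,c), c), the non-saturating form used in practice."""
    b = x_real.size(0)
    x_fake = G(torch.cat([torch.randn(b, Z), c], dim=1))
    # discriminator update
    loss_d = bce(D(torch.cat([x_real, c], dim=1)), torch.ones(b, 1)) + \
             bce(D(torch.cat([x_fake.detach(), c], dim=1)), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator update
    loss_g = bce(D(torch.cat([x_fake, c], dim=1)), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```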
We modify the approach of Reed et al. [2016a] in a few ways: using a char-CNN-GRU [Cho et al., 2014] instead of char-CNN-RNN, and estimating the expectations in Equation 2 using the average of 4 sampled captions per image instead of 1.

4 Generative Adversarial What-Where Networks (GAWWN)

In the following sections we describe the bounding-box- and keypoint-conditional GAWWN models.

4.1 Bounding-box-conditional text-to-image model

Figure 2 shows a sketch of the model, which can be understood by starting from input noise z ∈ R^Z and text embedding t ∈ R^T (extracted from the caption by the pre-trained encoder ϕ(t)²) and following the arrows. Below we walk through each step.

First, the text embedding (shown in green) is replicated spatially to form an M × M × T feature map, and then warped spatially to fit into the normalized bounding box coordinates. The feature map entries outside the box are all zeros.³ The diagram shows a single object, but in the case of multiple localized captions, these feature maps are averaged. Then, convolution and pooling operations are applied to reduce the spatial dimension back to 1 × 1. Intuitively, this feature vector encodes the coarse spatial structure in the image, and we concatenate this with the noise vector z.

² Both φ and ϕ could be trained jointly with the GAN, but pre-training allows us to use the best available image features from higher resolution images (224 × 224) and speeds up GAN training.
³ For details of how to apply this warping see equation 3 in [Jaderberg et al., 2015].

Figure 2: GAWWN with bounding box location control.

In the next stage, the generator branches into local and global processing stages. The global pathway is just a series of stride-2 deconvolutions to increase spatial dimension from 1 × 1 to M × M. In the local pathway, upon reaching spatial dimension M × M, a masking operation is applied so that regions outside the object bounding box are set to 0. Finally, the local and global pathways are merged by depth concatenation. A final series of deconvolution layers are used to reach the final spatial dimension. In the final layer we apply a Tanh nonlinearity to constrain the outputs to [−1, 1].

In the discriminator, the text is similarly replicated spatially to form an M × M × T tensor. Meanwhile the image is processed in local and global pathways. In the local pathway, the image is fed through stride-2 convolutions down to the M × M spatial dimension, at which point it is depth-concatenated with the text embedding tensor. The resulting tensor is spatially cropped to within the bounding box coordinates, and further processed convolutionally until the spatial dimension is 1 × 1. The global pathway consists simply of convolutions down to a vector, with additive contribution of the original text embedding t. Finally, the local and global pathway output vectors are combined additively and fed into the final layer producing the scalar discriminator score.
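As an illustration of the "replicate spatially, then crop to the bounding box" step above, here is a minimal sketch assuming NumPy; for simplicity it zeroes entries outside the box directly rather than using a spatial-transformer warp, and all sizes are placeholders.

```python
# Minimal sketch of spatial replication + bbox masking (Section 4.1), assuming NumPy.
import numpy as np

def replicate_and_mask(t, bbox, M=16):
    """t: (T,) text embedding; bbox: (x0, y0, x1, y1) in normalized [0, 1] coords.
    Returns an (M, M, T) feature map equal to t inside the box and 0 outside."""
    fmap = np.tile(t[None, None, :], (M, M, 1))          # spatial replication
    x0, y0, x1, y1 = (np.asarray(bbox) * M).astype(int)  # box in grid cells
    mask = np.zeros((M, M, 1))
    mask[y0:y1, x0:x1, :] = 1.0                          # 1 inside the bbox
    return fmap * mask                                   # zeros outside the box

fmap = replicate_and_mask(np.random.randn(128), bbox=(0.25, 0.25, 0.75, 0.75))
print(fmap.shape, fmap[0, 0].any(), fmap[8, 8].any())    # (16, 16, 128) False True
```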
4.2 Keypoint-conditional text-to-image model

Figure 3 shows the keypoint-conditional version of the GAWWN, described in detail below.

Figure 3: Text- and keypoint-conditional GAWWN. Keypoint grids are shown as 4 × 4 for clarity of presentation, but in our experiments we used 16 × 16.

The location keypoints are encoded into an M × M × K spatial feature map in which the channels correspond to the part; i.e. head in channel 1, left foot in channel 2, and so on. The keypoint tensor is fed into several stages of the network. First, it is fed through stride-2 convolutions to produce a vector that is concatenated with noise z and text embedding t. The resulting vector provides coarse information about content and part locations. Second, the keypoint tensor is flattened into a binary matrix with a 1 indicating presence of any part at a particular spatial location, then replicated depth-wise into a tensor of size M × M × H.

In the local and global pathways, the noise-text-keypoint vector is fed through deconvolutions to produce another M × M × H tensor. The local pathway activations are gated by pointwise multiplication with the keypoint tensor of the same size. Finally, the original M × M × K keypoint tensor is depth-concatenated with the local and global tensors, and processed with further deconvolutions to produce the final image. Again a Tanh nonlinearity is applied.

In the discriminator, the text embedding t is fed into two stages. First, it is combined additively with the global pathway that processes the image convolutionally producing a vector output. Second, it is spatially replicated to M × M and then depth-concatenated with another M × M feature map in the local pathway. This local tensor is then multiplicatively gated with the binary keypoint mask exactly as in the generator, and the resulting tensor is depth-concatenated with the M × M × T keypoints. The local pathway is fed into further stride-2 convolutions to produce a vector, which is then additively combined with the global pathway output vector, and then into the final layer producing the scalar discriminator score.
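The keypoint encoding and the multiplicative gating described above can be sketched in a few lines; a minimal version assuming NumPy (sizes and keypoints are placeholders):

```python
# Minimal sketch of keypoint encoding and gating (Section 4.2), assuming NumPy.
import numpy as np

def keypoint_tensor(keypoints, M=16):
    """keypoints: (K, 3) array of normalized (x, y, v). Returns an (M, M, K) map
    with a 1 at each visible part's grid cell; channel k corresponds to part k."""
    K = keypoints.shape[0]
    kmap = np.zeros((M, M, K))
    for k, (x, y, v) in enumerate(keypoints):
        if v > 0:
            r, c = min(int(y * M), M - 1), min(int(x * M), M - 1)
            kmap[r, c, k] = 1.0
    return kmap

def gate_local_pathway(local_feats, kmap):
    """Pointwise-multiplicative gating: activations survive only at spatial
    locations where at least one part is present (max over the K channels)."""
    gate = kmap.max(axis=2, keepdims=True)    # (M, M, 1) binary mask
    return local_feats * gate                 # broadcast over feature channels

kps = np.array([[0.23, 0.15, 1.0], [0.70, 0.80, 1.0], [0.50, 0.50, 0.0]])
gated = gate_local_pathway(np.random.randn(16, 16, 32), keypoint_tensor(kps))
```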
4.3 Conditional keypoint generation model

From a user-experience perspective, it is not optimal to require users to enter every single keypoint of the parts of the object they wish to be drawn (e.g. for birds our model would require 15). Therefore, it would be very useful to have access to all of the conditional distributions of unobserved keypoints given a subset of observed keypoints and the text description. A similar problem occurs in data imputation, e.g. filling in missing records or inpainting image occlusions. However, in our case we want to draw convincing samples rather than just fill in the most likely values. Conditioned on e.g. only the position of a bird's beak, there could be several very different plausible poses that satisfy the constraint. Therefore, a simple approach such as training a sparse autoencoder over keypoints would not suffice. A DBM [Salakhutdinov and Hinton, 2009] or variational autoencoder [Rezende et al., 2014] could in theory work, but for simplicity we demonstrate the results achieved by applying the same generic GAN framework to this problem.

The basic idea is to use the assignment of each object part as observed (i.e. conditioning variable) or unobserved as a gating mechanism. Denote the keypoints for a single image as k_i := {x_i, y_i, v_i}, i = 1, ..., K, where x and y indicate the row and column position, respectively, and v is a bit set to 1 if the part is visible and 0 otherwise. If the part is not visible, x and y are also set to 0. Let k ∈ [0,1]^{K×3} encode the keypoints into a matrix. Let the conditioning variables (e.g. a beak position specified by the user) be encoded into a vector of switch units s ∈ {0,1}^K, with the i-th entry set to 1 if the i-th part is a conditioning variable and 0 otherwise. We can formulate the generator network over keypoints G_k, conditioned on text t and a subset of keypoints k, s, as follows:

$$G_k(z, t, k, s) := s \odot k + (1 - s) \odot f(z, t, k) \quad (3)$$

where ⊙ denotes pointwise multiplication and f : R^{Z+T+3K} → R^{3K} is an MLP. In practice we concatenated z, t and flattened k and chose f to be a 3-layer fully-connected network. The discriminator D_k learns to distinguish real keypoints and text (k_real, t_real) from synthetic. In order for G_k to capture all of the conditional distributions over keypoints, during training we randomly sample switch units s in each mini-batch. Since we would like to usually specify 1 or 2 keypoints, in our experiments we set the "on" probability to 0.1. That is, each of the 15 bird parts only had a 10% chance of acting as a conditioning variable for a given training image.
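A minimal sketch of this switch-unit generator, assuming NumPy; the MLP f is replaced by a single random linear map purely for illustration:

```python
# Minimal sketch of the conditional keypoint generator of Eq. (3), assuming NumPy.
import numpy as np

K, Z, T = 15, 64, 128
rng = np.random.default_rng(0)
W = rng.standard_normal((3 * K, Z + T + 3 * K)) * 0.01   # placeholder for the MLP f

def G_k(z, t, k, s):
    """k: (K, 3) keypoint matrix; s: (K,) switch bits, 1 = conditioning variable.
    Returns s * k + (1 - s) * f(z, t, k), elementwise over the K parts."""
    f = (W @ np.concatenate([z, t, k.ravel()])).reshape(K, 3)
    return s[:, None] * k + (1 - s[:, None]) * f

# During training the switches are resampled per mini-batch with P(on) = 0.1,
# so each part acts as a conditioning variable roughly 10% of the time.
s = (rng.random(K) < 0.1).astype(float)
sample = G_k(rng.standard_normal(Z), rng.standard_normal(T), rng.random((K, 3)), s)
```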
5 Experiments

In this section we describe our experiments on generating images from text descriptions on the Caltech-UCSD Birds (CUB) and MPII Human Pose (MHP) datasets.

CUB [Wah et al., 2011] has 11,788 images of birds belonging to one of 200 different species. We also use the text dataset from Reed et al. [2016a] including 10 single-sentence descriptions per bird image. Each image also includes the bird location via its bounding box, and keypoint (x,y) coordinates for each of 15 bird parts. Since not all parts are visible in each image, the keypoint data also provides an additional bit per part indicating whether the part can be seen.

MHP [Andriluka et al., 2014] has 25K images with 410 different common activities. For each image, we collected 3 single-sentence text descriptions using Mechanical Turk. We asked the workers to describe the most distinctive aspects of the person and the activity they are engaged in, e.g. "a man in a yellow shirt preparing to swing a golf club". Each image has potentially multiple sets of (x,y) keypoints for each of the 16 joints. During training we filtered out images with multiple people, and for the remaining 19K images we cropped the image to the person's bounding box.

We encoded the captions using a pre-trained char-CNN-GRU as described in [Reed et al., 2016a]. During training, the 1024-dimensional text embedding for a given image was taken to be the average of four randomly-sampled caption encodings corresponding to that image. Sampling multiple captions per image provides further information required to draw the object. At test time one can average together any number of description embeddings, including a single caption. For both CUB and MHP, we trained our GAWWN using the ADAM solver with batch size 16 and learning rate 0.0002 (see Alg. 1 in [Reed et al., 2016b] for the conditional GAN training algorithm). The models were trained on all categories and we show samples on a set of held-out captions.

For the spatial transformer module, we used a Torch implementation provided by Oquab [2016]. Our GAN implementation is loosely based on dcgan.torch.⁴

In experiments we analyze how accurately the GAWWN samples reflect the text and location constraints. First we control the location of the bird by interpolation via bounding boxes and keypoints. We consider both the case of (1) ground-truth keypoints from the data set, and (2) synthetic keypoints generated by our model, conditioned on the text. Case (2) is advantageous because it requires less effort from a hypothetical user (i.e. entering 15 keypoint locations). We then compare our CUB results to representative samples from the previous work. Finally, we show samples on text- and pose-conditional generation of images of human actions.

⁴ https://github.com/soumith/dcgan.torch

5.1 Controlling bird location via bounding boxes

We first demonstrate sampling from the text-conditional model while varying the bird location. Since location is specified via bounding box coordinates, we can also control the size and aspect ratio of the bird. This is shown in Figure 4 by interpolating the bounding box coordinates while at the same time fixing the text and noise conditioning variables.

Figure 4: Controlling the bird's position using bounding box coordinates and previously unseen text.

With the noise vector z fixed in every set of three frames, the background is usually similar but not perfectly invariant. Interestingly, as the bounding box coordinates are changed, the direction the bird faces does not change. This suggests that the model learns to use the noise distribution to capture some aspects of the background and also non-controllable aspects of "where" such as direction.

5.2 Controlling individual part locations via keypoints

In this section we study the case of text-conditional image generation with keypoints fixed to the ground-truth. This can give a sense of the performance upper bound for the text-to-image pipeline, because synthetic keypoints can be no more realistic than the ground-truth. We take a real image and its keypoint annotations from the CUB dataset, and a held-out text description, and draw samples conditioned on this information.

Figure 5: Bird generation conditioned on fixed ground-truth keypoints (overlaid in blue) and previously unseen text. Each sample uses a different random noise vector.

Figure 5 shows several image samples that accurately reflect the text and keypoint constraints. More examples including success and failure are included in the supplement. We observe that the bird pose respects the keypoints and is invariant across the samples. The background and other small details, such as thickness of the tree branch or the background color palette, do change with the noise.
Figure 6: Controlling the bird's position using keypoint coordinates. Here we only interpolated the beak and tail positions, and sampled the rest conditioned on these two.

The GAWWN model can also use keypoints to shrink, translate and stretch objects, as shown in Figure 6. We chose to specify beak and tail positions, because in most cases these define an approximate bounding box around the bird. Unlike in the case of bounding boxes, we can now control which way the bird is pointing; note that here all birds face left, whereas when we use bounding boxes (Figure 4) the orientation is random. Elements of the scene, even outside of the controllable location, adjust in order to be coherent with the bird's position in each frame, although in each set of three frames we use the same noise vector z.

5.3 Generating both bird keypoints and images from text alone

Although ground-truth keypoint locations lead to visually plausible results as shown in the previous sections, the keypoints are costly to obtain. In Figure 7, we provide examples of accurate samples using generated keypoints. Compared to ground-truth keypoints, on average we did not observe degradation in quality. More examples for each regime are provided in the supplement.

Figure 7: Keypoint- and text-conditional bird generation in which the keypoints are generated conditioned on unseen text. The small blue boxes indicate the generated keypoint locations.

5.4 Comparison to previous work

In this section we compare our results with previous text-to-image results on CUB. In Figure 8 we show several representative examples that we cropped from the supplementary material of [Reed et al., 2016b]. We compare against the actual ground-truth and several variants of GAWWN.

We observe that the 64 × 64 samples from [Reed et al., 2016b] mostly reflect the text description, but in some cases lack clearly defined parts such as a beak. When the keypoints are zeroed during training, our GAWWN architecture actually fails to generate any plausible images. This suggests that providing additional conditioning variables in the form of location constraints is helpful for learning to generate high-resolution images. Overall, the sharpest and most accurate results can be seen in the 128 × 128 samples from our GAWWN with real or synthetic keypoints (bottom two rows).

5.5 Beyond birds: generating images of humans

Here we apply our model to generating images of humans conditioned on a description of their appearance and activity, and also on their approximate pose. This is a much more challenging task than generating images of birds due to the larger variety of scenes and pose configurations.
Figure 8: Comparison of GAWWN to GAN-INT-CLS from Reed et al. [2016b] and also the ground-truth images. Rows: ground-truth image and text caption; GAN-INT-CLS [Reed et al., 2016b]; GAWWN trained without keypoints; GAWWN with keypoints given; GAWWN with keypoints generated. For the ground-truth row, the first entry corresponds directly to the caption, and the second two entries are sampled from the same species.

Figure 9: Generating humans. Both the keypoints and the image are generated from unseen text.

The human image samples shown in Figure 9 tend to be much blurrier compared to the bird images, but in many cases bear a clear resemblance to the text query and the pose constraints. Simple captions involving skiing, golf and yoga tend to work, but complex descriptions and unusual poses (e.g. upside-down person on a trampoline) remain especially challenging. We also generate videos by (1) extracting pose keypoints from a pre-trained pose estimator from several YouTube clips, and (2) combining these keypoint trajectories with a text query, fixing the noise vector z over time and concatenating the samples (see supplement).

6 Discussion

In this work we showed how to generate images conditioned on both informal text descriptions and object locations. Locations can be accurately controlled by either bounding box or a set of part keypoints. On CUB, the addition of a location constraint allowed us to accurately generate compelling 128 × 128 images, whereas previous models could only generate 64 × 64. Furthermore, this location conditioning does not constrain us during test time, because we can also learn a text-conditional generative model of part locations, and simply generate them at test time. An important lesson here is that decomposing the problem into easier subproblems can help generate realistic high-resolution images. In addition to making the overall text-to-image pipeline easier to train with a GAN, it also yields additional ways to control image synthesis. In future work, it may be promising to learn the object or part locations in an unsupervised or weakly supervised way. In addition, we show the first text-to-human image synthesis results, but performance on this task is clearly far from saturated and further architectural advances will be required to solve it.

Acknowledgements

This work was supported in part by NSF CAREER IIS-1453651, ONR N00014-13-1-0762, and a Sloan Research Fellowship.

References

Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele. Evaluation of output embeddings for fine-grained image classification. In CVPR, 2015.
M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In CVPR, June 2014.
K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio.
On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation, 2014.
E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In NIPS, 2016.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. In ACL, 2014.
T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, 2015.
H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov. Generating images from captions with attention. In ICLR, 2016.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.
Q. Oquab. Modules for spatial transformer networks. github.com/qassemoquab/stnbhwd, 2016.
A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
S. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In NIPS, 2015.
S. Reed, Z. Akata, H. Lee, and B. Schiele. Learning deep representations for fine-grained visual descriptions. In CVPR, 2016a.
S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text-to-image synthesis. In ICML, 2016b.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. In ICML, 2016.
R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.
A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
J. Yang, S. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
Deep Learning without Poor Local Minima Kenji Kawaguchi Massachusetts Institute of Technology kawaguch@mit.edu Abstract In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. With no unrealistic assumption, we first prove the following statements for the squared loss function of deep linear neural networks with any depth and any widths: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) there exist ?bad? saddle points (where the Hessian has no negative eigenvalue) for the deeper networks (with more than three layers), whereas there is no bad saddle point for the shallow networks (with three layers). Moreover, for deep nonlinear neural networks, we prove the same four statements via a reduction to a deep linear model under the independence assumption adopted from recent work. As a result, we present an instance, for which we can answer the following question: how difficult is it to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima). Furthermore, the mathematically proven existence of bad saddle points for deeper models would suggest a possible open problem. We note that even though we have advanced the theoretical foundations of deep learning and non-convex optimization, there is still a gap between theory and practice. 1 Introduction Deep learning has been a great practical success in many fields, including the fields of computer vision, machine learning, and artificial intelligence. In addition to its practical success, theoretical results have shown that deep learning is attractive in terms of its generalization properties (Livni et al., 2014; Mhaskar et al., 2016). That is, deep learning introduces good function classes that may have a low capacity in the VC sense while being able to represent target functions of interest well. However, deep learning requires us to deal with seemingly intractable optimization problems. Typically, training of a deep model is conducted via non-convex optimization. Because finding a global minimum of a general non-convex function is an NP-complete problem (Murty & Kabadi, 1987), a hope is that a function induced by a deep model has some structure that makes the nonconvex optimization tractable. Unfortunately, it was shown in 1992 that training a very simple neural network is indeed NP-hard (Blum & Rivest, 1992). In the past, such theoretical concerns in optimization played a major role in shrinking the field of deep learning. That is, many researchers instead favored classical machining learning models (with or without a kernel approach) that require only convex optimization. While the recent great practical successes have revived the field, we do not yet know what makes optimization in deep learning tractable in theory. In this paper, as a step toward establishing the optimization theory for deep learning, we prove a conjecture noted in (Goodfellow et al., 2016) for deep linear networks, and also address an open problem announced in (Choromanska et al., 2015b) for deep nonlinear networks. Moreover, for 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
both the conjecture and the open problem, we prove more general and tighter statements than those previously given (in the ways explained in each section).

2 Deep linear neural networks

Given the absence of a theoretical understanding of deep nonlinear neural networks, Goodfellow et al. (2016) noted that it is beneficial to theoretically analyze the loss functions of simpler models, i.e., deep linear neural networks. The function class of a linear multilayer neural network only contains functions that are linear with respect to inputs. However, their loss functions are non-convex in the weight parameters and thus nontrivial. Saxe et al. (2014) empirically showed that the optimization of deep linear models exhibits similar properties to those of the optimization of deep nonlinear models. Ultimately, for theoretical development, it is natural to start with linear models before working with nonlinear models (as noted in Baldi & Lu, 2012), and yet even for linear models, the understanding is scarce when the models become deep.

2.1 Model and notation

We begin by defining the notation. Let H be the number of hidden layers, and let (X, Y) be the training data set, with Y ∈ R^{d_y×m} and X ∈ R^{d_x×m}, where m is the number of data points. Here, d_y ≥ 1 and d_x ≥ 1 are the number of components (or dimensions) of the outputs and inputs, respectively. Let Σ = Y X^T (X X^T)^{−1} X Y^T. We denote the model (weight) parameters by W, which consists of the entries of the parameter matrices corresponding to each layer: W_{H+1} ∈ R^{d_y×d_H}, ..., W_k ∈ R^{d_k×d_{k−1}}, ..., W_1 ∈ R^{d_1×d_x}. Here, d_k represents the width of the k-th layer, where the 0-th layer is the input layer and the (H+1)-th layer is the output layer (i.e., d_0 = d_x and d_{H+1} = d_y). Let I_{d_k} be the d_k × d_k identity matrix. Let p = min(d_H, ..., d_1) be the smallest width of a hidden layer. We denote the (j,i)-th entry of a matrix M by M_{j,i}. We also denote the j-th row vector of M by M_{j,·} and the i-th column vector of M by M_{·,i}.

We can then write the output of a feedforward deep linear model, Y(W, X) ∈ R^{d_y×m}, as Y(W, X) = W_{H+1} W_H W_{H−1} ··· W_2 W_1 X. We consider one of the most widely used loss functions, squared error loss:

$$\bar{L}(W) = \frac{1}{2} \sum_{i=1}^{m} \|Y(W, X)_{\cdot,i} - Y_{\cdot,i}\|_2^2 = \frac{1}{2} \|Y(W, X) - Y\|_F^2,$$

where ‖·‖_F is the Frobenius norm. Note that (2/m) L̄(W) is the usual mean squared error, for which all of our results hold as well, since multiplying L̄(W) by a constant in W results in an equivalent optimization problem.
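To fix notation, a minimal NumPy sketch of the model and loss just defined (all sizes are arbitrary placeholders):

```python
# Minimal sketch of the deep linear model and squared loss of Section 2.1 (NumPy).
import numpy as np

rng = np.random.default_rng(0)
dx, dy, m, widths = 5, 3, 100, [4, 4]               # hypothetical sizes; H = 2
dims = [dx] + widths + [dy]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
X = rng.standard_normal((dx, m))
Y = rng.standard_normal((dy, m))

def Y_hat(Ws, X):
    """Y(W, X) = W_{H+1} W_H ... W_1 X."""
    out = X
    for W in Ws:
        out = W @ out
    return out

def loss(Ws, X, Y):
    """L-bar(W) = (1/2) * ||Y(W, X) - Y||_F^2."""
    return 0.5 * np.linalg.norm(Y_hat(Ws, X) - Y, "fro") ** 2

print(loss(Ws, X, Y))
```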
2.2 Background

Recently, Goodfellow et al. (2016) remarked that when Baldi & Hornik (1989) proved Proposition 2.1 for shallow linear networks, they stated Conjecture 2.2 without proof for deep linear networks.

Proposition 2.1 (Baldi & Hornik, 1989: shallow linear network) Assume that H = 1 (i.e., Y(W, X) = W_2 W_1 X), assume that XX^T and XY^T are invertible, assume that Σ has d_y distinct eigenvalues, and assume that p < d_x, p < d_y and d_y = d_x (e.g., an autoencoder). Then, the loss function L̄(W) has the following properties: (i) It is convex in each matrix W_1 (or W_2) when the other W_2 (or W_1) is fixed. (ii) Every local minimum is a global minimum.

Conjecture 2.2 (Baldi & Hornik, 1989: deep linear network) Assume the same set of conditions as in Proposition 2.1 except for H = 1. Then, the loss function L̄(W) has the following properties: (i) For any k ∈ {1, ..., H+1}, it is convex in each matrix W_k when for all k′ ≠ k, W_{k′} is fixed. (ii) Every local minimum is a global minimum.

Baldi & Lu (2012) recently provided a proof for Conjecture 2.2 (i), leaving the proof of Conjecture 2.2 (ii) for future work. They also noted that the case of p ≥ d_x = d_y is of interest, but requires further analysis, even for a shallow network with H = 1. An informal discussion of Conjecture 2.2 can be found in (Baldi, 1989). In Appendix D, we provide a more detailed discussion of this subject.

2.3 Results

We now state our main theoretical results for deep linear networks, which imply Conjecture 2.2 (ii) as well as obtain further information regarding the critical points with more generality.

Theorem 2.3 (Loss surface of deep linear networks) Assume that XX^T and XY^T are of full rank with d_y ≤ d_x and Σ has d_y distinct eigenvalues. Then, for any depth H ≥ 1 and for any layer widths and any input-output dimensions d_y, d_H, d_{H−1}, ..., d_1, d_x ≥ 1 (the widths can arbitrarily differ from each other and from d_y and d_x), the loss function L̄(W) has the following properties: (i) It is non-convex and non-concave. (ii) Every local minimum is a global minimum. (iii) Every critical point that is not a global minimum is a saddle point. (iv) If rank(W_H ··· W_2) = p, then the Hessian at any saddle point has at least one (strictly) negative eigenvalue.¹

Corollary 2.4 (Effect of deepness on the loss surface) Assume the same set of conditions as in Theorem 2.3 and consider the loss function L̄(W). For three-layer networks (i.e., H = 1), the Hessian at any saddle point has at least one (strictly) negative eigenvalue. In contrast, for networks deeper than three layers (i.e., H ≥ 2), there exist saddle points at which the Hessian does not have any negative eigenvalue.

The assumptions of having full rank and distinct eigenvalues in the training data matrices in Theorem 2.3 are realistic and practically easy to satisfy, as discussed in previous work (e.g., Baldi & Hornik, 1989). In contrast to related previous work (Baldi & Hornik, 1989; Baldi & Lu, 2012), we do not assume the invertibility of XY^T, p < d_x, p < d_y nor d_y = d_x. In Theorem 2.3, p ≥ d_x is allowed, as well as many other relationships among the widths of the layers. Therefore, we successfully proved Conjecture 2.2 (ii) and a more general statement. Moreover, Theorem 2.3 (iv) and Corollary 2.4 provide additional information regarding the important properties of saddle points.

Theorem 2.3 presents an instance of a deep model that would be tractable to train with direct greedy optimization, such as gradient-based methods. If there are "poor" local minima with large loss values everywhere, we would have to search the entire space,² the volume of which increases exponentially with the number of variables. This is a major cause of NP-hardness for non-convex optimization. In contrast, if there are no poor local minima as Theorem 2.3 (ii) states, then saddle points are the main remaining concern in terms of tractability.³ Because the Hessian of L̄(W) is Lipschitz continuous, if the Hessian at a saddle point has a negative eigenvalue, it starts appearing as we approach the saddle point. Thus, Theorem 2.3 and Corollary 2.4 suggest that for 1-hidden layer networks, training can be done in polynomial time with a second order method or even with a modified stochastic gradient descent method, as discussed in (Ge et al., 2015). For deeper networks, Corollary 2.4 states that there exist "bad" saddle points in the sense that the Hessian at the point has no negative eigenvalue.

¹ If H = 1, to be succinct, we define W_H ··· W_2 = W_1 ··· W_2 ≜ I_{d_1}, with a slight abuse of notation.
² Typically, we do this by assuming smoothness in the values of the loss function.
³ Other problems such as ill-conditioning can make it difficult to obtain a fast convergence rate.
However, we know exactly when this can happen from Theorem 2.3 (iv) in our deep models. We leave the development of efficient methods to deal with such a bad saddle point in general deep models as an open problem.

3 Deep nonlinear neural networks

Now that we have obtained a comprehensive understanding of the loss surface of deep linear models, we discuss deep nonlinear models. For a practical deep nonlinear neural network, our theoretical results so far for the deep linear models can be interpreted as the following: depending on the nonlinear activation mechanism and architecture, training would not be arbitrarily difficult. While theoretical formalization of this intuition is left to future work, we address a recently proposed open problem for deep nonlinear networks in the rest of this section.

3.1 Model

We use the same notation as for the deep linear models, defined in the beginning of Section 2.1. The output of the deep nonlinear neural network, Ŷ(W, X) ∈ R^{d_y×m}, is defined as

$$\hat{Y}(W, X) = q\,\sigma_{H+1}(W_{H+1}\,\sigma_H(W_H\,\sigma_{H-1}(W_{H-1} \cdots \sigma_2(W_2\,\sigma_1(W_1 X)) \cdots))),$$

where q ∈ R is simply a normalization factor, the value of which is specified later. Here, σ_k : R^{d_k×m} → R^{d_k×m} is the element-wise rectified linear function:

$$\sigma_k\!\left(\begin{bmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{d_k 1} & \cdots & b_{d_k m} \end{bmatrix}\right) = \begin{bmatrix} \bar{\sigma}(b_{11}) & \cdots & \bar{\sigma}(b_{1m}) \\ \vdots & \ddots & \vdots \\ \bar{\sigma}(b_{d_k 1}) & \cdots & \bar{\sigma}(b_{d_k m}) \end{bmatrix},$$

where σ̄(b_{ij}) = max(0, b_{ij}). In practice, we usually set σ_{H+1} to be an identity map in the last layer, in which case all our theoretical results still hold true.
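For concreteness, a minimal NumPy sketch of this forward computation; layer sizes are placeholders, and the last layer is taken to be the identity map as noted above:

```python
# Minimal sketch of the nonlinear model of Section 3.1, assuming NumPy.
import numpy as np

def sigma(B):
    """Element-wise rectifier: max(0, b_ij)."""
    return np.maximum(0.0, B)

def Y_hat_nonlinear(Ws, X, q=1.0, linear_last=True):
    """q * sigma_{H+1}(W_{H+1} sigma_H(... sigma_1(W_1 X)...)); as noted in the
    text, sigma_{H+1} is usually the identity (linear_last=True)."""
    out = X
    for i, W in enumerate(Ws):
        out = W @ out
        if i < len(Ws) - 1 or not linear_last:
            out = sigma(out)
    return q * out
```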
3.2 Background

Following the work by Dauphin et al. (2014), Choromanska et al. (2015a) investigated the connection between the loss functions of deep nonlinear networks and a function well-studied via random matrix theory (i.e., the Hamiltonian of the spherical spin-glass model). They explained that their theoretical results relied on several unrealistic assumptions. Later, Choromanska et al. (2015b) suggested at the Conference on Learning Theory (COLT) 2015 that discarding these assumptions is an important open problem. The assumptions were labeled A1p, A2p, A3p, A4p, A5u, A6u, and A7p.

In this paper, we successfully discard most of these assumptions. In particular, we only use a weaker version of assumptions A1p and A5u. We refer to the part of assumption A1p (resp. A5u) that corresponds only to the model assumption as A1p-m (resp. A5u-m). Note that assumptions A1p-m and A5u-m are explicitly used in the previous work (Choromanska et al., 2015a) and included in A1p and A5u (i.e., we are not making new assumptions here).

As the model Ŷ(W, X) ∈ R^{d_y×m} represents a directed acyclic graph, we can express an output from one of the units in the output layer as

$$\hat{Y}(W, X)_{j,i} = q \sum_{p=1}^{\Psi} [X_i]_{(j,p)} [Z_i]_{(j,p)} \prod_{k=1}^{H+1} w^{(k)}_{(j,p)}. \quad (1)$$

Here, Ψ is the total number of paths from the inputs to each j-th output in the directed acyclic graph. In addition, [X_i]_{(j,p)} ∈ R represents the entry of the i-th sample input datum that is used in the p-th path of the j-th output. For each layer k, w^{(k)}_{(j,p)} ∈ R is the entry of W_k that is used in the p-th path of the j-th output. Finally, [Z_i]_{(j,p)} ∈ {0, 1} represents whether the p-th path of the j-th output is active ([Z_i]_{(j,p)} = 1) or not ([Z_i]_{(j,p)} = 0) for each sample i as a result of the rectified linear activation.

Assumption A1p-m assumes that the Z's are Bernoulli random variables with the same probability of success, Pr([Z_i]_{(j,p)} = 1) = ρ for all i and (j, p). Assumption A5u-m assumes that the Z's are independent from the input X's and parameters w's. With assumptions A1p-m and A5u-m, we can write

$$\mathbb{E}_Z[\hat{Y}(W, X)_{j,i}] = q \sum_{p=1}^{\Psi} [X_i]_{(j,p)}\, \rho \prod_{k=1}^{H+1} w^{(k)}_{(j,p)}.$$

Choromanska et al. (2015b) noted that A6u is unrealistic because it implies that the inputs are not shared among the paths. In addition, Assumption A5u is unrealistic because it implies that the activation of any path is independent of the input data. To understand all of the seven assumptions (A1p, A2p, A3p, A4p, A5u, A6u, and A7p), we note that Choromanska et al. (2015b,a) used these seven assumptions to reduce their loss functions of nonlinear neural networks to:

$$L_{\text{previous}}(W) = \frac{1}{\Lambda^{H/2}} \sum_{i_1, i_2, \ldots, i_{H+1} = 1}^{\Lambda} X_{i_1, i_2, \ldots, i_{H+1}} \prod_{k=1}^{H+1} w_{i_k} \quad \text{subject to} \quad \frac{1}{\Lambda} \sum_{i=1}^{\Lambda} w_i^2 = 1,$$
Comparing Corollary 3.2 and Proposition 3.1, we can see that we successfully discarded assumptions A2p, A3p, A4p, A6u, and A7p while obtaining a tighter statement in the following sense: Corollary 3.2 states with fewer unrealistic assumptions that there is no poor local minimum, whereas Proposition 3.1 roughly asserts with more unrealistic assumptions that the number of poor local minimum may be not too large. Furthermore, our model Y? is strictly more general than the model analyzed in (Choromanska et al., 2015a,b) (i.e., this paper?s model class contains the previous work?s model class but not vice versa). 4 Proof Idea and Important lemmas In this section, we provide overviews of the proofs of the theoretical results. Our proof approach largely differs from those in previous work (Baldi & Hornik, 1989; Baldi & Lu, 2012; Choromanska et al., 2015a,b). In contrast to (Baldi & Hornik, 1989; Baldi & Lu, 2012), we need a different approach to deal with the ?bad? saddle points that start appearing when the model becomes deeper (see Section 2.3), as well as to obtain more comprehensive properties of the critical points with more generality. While the previous proofs heavily rely on the first-order information, the main parts of our proofs take advantage of the second order information. In contrast, Choromanska et al. (2015a,b) used the seven assumptions to relate the loss functions of deep models to a function previously analyzed with a tool of random matrix theory. With no reshaping assumptions (A3p, A4p, and A6u), we cannot relate our loss function to such a function. Moreover, with no distributional assumptions (A2p and A6u) (except the activation), our Hessian is deterministic, and therefore, even random matrix theory itself is insufficient for our purpose. Furthermore, with no spherical constraint assumption (A7p), the number of local minima in our loss function can be uncountable. One natural strategy to proceed toward Theorem 2.3 and Corollary 3.2 would be to use the first-order and second-order necessary conditions of local minima (e.g., the gradient is zero and the Hessian is 5 positive semidefinite).4 However, are the first-order and second-order conditions sufficient to prove Theorem 2.3 and Corollary 3.2? Corollaries 2.4 show that the answer is negative for deep models with H ? 2, while it is affirmative for shallow models with H = 1. Thus, for deep models, a simple use of the first-order and second-order information is insufficient to characterize the properties of each critical point. In addition to the complexity of the Hessian of the deep models, this suggests that we must strategically extract the second order information. Accordingly, in section 4.2, we obtain an organized representation of the Hessian in Lemma 4.3 and strategically extract the information in Lemmas 4.4 and 4.6. With the extracted information, we discuss the proofs of Theorem 2.3 and Corollary 3.2 in section 4.3. 4.1 Notations Let M ? M 0 be the Kronecker product of M and M 0 . Let Dvec(WkT ) f (?) = ?f (?) ?vec(W T ) be the partial k din derivative of f with respect to vec(WkT ) in the numerator layout. That is, if f : R ? Rdout , we dout ?(dk dk?1 ) . Let R(M ) be the range (or the column space) of a matrix have Dvec(WkT ) f (?) ? R M . Let M ? be any generalized inverse of M . When we write a generalized inverse in a condition or statement, we mean it for any generalized inverse (i.e., we omit the universal quantifier over generalized inverses, as this is clear). Let r = (Y (W, X) ? Y )T ? 
\mathbb{R}^{m \times d_y} be an error matrix. Let C = W_{H+1} \cdots W_2 \in \mathbb{R}^{d_y \times d_1}. When we write W_k \cdots W_{k'}, we generally intend that k > k', and the expression denotes a product over W_j for integer k \geq j \geq k'. For notational compactness, two additional cases can arise: when k = k', the expression denotes simply W_k, and when k < k', it denotes I_{d_k}. For example, in the statement of Lemma 4.1, if we set k := H + 1, we have that W_{H+1} W_H \cdots W_{H+2} \triangleq I_{d_y}.

In Lemma 4.6 and the proofs of Theorem 2.3, we use the following additional notation. We denote an eigendecomposition of \Sigma as \Sigma = U \Lambda U^T, where the entries of the eigenvalues are ordered as \Lambda_{1,1} > \cdots > \Lambda_{d_y,d_y} with corresponding orthogonal eigenvector matrix U = [u_1, \ldots, u_{d_y}]. For each k \in \{1, \ldots, d_y\}, u_k \in \mathbb{R}^{d_y \times 1} is a column eigenvector. Let \bar{p} = rank(C) \in \{1, \ldots, \min(d_y, p)\}. We define a matrix containing the subset of the \bar{p} largest eigenvectors as U_{\bar{p}} = [u_1, \ldots, u_{\bar{p}}]. Given any ordered set I_{\bar{p}} = \{i_1, \ldots, i_{\bar{p}} \mid 1 \leq i_1 < \cdots < i_{\bar{p}} \leq \min(d_y, p)\}, we define a matrix containing the subset of the corresponding eigenvectors as U_{I_{\bar{p}}} = [u_{i_1}, \ldots, u_{i_{\bar{p}}}]. Note the difference between U_{\bar{p}} and U_{I_{\bar{p}}}.

4.2 Lemmas

As discussed above, we extracted the first-order and second-order conditions of local minima as the following lemmas. The lemmas provided here are also intended to be additional theoretical results of ours that may lead to further insights. The proofs of the lemmas are in the appendix.

Lemma 4.1 (Critical point necessary and sufficient condition) W is a critical point of \tilde{L}(W) if and only if for all k \in \{1, \ldots, H+1\},

\left( D_{vec(W_k^T)} \tilde{L}(W) \right)^T = \left( (W_{H+1} W_H \cdots W_{k+1}) \otimes (W_{k-1} \cdots W_2 W_1 X)^T \right)^T vec(r) = 0.

Lemma 4.2 (Representation at critical point) If W is a critical point of \tilde{L}(W), then

W_{H+1} W_H \cdots W_2 W_1 = C (C^T C)^- C^T Y X^T (X X^T)^{-1}.

Lemma 4.3 (Block Hessian with Kronecker product) Write the entries of \nabla^2 \tilde{L}(W) in a block form as

\nabla^2 \tilde{L}(W) = \begin{bmatrix} D_{vec(W_{H+1}^T)} \left( D_{vec(W_{H+1}^T)} \tilde{L}(W) \right)^T & \cdots & D_{vec(W_1^T)} \left( D_{vec(W_{H+1}^T)} \tilde{L}(W) \right)^T \\ \vdots & \ddots & \vdots \\ D_{vec(W_{H+1}^T)} \left( D_{vec(W_1^T)} \tilde{L}(W) \right)^T & \cdots & D_{vec(W_1^T)} \left( D_{vec(W_1^T)} \tilde{L}(W) \right)^T \end{bmatrix}.

Then, for any k \in \{1, \ldots, H+1\},

D_{vec(W_k^T)} \left( D_{vec(W_k^T)} \tilde{L}(W) \right)^T = \left( (W_{H+1} \cdots W_{k+1})^T (W_{H+1} \cdots W_{k+1}) \right) \otimes \left( (W_{k-1} \cdots W_1 X)(W_{k-1} \cdots W_1 X)^T \right),

and, for any k \in \{2, \ldots, H+1\},

D_{vec(W_k^T)} \left( D_{vec(W_1^T)} \tilde{L}(W) \right)^T = \left( C^T (W_{H+1} \cdots W_{k+1}) \right) \otimes \left( X (W_{k-1} \cdots W_1 X)^T \right) + \left[ (W_{k-1} \cdots W_2)^T \otimes X \right] \left[ I_{d_{k-1}} \otimes (r W_{H+1} \cdots W_{k+1})_{\cdot,1} \;\; \ldots \;\; I_{d_{k-1}} \otimes (r W_{H+1} \cdots W_{k+1})_{\cdot,d_k} \right].

Lemma 4.4 (Hessian semidefinite necessary condition) If \nabla^2 \tilde{L}(W) is positive semidefinite or negative semidefinite at a critical point, then for any k \in \{2, \ldots, H+1\},

\mathcal{R}\left( (W_{k-1} \cdots W_3 W_2)^T \right) \subseteq \mathcal{R}(C^T C) \quad \text{or} \quad X r W_{H+1} W_H \cdots W_{k+1} = 0.

Corollary 4.5 If \nabla^2 \tilde{L}(W) is positive semidefinite or negative semidefinite at a critical point, then for any k \in \{2, \ldots, H+1\},

rank(W_{H+1} W_H \cdots W_k) \geq rank(W_{k-1} \cdots W_3 W_2) \quad \text{or} \quad X r W_{H+1} W_H \cdots W_{k+1} = 0.

Lemma 4.6 (Hessian positive semidefinite necessary condition) If \nabla^2 \tilde{L}(W) is positive semidefinite at a critical point, then

C (C^T C)^- C^T = U_{\bar{p}} U_{\bar{p}}^T \quad \text{or} \quad X r = 0.

4.3 Proof sketches of theorems

We now provide the proof sketches of Theorem 2.3 and Corollary 3.2. We complete the proofs in the appendix.
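Before turning to the case analysis, the diagonal block formula in Lemma 4.3 can be checked numerically (an illustrative sketch of ours, not part of the paper; the three-layer instance with H = 2, k = 2 and all dimensions are arbitrary). Note that vec(W^T) corresponds to row-major flattening, i.e. W.ravel() in NumPy.

```python
import numpy as np

# Sanity check (not from the paper) of Lemma 4.3's diagonal block for a
# three-layer linear network: L(W) = 0.5 * ||W3 W2 W1 X - Y||_F^2. The block
# D_{vec(W2^T)} (D_{vec(W2^T)} L)^T should equal (W3^T W3) kron (W1 X)(W1 X)^T.
rng = np.random.default_rng(0)
dx, d1, d2, dy, m = 4, 3, 3, 2, 5
W1, W2, W3 = [rng.normal(size=s) for s in [(d1, dx), (d2, d1), (dy, d2)]]
X, Y = rng.normal(size=(dx, m)), rng.normal(size=(dy, m))

def loss(w2_flat):
    W2_ = w2_flat.reshape(d2, d1)
    R = W3 @ W2_ @ W1 @ X - Y
    return 0.5 * np.sum(R ** 2)

# Finite-difference Hessian with respect to vec(W2^T) (exact here, since the
# loss is quadratic in W2).
n, eps = d2 * d1, 1e-4
w0 = W2.ravel()
H_num = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        e_i, e_j = np.eye(n)[i], np.eye(n)[j]
        H_num[i, j] = (loss(w0 + eps*e_i + eps*e_j) - loss(w0 + eps*e_i)
                       - loss(w0 + eps*e_j) + loss(w0)) / eps**2

B = W1 @ X
H_formula = np.kron(W3.T @ W3, B @ B.T)
print(np.allclose(H_num, H_formula, atol=1e-5))  # True
```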
4.3.1 Proof sketch of Theorem 2.3 (ii)

By case analysis, we show that any point that satisfies the necessary conditions and the definition of a local minimum is a global minimum.

Case I: rank(W_H \cdots W_2) = p and d_y \leq p. If d_y < p, Corollary 4.5 with k = H+1 implies the necessary condition of local minima that Xr = 0. If d_y = p, Lemma 4.6 with k = H+1 and k = 2, combined with the fact that \mathcal{R}(C) \subseteq \mathcal{R}(Y X^T), implies the necessary condition that Xr = 0. Therefore, we have the necessary condition of local minima, Xr = 0. Interpreting the condition Xr = 0, we conclude that a W achieving Xr = 0 is indeed a global minimum.

Case II: rank(W_H \cdots W_2) = p and d_y > p. From Lemma 4.6, we have the necessary condition that C(C^T C)^- C^T = U_{\bar{p}} U_{\bar{p}}^T or Xr = 0. If Xr = 0, using the exact same proof as in Case I, it is a global minimum. Suppose then that C(C^T C)^- C^T = U_{\bar{p}} U_{\bar{p}}^T. From Lemma 4.4 with k = H+1, we conclude that \bar{p} \triangleq rank(C) = p. Then, from Lemma 4.2, we write

W_{H+1} \cdots W_1 = U_p U_p^T Y X^T (X X^T)^{-1},

which is the orthogonal projection onto the subspace spanned by the p eigenvectors corresponding to the p largest eigenvalues, applied after the ordinary least squares regression matrix. This is indeed the expression of a global minimum.

Case III: rank(W_H \cdots W_2) < p. We first show that if rank(C) \geq \min(p, d_y), every local minimum is a global minimum. Thus, we consider the case where rank(W_H \cdots W_2) < p and rank(C) < \min(p, d_y). In this case, by induction on k \in \{1, \ldots, H+1\}, we prove that we can have rank(W_k \cdots W_1) \geq \min(p, d_y) with arbitrarily small perturbations of each entry of W_k, \ldots, W_1 without changing the value of \tilde{L}(W). Once this is proved, along with the results of Case I and Case II, we can immediately conclude that any point satisfying the definition of a local minimum is a global minimum.

We first prove the statement for the base case with k = 1 by using an expression of W_1 that is obtained from a first-order necessary condition: for an arbitrary L_1,

W_1 = (C^T C)^- C^T Y X^T (X X^T)^{-1} + (I - (C^T C)^- C^T C) L_1.

By using Lemma 4.6 to obtain an expression of C, we deduce that we can have rank(W_1) \geq \min(p, d_y) with an arbitrarily small perturbation of each entry of W_1 without changing the loss value. For the inductive step with k \in \{2, \ldots, H+1\}, from Lemma 4.4, we use the following necessary condition for the Hessian to be (positive or negative) semidefinite at a critical point: for any k \in \{2, \ldots, H+1\},

\mathcal{R}\left( (W_{k-1} \cdots W_2)^T \right) \subseteq \mathcal{R}(C^T C) \quad \text{or} \quad X r W_{H+1} \cdots W_{k+1} = 0.

We use the inductive hypothesis to conclude that the first condition is false, and thus the second condition must be satisfied at a candidate point of a local minimum. From the latter condition, with extra steps, we can deduce that we can have rank(W_k W_{k-1} \cdots W_1) \geq \min(p, d_x) with an arbitrarily small perturbation of each entry of W_k while retaining the same loss value.

We conclude the induction, proving that we can have rank(C) \geq rank(W_{H+1} \cdots W_1) \geq \min(p, d_x) with arbitrarily small perturbations of each parameter without changing the value of \tilde{L}(W). Upon such a perturbation, we have the case where rank(C) \geq \min(p, d_y), for which we have already proven that every local minimum is a global minimum. Summarizing the above, any point that satisfies the definition (and necessary conditions) of a local minimum is indeed a global minimum. Therefore, we conclude the proof sketch of Theorem 2.3 (ii).
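The Case II expression can be made concrete with a small numerical sketch (ours, not from the paper; all dimensions are arbitrary): the end-to-end product at a global minimum is the reduced-rank regression solution, and no other map of the same rank does better.

```python
import numpy as np

# Sketch (not from the paper): the Case II global minimum for a deep linear
# network with bottleneck p < dy is
#   W_{H+1} ... W_1 = U_p U_p^T Y X^T (X X^T)^{-1},
# where U_p holds the top-p eigenvectors of Sigma = Y X^T (X X^T)^{-1} X Y^T.
rng = np.random.default_rng(0)
dx, dy, p, m = 6, 4, 2, 100
X, Y = rng.normal(size=(dx, m)), rng.normal(size=(dy, m))

P = Y @ X.T @ np.linalg.inv(X @ X.T)           # unconstrained OLS map
Sigma = P @ X @ Y.T                            # = Y X^T (X X^T)^{-1} X Y^T
eigval, eigvec = np.linalg.eigh(Sigma)
U_p = eigvec[:, np.argsort(eigval)[::-1][:p]]  # top-p eigenvectors
M_star = U_p @ U_p.T @ P                       # rank-p global minimum

def loss(M):
    return 0.5 * np.linalg.norm(M @ X - Y) ** 2

# Random rank-p maps should never beat M_star.
worse = [loss(rng.normal(size=(dy, p)) @ rng.normal(size=(p, dx)))
         for _ in range(1000)]
print(loss(M_star) <= min(worse))  # True
```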
4.3.2 Proof sketch of Theorem 2.3 (i), (iii) and (iv)

We can prove the non-convexity and non-concavity of this function simply from its Hessian (Theorem 2.3 (i)). That is, we can show that in the domain of the function there exist points at which the Hessian becomes indefinite. Indeed, the domain contains uncountably many points at which the Hessian is indefinite.

We now consider Theorem 2.3 (iii): every critical point that is not a global minimum is a saddle point. Combined with Theorem 2.3 (ii), which is proven independently, this is equivalent to the statement that there are no local maxima. We first show that if W_{H+1} \cdots W_2 \neq 0, the loss function always has some strictly increasing direction with respect to W_1, and hence there is no local maximum. If W_{H+1} \cdots W_2 = 0, we show that at a critical point, if the Hessian is negative semidefinite (i.e., a necessary condition of local maxima), we can have W_{H+1} \cdots W_2 \neq 0 with an arbitrarily small perturbation without changing the loss value. We can prove this by induction on k = 2, \ldots, H+1, similarly to the induction in the proof of Theorem 2.3 (ii). This means that there is no local maximum.

Theorem 2.3 (iv) follows from Theorem 2.3 (ii)-(iii) and the analyses for Case I and Case II in the proof of Theorem 2.3 (ii): when rank(W_H \cdots W_2) = p, if \nabla^2 \tilde{L}(W) \succeq 0 at a critical point, then W is a global minimum.

4.3.3 Proof sketch of Corollary 3.2

Since the activations are assumed to be random and independent, the effect of the nonlinear activations disappears by taking the expectation. As a result, the loss function L(W) is reduced to \tilde{L}(W).

5 Conclusion

In this paper, we addressed some open problems, pushing forward the theoretical foundations of deep learning and non-convex optimization. For deep linear neural networks, we proved the aforementioned conjecture and more detailed statements with more generality. For deep nonlinear neural networks, when compared with the previous work, we proved a tighter statement (in the way explained in Section 3) with more generality (d_y can vary) and with strictly weaker model assumptions (only two assumptions out of the seven). However, our theory does not yet directly apply to the practical situation. To fill the gap between theory and practice, future work would further discard the remaining two of the seven assumptions made in previous work. Our new understanding of deep linear models at least provides the following theoretical fact: the bad local minima would arise in a deep nonlinear model but only as an effect of adding nonlinear activations to the corresponding deep linear model. Thus, depending on the nonlinear activation mechanism and architecture, we would be able to efficiently train deep models.

Acknowledgments

The author would like to thank Prof. Leslie Kaelbling, Quynh Nguyen, Li Huan and Anirbit Mukherjee for their thoughtful comments on the paper. We gratefully acknowledge support from NSF grant 1420927, from ONR grant N00014-14-1-0486, and from ARO grant W911NF1410433.

References

Baldi, Pierre. 1989. Linear learning: Landscapes and algorithms. In Advances in Neural Information Processing Systems, pp. 65-72.
Baldi, Pierre, & Hornik, Kurt. 1989. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1), 53-58.
Baldi, Pierre, & Lu, Zhiqin. 2012. Complex-valued autoencoders. Neural Networks, 33, 136-147.
Blum, Avrim L, & Rivest, Ronald L. 1992. Training a 3-node neural network is NP-complete. Neural Networks, 5(1), 117-127.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Ben Arous, Gerard, & LeCun, Yann. 2015a. The loss surfaces of multilayer networks. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pp. 192-204.
Choromanska, Anna, LeCun, Yann, & Arous, Gérard Ben. 2015b. Open problem: The landscape of the loss surfaces of multilayer networks. In Proceedings of The 28th Conference on Learning Theory, pp. 1756-1760.
Dauphin, Yann N, Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, Ganguli, Surya, & Bengio, Yoshua. 2014. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941.
Ge, Rong, Huang, Furong, Jin, Chi, & Yuan, Yang. 2015. Escaping from saddle points: Online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pp. 797-842.
Goodfellow, Ian, Bengio, Yoshua, & Courville, Aaron. 2016. Deep Learning. Book in preparation for MIT Press. http://www.deeplearningbook.org.
Livni, Roi, Shalev-Shwartz, Shai, & Shamir, Ohad. 2014. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pp. 855-863.
Mhaskar, Hrushikesh, Liao, Qianli, & Poggio, Tomaso. 2016. Learning real and Boolean functions: When is deep better than shallow. Massachusetts Institute of Technology CBMM Memo No. 45.
Murty, Katta G, & Kabadi, Santosh N. 1987. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming, 39(2), 117-129.
Rockafellar, R Tyrrell, & Wets, Roger J-B. 2009. Variational Analysis. Vol. 317. Springer Science & Business Media.
Saxe, Andrew M, McClelland, James L, & Ganguli, Surya. 2014. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations.
Zhang, Fuzhen. 2006. The Schur Complement and Its Applications. Vol. 4. Springer Science & Business Media.
Learning to Poke by Poking: Experiential Learning of Intuitive Physics

Pulkit Agrawal* Ashvin Nair* Pieter Abbeel Jitendra Malik Sergey Levine
Berkeley Artificial Intelligence Research Laboratory (BAIR)
University of California Berkeley
{pulkitag,anair17,pabbeel,malik,svlevine}@berkeley.edu

Abstract

We investigate an experiential learning paradigm for acquiring an internal model of intuitive physics. Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. We propose a novel approach based on deep neural networks for modeling the dynamics of the robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics. The inverse model objective provides supervision to construct informative visual features, which the forward model can then predict and which in turn regularize the feature space for the inverse model. The interplay between these two objectives creates useful, accurate models that can then be used for multi-step decision making. This formulation has the additional benefit that it is possible to learn forward models in an abstract feature space and thus alleviate the need for predicting pixels. Our experiments show that this joint modeling approach outperforms alternative methods.

1 Introduction

Humans can effortlessly manipulate previously unseen objects in novel ways. For example, if a hammer is not available, a human might use a piece of rock or the back of a screwdriver to hit a nail. What enables humans to easily perform such tasks that machines struggle with? One possibility is that humans possess an internal model of physics (i.e. "intuitive physics" (Michotte, 1963; McCloskey, 1983)) that allows them to reason about the physical properties of objects and forecast their dynamics under the effect of applied forces. Such models can be used to transform a given task into a search problem, in a manner similar to how moves can be planned in a game of chess or tic-tac-toe by searching through the game tree. Because the search algorithm is independent of task semantics, solutions to different and possibly new tasks can be determined using the same mechanism.

In human development, it is well known that infants spend years' worth of time playing with objects in a seemingly random manner with no specific end goal (Smith & Gasser, 2005; Gopnik et al., 1999). One hypothesis is that infants distill this experience into intuitive physics models that predict how their actions affect the motion of objects. Once learnt, these models could be used for planning actions to achieve novel goals later in life. Inspired by this hypothesis, in this work we investigate whether a robot can use its own experience to learn an intuitive model of physics that is also effective for planning actions. In our setup (see Figure 1), a Baxter robot interacts with objects kept on a table in front of it by randomly poking them. The robot records the visual state of the world before and after it executes a poke in order to learn a mapping between its actions and the accompanying change in visual state caused by object motion. To date our robot has interacted with objects for more than 400 hours and in the process collected more than 100K pokes on 16 distinct objects.

* equal contribution; authors are listed in alphabetical order.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Infants spend years' worth of time playing with objects in a seemingly random manner. They might use this experience to learn a model of physics relating their actions to the resulting motion of objects. Inspired by this hypothesis, we let a robot interact with objects by randomly poking them. The robot pokes objects and records the visual state before (left) and after (right) the poke. The triplet of before image, after image and applied poke is used to train a neural network (center) for learning the mapping between actions and the accompanying change in visual state. We show that this learnt model can be used to push objects into a desired configuration.

What kind of a model should the robot learn from its experience? One possibility is to build a model that predicts the next visual state from the current visual state and the applied force (i.e. a forward dynamics model). This is challenging because predicting the value of every pixel in the next image is non-trivial in real-world scenarios. Moreover, in most cases it is not the precise pixel values that are of interest, but the occurrence of a more abstract event. For example, predicting that a glass jar will break when pushed from the table onto the ground is of greater interest (and easier) than predicting exactly how every piece of shattered glass will look. The difficulty, however, is that supervision for such abstract concepts or events is not readily available in unsupervised settings such as ours. In this work, we propose one solution to this problem by jointly training forward and inverse dynamics models. A forward model predicts the next state from the current state and action, and an inverse model predicts the action given the initial and target states. In joint training, the inverse model objective provides supervision for transforming image pixels into an abstract feature space, which the forward model can then predict. The inverse model alleviates the need for the forward model to make predictions in pixel space, and the forward model in turn regularizes the feature space for the inverse model. We empirically show that the joint model allows the robot to generalize and plan actions for achieving tasks with significantly different visual statistics as compared to the data used in the learning phase. Our model can be used for multi-step decision making and can displace objects with novel geometry and texture into desired goal locations that are much farther apart than the positions of objects before and after a single poke. We probe the joint modeling approach further using simulation studies and show that the forward model regularizes the inverse model.

2 Data

Figure 1 shows our experimental setup. The robot is equipped with a Kinect camera and a gripper for poking objects kept on a table in front of it. At any given time there were 1-3 objects, chosen from a set of 16 distinct objects, present on the table. The robot's coordinate system was as follows: the X and Y axes represented the horizontal and vertical directions, while the Z axis pointed away from the robot. The robot poked objects by moving its finger along the XZ plane at a fixed height from the table.

Poke Representation: For collecting a sample of interaction data, the robot first selects a random target point in its field of view to poke. One issue with random poking is that most pokes are executed in free space, which severely slows down the collection of interesting interaction data.
For speedy data collection, a point cloud from the Kinect depth camera was used to choose only points that lie on an object rather than the table. Point cloud information was used only during data collection; at test time our system requires only RGB image data. After selecting a random point p to poke on the object, the robot randomly samples a poke direction (\theta) and length (l).

Figure 2: These images depict the robot in the process of displacing the bottle away from the indicated dotted line. In the middle of the poke, the object flips and ends up moving in the wrong direction. Such occurrences are common because real-world objects have complex geometric and material properties. This makes learning manipulation strategies without prior knowledge very challenging.

Kinematically, the poke is defined by points p1 and p2 that lie at distance l/2 from p in the directions \theta and (180 + \theta) degrees, respectively. The robot executes the poke by moving its finger from p1 to p2. Our robot can run autonomously 24x7 without any human intervention. Sometimes when objects are poked they move as expected, but at other times, due to non-linear interactions between the robot's finger and the object, they move in unexpected ways, as shown in Figure 2. Any model of the poking data must deal with such non-linear interactions (see the project website for more examples). A small amount of data in the early stages of the project was collected on a table with a green background, but most of our data was collected in a wooden arena with walls for preventing objects from falling down. All results in this paper are from data collected only in the wooden arena.

3 Method

The forward and inverse models can be formally described by equations 1 and 2, respectively. The notation is as follows: x_t and u_t are the world state and the action applied at time step t; \hat{x}_{t+1} and \hat{u}_t are the predicted state and action; and W_fwd and W_inv are the parameters of the functions F and G that are used to construct the forward and inverse models:

\hat{x}_{t+1} = F(x_t, u_t; W_fwd)    (1)
\hat{u}_t = G(x_t, x_{t+1}; W_inv)    (2)

Given an initial and a goal state, inverse models provide a direct mapping to the action required for achieving the goal state in one step (if feasible). However, multiple possible actions can transform the world from one visual state to another. For example, an object can appear in a certain part of the visual field if the agent moves or if the agent uses its arms to move the object. This multi-modality in the action space makes learning hard. On the other hand, given x_t and u_t, there exists a next state x_{t+1} that is unique up to dynamics noise. This suggests that forward models might be easier to learn. However, learning forward models in image space is hard because predicting the value of each pixel in future frames is a non-trivial problem with no known good solution. Yet in most scenarios we are not interested in predicting every pixel, but in predicting the occurrence of a more abstract event such as object motion or a change in object pose. The ability to learn an abstract, task-relevant feature space should make it easier to learn a forward dynamics model. One possible approach is to learn a dynamics model in the feature representation of a higher layer of a deep neural network trained to perform image classification (say on ImageNet) (Vondrick et al., 2016). However, this is not a general way of learning task-relevant features, and it is unclear whether features adept at object recognition are also optimal for object manipulation.
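As a concrete illustration of this poke parametrization (a sketch of ours, not the authors' code; the function name and the 2D XZ-plane convention are assumptions), the finger endpoints can be computed from (p, \theta, l) as follows:

```python
import math

def poke_endpoints(p, theta_deg, length):
    """Return the finger start/end points (p1, p2) for a poke at point p.

    The finger travels through p: p1 and p2 lie at distance length / 2 from
    p, in the directions theta and theta + 180 degrees respectively.
    """
    theta = math.radians(theta_deg)
    dx = (length / 2) * math.cos(theta)
    dz = (length / 2) * math.sin(theta)
    p1 = (p[0] + dx, p[1] + dz)   # along theta
    p2 = (p[0] - dx, p[1] - dz)   # along theta + 180 degrees
    return p1, p2
```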
The alternative of adapting higher layer features of a neural network while simultaneously optimizing for the prediction loss leads to a degenerate solution of all the features reducing to zero, since the prediction loss in this case is also zero. Our key observation is that this degenerate solution can be avoided by imposing the constraint that it should be possible to infer the the executed action (ut ) from the feature representation of two images obtained before (xt ) and after (xt+1 ) the action (ut ) is applied (i.e. optimizing the inverse model). This formulation provides a general mechanism for using general purpose function approximators such as deep neural networks for simultaneously learning a task relevant feature space and forecasting the future outcome of actions in this learned space. A second challenge in using forward models is that inferring the optimal action inevitably leads to finding a solution to non-convex problems that are subject to local optima. The inverse model does not suffers from this drawback as it directly outputs the required action. These considerations suggest that inverse and forward models have complementary strengths and therefore it is worthwhile to investigate training a joint model of inverse and forward dynamics. 3 (c) pt , ?t , lt x ?t+1 (a) It p?t xt ??t (b) It+1 ? lt xt+1 Figure 3: (a) The collection of objects in the training set poked by the robot. (b) Example pairs of before (It ) and after images (It+1 ) after a single poke was made by the robot. (c) A Siamese convolutional neural network was trained to predict the poke location (pt ), angle (?t ) and length (lt ) required to transform objects in the image at the tth time step (It ) into their state in It+1 . Images It and It+1 are transformed into their latent feature representations (xt , xt+1 ) by passing them through a series of convolutional layers. For building the inverse model, xt , xt+1 are concatenated and passed through fully connected layers to predict the discretized poke. For building the forward model, the action ut = {pt , ?t , lt } and xt are passed through a series of fully connected layers to predict xt+1 . 3.1 Model A deep neural network is used to simultaneously learn a model of forward and inverse dynamics (see Figure 3). A tuple of before image (It ), after image (It+1 ) and the robot?s action (ut ) constitute one training sample. Input images at consequent time steps (It , It+1 ) are transformed into their latent feature representations (xt , xt+1 ) by passing them through a series of five convolutional layers with the same architecture as the first five layers of AlexNet (Krizhevsky et al., 2012). For building the inverse model, xt , xt+1 are concatenated and passed through fully connected layers to conditionally predict the poke location (pt ), angle (?t ) and length (lt ) separately. For modeling multimodal poke distributions, poke location, angle and length of poke are discretized into a 20 ? 20 grid, 36 bins and 11 bins respectively. The 11th bin of the poke length is used to denote no poke. For building the forward model, the feature representation of the before image (xt ) and the action (ut ; real-valued vector without discretization) are passed into a sequence of fully connected layer that predicts the feature representation of the next image (xt+1 ). Training is performed to optimize the loss defined in equation 3 below. 
L_inv is a sum of three cross-entropy losses between the actual and predicted poke location, angle and length. L_fwd is an L1 loss between the predicted (\hat{x}_{t+1}) and the ground truth (x_{t+1}) feature representations of the after image (I_{t+1}). W denotes the parameters of the neural network. We used \lambda = 0.1 in all our experiments. We call this the joint model, and we compare its performance against the inverse-only model that was trained by setting \lambda = 0 in equation 3. More details about model training are provided in the supplementary materials.

3.2 Evaluation Procedure

One way to test the learnt model is to provide the robot with an initial and a goal image and task it with applying pokes that displace objects into the configuration shown in the goal image. If the robot succeeds at achieving the goal configuration when the visual statistics of the pair of initial and goal images are similar to the before and after images in the training set, this would not be a convincing demonstration of generalization. However, if the robot is able to displace objects into goal positions that are much farther apart than the positions of objects before and after a single poke, then it might suggest that our model has not simply overfit but has learnt something about the underlying physics of how objects move when poked. This suggestion would be further strengthened if the robot is also able to push objects with novel geometry and texture in the presence of multiple distractor objects.

If the objects in the initial and goal images are farther apart than the maximum distance that can be pushed by a single poke, then the model is required to output a sequence of pokes. We use a greedy planning method (see Figure 4(a)) to output a sequence of pokes. First, images depicting the initial and goal state are passed through the learnt model to predict the poke, which is then executed by the robot. Then, the image depicting the current world state (i.e. the current image) and the goal image are fed again into the model to output a poke. This process is repeated iteratively until either the robot predicts a no-poke (see Section 3.1) or a maximum number of 10 pokes is reached.

Figure 4: (a) The greedy planner is used to output a sequence of pokes to displace the objects from their configuration in the initial image to that in the goal image. (b) The blob model first detects the location of objects in the current and goal images. Based on the object positions, the location and angle of the poke are computed and then executed by the robot. The obtained next image and the goal image are used to compute the next poke, and this process is repeated iteratively. (c) The error of the models in poking objects to their correct pose is measured as the angle between the major axes of the objects in the final and goal images.
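The greedy replanning loop just described can be summarized as below (an illustrative sketch; model, camera and robot are assumed interfaces rather than the authors' code, and NO_POKE_BIN names the 11th length bin of Section 3.1):

```python
# Sketch of the greedy planner (our paraphrase of Section 3.2).
NO_POKE_BIN = 10  # index of the 11th length bin, which denotes "no poke"
MAX_POKES = 10

def greedy_plan(model, camera, robot, goal_image):
    for _ in range(MAX_POKES):
        current_image = camera.capture()
        loc, angle, length = model.predict_poke(current_image, goal_image)
        if length == NO_POKE_BIN:   # the model believes the goal is reached
            break
        robot.execute_poke(loc, angle, length)
```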
Error Metrics: In all our experiments, the initial and goal images differ in the position of only a single object. The location and pose of the object in the final image (after the robot stops) and in the goal image are compared for quantitative evaluation. The location error is the Euclidean distance between the object locations. In order to account for different object distances in the initial and goal state, we use relative instead of absolute location error. Pose error is defined as the angle (in degrees) between the major axes of the objects in the final and goal images (see Figure 4(c)). Please see the supplementary materials for further details.

3.3 Blob Model

We compared the performance of the learnt model against a baseline blob model. This model first estimates object locations in the current and goal images using a template-based object detector. It then uses the vector difference between these locations to compute the location, angle and length of the poke executed by the robot (see the supplementary materials for details). In a manner similar to greedy planning with the learnt model, this process is repeated iteratively until the object is within a pre-defined threshold of the desired location in the goal image or a maximum number of pokes is reached.

4 Results

The robot was tasked with displacing objects in an initial image into their configuration depicted in a goal image (see Figure 5). The three rows in the figure show the performance when the robot is asked to displace an object (Nutella bottle) present in the training set, an object (red cup) whose geometry is different from the objects in the training set, and when the task is to move an object around an obstacle. These examples are representative of the robot's performance, and more examples can be found on the project website. It can be seen that the robot is able to successfully poke objects present in the training set, as well as objects with novel geometry and texture, into desired goal locations that are significantly farther apart than in the pairs of before and after images used in the training set. Row 2 in Figure 5 also shows that the robot's performance is unaffected by the presence of distractor objects that occupy the same location in the current and goal images. These results indicate that the learnt model allows the robot to perform tasks that show generalization beyond the training set (i.e. poking objects by small distances). Row 3 in Figure 5 depicts an example where the robot fails to push the object around an obstacle (yellow object). The robot acts greedily and ends up pushing the obstacle along with the object. One more side effect of greedy planning is the zig-zag, instead of straight, trajectories taken by the object between its initial and goal locations. Investigating alternatives to
While we cannot prove that the model has learnt to detect object location, nearest neighbor visualizations of the learnt feature space clearly suggest sensitivity to object location (see supplementary materials). This is interesting because the robot received no direct supervision to locate objects. Because different objects have different geometries, they need to be poked at different places to move them in the same manner. For example, a Nutella bottle can be reliably moved forward without rotating the bottle by poking it on the side along the direction toward its center of mass, whereas a hammer is reliably moved by poking it where the hammer head meets the handle. Pushing an object to a desired pose is harder and requires a more detailed understanding of object geometry in comparison to pushing the object to a desired location. In order to test whether the learnt model represents any information about object geometry, we compared its performance against the baseline blob model (see section 3.3 and figure 4(b)) that ignores object geometry. For this comparison, the robot was tasked to push objects to a nearby goal by making only a single poke (see supplementary materials for more details). Results in Figure 6(a) show that both the inverse and joint model outperform the blob model. This indicates that in addition to representing information about object location, the learn models also represent some information about object geometry. 4.1 Forward model regularizes the inverse model We tested the hypothesis whether the forward model regularizes the feature space learnt by the inverse model in a 2-D simulation environment where the agent interacted with a red rectangular object by poking it by small forces. The rectangle was allowed to freely translate and rotate (Figure 6(c)). Model training was performed using an architecture similar to the one described in section 3.1. Additional details about the experimental setup, network architecture and training procedure for the simulation experiments are provided in the supplementary materials. Figure 6(c) shows that when less training data (10K, 20K examples) is available the joint model outperforms the inverse model and reaches closer to the goal state in fewer steps (i.e. fewer actions). This shows that indeed the forward model regularizes the inverse model and helps generalize better. However, when the number of training examples is increased to 100K both models are at par. This is not surprising because training with more data often results in better generalization and thus the inverse model is no longer reliant on the forward model for the regularization. Evaluation on the real robot supports the findings from the simulation experiments. Figure 6(b) shows that in a test of generalization, when an object is required to be displaced by a long distance, the joint model outperforms the inverse model. 
Similar performance of the joint and blob models at this task is not surprising: even if the pokes are somewhat inaccurate, as long as they are generally in the direction from the object's current location to its goal location, the object might traverse a zig-zag path but will eventually reach the goal. The joint model is, however, more accurate at displacing objects into their correct pose as compared to the blob model (Figure 6(a)).

Figure 6: (a) The inverse and joint models are more accurate than the blob model at pushing objects towards the desired pose (pose error for nearby goals). (b) The joint model outperforms the inverse-only model when the robot is tasked with pushing objects by distances that are significantly larger than the object distances in the before and after images used in the training set, i.e. a test of generalization (relative location error for far away goals). (c) Simulation studies reveal that when fewer training examples (10K, 20K) are available, the joint model outperforms the inverse model, and the performance is comparable with a larger amount of data (100K). This result indicates that the forward model regularizes the inverse model.

5 Related Work

Learning visual control policies using reinforcement learning for tasks such as playing Atari games (Mnih et al., 2015), controlling robots in simulation (Lillicrap et al., 2016) and in the real world (Levine et al., 2016a) is of growing interest. However, these methods are model free and learn goal-specific policies, which makes it difficult to repurpose the learned policies for new tasks. In contrast, the aim of this work is to learn intuitive physical models of object interaction which, as we show, allow the agent to generalize. Other works in visual control have relied on model-free methods that operate on a low-dimensional state representation of images obtained using autoencoders (Lange et al., 2012; Finn et al., 2016; Kietzmann & Riedmiller, 2009). It is unclear whether features obtained by optimizing pixelwise reconstruction are necessarily well suited for model-based control. Learning to grasp objects by trial and error from large amounts of interaction data has recently been explored (Pinto & Gupta, 2016; Levine et al., 2016b). These methods aim to acquire a policy for solving a single concrete task, while our work is concerned with learning a general predictive model that could be used to achieve a variety of goals at test time. When an object is grasped, it is possible to fully control the state of the grasped object. However, in non-prehensile manipulation (i.e. manipulation without grasping (LaValle, 2006)) such as poking, the object state is not directly controllable, which makes manipulation by poking harder than grasping (Dogar & Srinivasa, 2012). Learning a model of poking was considered by Pinto et al. (2016), but their goal was to learn visual representations and they did not consider using the learnt models to displace objects to goal locations. A good review of model-based control can be found in (Mayne, 2014), and (Jordan & Rumelhart, 1992; Wolpert et al., 1995) provide interesting perspectives.
A model-based deep learning method for cutting vegetables was considered by Lenz et al. (2015). However, their system operated on the robot's state space instead of vision and is thus limited in its generality. Model-based control from visual inputs was considered by (Fragkiadaki et al., 2016; Wahlström et al., 2015; Watter et al., 2015; Oh et al., 2015) in the synthetic domains of manipulating a two-degree-of-freedom robotic arm, an inverted pendulum, billiards and Atari games. In contrast, we tackle manipulation of complex, compressible real-world objects. Instead of learning a model of physics, some recent works (Wu et al., 2015; Mottaghi et al., 2016; Lerer et al., 2016) have proposed to use Newtonian physics in combination with neural networks to forecast object dynamics.

In robotic manipulation, a number of prior methods have been proposed that use hand-designed visual features and known object poses or key locations to plan and execute pushes and other non-prehensile manipulations (Kopicki et al., 2011; Lau et al., 2011; Meriçli et al., 2015). Unlike these methods, the goal in our work is to learn an intuitive physics model for pushing only from raw images, thus allowing the robot to learn by exploring the environment on its own without human intervention.

6 Discussion and Future Work

In this work we propose to learn an "intuitive" model of physics using interaction data. An alternative is to represent the world in terms of a fixed set of physical parameters such as mass, friction coefficient and normal forces, and to use a physics simulator for computing object dynamics from this representation (Kolev & Todorov, 2015; Mottaghi et al., 2016; Wu et al., 2015; Hamrick et al., 2011). This approach is general because physics simulators inevitably use Newton's laws, which apply to a wide range of physical phenomena ranging from the orbital motion of planets to a swinging pendulum. Estimating parameters such as mass and friction coefficient from sensory data is subject to errors, and it is possible that one parameterization is easier to estimate or more robust to sensory noise than another. For example, the conclusion that objects with feather-like appearance fall slower than objects with stone-like appearance can be reached either by correlating visual texture with the speed of falling objects, or by computing the drag force after estimating the cross-section area of the object. Depending on whether estimation of visual texture or of cross-section area is more robust, one parameterization will result in more accurate predictions than the other. Pre-defining a set of parameters for predicting object dynamics, which is required by the "simulator-based" approach, might therefore lead to suboptimal solutions that are less robust.

For many practical object manipulation tasks of interest, such as re-arranging objects, cutting vegetables, folding clothes, and so forth, small errors in execution are acceptable. The key challenge is robust performance in the face of varying environmental conditions. This suggests that a more robust but somewhat imprecise model may in fact be preferable to a less robust and more precise model. While the arguments presented above suggest that intuitive physics models are likely to be more robust than simulator-based models, quantifying the robustness of these models is an interesting direction for future work.
Furthermore, it is non-trivial to use simulator-based models for manipulating deformable objects such as clothes and ropes, because simulation of deformable objects is hard and also requires representing objects by heavily handcrafted features that are unlikely to generalize across objects. The intuitive physics approach does not make any object-specific assumptions and can be easily extended to work with deformable objects. This approach is in the spirit of recent successful deep learning techniques in computer vision and speech processing that learn features directly from data, whereas the simulator-based physics approach is more similar to using hand-designed features. Current methods for learning intuitive physics models, such as ours, are data inefficient, and it is possible that combining intuitive and simulator-based approaches leads to better models than either approach by itself.

In poking-based interaction, the robot does not have full control of the object state, which makes it harder to predict and plan for the outcome of an action. The models proposed in this work generalize and are able to push objects into their desired location. However, performance on setting objects into the desired pose is not satisfactory, possibly because the robot only executes pokes in large, discrete time steps. An interesting area of future investigation is to use continuous-time control with smaller pokes, which are likely to be more predictable than the large pokes used in this work. Further, although our approach is evaluated on a specific robotic manipulation task, there are no task-specific assumptions, and the techniques are applicable to other tasks. In the future, it would be interesting to see how the proposed approach scales to more complex environments, diverse object collections, different manipulation skills and other non-manipulation tasks, such as navigation. Other directions for future investigation include the use of the forward model for planning and developing better strategies for data collection than random interaction.

Supplementary materials and videos can be found at http://ashvin.me/pokebot-website/.

Acknowledgement: We thank Alyosha Efros for inspiration and fruitful discussions throughout this work. The title of this paper is partially influenced by the term "pokebot" that Alyosha has been using for several years. We thank Ruzena Bajcsy for access to the Baxter robot and Shubham Tulsiani for helpful comments. This work was supported in part by ONR MURI N00014-14-1-0671, ONR YIP, and by ARL through the MAST program. We are grateful to NVIDIA corporation for donating K40 GPUs and providing access to the NVIDIA PSG cluster.

References

Dogar, Mehmet R and Srinivasa, Siddhartha S. A planning framework for non-prehensile manipulation under clutter and uncertainty. Autonomous Robots, 33(3):217-236, 2012.
Finn, Chelsea, Tan, Xin Yu, Duan, Yan, Darrell, Trevor, Levine, Sergey, and Abbeel, Pieter. Deep spatial autoencoders for visuomotor learning. ICRA, 2016.
Fragkiadaki, Katerina, Agrawal, Pulkit, Levine, Sergey, and Malik, Jitendra. Learning visual predictive models of physics for playing billiards. ICLR, 2016.
Gopnik, Alison, Meltzoff, Andrew N, and Kuhl, Patricia K. The scientist in the crib: Minds, brains, and how children learn. 1999.
Hamrick, Jessica, Battaglia, Peter, and Tenenbaum, Joshua B. Internal physics models guide probabilistic judgments about object dynamics. In Cognitive Science Society, pp. 1545-1550, 2011.
Jordan, Michael I and Rumelhart, David E.
Forward models: Supervised learning with a distal teacher. Cognitive Science, 16, 1992.
Kietzmann, Tim C and Riedmiller, Martin. The neuro slot car racer: Reinforcement learning in a real world setting. In ICMLA, 2009.
Kolev, Svetoslav and Todorov, Emanuel. Physically consistent state estimation and system identification for contacts. In International Conference on Humanoid Robots, pp. 1036-1043. IEEE, 2015.
Kopicki, Marek, Zurek, Sebastian, Stolkin, Rustam, Mörwald, Thomas, and Wyatt, Jeremy. Learning to predict how rigid objects behave under simple manipulation. In ICRA, pp. 5722-5729. IEEE, 2011.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.
Lange, Stanislav, Riedmiller, Martin, and Voigtlander, Arne. Autonomous reinforcement learning on raw visual input data in a real world application. In IJCNN, pp. 1-8. IEEE, 2012.
Lau, Manfred, Mitani, Jun, and Igarashi, Takeo. Automatic learning of pushing strategy for delivery of irregular-shaped objects. In ICRA, pp. 3733-3738. IEEE, 2011.
LaValle, Steven M. Planning Algorithms. Cambridge University Press, 2006.
Lenz, Ian, Knepper, Ross, and Saxena, Ashutosh. DeepMPC: Learning deep latent features for model predictive control. In RSS, 2015.
Lerer, Adam, Gross, Sam, and Fergus, Rob. Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. JMLR, 2016a.
Levine, Sergey, Pastor, Peter, Krizhevsky, Alex, and Quillen, Deirdre. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. arXiv, 2016b.
Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. ICLR, 2016.
Mayne, David Q. Model predictive control: Recent developments and future promise. Automatica, 50(12):2967-2986, 2014.
McCloskey, Michael. Intuitive physics. Scientific American, 248(4):122-130, 1983.
Meriçli, Tekin, Veloso, Manuela, and Akın, H Levent. Push-manipulation of complex passive mobile objects using experimentally acquired motion models. Autonomous Robots, 38(3):317-329, 2015.
Michotte, Albert. The Perception of Causality. 1963.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 2015.
Mottaghi, Roozbeh, Bagherinezhad, Hessam, Rastegari, Mohammad, and Farhadi, Ali. Newtonian image understanding: Unfolding the dynamics of objects in static images. CVPR, 2016.
Oh, Junhyuk, Guo, Xiaoxiao, Lee, Honglak, Lewis, Richard, and Singh, Satinder. Action-conditional video prediction using deep networks in Atari games. NIPS, 2015.
Pinto, Lerrel and Gupta, Abhinav. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. ICRA, 2016.
Pinto, Lerrel, Gandhi, Dhiraj, Han, Yuanfeng, Park, Yong-Lae, and Gupta, Abhinav. The curious robot: Learning visual representations via physical interactions. In ECCV, pp. 3-18. Springer, 2016.
Smith, Linda and Gasser, Michael. The development of embodied cognition: Six lessons from babies. Artificial Life, 11(1-2):13-29, 2005.
Vondrick, Carl, Pirsiavash, Hamed, and Torralba, Antonio.
Anticipating the future by watching unlabeled video. CVPR, 2016.
Wahlström, Niklas, Schön, Thomas B., and Deisenroth, Marc Peter. From pixels to torques: Policy learning with deep dynamical models. CoRR, abs/1502.02251, 2015.
Watter, Manuel, Springenberg, Jost, Boedecker, Joschka, and Riedmiller, Martin. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, pp. 2728-2736, 2015.
Wolpert, Daniel M, Ghahramani, Zoubin, and Jordan, Michael I. An internal model for sensorimotor integration. Science-AAAS-Weekly Paper Edition, 269(5232):1880-1882, 1995.
Wu, Jiajun, Yildirim, Ilker, Lim, Joseph J, Freeman, Bill, and Tenenbaum, Josh. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, pp. 127-135, 2015.
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Tim Salimans (OpenAI, tim@openai.com) and Diederik P. Kingma (OpenAI, dpkingma@openai.com)

Abstract

We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.

1 Introduction

Recent successes in deep learning have shown that neural networks trained by first-order gradient based optimization are capable of achieving amazing results in diverse domains like computer vision, speech recognition, and language modelling [7]. However, it is also well known that the practical success of first-order gradient based optimization is highly dependent on the curvature of the objective that is optimized. If the condition number of the Hessian matrix of the objective at the optimum is high, the problem is said to exhibit pathological curvature, and first-order gradient descent will have trouble making progress [22, 32]. The amount of curvature, and thus the success of our optimization, is not invariant to reparameterization [1]: there may be multiple equivalent ways of parameterizing the same model, some of which are much easier to optimize than others. Finding good ways of parameterizing neural networks is thus an important problem in deep learning. While the architectures of neural networks differ widely across applications, they are typically mostly composed of conceptually simple computational building blocks sometimes called neurons: each such neuron computes a weighted sum over its inputs and adds a bias term, followed by the application of an elementwise nonlinear transformation. Improving the general optimizability of deep networks is a challenging task [6], but since many neural architectures share these basic building blocks, improving these building blocks improves the performance of a very wide range of model architectures and could thus be very useful. Several authors have recently developed methods to improve the conditioning of the cost gradient for general neural network architectures. One approach is to explicitly left multiply the cost gradient with an approximate inverse of the Fisher information matrix, thereby obtaining an approximately whitened natural gradient. Such an approximate inverse can for example be obtained by using a Kronecker-factored approximation to the Fisher matrix and inverting it (KFAC, [23]), by using an
approximate Cholesky factorization of the inverse Fisher matrix (FANG, [10]), or by whitening the input of each layer in the neural network (PRONG, [5]). Alternatively, we can use standard first-order gradient descent without preconditioning, but change the parameterization of our model to give gradients that are more like the whitened natural gradients of these methods. For example, Raiko et al. [27] propose to transform the outputs of each neuron to have zero output and zero slope on average. They show that this transformation approximately diagonalizes the Fisher information matrix, thereby whitening the gradient, and that this leads to improved optimization performance. Another approach in this direction is batch normalization [14], a method where the output of each neuron (before application of the nonlinearity) is normalized by the mean and standard deviation of the outputs calculated over the examples in the minibatch. This reduces covariate shift of the neuron outputs and the authors suggest it also brings the Fisher matrix closer to the identity matrix. Following this second approach to approximate natural gradient optimization, we propose a simple but general method, called weight normalization, for improving the optimizability of the weights of neural network models. The method is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. In addition, the overhead imposed by our method is lower: no additional memory is required and the additional computation is negligible. The method shows encouraging results on a wide range of deep learning applications.

2 Weight Normalization

We consider standard artificial neural networks where the computation of each neuron consists in taking a weighted sum of input features, followed by an elementwise nonlinearity:

y = \phi(w \cdot x + b),    (1)

where w is a k-dimensional weight vector, b is a scalar bias term, x is a k-dimensional vector of input features, φ(·) denotes an elementwise nonlinearity such as the rectifier max(·, 0), and y denotes the scalar output of the neuron. After associating a loss function to one or more neuron outputs, such a neural network is commonly trained by stochastic gradient descent in the parameters w, b of each neuron. In an effort to speed up the convergence of this optimization procedure, we propose to reparameterize each weight vector w in terms of a parameter vector v and a scalar parameter g and to perform stochastic gradient descent with respect to those parameters instead. We do so by expressing the weight vectors in terms of the new parameters using

w = \frac{g}{\|v\|} v,    (2)

where v is a k-dimensional vector, g is a scalar, and ‖v‖ denotes the Euclidean norm of v. This reparameterization has the effect of fixing the Euclidean norm of the weight vector w: we now have ‖w‖ = g, independent of the parameters v. We therefore call this reparameterization weight normalization. The idea of normalizing the weight vector has been proposed before (e.g. [31, 33]) but earlier work typically still performed optimization in the w-parameterization, only applying the normalization after each step of stochastic gradient descent. This is fundamentally different from our approach: we propose to explicitly reparameterize the model and to perform stochastic gradient descent in the new parameters v, g directly. Doing so improves the conditioning of the gradient and leads to improved convergence of the optimization procedure: by decoupling the norm of the weight vector (g) from the direction of the weight vector (v/‖v‖), we speed up convergence of our stochastic gradient descent optimization, as we show experimentally in section 5. Instead of working with g directly, we may also use an exponential parameterization for the scale, i.e. g = e^s, where s is a log-scale parameter to learn by stochastic gradient descent. Parameterizing the g parameter in the log-scale is more intuitive and more easily allows g to span a wide range of different magnitudes. Empirically, however, we did not find this to be an advantage. In our experiments, the eventual test-set performance was not significantly better or worse than the results with directly learning g in its original parameterization, and optimization was slightly slower.
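For illustration, here is a minimal NumPy sketch of equations (1) and (2) for a single neuron. It is not the reference implementation mentioned below; the function name and the choice of a ReLU nonlinearity are illustrative assumptions.

    import numpy as np

    def weightnorm_neuron(v, g, b, x, phi=lambda t: np.maximum(t, 0.0)):
        # eq. (2): build w from direction v and scale g, so that ||w|| = g
        w = g * v / np.linalg.norm(v)
        # eq. (1): weighted sum plus bias, then elementwise nonlinearity
        return phi(np.dot(w, x) + b)

    rng = np.random.default_rng(0)
    v, x = rng.normal(size=5), rng.normal(size=5)
    y = weightnorm_neuron(v, g=2.0, b=0.1, x=x)
    w = 2.0 * v / np.linalg.norm(v)
    assert np.isclose(np.linalg.norm(w), 2.0)  # the norm of w is fixed to g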
2.1 Gradients

Training a neural network in the new parameterization is done using standard stochastic gradient descent methods. Here we differentiate through (2) to obtain the gradient of a loss function L with respect to the new parameters v, g. Doing so gives

\nabla_g L = \frac{\nabla_w L \cdot v}{\|v\|}, \qquad \nabla_v L = \frac{g}{\|v\|} \nabla_w L - \frac{g \nabla_g L}{\|v\|^2} v,    (3)

where ∇_w L is the gradient with respect to the weights w as used normally. Backpropagation using weight normalization thus only requires a minor modification to the usual backpropagation equations, and is easily implemented using standard neural network software, either by directly specifying the network in terms of the v, g parameters and relying on auto-differentiation, or by applying (3) in a post-processing step. We provide reference implementations using both approaches for Theano, Tensorflow and Keras at https://github.com/openai/weightnorm. Unlike with batch normalization, the expressions above are independent of the minibatch size and thus cause only minimal computational overhead. An alternative way to write the gradient is

\nabla_v L = \frac{g}{\|v\|} M_w \nabla_w L, \quad \text{with} \quad M_w = I - \frac{w w^\top}{\|w\|^2},    (4)

where M_w is a projection matrix that projects onto the complement of the w vector. This shows that weight normalization accomplishes two things: it scales the weight gradient by g/‖v‖, and it projects the gradient away from the current weight vector. Both effects help to bring the covariance matrix of the gradient closer to identity and benefit optimization, as we explain below. Due to projecting away from w, the norm of v grows monotonically with the number of weight updates when learning a neural network with weight normalization using standard gradient descent without momentum: let v' = v + Δv denote our parameter update, with Δv ∝ ∇_v L (steepest ascent/descent). Then Δv is necessarily orthogonal to the current weight vector w, since we project away from it when calculating ∇_v L (equation 4). Since v is proportional to w, the update is thus also orthogonal to v and increases its norm by the Pythagorean theorem. Specifically, if ‖Δv‖/‖v‖ = c, the new weight vector has norm ‖v'‖ = \sqrt{\|v\|^2 + c^2 \|v\|^2} = \sqrt{1 + c^2} ‖v‖ ≥ ‖v‖. The rate of increase will depend on the variance of the weight gradient. If our gradients are noisy, c will be high and the norm of v will quickly increase, which in turn will decrease the scaling factor g/‖v‖. If the norm of the gradients is small, we get \sqrt{1 + c^2} ≈ 1, and the norm of v will stop increasing. Using this mechanism, the scaled gradient self-stabilizes its norm.
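As a numerical sanity check on equation (3), the following sketch (illustrative only, not the authors' reference code) maps an ordinary weight gradient to gradients in v and g, and confirms that the resulting update direction is orthogonal to v, which is the fact behind the norm-growth argument above.

    import numpy as np

    def weightnorm_grads(v, g, grad_w):
        # eq. (3): gradients w.r.t. the new parameters g and v
        vnorm = np.linalg.norm(v)
        grad_g = np.dot(grad_w, v) / vnorm
        grad_v = (g / vnorm) * grad_w - (g * grad_g / vnorm**2) * v
        return grad_v, grad_g

    rng = np.random.default_rng(1)
    v, grad_w = rng.normal(size=8), rng.normal(size=8)
    grad_v, grad_g = weightnorm_grads(v, g=1.5, grad_w=grad_w)

    # grad_v is orthogonal to v, so a plain gradient step can only grow ||v||,
    # as in the Pythagorean argument above.
    assert abs(np.dot(grad_v, v)) < 1e-10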
This property does not strictly hold for optimizers that use separate learning rates for individual parameters, like Adam [15] which we use in experiments, or when using momentum. However, qualitatively we still find the same effect to hold. Empirically, we find that the ability to grow the norm ‖v‖ makes optimization of neural networks with weight normalization very robust to the value of the learning rate: if the learning rate is too large, the norm of the unnormalized weights grows quickly until an appropriate effective learning rate is reached. Once the norm of the weights has grown large with respect to the norm of the updates, the effective learning rate stabilizes. Neural networks with weight normalization therefore work well with a much wider range of learning rates than when using the normal parameterization. It has been observed that neural networks with batch normalization also have this property [14], which can also be explained by this analysis. By projecting the gradient away from the weight vector w, we also eliminate the noise in that direction. If the covariance matrix of the gradient with respect to w is given by C, the covariance matrix of the gradient in v is given by D = (g^2 / ‖v‖^2) M_w C M_w. Empirically, we find that w is often (close to) a dominant eigenvector of the covariance matrix C: removing that eigenvector then gives a new covariance matrix D that is closer to the identity matrix, which may further speed up learning.

2.2 Relation to batch normalization

An important source of inspiration for this reparameterization is batch normalization [14], which normalizes the statistics of the pre-activation t for each minibatch as

t' = \frac{t - \mu[t]}{\sigma[t]},

with μ[t], σ[t] the mean and standard deviation of the pre-activations t = v · x. For the special case where our network only has a single layer, and the input features x for that layer are whitened (independently distributed with zero mean and unit variance), these statistics are given by μ[t] = 0 and σ[t] = ‖v‖. In that case, normalizing the pre-activations using batch normalization is equivalent to normalizing the weights using weight normalization. Convolutional neural networks usually have much fewer weights than pre-activations, so normalizing the weights is often much cheaper computationally. In addition, the norm of v is non-stochastic, while the minibatch mean μ[t] and variance σ^2[t] can in general have high variance for small minibatch size. Weight normalization can thus be viewed as a cheaper and less noisy approximation to batch normalization. Although exact equivalence does not usually hold for deeper architectures, we still find that our weight normalization method provides much of the speed-up of full batch normalization. In addition, its deterministic nature and independence from the minibatch input also mean that our method can be applied more easily to models like RNNs and LSTMs, as well as noise-sensitive applications like reinforcement learning.
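The single-layer equivalence above is easy to check numerically. The snippet below is an illustrative check of ours under the stated whitening assumption (the feature dimension and minibatch size are arbitrary choices): it shows that μ[t] ≈ 0 and σ[t] ≈ ‖v‖, so that batch-normalizing t = v · x nearly coincides with dividing by ‖v‖.

    import numpy as np

    rng = np.random.default_rng(2)
    k, m = 16, 100_000                   # feature dimension, minibatch size
    v = rng.normal(size=k)
    X = rng.normal(size=(m, k))          # whitened inputs: zero mean, unit variance

    t = X @ v                            # pre-activations t = v . x per example
    mu, sigma = t.mean(), t.std()
    print(mu)                            # close to 0
    print(sigma / np.linalg.norm(v))     # close to 1, i.e. sigma[t] ~ ||v||

    # batch-normalized vs. weight-normalized pre-activations (with g = 1):
    print(np.max(np.abs((t - mu) / sigma - t / np.linalg.norm(v))))  # small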
3 Data-Dependent Initialization of Parameters

Besides a reparameterization effect, batch normalization also has the benefit of fixing the scale of the features generated by each layer of the neural network. This makes the optimization robust against parameter initializations for which these scales vary across layers. Since weight normalization lacks this property, we find it is important to properly initialize our parameters. We propose to sample the elements of v from a simple distribution with a fixed scale, which is in our experiments a normal distribution with mean zero and standard deviation 0.05. Before starting training, we then initialize the b and g parameters to fix the minibatch statistics of all pre-activations in our network, just like in batch normalization, but only for a single minibatch of data and only during initialization. This can be done efficiently by performing an initial feedforward pass through our network for a single minibatch of data X, using the following computation at each neuron:

t = \frac{v \cdot x}{\|v\|}, \qquad y = \phi\left( \frac{t - \mu[t]}{\sigma[t]} \right),    (5)

where μ[t] and σ[t] are the mean and standard deviation of the pre-activation t over the examples in the minibatch. We can then initialize the neuron's bias b and scale g as

g \leftarrow \frac{1}{\sigma[t]}, \qquad b \leftarrow \frac{-\mu[t]}{\sigma[t]},    (6)

so that y = φ(w · x + b). Like batch normalization, this method ensures that all features initially have zero mean and unit variance before application of the nonlinearity. With our method this only holds for the minibatch we use for initialization, and subsequent minibatches may have slightly different statistics, but experimentally we find this initialization method to work well. The method can also be applied to networks without weight normalization, simply by doing stochastic gradient optimization on the parameters w directly, after initialization in terms of v and g: this is what we compare to in section 5. Independently from our work, this type of initialization was recently proposed by different authors [24, 18], who found such data-based initialization to work well for use with the standard parameterization in terms of w. The downside of this initialization method is that it can only be applied in similar cases as where batch normalization is applicable. For models with recursion, such as RNNs and LSTMs, we will have to resort to standard initialization methods.

4 Mean-only Batch Normalization

Weight normalization, as introduced in section 2, makes the scale of neuron activations approximately independent of the parameters v. Unlike with batch normalization, however, the means of the neuron activations still depend on v. We therefore also explore the idea of combining weight normalization with a special version of batch normalization, which we call mean-only batch normalization: with this normalization method, we subtract out the minibatch means like with full batch normalization, but we do not divide by the minibatch standard deviations. That is, we compute neuron activations using

t = w \cdot x, \qquad \tilde{t} = t - \mu[t] + b, \qquad y = \phi(\tilde{t}),    (7)

where w is the weight vector, parameterized using weight normalization, and μ[t] is the minibatch mean of the pre-activation t. During training, we keep a running average of the minibatch mean, which we substitute in for μ[t] at test time. The gradient of the loss with respect to the pre-activation t is calculated as

\nabla_t L = \nabla_{\tilde{t}} L - \mu[\nabla_{\tilde{t}} L],    (8)

where μ[·] denotes once again the operation of taking the minibatch mean. Mean-only batch normalization thus has the effect of centering the gradients that are backpropagated. This is a comparatively cheap operation, so the computational overhead of mean-only batch normalization is lower than that of full batch normalization. In addition, this method causes less noise during training, and the noise that is caused is more gentle, as the law of large numbers ensures that μ[t] and μ[∇_t̃ L] are approximately normally distributed. Thus, the added noise has much lighter tails than the highly kurtotic noise caused by the minibatch estimate of the variance used in full batch normalization. As we show in section 5.1, this leads to improved accuracy at test time.
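Before turning to the experiments, here is a small NumPy sketch of the data-dependent initialization in equations (5) and (6) for one fully connected weight-normalized layer. It is illustrative only: the helper name and the eps guard against a zero standard deviation are our assumptions, while the 0.05 initialization scale comes from the text above.

    import numpy as np

    def init_weightnorm_layer(X, k_out, rng, eps=1e-8):
        # fixed-scale init of the directions v (one row per output unit)
        k_in = X.shape[1]
        V = rng.normal(scale=0.05, size=(k_out, k_in))
        # eq. (5): pre-activations t = v . x / ||v|| over the init minibatch X
        T = X @ (V / np.linalg.norm(V, axis=1, keepdims=True)).T
        mu, sigma = T.mean(axis=0), T.std(axis=0) + eps
        # eq. (6): set g and b so features start with zero mean, unit variance
        return V, 1.0 / sigma, -mu / sigma

    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 32))       # one minibatch of layer inputs
    V, g, b = init_weightnorm_layer(X, k_out=64, rng=rng)
    T = g * (X @ (V / np.linalg.norm(V, axis=1, keepdims=True)).T) + b
    print(T.mean(), T.std())             # approximately 0 and 1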
5 Experiments

We experimentally validate the usefulness of our method using four different models for varied applications in supervised image recognition, generative modelling, and deep reinforcement learning.

5.1 Supervised Classification: CIFAR-10

To test our reparameterization method for the application of supervised classification, we consider the CIFAR-10 data set of natural images [19]. The model we are using is based on the ConvPool-CNN-C architecture of [30], with some small modifications: we replace the first dropout layer by a layer that adds Gaussian noise, we expand the last hidden layer from 10 units to 192 units, and we use 2×2 max-pooling, rather than 3×3. The only hyperparameter that we actively optimized (the standard deviation of the Gaussian noise) was chosen to maximize the performance of the network on a holdout set of 10000 examples, using the standard parameterization (no weight normalization or batch normalization). A full description of the resulting architecture is given in table A in the supplementary material. We train our network for CIFAR-10 using Adam [15] for 200 epochs, with a fixed learning rate and momentum of 0.9 for the first 100 epochs. For the last 100 epochs we set the momentum to 0.5 and linearly decay the learning rate to zero. We use a minibatch size of 100. We evaluate 5 different parameterizations of the network: 1) the standard parameterization, 2) using batch normalization, 3) using weight normalization, 4) using weight normalization combined with mean-only batch normalization, 5) using mean-only batch normalization with the normal parameterization. The network parameters are initialized using the scheme of section 3, such that all cases have identical parameters starting out. For each case we pick the optimal learning rate in {0.0003, 0.001, 0.003, 0.01}. The resulting error curves during training can be found in figure 1: both weight normalization and batch normalization provide a significant speed-up over the standard parameterization. Batch normalization makes slightly more progress per epoch than weight normalization early on, although this is partly offset by the higher computational cost: with our implementation, training with batch normalization was about 16% slower compared to the standard parameterization. In contrast, weight normalization was not noticeably slower. During the later stage of training, weight normalization and batch normalization seem to optimize at about the same speed, with the normal parameterization (with or without mean-only batch normalization) still lagging behind.

Figure 1: Training error for CIFAR-10 using different parameterizations. For weight normalization, batch normalization, and mean-only batch normalization we show results using Adam with a learning rate of 0.003. For the normal parameterization we instead use 0.0003, which works best in this case. For the last 100 epochs the learning rate is linearly decayed to zero.

Figure 2: Classification results on CIFAR-10 without data augmentation.

Model | Test Error
Maxout [8] | 11.68%
Network in Network [21] | 10.41%
Deeply Supervised [20] | 9.6%
ConvPool-CNN-C [30] | 9.31%
ALL-CNN-C [30] | 9.08%
our CNN, mean-only B.N. | 8.52%
our CNN, weight norm. | 8.46%
our CNN, normal param. | 8.43%
our CNN, batch norm. | 8.05%
ours, W.N. + mean-only B.N. | 7.31%
DenseNet [13] | 5.77%

After optimizing the network for 200 epochs using the different parameterizations, we evaluate their performance on the CIFAR-10 test set. The results are summarized in figure 2: weight normalization, the normal parameterization, and mean-only batch normalization have similar test accuracy (approximately 8.5% error). Batch normalization does significantly better, at 8.05% error. Mean-only batch normalization combined with weight normalization has the best performance at 7.31% test error, and interestingly does much better than mean-only batch normalization combined with the normal parameterization: this suggests that the noise added by batch normalization can be useful for regularizing the network, but that the reparameterization provided by weight normalization or full batch normalization is also needed for optimal results. We hypothesize that the substantial improvement by mean-only B.N. with weight normalization over regular batch normalization is due to the distribution of the noise caused by the normalization method during training: for mean-only batch normalization the minibatch mean has a distribution that is approximately Gaussian, while the noise added by full batch normalization during training has much higher kurtosis. The result with mean-only batch normalization combined with weight normalization represented the state of the art for CIFAR-10 among methods that do not use data augmentation, until it was recently surpassed by DenseNets [13].

5.2 Generative Modelling: Convolutional VAE

Next, we test the effect of weight normalization applied to deep convolutional variational autoencoders (CVAEs) [16, 28, 29], trained on the MNIST data set of images of handwritten digits and the CIFAR-10 data set of small natural images. Variational auto-encoders are generative models that explain the data vector x as arising from a set of latent variables z, through a joint distribution of the form p(z, x) = p(z)p(x|z), where the decoder p(x|z) is specified using a neural network. A lower bound on the log marginal likelihood log p(x) can be obtained by approximately inferring the latent variables z from the observed data x using an encoder distribution q(z|x) that is also specified as a neural network. This lower bound is then optimized to fit the model to the data. We follow a similar implementation of the CVAE as in [29] with some modifications, mainly that the encoder and decoder are parameterized with ResNet [11] blocks, and that the diagonal posterior is replaced with a more flexible specification based on inverse autoregressive flow. A further developed version of this model is presented in [17], where the architecture is explained in detail. For MNIST, the encoder consists of 3 sequences of two ResNet blocks each, the first sequence acting on 16 feature maps, the others on 32 feature maps. The first two sequences are followed by a 2-times subsampling operation implemented using 2×2 stride, while the third sequence is followed by a fully connected layer with 450 units. The decoder has a similar architecture, but with reversed direction. For CIFAR-10, we used a neural architecture with ResNet units and multiple intermediate stochastic layers. We used Adamax [15] with α
= 0.002 for optimization, in combination with Polyak averaging [26] in the form of an exponential moving average that averages parameters over approximately 10 epochs. In figure 3, we plot the test-set lower bound as a function of the number of training epochs, including error bars based on multiple different random seeds for initializing parameters. As can be seen, the parameterization with weight normalization has lower variance and converges to a better optimum. We observe similar results across different hyper-parameter settings.

Figure 3: Marginal log likelihood lower bound on the MNIST (top) and CIFAR-10 (bottom) test sets for a convolutional VAE during training, for both the standard implementation as well as our modification with weight normalization. For MNIST, we provide standard error bars to indicate variance based on different initial random seeds.

5.3 Generative Modelling: DRAW

Next, we consider DRAW, a recurrent generative model by [9]. DRAW is a variational auto-encoder with generative model p(z)p(x|z) and encoder q(z|x), similar to the model in section 5.2, but with both the encoder and decoder consisting of a recurrent neural network comprised of Long Short-Term Memory (LSTM) [12] units. LSTM units consist of a memory cell with additive dynamics, combined with input, forget, and output gates that determine which information flows in and out of the memory. The additive dynamics enables learning of long-range dependencies in the data. At each time step of the model, DRAW uses the same set of weight vectors to update the cell states of the LSTM units in its encoder and decoder. Because of the recurrent nature of this process it is not trivial to apply batch normalization here: normalizing the cell states diminishes their ability to pass through information. Fortunately, weight normalization can easily be applied to the weight vectors of each LSTM unit, and we find this to work well empirically. Some other potential solutions were recently proposed in [4, 2].

Figure 4: Marginal log likelihood lower bound on the MNIST test set for DRAW during training, for both the standard implementation as well as our modification with weight normalization. 100 epochs is not sufficient for convergence for this model, but the implementation using weight normalization clearly makes progress much more quickly than with the standard parameterization.

We take the Theano implementation of DRAW provided at https://github.com/jbornschein/draw and use it to model the MNIST data set of handwritten digits. We then make a single modification to the model: we apply weight normalization to all weight vectors. As can be seen in figure 4, this significantly speeds up convergence of the optimization procedure, even without modifying the initialization method and learning rate that were tuned for use with the normal parameterization.
5.4 Reinforcement Learning: DQN

Next we apply weight normalization to the problem of reinforcement learning for playing games on the Arcade Learning Environment [3]. The approach we use is the Deep Q-Network (DQN) proposed by [25]. This is an application for which batch normalization is not well suited: the noise introduced by estimating the minibatch statistics destabilizes the learning process. We were not able to get batch normalization to work for DQN without using an impractically large minibatch size. In contrast, weight normalization is easy to apply in this context, as is the initialization method of section 3. Stochastic gradient learning is performed using Adamax [15] with momentum of 0.5. We search for optimal learning rates in {0.0001, 0.0003, 0.001, 0.003}, generally finding 0.0003 to work well with weight normalization and 0.0001 to work well for the normal parameterization. We also use a larger minibatch size (64) which we found to be more efficient on our hardware (Amazon Elastic Compute Cloud g2.2xlarge GPU instance). Apart from these changes we follow [25] as closely as possible in terms of parameter settings and evaluation methods. However, we use a Python/Theano/Lasagne reimplementation of their work, adapted from the implementation available at https://github.com/spragunr/deep_q_rl, so there may be small additional differences in implementation. Figure 5 shows the training curves obtained using DQN with the standard parameterization and with weight normalization on Space Invaders. Using weight normalization the algorithm progresses more quickly and reaches a better final result. Figure 6 shows the final evaluation scores obtained by DQN with weight normalization for four games: on average weight normalization improves the performance of DQN.

Figure 5: Evaluation scores for Space Invaders obtained by DQN after each epoch of training, for both the standard parameterization and using weight normalization. Learning rates for both cases were selected to maximize the highest achieved test score.

Figure 6: Maximum evaluation scores obtained by DQN, using either the normal parameterization or using weight normalization. The scores indicated by Mnih et al. are those reported by [25]: our normal parameterization is approximately equivalent to their method. Differences in scores may be caused by small differences in our implementation. Specifically, the difference in our score on Enduro and that reported by [25] might be due to us not using a play-time limit during evaluation.

Game | normal | weightnorm | Mnih
Breakout | 410 | 403 | 401
Enduro | 1,250 | 1,448 | 302
Seaquest | 7,188 | 7,375 | 5,286
Space Invaders | 1,779 | 2,179 | 1,975

6 Conclusion

We have presented weight normalization, a simple reparameterization of the weight vectors in a neural network that accelerates the convergence of stochastic gradient descent optimization. Weight normalization was applied to four different models in supervised image recognition, generative modelling, and deep reinforcement learning, showing a consistent advantage across applications. The reparameterization method is easy to apply, has low computational overhead, and does not introduce dependencies between the examples in a minibatch, making it our default choice in the development of new deep learning architectures.

References
[1] S. Amari. Neural learning in structured parameter spaces - natural Riemannian gradient. In Advances in Neural Information Processing Systems, pages 127–133. MIT Press, 1997.
[2] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[3] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[4] T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[5] G. Desjardins, K. Simonyan, R. Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062–2070, 2015.
[6] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
[7] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
[8] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
[9] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[10] R. Grosse and R. Salakhudinov. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. In ICML, pages 2304–2313, 2015.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2013.
[17] D. P. Kingma, T. Salimans, and M. Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[18] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
[19] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
[20] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Deep Learning and Representation Learning Workshop, NIPS, 2014.
[21] M. Lin, C. Qiang, and S. Yan. Network in network. In ICLR: Conference Track, 2014.
[22] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735–742, 2010.
[23] J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. arXiv preprint arXiv:1503.05671, 2015.
[24] D. Mishkin and J. Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015.
[25] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[26] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
[27] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924–932, 2012.
[28] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[29] T. Salimans, D. P. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[30] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop Track, 2015.
[31] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560, 2005.
[32] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.
[33] S. Zhang, H. Jiang, S. Wei, and L.-R. Dai. Rectified linear neural networks with tied-scalar regularization for LVCSR. In INTERSPEECH, pages 2635–2639, 2015.
Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes

Dan Garber (Toyota Technological Institute at Chicago, dgarber@ttic.edu) and Ofer Meshi (Google, meshi@google.com)

Abstract

Recently, several works have shown that natural modifications of the classical conditional gradient method (aka Frank-Wolfe algorithm) for constrained convex optimization provably converge with a linear rate when: i) the feasible set is a polytope, and ii) the objective is smooth and strongly-convex. However, all of these results suffer from two significant shortcomings:
1. large memory requirement due to the need to store an explicit convex decomposition of the current iterate, and as a consequence, large running-time overhead per iteration;
2. the worst case convergence rate depends unfavorably on the dimension.
In this work we present a new conditional gradient variant and a corresponding analysis that improves on both of the above shortcomings. In particular:
1. both memory and computation overheads are only linear in the dimension;
2. in case the optimal solution is sparse, the new convergence rate replaces a factor which is at least linear in the dimension in previous work with a linear dependence on the number of non-zeros in the optimal solution.
At the heart of our method and corresponding analysis is a novel way to compute decomposition-invariant away-steps. While our theoretical guarantees do not apply to any polytope, they apply to several important structured polytopes that capture central concepts such as paths in graphs, perfect matchings in bipartite graphs, marginal distributions that arise in structured prediction tasks, and more. Our theoretical findings are complemented by empirical evidence which shows that our method delivers state-of-the-art performance.

1 Introduction

The efficient reduction of a constrained convex optimization problem to a constrained linear optimization problem is an appealing algorithmic concept, in particular for large-scale problems. The reason is that for many feasible sets of interest, the problem of minimizing a linear function over the set admits much more efficient methods than its non-linear convex counterpart. Prime examples for this phenomenon include various structured polytopes that arise in combinatorial optimization, such as the path polytope of a graph (aka the unit flow polytope), the perfect matching polytope of a bipartite graph, and the base polyhedron of a matroid, for which we have highly efficient combinatorial algorithms for linear minimization that rely heavily on the specific rich structure of the polytope [21]. At the same time, minimizing a non-linear convex function over these sets usually requires the use of generic interior point solvers that are oblivious to the specific combinatorial structure of the underlying set, and as a result, are often much less efficient. Indeed, it is for this reason that the conditional gradient (CG) method (aka Frank-Wolfe algorithm), a method for constrained convex optimization that is based on solving linear subproblems over the feasible domain, has regained much interest in recent years in the machine learning, signal processing and optimization communities. It has been recently shown that the method delivers state-of-the-art performance on many problems of interest, see for instance [14, 17, 4, 10, 11, 22, 19, 25, 12, 15].
As part of the regained interest in the conditional gradient method, there is also a recent effort to understand the convergence rates and associated complexities of conditional gradient-based methods, which are in general far less understood than other first-order methods, e.g., the projected gradient method. It is known, already from the first introduction of the method by Frank and Wolfe in the 1950s [5], that the method converges with a rate of roughly O(1/t) for minimizing a smooth convex function over a convex and compact set. However, it is not clear whether this convergence rate improves under an additional standard strong-convexity assumption. In fact, certain lower bounds, such as those in [18, 8], suggest that such an improvement, even if possible, should come with a worse dependence on the problem's parameters (e.g., the dimension). Nevertheless, over the past years, various works have tried to design natural variants of the CG method that provably converge faster under the strong-convexity assumption, without dramatically increasing the per-iteration complexity of the method. For instance, Guélat and Marcotte [9] showed that a CG variant which uses the concept of away-steps converges exponentially fast in case the objective function is strongly convex, the feasible set is a polytope, and the optimal solution is located in the interior of the set. A similar result was presented by Beck and Teboulle [3], who considered a specific problem they refer to as the convex feasibility problem over an arbitrary convex set. They also obtained a linear convergence rate under the assumption that an optimal solution that is far enough from the boundary of the set exists. In both of these works, the exponent depends on the distance of the optimal solution from the boundary of the set, which in general can be arbitrarily small. Later, Ahipasaoglu et al. [1] showed that in the specific case of minimizing a smooth and strongly convex function over the unit simplex, a variant of the CG method which also uses away-steps converges with a linear rate. Unfortunately, it is not clear from their analysis how this rate depends on natural parameters of the problem, such as the dimension and the condition number of the objective function. Recently, Garber and Hazan presented a linearly-converging CG variant for polytopes without any restrictions on the location of the optimum [8]. In a later work, Lacoste-Julien and Jaggi [16] gave a refined affine-invariant analysis of an algorithm presented in [9] which also uses away-steps, and showed that it also converges exponentially fast in the same setting as the Garber-Hazan result. More recently, Beck and Shtern [2] gave a different, duality-based analysis for the algorithm of [9], and showed that it can be applied to a wider class of functions than purely strongly convex functions. However, the explicit dependency of their convergence rate on the dimension is suboptimal compared to [8, 16]. Aside from the polytope case, Garber and Hazan [7] have shown that in case the feasible set is strongly convex and the objective function satisfies certain strong-convexity-like properties, the standard CG method converges with an accelerated rate of $O(1/t^2)$. Finally, in [6] Garber showed a similar improvement (roughly quadratic) for the spectrahedron, i.e., the set of unit-trace positive semidefinite matrices. Despite the exponential improvement in convergence rate for polytopes obtained in recent results, they all suffer from two major drawbacks.
First, while in terms of the number of per-iteration calls to the linear optimization oracle these methods match the standard CG method, i.e., a single call per iteration, the overhead of other operations, both in terms of running time and memory requirements, is significantly worse. The reason is that in order to apply the so-called away-steps, which all of these methods use, they must maintain at all times an explicit decomposition of the current iterate into vertices of the polytope. In the worst case, maintaining such a decomposition and computing the away-steps require both memory and per-iteration runtime overheads that are at least quadratic in the dimension. This is much worse than the standard CG method, whose memory and runtime overheads are only linear in the dimension. Second, the convergence rate of all previous linearly convergent CG methods depends explicitly on the dimension. While it is known that this dependency is unavoidable in certain cases, e.g., when the optimal solution is, informally speaking, dense (see for instance the lower bound in [8]), it is not clear that such an unfavorable dependence is mandatory when the optimum is sparse.

In this paper, we revisit the application of CG variants to smooth and strongly-convex optimization over polytopes. We introduce a new variant which overcomes both of the above shortcomings, from which all previous linearly-converging variants suffer. The main novelty of our method, which is the key to its improved performance, is that unlike previous variants it is decomposition-invariant, i.e., it does not require maintaining an explicit convex decomposition of the current iterate. This principle proves to be crucial both for eliminating the memory and runtime overheads, and for obtaining sharper convergence rates for instances that admit a sparse optimal solution. A detailed comparison of our method to previous art is shown in Table 1. We also provide, in Section 5, empirical evidence that the proposed method delivers state-of-the-art performance on several tasks of interest. While our method is less general than previous ones, i.e., our theoretical guarantees do not hold for arbitrary polytopes, they readily apply to many structured polytopes that capture important concepts such as paths in graphs, perfect matchings in bipartite graphs, Markov random fields, and more.

    Paper                       | #iterations to eps err.                           | #LOO calls | runtime    | memory
    Frank & Wolfe [5]           | $\beta D^2/\epsilon$                              | 1          | n          | n
    Garber & Hazan [8]          | $\kappa n D^2 \log(1/\epsilon)$                   | 1          | n min(n,t) | n min(n,t)
    Lacoste-Julien & Jaggi [16] | $\kappa n D^2 \log(1/\epsilon)$                   | 1          | n min(n,t) | n min(n,t)
    Beck & Shtern [2]           | $\kappa n^2 D^2 \log(1/\epsilon)$                 | 1          | n min(n,t) | n min(n,t)
    This paper                  | $\kappa\,\mathrm{card}(x^*) D^2 \log(1/\epsilon)$ | 2          | n          | n

Table 1: Comparison with previous work. We define $\kappa := \beta/\alpha$, we let n denote the dimension and D denote the Euclidean diameter of the polytope. The third column gives the number of calls to the linear optimization oracle per iteration, the fourth column gives the additional arithmetic complexity at iteration t, and the fifth column gives the worst-case memory requirement at iteration t. The bounds for the algorithms of [8, 16, 2], which are independent of t, assume an algorithmic version of Carathéodory's theorem, as fully detailed in [2]. The bound on the number of iterations of [16] depends on the squared inverse pyramidal width of P, which is difficult to evaluate; however, this quantity is at least proportional to n.

2 Preliminaries

Throughout this work we let $\|\cdot\|$ denote the standard Euclidean norm.
Given a point $x \in \mathbb{R}^n$, we let $\mathrm{card}(x)$ denote the number of non-zero entries in x.

Definition 1. We say that a function $f : \mathbb{R}^n \to \mathbb{R}$ is $\alpha$-strongly convex w.r.t. a norm $\|\cdot\|$ if for all $x, y \in \mathbb{R}^n$ it holds that $f(y) \ge f(x) + \nabla f(x) \cdot (y - x) + \frac{\alpha}{2}\|x - y\|^2$.

Definition 2. We say that a function $f : \mathbb{R}^n \to \mathbb{R}$ is $\beta$-smooth w.r.t. a norm $\|\cdot\|$ if for all $x, y \in \mathbb{R}^n$ it holds that $f(y) \le f(x) + \nabla f(x) \cdot (y - x) + \frac{\beta}{2}\|x - y\|^2$.

The first-order optimality condition implies that for an $\alpha$-strongly convex f, if $x^*$ is the unique minimizer of f over a convex and compact set $K \subseteq \mathbb{R}^n$, then for all $x \in K$ it holds that

$f(x) - f(x^*) \ge \frac{\alpha}{2}\|x - x^*\|^2.$ (1)

2.1 Setting

In this work we consider the optimization problem $\min_{x \in P} f(x)$, where we assume that:
- $f(x)$ is $\alpha$-strongly convex and $\beta$-smooth with respect to the Euclidean norm.
- P is a polytope which satisfies the following two properties:
  1. P can be described algebraically as $P = \{x \in \mathbb{R}^n \mid x \ge 0,\ Ax = b\}$.
  2. All vertices of P lie on the hypercube $\{0, 1\}^n$.

We let $x^*$ denote the (unique) minimizer of f over P, and we let D denote the Euclidean diameter of P, namely $D = \max_{x,y \in P}\|x - y\|$. We let V denote the set of vertices of P, where according to our assumptions it holds that $V \subseteq \{0,1\}^n$. While the polytopes that satisfy the above assumptions are not completely general, these assumptions already capture several important concepts such as paths in graphs, perfect matchings, Markov random fields, and more. Indeed, a surprisingly large number of applications from machine learning, signal processing and other domains are formulated as optimization problems in this category (e.g., [13, 15, 16]). We give detailed examples of such polytopes in Section A in the appendix. Importantly, the above assumptions allow us to get rid of the dependency of the convergence rate on certain geometric parameters (such as the geometric quantities appearing in [8], or the pyramidal width in [16]), which can be polynomial in the dimension and hence result in an impractical convergence rate. Finally, for many of these polytopes the vertices are sparse, i.e., for any vertex $v \in V$, $\mathrm{card}(v) \ll n$. In this case, when the optimum $x^*$ can be decomposed as a convex combination of only a few vertices (and is thus sparse itself), we get a sharper convergence rate that depends on the sparsity of $x^*$ and not explicitly on the dimension, as in previous work. We believe that our theoretical guarantees could be extended to more general polytopes, as suggested in Section C in the appendix; we leave this extension for future work.

3 Our Approach

In order to better communicate our ideas, we begin by briefly introducing the standard conditional gradient method and its accelerated away-steps-based variants. We discuss both the blessings and shortcomings of these away-steps-based variants in Subsection 3.1. Then, in Subsection 3.2, we present our new method, a decomposition-invariant away-steps-based conditional gradient algorithm, and discuss how it addresses the shortcomings of previous variants.

3.1 The conditional gradient method and acceleration via away-steps

The standard conditional gradient algorithm is given below (Algorithm 1). It is well known that when the step size $\eta_t$ is set appropriately, the worst-case convergence rate of the method is $O(\beta D^2 / t)$ [13]. This rate is tight for the method in general; see for instance [18].

Algorithm 1 Conditional Gradient
1: let $x_1$ be some vertex in V
2: for t = 1, ... do
3:   $v_t \leftarrow \arg\min_{v \in V} v \cdot \nabla f(x_t)$
4:   choose a step size $\eta_t \in (0, 1]$
5:   $x_{t+1} \leftarrow x_t + \eta_t (v_t - x_t)$
6: end for

Algorithm 2 Pairwise Conditional Gradient
1: let $x_1$ be some vertex in V
2: for t = 1, ... do
3:   let $\sum_{i=1}^{k_t} \lambda_t^{(i)} v_t^{(i)}$ be an explicitly maintained convex decomposition of $x_t$
4:   $v_t^+ \leftarrow \arg\min_{v \in V} v \cdot \nabla f(x_t)$
5:   $j_t \leftarrow \arg\min_{j \in [k_t]} v_t^{(j)} \cdot (-\nabla f(x_t))$
6:   choose a step size $\eta_t \in (0, \lambda_t^{(j_t)}]$
7:   $x_{t+1} \leftarrow x_t + \eta_t (v_t^+ - v_t^{(j_t)})$
8:   update the convex decomposition of $x_{t+1}$
9: end for

Consider the iterate of Algorithm 1 on iteration t, and let $x_t = \sum_{i=1}^{k} \lambda_i v_i$ be its convex decomposition into vertices of the polytope P. Note that Algorithm 1 implicitly discounts each coefficient $\lambda_i$ by a factor $(1 - \eta_t)$, in favor of the newly added vertex $v_t$. A different approach is not to decrease all vertices in the decomposition of $x_t$ uniformly, but to more aggressively decrease vertices that are worse than others with respect to some measure, such as their product with the gradient direction. This key principle proves crucial to breaking the 1/t rate of the standard method, and to achieving a linear convergence rate under certain strong-convexity assumptions, as described in the recent works [8, 16, 2]. For instance, in [8] it has been shown, via the introduction of the concept of a Local Linear Optimization Oracle, that such a non-uniform reweighing rule in fact approximates a certain proximal problem which, together with the shrinking effect of strong convexity as captured by Eq. (1), yields a linear convergence rate. We refer to these methods as away-step-based CG methods. As a concrete example, which will also serve as a basis for our new method, we describe the pairwise variant recently studied in [16], which applies this principle in Algorithm 2. (While the convergence rate of this pairwise variant, established in [16], is significantly worse than that of other away-step-based variants, here we show that a proper analysis yields state-of-the-art performance guarantees.) Note that Algorithm 2 decreases the weight of exactly one vertex in the decomposition: the one with the largest product with the gradient.

It is important to note that since previous away-step-based CG variants do not decrease the coefficients in the convex decomposition of the current iterate uniformly, they all must explicitly store and maintain a convex decomposition of the current iterate. This raises two main disadvantages:

Superlinear memory and running-time overheads. Storing a decomposition of the current iterate as a convex combination of vertices generally requires $O(n^2)$ memory in the worst case. While the away-step-based variants increase the size of the decomposition by at most a single vertex per iteration, they also typically exhibit linear convergence only after performing at least $\Omega(n)$ steps [8, 16, 2], and thus this $O(n^2)$ estimate still holds. Moreover, since these methods require i) finding the worst vertex in the decomposition, in terms of dot product with the current gradient direction, and ii) updating this decomposition at each iteration (even when using sophisticated update techniques such as in [2]), the worst-case per-iteration computational overhead is also $\Omega(n^2)$.

Decomposition-specific performance. The choice of away-step depends on the specific decomposition that is maintained by the algorithm. Since the feasible point $x_t$ may admit several different convex decompositions, committing to one such decomposition might result in sub-optimal away-steps.
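For concreteness, a minimal Python sketch of the standard method (Algorithm 1) follows; it is our illustration, not the authors' code. `grad` and `lmo` are assumed callables for the gradient of f and the linear-optimization oracle, and the 2/(t+2) schedule is one standard choice of step size:

    import numpy as np

    def conditional_gradient(grad, lmo, x1, num_iters):
        """Standard CG (Algorithm 1): every iterate is a convex combination of
        vertices, so feasibility is maintained for free."""
        x = np.array(x1, dtype=float)
        for t in range(1, num_iters + 1):
            v = lmo(grad(x))          # step 3: vertex minimizing the linearization
            eta = 2.0 / (t + 2.0)     # step 4: a standard step-size schedule
            x += eta * (v - x)        # step 5: convex averaging
        return x

The away-step variants add, on top of this loop, the decomposition bookkeeping described above, which is exactly the overhead the present paper removes.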
As observable in Table 1, for certain problems in which the optimal solution is sparse, all analyses of previous away-steps-based variants are significantly suboptimal, since they all depend explicitly on the dimension. This seems to be an unavoidable side-effect of being decomposition-dependent. On the other hand, the fact that our new approach is decomposition-invariant allows us to obtain sharper convergence rates for such instances.

3.2 A new decomposition-invariant pairwise conditional gradient method

Our main observation is that in many cases of interest, given a feasible iterate $x_t$, one can in fact compute an optimal away-step from $x_t$ without relying on any single specific decomposition. This observation allows us to overcome both of the main disadvantages of previous away-step-based CG variants. Our algorithm, which we refer to as a decomposition-invariant pairwise conditional gradient (DICG), is given below in Algorithm 3.

Algorithm 3 Decomposition-invariant Pairwise Conditional Gradient
1: input: a sequence of step sizes $\{\eta_t\}_{t \ge 1}$
2: let $x_0$ be an arbitrary point in P
3: $x_1 \leftarrow \arg\min_{v \in V} v \cdot \nabla f(x_0)$
4: for t = 1, ... do
5:   $v_t^+ \leftarrow \arg\min_{v \in V} v \cdot \nabla f(x_t)$
6:   define the vector $\tilde{\nabla} f(x_t) \in \mathbb{R}^n$ as follows: $[\tilde{\nabla} f(x_t)]_i := [\nabla f(x_t)]_i$ if $x_t(i) > 0$, and $[\tilde{\nabla} f(x_t)]_i := -\infty$ if $x_t(i) = 0$
7:   $v_t^- \leftarrow \arg\min_{v \in V} v \cdot (-\tilde{\nabla} f(x_t))$
8:   choose a new step size $\hat{\eta}_t$ using one of the following two options:
     Option 1 (predefined step size): let $\delta_t$ be the smallest natural number such that $2^{-\delta_t} \le \eta_t$, and set $\hat{\eta}_t \leftarrow 2^{-\delta_t}$
     Option 2 (line search): $\gamma_t \leftarrow \max\{\gamma \in [0,1] : x_t + \gamma (v_t^+ - v_t^-) \ge 0\}$, and $\hat{\eta}_t \leftarrow \arg\min_{\eta \in (0, \gamma_t]} f(x_t + \eta (v_t^+ - v_t^-))$
9:   $x_{t+1} \leftarrow x_t + \hat{\eta}_t (v_t^+ - v_t^-)$
10: end for

The following observation shows the optimality of the away-steps taken by Algorithm 3.

Observation 1 (optimal away-steps in Algorithm 3). Consider an iteration t of Algorithm 3 and suppose that the iterate $x_t$ is feasible. Let $x_t = \sum_{i=1}^{k} \lambda_i v_i$, for some integer k, be an irreducible way of writing $x_t$ as a convex sum of vertices of P, i.e., $\lambda_i > 0$ for all $i \in [k]$. Then it holds that $\forall i \in [k]: v_i \cdot \nabla f(x_t) \le v_t^- \cdot \nabla f(x_t)$, and $\gamma_t \ge \min\{x_t(i) \mid i \in [n],\ x_t(i) > 0\}$.

Proof. Let $x_t = \sum_{i=1}^{k} \lambda_i v_i$ be a convex decomposition of $x_t$ into vertices of P, for some integer k, where each $\lambda_i$ is positive. Note that it must hold for any $j \in [n]$ and any $i \in [k]$ that $x_t(j) = 0 \Rightarrow v_i(j) = 0$, since by our assumption $V \subset \mathbb{R}^n_+$. The observation then follows directly from the definition of $v_t^-$.
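Steps 6-7 are the heart of the method, and can be sketched as follows (our illustration, not the authors' code; `lmo` is the same linear-minimization oracle as before, and a large finite constant stands in for the infinite penalty on coordinates outside the support of x_t, keeping the oracle's arithmetic finite):

    import numpy as np

    BIG = 1e18  # stands in for +infinity in floating-point arithmetic

    def dicg_away_vertex(grad_xt, x_t, lmo):
        """Decomposition-invariant away vertex v_t^- (steps 6-7 of Algorithm 3):
        minimize <v, -grad f(x_t)> while heavily penalizing every coordinate
        where x_t is zero, which restricts v to the support of x_t."""
        cost = np.where(x_t > 0, -grad_xt, BIG)
        return lmo(cost)

No convex decomposition of x_t is touched: the support of x_t alone determines the admissible away vertices, which is what makes the step decomposition-invariant.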
We next state the main theorem of this paper, which bounds the convergence rate of Algorithm 3. The proof is provided in Section B.3 in the appendix.

Theorem 1. Let $M_1 = \sqrt{\alpha / (8\,\mathrm{card}(x^*))}$ and $M_2 = \beta D^2 / 2$. Consider running Algorithm 3 with Option 1 as the step size, and suppose that for all $t \ge 1$: $\eta_t = \frac{M_1}{2\sqrt{M_2}}\big(1 - \frac{M_1^2}{4 M_2}\big)^{(t-1)/2}$. Then the iterates of Algorithm 3 are always feasible, and for all $t \ge 1$:

$f(x_t) - f(x^*) \le \frac{\beta D^2}{2}\,\exp\Big(-\frac{\alpha}{8 \beta D^2\,\mathrm{card}(x^*)}\, t\Big).$

We now make several remarks regarding Algorithm 3 and Theorem 1. The so-called dual gap, defined as $g_t := (x_t - v_t^+) \cdot \nabla f(x_t)$, which serves as a certificate for the sub-optimality of the iterates of Algorithm 3, also converges with a linear rate, as we prove in Section B.4 in the appendix. Note that despite the various parameters of the problem at hand (e.g., $\alpha$, $\beta$, D, $\mathrm{card}(x^*)$), running the algorithm with Option 1 for choosing the step size, for which the guarantee of Theorem 1 holds, requires knowing only a single parameter, namely $M_1 / \sqrt{M_2}$. In particular, it is an easy consequence that running the algorithm with an estimate $M \in [0.5\,M_1/\sqrt{M_2},\ M_1/\sqrt{M_2}]$ will only affect the leading constant in the convergence rate listed in the theorem. Hence, $M_1/\sqrt{M_2}$ can be efficiently estimated via a logarithmic-scale search.

Theorem 1 improves significantly over the convergence rate established for the pairwise conditional gradient variant in [16]. In particular, the number of iterations to reach an $\epsilon$ error in the analysis of [16] depends linearly on $|V|!$, where $|V|$ is the number of vertices of P.

4 Analysis

Throughout this section we let $h_t$ denote the approximation error of Algorithm 3 on iteration t, for any $t \ge 1$, i.e., $h_t = f(x_t) - f(x^*)$.

4.1 Feasibility of the iterates generated by Algorithm 3

We start by proving that the iterates of Algorithm 3 are always feasible. While feasibility is straightforward when using the line-search option to set the step size (Option 2), it is less obvious when using Option 1. We will make use of the following observation, which is a simple consequence of the optimal choice of $v_t^-$ and our assumptions on P. A proof is given in Section B.1 in the appendix.

Observation 2. Suppose that on some iteration t of Algorithm 3 the iterate $x_t$ is feasible, and that the step size is chosen using Option 1. Then, if for all $i \in [n]$ for which $x_t(i) \neq 0$ it holds that $x_t(i) \ge \hat{\eta}_t$, the following iterate $x_{t+1}$ is also feasible.

Lemma 1 (feasibility of iterates under Option 1). Suppose that the sequence of step sizes $\{\eta_t\}_{t \ge 1}$ is monotonically non-increasing and contained in the interval [0, 1]. Then the iterates generated by Algorithm 3 using Option 1 for setting the step size are always feasible.

Proof. We prove by induction that on each iteration t there exists a non-negative integer-valued vector $s_t \in \mathbb{N}^n$ such that for any $i \in [n]$ it holds that $x_t(i) = 2^{-\delta_t} s_t(i)$. The lemma then follows since, by definition, $\hat{\eta}_t = 2^{-\delta_t}$, and by applying Observation 2. The base case t = 1 holds since $x_1$ is a vertex of P, and thus for any $i \in [n]$ we have that $x_1(i) \in \{0, 1\}$ (recall that $V \subseteq \{0,1\}^n$). On the other hand, since $\eta_1 \le 1$, it follows that $\delta_1 \ge 0$. Thus, there indeed exists a non-negative integer-valued vector $s_1$ such that $x_1 = 2^{-\delta_1} s_1$. Suppose now that the induction holds for some $t \ge 1$. Since by the definition of $v_t^-$, subtracting $\hat{\eta}_t v_t^-$ from $x_t$ can only decrease positive entries of $x_t$ (see the proof of Observation 2), and both $v_t^-, v_t^+$ are vertices of P (and thus in $\{0,1\}^n$), and $\hat{\eta}_t = 2^{-\delta_t}$, it follows that each entry i of $x_{t+1}$ is given by:

$x_{t+1}(i) = 2^{-\delta_t} \cdot \begin{cases} s_t(i) & \text{if } v_t^-(i) = v_t^+(i) = 1 \text{ or } v_t^-(i) = v_t^+(i) = 0 \\ s_t(i) - 1 & \text{if } s_t(i) \ge 1 \text{ and } v_t^-(i) = 1,\ v_t^+(i) = 0 \\ s_t(i) + 1 & \text{if } v_t^-(i) = 0,\ v_t^+(i) = 1 \end{cases}$

Thus, $x_{t+1}$ can also be written in the form $2^{-\delta_t} \tilde{s}_{t+1}$ for some $\tilde{s}_{t+1} \in \mathbb{N}^n$. By the definition of $\delta_t$ and the monotonicity of $\{\eta_t\}_{t \ge 1}$, we have that $2^{\delta_{t+1} - \delta_t}$ is a positive integer. Thus, setting $s_{t+1} = 2^{\delta_{t+1} - \delta_t}\,\tilde{s}_{t+1}$, the induction holds also for t + 1.

4.2 Bounding the per-iteration error reduction of Algorithm 3

The following technical lemma is the key to deriving the linear convergence rate of our method, and in particular to deriving the improved dependence on the sparsity of $x^*$ instead of the dimension. At a high level, the lemma translates the $\ell_2$ distance between two feasible points into an $\ell_1$ distance in a simplex defined over the set of vertices of the polytope.
Lemma 2. Let $x, y \in P$. There exists a way to write x as a convex combination of vertices of P, $x = \sum_{i=1}^{k} \lambda_i v_i$ for some integer k, such that y can be written as $y = \sum_{i=1}^{k} (\lambda_i - \Delta_i) v_i + \big(\sum_{i=1}^{k} \Delta_i\big) z$, with $\Delta_i \in [0, \lambda_i]$ for all $i \in [k]$, $z \in P$, and $\sum_{i=1}^{k} \Delta_i \le \sqrt{\mathrm{card}(y)}\,\|x - y\|$.

The proof is given in Section B.2 in the appendix. The next lemma bounds the per-iteration improvement of Algorithm 3 and is the key step in proving Theorem 1. We defer the rest of the proof of Theorem 1 to Section B.3 in the appendix.

Lemma 3. Consider the iterates of Algorithm 3 when the step sizes are chosen using Option 1. Let $M_1 = \sqrt{\alpha/(8\,\mathrm{card}(x^*))}$ and $M_2 = \beta D^2/2$. For any $t \ge 1$ it holds that $h_{t+1} \le h_t - \eta_t M_1 h_t^{1/2} + \eta_t^2 M_2$.

Proof. Define $\Delta_t = \sqrt{2\,\mathrm{card}(x^*)\,h_t / \alpha}$, and note that from Eq. (1) we have $\Delta_t \ge \sqrt{\mathrm{card}(x^*)}\,\|x_t - x^*\|$. As a first step, we are going to show that the point $y_t := x_t + \Delta_t (v_t^+ - v_t^-)$ satisfies $y_t \cdot \nabla f(x_t) \le x^* \cdot \nabla f(x_t)$. From Lemma 2 it follows that we can write $x_t$ as a convex combination $x_t = \sum_{i=1}^{k} \lambda_i v_i$ and write $x^*$ as $x^* = \sum_{i=1}^{k}(\lambda_i - \Delta_i) v_i + \big(\sum_{i=1}^{k}\Delta_i\big) z$, where $\Delta_i \in [0, \lambda_i]$, $z \in P$, and $\sum_{i=1}^{k}\Delta_i \le \Delta_t$. It holds that

$(y_t - x_t) \cdot \nabla f(x_t) = \Delta_t (v_t^+ - v_t^-) \cdot \nabla f(x_t) \le \sum_{i=1}^{k} \Delta_i (v_t^+ - v_t^-) \cdot \nabla f(x_t) \le \sum_{i=1}^{k} \Delta_i (z - v_i) \cdot \nabla f(x_t) = (x^* - x_t) \cdot \nabla f(x_t),$

where the first inequality follows since $(v_t^+ - v_t^-) \cdot \nabla f(x_t) \le 0$, and the second inequality follows from the optimality of $v_t^+$ and $v_t^-$ (Observation 1). Rearranging, we have that indeed

$\big(x_t + \Delta_t (v_t^+ - v_t^-)\big) \cdot \nabla f(x_t) \le x^* \cdot \nabla f(x_t).$ (2)

Observe now that from the definition of $\hat{\eta}_t$ it follows for any $t \ge 1$ that $\eta_t/2 \le \hat{\eta}_t \le \eta_t$. Using the smoothness of f(x) we have that

$h_{t+1} = f\big(x_t + \hat{\eta}_t (v_t^+ - v_t^-)\big) - f(x^*) \le h_t + \hat{\eta}_t (v_t^+ - v_t^-) \cdot \nabla f(x_t) + \frac{\beta \hat{\eta}_t^2}{2}\|v_t^+ - v_t^-\|^2 \le h_t + \frac{\eta_t}{2}(v_t^+ - v_t^-) \cdot \nabla f(x_t) + \frac{\beta \eta_t^2 D^2}{2} = h_t + \frac{\eta_t}{2\Delta_t}\big(x_t + \Delta_t(v_t^+ - v_t^-) - x_t\big) \cdot \nabla f(x_t) + \frac{\beta \eta_t^2 D^2}{2} \le h_t + \frac{\eta_t}{2\Delta_t}(x^* - x_t) \cdot \nabla f(x_t) + \frac{\beta \eta_t^2 D^2}{2} \le h_t - \frac{\eta_t}{2\Delta_t} h_t + \frac{\beta \eta_t^2 D^2}{2},$

where the second inequality follows since $(v_t^+ - v_t^-) \cdot \nabla f(x_t) \le 0$, the third inequality follows from Eq. (2), and the last inequality follows from the convexity of f(x). Finally, plugging in the value of $\Delta_t$ completes the proof.

5 Experiments

In this section we illustrate the performance of our algorithm in numerical experiments. We use the two experimental settings from [16], which include a constrained Lasso problem and a video co-localization problem. In addition, we test our algorithm on a learning problem related to an optical character recognition (OCR) task from [23]. In each setting we compare the performance of our algorithm (DICG) to standard conditional gradient (CG), as well as to the fast away (ACG) and pairwise (PCG) variants [16]. For the baselines in the first two settings we use the publicly available code from [16], to which we add our own implementation of Algorithm 3. Similarly, for the OCR problem we extend code from [20], kindly provided by the authors. For all algorithms we use line search to set the step size.

Figure 1: Duality gap $g_t$ vs. iterations (top) and time (bottom) in various settings. Panels, left to right: Lasso, video co-localization, OCR; curves compare CG, ACG, PCG, and DICG.
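For reference, the quantity on the y-axis of Figure 1 is cheap to compute from values the algorithms already have at hand; a small sketch of ours, with `grad` and `lmo` as assumed before:

    import numpy as np

    def duality_gap(x, grad, lmo):
        """Frank-Wolfe duality gap g_t = (x_t - v_t^+) . grad f(x_t);
        by convexity it upper-bounds the suboptimality f(x_t) - f(x*)."""
        g = grad(x)
        v_plus = lmo(g)
        return float(np.dot(x - v_plus, g))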
Lasso. In the first example the goal is to solve the problem $\min_{x \in M} \|\hat{A}x - \hat{b}\|^2$, where M is a scaled $\ell_1$ ball. Notice that the constraint set M does not match the required structure of P; however, with a simple change of variables we can obtain an equivalent optimization problem over the simplex. We generate the random matrix $\hat{A}$ and vector $\hat{b}$ as in [16]. In Figure 1 (left, top) we observe that our algorithm (DICG) converges similarly to the pairwise variant PCG and faster than the other baselines. This is expected, since the away direction $v^-$ in DICG (Algorithm 3) is equivalent to the away direction in PCG (Algorithm 2) in the case of simplex constraints.

Video co-localization. The second example is a quadratic program over the flow polytope, originally proposed in [15]. This is an instance of P that is mentioned in Section A in the appendix. As can be seen in Figure 1 (middle, top), in this setting our proposed algorithm significantly outperforms the baselines, as a result of finding a better away direction $v^-$. Figure 1 (middle, bottom) shows convergence on a time scale, where the difference between the algorithms is even larger. One reason for this difference is the costly search over the history of vertices maintained by the baseline algorithms. Specifically, the number of stored vertices grows quickly with the number of iterations and reaches 1222 for away steps and 1438 for pairwise steps (out of 2000 iterations).

OCR. We next conduct experiments on a structured SVM learning problem arising from an OCR task. The constraints in this setting are the marginal polytope corresponding to a chain graph over the letters of a word (see [23]), and the objective function is quadratic. Notice that the marginal polytope has a concise characterization in this case and also satisfies our assumptions (see Section A in the appendix for more details). For this problem we actually run Algorithm 3 in a block-coordinate fashion, where blocks correspond to training examples in the dual SVM formulation [17, 20]. In Figure 1 (right, top) we see that our DICG algorithm is comparable to the PCG algorithm and faster than the other baselines on the iteration scale. Figure 1 (right, bottom) demonstrates that in terms of actual running time we get a noticeable speedup compared to all baselines. We point out that for this OCR problem, both ACG and PCG each require about 5GB of memory to store the explicit decomposition in the implementation of [20]. In comparison, our algorithm requires 220MB of memory to store the current iterate, and the other variables in the code require 430MB (common to all algorithms), so using DICG results in significant memory savings.

6 Extensions

Our results are readily extendable in two important directions. First, we can relax the strong-convexity requirement on f(x) and handle a broader class of functions, namely the class considered in [2]. Second, we extend the line-search variant of Algorithm 3 to handle arbitrary polytopes, but without convergence guarantees, which is left as future work. Both extensions are given in full detail in Section C in the appendix.

References

[1] S. Damla Ahipasaoglu, Peng Sun, and Michael J. Todd. Linear convergence of a modified Frank-Wolfe algorithm for computing minimum-volume enclosing ellipsoids. Optimization Methods and Software, 23(1):5-19, 2008.
[2] Amir Beck and Shimrit Shtern. Linearly convergent away-step conditional gradient for non-strongly convex functions. arXiv preprint arXiv:1504.05002, 2015.
[3] Amir Beck and Marc Teboulle.
A conditional gradient method with linear rate of convergence for solving convex linear systems. Math. Meth. of OR, 59(2):235-247, 2004.
[4] Miroslav Dudík, Zaïd Harchaoui, and Jérôme Malick. Lifted coordinate descent for learning with trace-norm regularization. Journal of Machine Learning Research - Proceedings Track, 22:327-336, 2012.
[5] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:149-154, 1956.
[6] Dan Garber. Faster projection-free convex optimization over the spectrahedron. arXiv preprint arXiv:1605.06203, 2016.
[7] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 541-549, 2015.
[8] Dan Garber and Elad Hazan. A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization. SIAM Journal on Optimization, 26(3):1493-1528, 2016.
[9] Jacques Guélat and Patrice Marcotte. Some comments on Wolfe's "away step". Mathematical Programming, 35(1), 1986.
[10] Zaïd Harchaoui, Matthijs Douze, Mattis Paulin, Miroslav Dudík, and Jérôme Malick. Large-scale image classification with trace-norm regularization. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2012.
[11] Elad Hazan and Satyen Kale. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, ICML, 2012.
[12] Elad Hazan and Haipeng Luo. Variance-reduced and projection-free stochastic optimization. CoRR, abs/1602.02101, 2016.
[13] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, ICML, 2013.
[14] Martin Jaggi and Marek Sulovský. A simple algorithm for nuclear norm regularized problems. In Proceedings of the 27th International Conference on Machine Learning, ICML, 2010.
[15] Armand Joulin, Kevin Tang, and Li Fei-Fei. Efficient image and video co-localization with Frank-Wolfe algorithm. In Computer Vision - ECCV 2014, pages 253-268. Springer, 2014.
[16] Simon Lacoste-Julien and Martin Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems, pages 496-504, 2015.
[17] Simon Lacoste-Julien, Martin Jaggi, Mark W. Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In Proceedings of the 30th International Conference on Machine Learning, ICML, 2013.
[18] Guanghui Lan. The complexity of large-scale convex programming under a linear optimization oracle. CoRR, abs/1309.5550, 2013.
[19] Sören Laue. A hybrid algorithm for convex semidefinite optimization. In Proceedings of the 29th International Conference on Machine Learning, ICML, 2012.
[20] Anton Osokin, Jean-Baptiste Alayrac, Puneet K. Dokania, and Simon Lacoste-Julien. Minding the gaps for block Frank-Wolfe optimization of structured SVM. In International Conference on Machine Learning (ICML), 2016.
[21] A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency. Springer, 2003.
[22] Shai Shalev-Shwartz, Alon Gonen, and Ohad Shamir. Large-scale convex minimization with a low-rank constraint. In Proceedings of the 28th International Conference on Machine Learning, ICML, 2011.
[23] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems.
MIT Press, 2003.
[24] M. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now Publishers Inc., Hanover, MA, USA, 2008.
[25] Yiming Ying and Peng Li. Distance metric learning with eigenvalue optimization. J. Mach. Learn. Res., 13(1):1-26, January 2012.
Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization

Sashank J. Reddi, Carnegie Mellon University, sjakkamr@cs.cmu.edu
Suvrit Sra, Massachusetts Institute of Technology, suvrit@mit.edu
Barnabás Póczos, Carnegie Mellon University, bapoczos@cs.cmu.edu
Alexander J. Smola, Carnegie Mellon University, alex@smola.org

Abstract

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on the recent variance reduction techniques for convex optimization, but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove a global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works.

1 Introduction

We study nonconvex, nonsmooth, finite-sum optimization problems of the form

$\min_{x \in \mathbb{R}^d} F(x) := f(x) + h(x), \quad \text{where } f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x),$ (1)

and each $f_i : \mathbb{R}^d \to \mathbb{R}$ is smooth (possibly nonconvex) for all $i \in \{1, \dots, n\} =: [n]$, while $h : \mathbb{R}^d \to \mathbb{R}$ is nonsmooth but convex and relatively simple. Such finite-sum optimization problems are fundamental to machine learning when performing regularized empirical risk minimization. While there has been extensive research on solving nonsmooth convex finite-sum problems (i.e., each $f_i$ is convex for $i \in [n]$) [4, 16, 31], our understanding of their nonsmooth nonconvex counterparts is surprisingly limited. We hope to amend this situation (at least partially), given the widespread importance of nonconvexity throughout machine learning.

A popular approach to handle nonsmoothness in convex problems is via proximal operators [14, 25], but as we will soon see, this approach does not work so easily for the nonconvex problem (1). Nevertheless, recall that for a proper closed convex function h, the proximal operator is defined as

$\mathrm{prox}_{\eta h}(x) := \arg\min_{y \in \mathbb{R}^d}\Big(h(y) + \frac{1}{2\eta}\|y - x\|^2\Big), \quad \text{for } \eta > 0.$ (2)

The power of proximal operators lies in how they generalize projections: e.g., if h is the indicator function $I_C(x)$ of a closed convex set C, then $\mathrm{prox}_{I_C}(x) \equiv \mathrm{proj}_C(x) := \arg\min_{y \in C}\|y - x\|$. Throughout this paper, we assume that the proximal operator of h is easy to compute. This is true for many applications in machine learning and statistics, including $\ell_1$ regularization, box constraints, simplex constraints, among others [2, 18]. Similar to other algorithms, we also assume access to a proximal oracle (PO) that takes a point $x \in \mathbb{R}^d$ and returns the output of (2). In addition to the number of PO calls, to describe our complexity results we use the incremental first-order oracle (IFO) model, introduced in [1] to study lower bounds for deterministic algorithms on convex finite-sum problems. For a function $f = \frac{1}{n}\sum_i f_i$, an IFO takes an index $i \in [n]$ and a point $x \in \mathbb{R}^d$, and returns the pair $(f_i(x), \nabla f_i(x))$.

A standard (batch) method for solving (1) is the proximal gradient method (ProxGD) [13], first studied for (batch) nonconvex problems in [5]. This method performs the iteration

$x_{t+1} = \mathrm{prox}_{\eta h}\big(x_t - \eta \nabla f(x_t)\big), \quad t = 0, 1, \dots,$ (3)

where $\eta > 0$ is a step size.
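To make (2) and (3) concrete, here is a minimal sketch (ours, not the authors' code) for the common case $h(x) = \lambda\|x\|_1$, whose proximal operator is soft-thresholding:

    import numpy as np

    def prox_l1(x, t):
        """prox_{t h}(x) for h = ||.||_1, i.e. soft-thresholding at level t:
        argmin_y t*||y||_1 + 0.5*||y - x||^2."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_gd_step(x, grad_f, eta, lam):
        """One ProxGD iteration (3) for F(x) = f(x) + lam * ||x||_1:
        a full-gradient step followed by shrinkage."""
        return prox_l1(x - eta * grad_f(x), eta * lam)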
The following convergence rate for ProxGD was proved recently.

Theorem (Informal) [7]. The number of IFO and PO calls made by the proximal gradient method (3) to reach $\epsilon$ close to a stationary point is $O(n/\epsilon)$ and $O(1/\epsilon)$, respectively.

We refer the reader to [7] for details. The key point to note here is that the IFO complexity of (3) is $O(n/\epsilon)$. This is due to the fact that a full gradient $\nabla f$ needs to be computed at each iteration of (3), which requires n IFO calls. When n is large, this high per-iteration cost is prohibitive. A more practical approach is offered by proximal stochastic gradient (ProxSGD), which performs the iteration

$x^{t+1} = \mathrm{prox}_{\eta_t h}\Big(x^t - \frac{\eta_t}{|I_t|}\sum_{i \in I_t} \nabla f_i(x^t)\Big), \quad t = 0, 1, \dots,$ (4)

where $I_t$ (referred to as a minibatch) is a randomly chosen set (with replacement) from [n] and $\eta_t$ is a step size. Non-asymptotic convergence of ProxSGD was also shown recently, as noted below.

Theorem (Informal) [7]. The number of IFO and PO calls made by ProxSGD, i.e., iteration (4), to reach $\epsilon$ close to a stationary point is $O(1/\epsilon^2)$ and $O(1/\epsilon)$, respectively. For achieving this convergence, we impose batch sizes $|I_t|$ that increase and step sizes $\eta_t$ that decrease with $1/\epsilon$.

Notice that the PO complexity of ProxSGD is similar to that of ProxGD, but its IFO complexity is independent of n; though this benefit comes at the cost of an extra $1/\epsilon$ factor. Furthermore, the step size must decrease with $1/\epsilon$ (or alternatively decay with the number of iterations of the algorithm). The same two aspects are also seen for convex stochastic gradient, in both the smooth and proximal versions. However, in the nonconvex setting there is a key third and more important aspect: the minibatch size $|I_t|$ increases with $1/\epsilon$. To understand this aspect, consider the case where $|I_t|$ is a constant (independent of both n and $\epsilon$), typically the choice used in practice. In this case, the above convergence result no longer holds, and it is not clear if ProxSGD even converges to a stationary point at all! To clarify, a decreasing step size $\eta_t$ trivially ensures convergence as $t \to \infty$, but the limiting point is not necessarily stationary. On the other hand, increasing $|I_t|$ with $1/\epsilon$ can easily lead to $|I_t| \ge n$ for reasonably small $\epsilon$, which effectively reduces the algorithm to (batch) ProxGD. This dismal news does not apply to the convex setting, where ProxSGD is known to converge (in expectation) to an optimal solution using constant minibatch sizes $|I_t|$. Furthermore, this problem does not afflict smooth nonconvex problems ($h \equiv 0$), where convergence with constant minibatches is known [6, 21, 22]. Thus, there is a fundamental gap in our understanding of stochastic methods for nonsmooth nonconvex problems. Given the ubiquity of nonconvex models in machine learning, bridging this gap is important. We do so by analyzing stochastic proximal methods with guaranteed convergence for constant minibatches, and faster convergence with minibatch sizes independent of $1/\epsilon$.

Main Contributions. We state our main contributions below and list the key complexity results in Table 1.

- We analyze nonconvex proximal versions of the recently proposed stochastic algorithms SVRG and SAGA [4, 8, 31], hereafter referred to as ProxSVRG and ProxSAGA, respectively. We show convergence of these algorithms with constant minibatches.
To the best of our knowledge, this is the first work to present non-asymptotic convergence rates for stochastic methods that apply to nonsmooth nonconvex problems with constant (hence more realistic) minibatches.

- We show that by carefully choosing the minibatch size (to be sublinearly dependent on n but still independent of $1/\epsilon$), we can achieve provably faster convergence than both proximal gradient and proximal stochastic gradient. We are not aware of any earlier results on stochastic methods for the general nonsmooth nonconvex problem that have faster convergence than proximal gradient.
- We study a nonconvex subclass of (1) based on the proximal extension of the Polyak-Łojasiewicz inequality [9]. We show linear convergence of ProxSVRG and ProxSAGA to the optimal solution for this subclass. This includes the recent results proved in [27, 32] as special cases. Ours is the first stochastic method with provable global linear convergence for this subclass of problems.

1.1 Related Work

The literature on finite-sum problems is vast, so we summarize only a few closely related works. Convex instances of (1) have long been studied [3, 15] and are fairly well understood. Remarkable recent progress for smooth convex instances of (1) is the creation of variance-reduced (VR) stochastic methods [4, 8, 26, 28]. Nonsmooth proximal VR stochastic algorithms are studied in [4, 31], where faster convergence rates for both strongly convex and non-strongly convex cases are proved. Asynchronous VR frameworks are developed in [20]; lower bounds are studied in [1, 10]. In contrast, nonconvex instances of (1) are much less understood. Stochastic gradient for smooth nonconvex problems is analyzed in [6], and only very recently, convergence results for VR stochastic methods for smooth nonconvex problems were obtained in [21, 22]. In [11], the authors consider a VR nonconvex setting different from ours, namely, where the loss is (essentially strongly) convex but hard thresholding is used. We build upon [21, 22], and focus on handling nonsmooth convex regularizers ($h \not\equiv 0$ in (1)); more recently, the authors have also developed VR Frank-Wolfe methods for handling constrained problems that do not admit easy projection operators [24]. Incremental proximal gradient methods for this class were also considered in [30], but only asymptotic convergence was shown. The first analysis of a projection version of nonconvex SVRG is due to [29], who considers the special problem of PCA. Perhaps the closest to our work is [7], where convergence of the minibatch nonconvex ProxSGD method is studied. However, as is typical of the stochastic gradient method, the convergence is slow; moreover, no convergence for constant minibatches is provided.

2 Preliminaries

We assume that the function h(x) in (1) is lower semi-continuous (lsc) and convex. Furthermore, we also assume that its domain $\mathrm{dom}(h) = \{x \in \mathbb{R}^d \mid h(x) < +\infty\}$ is closed. We say f is L-smooth if there is a constant L such that $\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$ for all $x, y \in \mathbb{R}^d$. Throughout, we assume that the functions $f_i$ in (1) are L-smooth, so that $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|$ for all $i \in [n]$. Such an assumption is typical in the analysis of first-order methods. One crucial aspect of the analysis for nonsmooth nonconvex problems is the convergence criterion. For convex problems, typically the optimality gap $F(x) - F(x^*)$ is used as a criterion. It is unreasonable to use such a criterion for general nonconvex problems due to their intractability. For smooth nonconvex problems (i.e., $h \equiv 0$), it is typical to measure stationarity, e.g., using $\|\nabla F\|$. This cannot be used for nonsmooth problems, but a fitting alternative is the gradient mapping [17], which has also been used in the analysis of nonconvex proximal methods in [6, 7, 30]:

$\mathcal{G}_\eta(x) := \frac{1}{\eta}\big[x - \mathrm{prox}_{\eta h}(x - \eta \nabla f(x))\big].$ (5)

When $h \equiv 0$ this mapping reduces to $\mathcal{G}_\eta(x) = \nabla f(x) = \nabla F(x)$, the gradient of the function F at x. We analyze our algorithms using the gradient mapping (5), as described more precisely below.

Definition 1. A point x output by a stochastic iterative algorithm for solving (1) is called an $\epsilon$-accurate solution if $\mathbb{E}[\|\mathcal{G}_\eta(x)\|^2] \le \epsilon$ for some $\eta > 0$.

Our goal is to obtain efficient algorithms for achieving an $\epsilon$-accurate solution, where efficiency is measured using IFO and PO complexity as functions of $1/\epsilon$ and n.
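A small sketch of the gradient mapping (5), again for the illustrative $\ell_1$ case (ours, not the authors' code; when lam = 0 it reduces to the gradient of f, matching the remark above):

    import numpy as np

    def gradient_mapping(x, grad_f, eta, lam):
        """G_eta(x) from (5) for F = f + lam*||.||_1; the expected squared norm
        of this vector is the stationarity measure of Definition 1."""
        prox = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        return (x - prox(x - eta * grad_f(x), eta * lam)) / eta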
    Algorithm | IFO                       | PO            | IFO (PL)                                  | PO (PL)                    | Constant minibatch?
    ProxSGD   | $O(1/\epsilon^2)$         | $O(1/\epsilon)$ | $O(1/\epsilon^2)$                       | $O(1/\epsilon)$            | ?
    ProxGD    | $O(n/\epsilon)$           | $O(1/\epsilon)$ | $O(n\kappa\,\log(1/\epsilon))$          | $O(\kappa\log(1/\epsilon))$ | - (batch method)
    ProxSVRG  | $O(n + n^{2/3}/\epsilon)$ | $O(1/\epsilon)$ | $O((n + \kappa n^{2/3})\log(1/\epsilon))$ | $O(\kappa\log(1/\epsilon))$ | yes
    ProxSAGA  | $O(n + n^{2/3}/\epsilon)$ | $O(1/\epsilon)$ | $O((n + \kappa n^{2/3})\log(1/\epsilon))$ | $O(\kappa\log(1/\epsilon))$ | yes

Table 1: Comparison of the best IFO and PO complexities of the different algorithms discussed in the paper. Complexity is measured in terms of the number of oracle calls required to achieve an $\epsilon$-accurate solution. IFO (PL) and PO (PL) denote the IFO and PO complexities for PL functions (see Section 4 for a formal definition). The ProxSVRG and ProxSAGA rows are the contributions of this paper. "Constant minibatch?" indicates whether the stochastic algorithm converges using a constant minibatch size. To the best of our knowledge, it is not known whether ProxSGD converges when using constant minibatches for nonconvex nonsmooth optimization; we are also not aware of any specific convergence results for ProxSGD in the context of PL functions.

3 Algorithms

We focus on two algorithms: (a) proximal SVRG (ProxSVRG) and (b) proximal SAGA (ProxSAGA).

3.1 Nonconvex Proximal SVRG

We first consider a variant of ProxSVRG [31]; pseudocode of this variant is stated in Algorithm 1. When F is strongly convex, SVRG attains a linear convergence rate, as opposed to the sublinear convergence of SGD [8]. Note that, while SVRG is typically stated with b = 1, we use its minibatch variant with batch size b. The specific reasons for using such a variant will become clear during the analysis. While some other algorithms have been proposed for reducing the variance of the stochastic gradients, SVRG is particularly attractive because of its low memory requirement; it requires just O(d) extra memory in comparison to SGD for storing the average gradient ($g^s$ in Algorithm 1), while algorithms like SAG and SAGA incur O(nd) storage cost. In addition to its strong theoretical results, SVRG is known to outperform SGD empirically while being more robust to the selection of the step size. For convex problems, ProxSVRG is known to inherit these advantages of SVRG [31]. We now present our analysis of nonconvex ProxSVRG, starting with a result for batch size b = 1.

Theorem 1. Let b = 1 in Algorithm 1. Let $\eta = 1/(3Ln)$, m = n, and let T be a multiple of m. Then the output $x_a$ of Algorithm 1 satisfies the following bound:

$\mathbb{E}[\|\mathcal{G}_\eta(x_a)\|^2] \le \frac{18 L n^2}{(3n - 2)\,T}\,\big(F(x^0) - F(x^*)\big),$

where $x^*$ is an optimal solution of (1).

Theorem 1 shows that ProxSVRG converges for constant minibatches of size b = 1.
This result is in strong contrast to ProxSGD, whose convergence with constant minibatches is still unknown. However, the result delivered by Theorem 1 is not stronger than that of ProxGD. The following corollary to Theorem 1 highlights this point.

Corollary 1. To obtain an $\epsilon$-accurate solution, with b = 1 and parameters from Theorem 1, the IFO and PO complexities of Algorithm 1 are $O(n/\epsilon)$ and $O(n/\epsilon)$, respectively.

Corollary 1 follows upon noting that each inner iteration (Step 7) of Algorithm 1 has an effective IFO complexity of O(1), since m = n. This IFO complexity includes the IFO calls for calculating the average gradient at the end of each epoch. Furthermore, each inner iteration also invokes the proximal oracle, whereby the PO complexity is also $O(n/\epsilon)$. While the IFO complexity of constant-minibatch ProxSVRG is the same as that of ProxGD, we see that its PO complexity is much worse. This is due to the fact that n IFO calls correspond to one PO call in ProxGD, while one IFO call in ProxSVRG corresponds to one PO call. Consequently, we do not gain any theoretical advantage by using constant-minibatch ProxSVRG over ProxGD.

Algorithm 1: Nonconvex ProxSVRG($x^0$, T, m, b, $\eta$)
1: Input: $\tilde{x}^0 = x_m^0 = x^0 \in \mathbb{R}^d$, epoch length m, step size $\eta > 0$, $S = \lceil T/m \rceil$, minibatch size b
2: for s = 0 to S - 1 do
3:   $x_0^{s+1} = x_m^s$
4:   $g^{s+1} = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\tilde{x}^s)$
5:   for t = 0 to m - 1 do
6:     Uniformly randomly pick $I_t \subseteq \{1, \dots, n\}$ (with replacement) such that $|I_t| = b$
7:     $v_t^{s+1} = \frac{1}{b}\sum_{i_t \in I_t}\big(\nabla f_{i_t}(x_t^{s+1}) - \nabla f_{i_t}(\tilde{x}^s)\big) + g^{s+1}$
8:     $x_{t+1}^{s+1} = \mathrm{prox}_{\eta h}(x_t^{s+1} - \eta v_t^{s+1})$
9:   end for
10:  $\tilde{x}^{s+1} = x_m^{s+1}$
11: end for
12: Output: iterate $x_a$ chosen uniformly at random from $\{\{x_t^{s+1}\}_{t=0}^{m-1}\}_{s=0}^{S-1}$

The key question is therefore: can we modify the algorithm to obtain better theoretical guarantees? To answer this question, we prove the following main convergence result. For ease of theoretical exposition, we assume $n^{2/3}$ to be an integer. This is only for convenience in stating our theoretical results, and all the results in the paper hold for the general case.

Theorem 2. Suppose $b = n^{2/3}$ in Algorithm 1. Let $\eta = 1/(3L)$, $m = \lfloor n^{1/3} \rfloor$, and let T be a multiple of m. Then for the output $x_a$ of Algorithm 1, we have:

$\mathbb{E}[\|\mathcal{G}_\eta(x_a)\|^2] \le \frac{18 L\,\big(F(x^0) - F(x^*)\big)}{T},$

where $x^*$ is an optimal solution to (1).

Rewriting Theorem 2 in terms of the IFO and PO complexity, we obtain the following corollary.

Corollary 2. Let $b = n^{2/3}$ and set the parameters as in Theorem 2. Then, to obtain an $\epsilon$-accurate solution, the IFO and PO complexities of Algorithm 1 are $O(n + n^{2/3}/\epsilon)$ and $O(1/\epsilon)$, respectively.

The above corollary is due to the following observations. From Theorem 2, it can be seen that the total number of inner iterations (across all epochs) of Algorithm 1 to obtain an $\epsilon$-accurate solution is $O(1/\epsilon)$. Since each inner iteration of Algorithm 1 involves a call to the PO, we obtain a PO complexity of $O(1/\epsilon)$. Further, since $b = n^{2/3}$ IFO calls are made at each inner iteration, we obtain a net IFO complexity of $O(n^{2/3}/\epsilon)$. Adding the IFO calls for the calculation of the average gradient (and noting that T is a multiple of m), as well as noting that $S \ge 1$, we obtain a total cost of $O(n + n^{2/3}/\epsilon)$. A noteworthy aspect of Corollary 2 is that its PO complexity matches that of ProxGD, but its IFO complexity is significantly decreased to $O(n + n^{2/3}/\epsilon)$, as opposed to $O(n/\epsilon)$ in ProxGD.
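A compact, illustrative Python sketch of Algorithm 1 follows (ours, under the assumption that `grad_i(i, x)` returns the gradient of f_i at x and `prox(z, eta)` returns the proximal step of h; per Theorem 2 one would take b = n^{2/3}, eta = 1/(3L) and m = floor(n^{1/3})):

    import numpy as np

    def prox_svrg(grad_i, n, prox, x0, S, m, b, eta, seed=0):
        """Nonconvex ProxSVRG (Algorithm 1) with minibatch size b."""
        rng = np.random.default_rng(seed)
        x_tilde, x = x0.copy(), x0.copy()
        iterates = []
        for s in range(S):
            g = sum(grad_i(i, x_tilde) for i in range(n)) / n   # snapshot gradient
            for t in range(m):
                I = rng.integers(0, n, size=b)                  # minibatch, with replacement
                v = g + sum(grad_i(i, x) - grad_i(i, x_tilde) for i in I) / b
                x = prox(x - eta * v, eta)                      # proximal step
                iterates.append(x.copy())
            x_tilde = x.copy()
        return iterates[rng.integers(len(iterates))]            # output: random iterate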
3.2 Nonconvex Proximal SAGA

In the previous section we investigated ProxSVRG for solving (1). Note that ProxSVRG is not a fully "incremental" algorithm, since it requires calculation of the full gradient once per epoch. An alternative to ProxSVRG is the algorithm proposed in [4] (popularly referred to as SAGA). We build upon the work of [4] to develop ProxSAGA, a nonconvex proximal variant of SAGA. The pseudocode for ProxSAGA is presented in Algorithm 2. The key difference between Algorithms 1 and 2 is that ProxSAGA, unlike ProxSVRG, avoids computation of the full gradient. Instead, it maintains an average gradient vector $g^t$, which changes at each iteration (refer to [20]). However, such a strategy entails additional storage costs. In particular, for implementing Algorithm 2 we must store the gradients $\{\nabla f_i(\alpha_i^t)\}_{i=1}^{n}$, which in general can cost O(nd) in storage. Nevertheless, in some scenarios common to machine learning (see [4]), one can reduce the storage requirement to O(n). Whenever such an implementation of ProxSAGA is possible, it can perform similar to or even better than ProxSVRG [4]; hence, in addition to theoretical interest, it is of significant practical value. We remark that ProxSAGA in Algorithm 2 differs slightly from [4]. In particular, it uses minibatches, with two sets $I_t, J_t$ sampled at each iteration as opposed to one in [4]. This is mainly for ease of theoretical analysis.

Algorithm 2: Nonconvex ProxSAGA($x^0$, T, b, $\eta$)
1: Input: $x^0 \in \mathbb{R}^d$, $\alpha_i^0 = x^0$ for $i \in [n]$, step size $\eta > 0$, minibatch size b
2: $g^0 = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\alpha_i^0)$
3: for t = 0 to T - 1 do
4:   Uniformly randomly pick sets $I_t, J_t$ from [n] (with replacement) such that $|I_t| = |J_t| = b$
5:   $v^t = \frac{1}{b}\sum_{i_t \in I_t}\big(\nabla f_{i_t}(x^t) - \nabla f_{i_t}(\alpha_{i_t}^t)\big) + g^t$
6:   $x^{t+1} = \mathrm{prox}_{\eta h}(x^t - \eta v^t)$
7:   $\alpha_j^{t+1} = x^t$ for $j \in J_t$, and $\alpha_j^{t+1} = \alpha_j^t$ for $j \notin J_t$
8:   $g^{t+1} = g^t - \frac{1}{n}\sum_{j_t \in J_t}\big(\nabla f_{j_t}(\alpha_{j_t}^t) - \nabla f_{j_t}(\alpha_{j_t}^{t+1})\big)$
9: end for
10: Output: iterate $x_a$ chosen uniformly at random from $\{x^t\}_{t=0}^{T-1}$

We prove that, as in the convex case, nonconvex ProxSVRG and ProxSAGA share similar theoretical guarantees. In particular, our first result for ProxSAGA is a counterpart to Theorem 1 for ProxSVRG.

Theorem 3. Suppose b = 1 in Algorithm 2. Let $\eta = 1/(5Ln)$. Then for the output $x_a$ of Algorithm 2 after T iterations, the following stationarity bound holds:

$\mathbb{E}[\|\mathcal{G}_\eta(x_a)\|^2] \le \frac{50 L n^2}{(5n - 2)\,T}\,\big(F(x^0) - F(x^*)\big),$

where $x^*$ is an optimal solution of (1).

Theorem 3 immediately leads to the following corollary.

Corollary 3. The IFO and PO complexities of Algorithm 2 with b = 1 and the parameters specified in Theorem 3, to obtain an $\epsilon$-accurate solution, are $O(n/\epsilon)$ and $O(n/\epsilon)$, respectively.

Similar to Theorem 2 for ProxSVRG, we obtain the following main result for ProxSAGA.

Theorem 4. Suppose $b = n^{2/3}$ in Algorithm 2. Let $\eta = 1/(5L)$. Then for the output $x_a$ of Algorithm 2 after T iterations, the following holds:

$\mathbb{E}[\|\mathcal{G}_\eta(x_a)\|^2] \le \frac{50 L\,\big(F(x^0) - F(x^*)\big)}{3T},$

where $x^*$ is an optimal solution of Problem (1).

Rewriting this result in terms of IFO and PO access, we obtain the following important corollary.

Corollary 4. Let $b = n^{2/3}$ and set the parameters as in Theorem 4. Then, to obtain an $\epsilon$-accurate solution, the IFO and PO complexities of Algorithm 2 are $O(n + n^{2/3}/\epsilon)$ and $O(1/\epsilon)$, respectively.

The above result is due to Theorem 4 and the fact that each iteration of ProxSAGA requires $O(n^{2/3})$ IFO calls. The number of PO calls is only $O(1/\epsilon)$, since we make one PO call for every $n^{2/3}$ IFO calls.

Discussion: It is important to note the role of minibatches in Corollaries 2 and 4.
Discussion: It is important to note the role of minibatches in Corollaries 2 and 4. Minibatches are typically used for reducing variance and promoting parallelism in stochastic methods. But unlike previous works, we use minibatches as a theoretical tool to improve the convergence rates of both nonconvex ProxSVRG and ProxSAGA. In particular, by carefully selecting the minibatch size, we can improve the IFO complexity of the algorithms described in the paper from O(n/ε) (similar to ProxGD) to O(n^{2/3}/ε) (matching the smooth nonconvex case). Furthermore, the PO complexity is also improved in a similar manner by using the minibatch size mentioned in Theorems 2 and 4.

4 Extensions

We discuss some extensions of our approach in this section. (We refer the readers to the full version [23] for a more general convergence analysis of the algorithms.) Our first extension is to provide a convergence analysis for a subclass of nonconvex functions that satisfy a specific growth condition popularly known as the Polyak-Łojasiewicz (PL) inequality. In the context of gradient descent, this inequality was proposed by Polyak in 1963 [19], who showed global linear convergence of gradient descent for functions that satisfy the PL inequality. Recently, in [9] the PL inequality was generalized to nonsmooth functions and used for proving linear convergence of proximal gradient. The generalization presented in [9] considers functions F(x) = f(x) + h(x) that satisfy the following:
\[ \mu\,(F(x) - F(x^*)) \le \tfrac{1}{2}\, \mathcal{D}_h(x, \mu), \quad \text{where } \mu > 0 \tag{6} \]
and
\[ \mathcal{D}_h(x, \alpha) := -2\alpha \min_y \Big[ \langle \nabla f(x), y - x \rangle + \tfrac{\alpha}{2}\|y - x\|^2 + h(y) - h(x) \Big]. \]
An F that satisfies (6) is called a μ-PL function. When h ≡ 0, condition (6) reduces to the usual PL inequality. The class of μ-PL functions includes several other classes as special cases. It subsumes strongly convex functions, covers f_i(x) = g(a_iᵀx) with only g being strongly convex, and includes functions that satisfy an optimal strong convexity property [12]. Note that μ-PL functions also subsume the recently studied special case where the f_i's are nonconvex but their sum f is strongly convex. Hence, it encapsulates the problems of [27, 32].

The algorithms in Figure 1 provide variants of ProxSVRG and ProxSAGA adapted to optimize μ-PL functions:

PL-SVRG(x⁰, K, T, m, η):
  for k = 1 to K do: x^k = ProxSVRG(x^{k−1}, T, m, b, η)
  Output: x^K

PL-SAGA(x⁰, K, T, m, η):
  for k = 1 to K do: x^k = ProxSAGA(x^{k−1}, T, b, η)
  Output: x^K

Figure 1: ProxSVRG and ProxSAGA variants for PL functions.

We show the following global linear convergence result of PL-SVRG and PL-SAGA in Figure 1 for PL functions. For simplicity, we assume κ = (L/μ) > n^{1/3}. When f is strongly convex, κ is referred to as the condition number, in which case κ > n^{1/3} corresponds to the high-condition-number regime.

Theorem 5. Suppose F is a μ-PL function. Let b = n^{2/3}, η = 1/(5L), m = ⌊n^{1/3}⌋ and T = ⌈30κ⌉. Then for the output x^K of PL-SVRG and PL-SAGA (in Figure 1), the following holds:
\[ \mathbb{E}\big[F(x^K) - F(x^*)\big] \le \frac{F(x^0) - F(x^*)}{2^K}, \]
where x* is an optimal solution of (1).

The following corollary on the IFO and PO complexity of PL-SVRG and PL-SAGA is immediate.

Corollary 5. When F is a μ-PL function, the IFO and PO complexities of PL-SVRG and PL-SAGA with the parameters specified in Theorem 5 to obtain an ε-accurate solution are O((n + κn^{2/3}) log(1/ε)) and O(κ log(1/ε)), respectively.
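The restart scheme in Figure 1 is simple enough to state in a few lines; a sketch reusing the prox_svrg sketch from above (parameter names are placeholders):

```python
def pl_svrg(x0, grad_fi, prox_h, n, K, T, m, b, eta):
    """PL-SVRG: run ProxSVRG for T inner iterations, K times, warm-starting
    each stage from the previous stage's (randomly chosen) output (Figure 1)."""
    x = x0
    for _ in range(K):
        x = prox_svrg(x, grad_fi, prox_h, n, T, m, b, eta)
    return x
```

PL-SAGA is the identical wrapper around prox_saga.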
Note that proximal gradient also has global linear convergence for PL functions, as recently shown in [9]. However, its IFO complexity is O(κn log(1/ε)), which is much worse than that of PL-SVRG and PL-SAGA (Corollary 5).

Other extensions: While we state our results for specific minibatch sizes, a more general convergence analysis is provided for any minibatch size b ≤ n^{2/3} (Theorems 6 and 7 in the Appendix). Moreover, our results can be easily generalized to the case where non-uniform sampling is used in Algorithm 1 and Algorithm 2. This is useful when the functions f_i have different Lipschitz constants.

5 Experiments

We present our empirical results in this section. For our experiments, we study the problem of non-negative principal component analysis (NN-PCA). More specifically, for a given set of samples {z_i}_{i=1}^n, we solve the following optimization problem:
\[ \min_{\|x\| \le 1,\; x \ge 0} \; -\frac{1}{2}\, x^\top \Big( \sum_{i=1}^{n} z_i z_i^\top \Big) x. \tag{7} \]
The problem of NN-PCA is, in general, NP-hard. This variant of the standard PCA problem can be written in the form (1) with f_i(x) = −(xᵀz_i)² for all i ∈ [n] and h(x) = I_C(x), where C is the convex set {x ∈ ℝᵈ : ‖x‖ ≤ 1, x ≥ 0}.
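For this problem the proximal oracle is simply the Euclidean projection onto C. A short sketch (because the nonnegative orthant is a convex cone, the exact projection onto the intersection is: clip to the orthant, then rescale into the unit ball):

```python
import numpy as np

def prox_indicator_C(x, eta=None):
    """Euclidean projection onto C = {x : x >= 0, ||x|| <= 1}, i.e. the
    proximal operator of h = I_C (the step size eta is irrelevant for
    indicator functions)."""
    y = np.maximum(x, 0.0)            # project onto the nonnegative orthant
    norm = np.linalg.norm(y)
    return y if norm <= 1.0 else y / norm   # then onto the unit ball
```

The component gradients are ∇f_i(x) = −2(xᵀz_i) z_i, so one IFO call costs O(d).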
In our experiments, we compare ProxSGD with nonconvex ProxSVRG and ProxSAGA. The choice of step size is important for ProxSGD. The step size of ProxSGD is set using the popular t-inverse choice η_t = η₀(1 + η′⌊t/n⌋)^{−1}, where η₀, η′ > 0. For ProxSVRG and ProxSAGA, motivated by the theoretical analysis, we use a fixed step size. The parameters of the step size in each of these methods are chosen so that the method gives the best performance on the objective value. In our experiments, we include the value η′ = 0, which corresponds to ProxSGD with a fixed step size. For ProxSVRG, we use the epoch length m = n. We use standard machine learning datasets from LIBSVM for all our experiments.⁵ The samples from each of these datasets are normalized, i.e., ‖z_i‖ = 1 for all i ∈ [n]. Each of these methods is initialized by running ProxSGD for n iterations. Such an initialization serves two purposes: (a) it provides a reasonably good initial point, typically beneficial for variance reduction techniques [4, 26]; (b) it provides a heuristic for calculating the initial average gradient g⁰ [26]. In our experiments, we use b = 1 in order to demonstrate the performance of the algorithms with constant minibatches.

We report the objective function value for the datasets. In particular, we report the suboptimality in the objective function, i.e., f(x_t^{s+1}) − f(x̂) (for ProxSVRG) and f(x^t) − f(x̂) (for ProxSAGA). Here x̂ refers to the solution obtained by running proximal gradient descent for a large number of iterations and multiple random initializations. For all the algorithms, we compare the aforementioned criteria against the number of effective passes through the dataset, i.e., IFO complexity divided by n. For ProxSVRG, this includes the cost of calculating the full gradient at the end of each epoch.

Figure 2: Non-negative principal component analysis. Performance of ProxSGD, ProxSVRG and ProxSAGA on 'rcv1' (left), 'a9a' (left-center), 'mnist' (right-center) and 'aloi' (right) datasets. Here, the y-axis is the function suboptimality, i.e., f(x) − f(x̂), where x̂ represents the best solution obtained by running gradient descent for a long time and with multiple restarts.

Figure 2 shows the performance of the algorithms on the NN-PCA problem (see Section D of the Appendix for more experiments). It can be seen that the objective value for ProxSVRG and ProxSAGA is much lower compared to ProxSGD, suggesting faster convergence for these algorithms. We observed a significant gain consistently across all the datasets. Moreover, the selection of the step size was much simpler for ProxSVRG and ProxSAGA than for ProxSGD. We did not observe any significant difference in the performance of ProxSVRG and ProxSAGA for this particular task.

6 Final Discussion

In this paper, we presented fast stochastic methods for nonsmooth nonconvex optimization. In particular, by employing variance reduction techniques, we showed that one can design methods that provably perform better than ProxSGD and proximal gradient descent. Furthermore, in contrast to ProxSGD, the resulting approaches have provable convergence to a stationary point with constant minibatches, thus bridging a fundamental gap in our knowledge of nonsmooth nonconvex problems. We proved that, with a careful selection of the minibatch size, it is possible to theoretically show superior performance to proximal gradient descent. Our empirical results provide evidence for a similar conclusion even with constant minibatches. We conclude with an important open problem: developing stochastic methods with provably better performance than proximal gradient descent with a constant minibatch size.

Acknowledgment: SS acknowledges support of NSF grant IIS-1409802.

⁵ The datasets can be downloaded from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets.

References
[1] A. Agarwal and L. Bottou. A lower bound for the optimization of finite sums. arXiv:1410.0723, 2014.
[2] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[3] Léon Bottou. Stochastic gradient learning in neural networks. Proceedings of Neuro-Nîmes, 91(8), 1991.
[4] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS 27, pages 1646-1654, 2014.
[5] Masao Fukushima and Hisashi Mine. A generalized proximal point algorithm for certain non-convex minimization problems. International Journal of Systems Science, 12(8):989-1000, 1981.
[6] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
[7] Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2):267-305, 2014.
[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS 26, pages 315-323, 2013.
[9] Hamed Karimi, Julie Nutini, and Mark W. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2016, pages 795-811, 2016.
[10] G. Lan and Y. Zhou. An optimal randomized incremental gradient method. arXiv:1507.02000, 2015.
[11] Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt. Stochastic variance reduced optimization for nonconvex sparse learning. In ICML, 2016. arXiv:1605.02711.
[12] Ji Liu and Stephen J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351-376, January 2015.
[13] Hisashi Mine and Masao Fukushima. A minimization method for the sum of a convex function and a continuously differentiable function. Journal of Optimization Theory and Applications, 33(1):9-23, 1981.
[14] J. J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math., 255:2897-2899, 1962.
[15] Arkadi Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley and Sons, 1983.
[16] Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[17] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[18] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127-239, 2014.
[19] B. T. Polyak. Gradient methods for the minimisation of functionals. USSR Computational Mathematics and Mathematical Physics, 3(4):864-878, January 1963.
[20] Sashank Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, and Alex J. Smola. On variance reduction in stochastic gradient descent and its asynchronous variants. In NIPS 28, pages 2629-2637, 2015.
[21] Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, and Alexander J. Smola. Stochastic variance reduction for nonconvex optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 314-323, 2016.
[22] Sashank J. Reddi, Suvrit Sra, Barnabás Póczos, and Alexander J. Smola. Fast incremental method for nonconvex optimization. CoRR, abs/1603.06159, 2016.
[23] Sashank J. Reddi, Suvrit Sra, Barnabás Póczos, and Alexander J. Smola. Fast stochastic methods for nonsmooth nonconvex optimization. CoRR, abs/1605.06900, 2016.
[24] Sashank J. Reddi, Suvrit Sra, Barnabás Póczos, and Alexander J. Smola. Stochastic Frank-Wolfe methods for nonconvex optimization. In 54th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2016, 2016.
[25] R. Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877-898, 1976.
[26] Mark W. Schmidt, Nicolas Le Roux, and Francis R. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[27] Shai Shalev-Shwartz. SDCA without duality. CoRR, abs/1502.06177, 2015.
[28] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567-599, 2013.
[29] Ohad Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. arXiv:1409.2848, 2014.
[30] Suvrit Sra. Scalable nonconvex inexact proximal splitting. In NIPS, pages 530-538, 2012.
[31] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.
[32] Zeyuan Allen Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. CoRR, abs/1506.01972, 2015.
Bayesian Optimization with Robust Bayesian Neural Networks

Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter
Department of Computer Science, University of Freiburg
{springj,kleinaa,sfalkner,fh}@cs.uni-freiburg.de

Abstract

Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions that is widely applied to tuning the hyperparameters of machine learning algorithms. Despite its successes, the prototypical Bayesian optimization approach, using Gaussian process models, does not scale well to either many hyperparameters or many function evaluations. Attacking this lack of scalability and flexibility is thus one of the key challenges of the field. We present a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible. We obtain scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness we improve via a scale adaptation. Experiments including multi-task Bayesian optimization with 21 tasks, parallel optimization of deep neural networks and deep reinforcement learning show the power and flexibility of this approach.

1 Introduction

Hyperparameter optimization is crucial for obtaining good performance in many machine learning algorithms, such as support vector machines, deep neural networks, and deep reinforcement learning. The most prominent method for hyperparameter optimization is Bayesian optimization (BO) based on Gaussian processes (GPs), as, e.g., implemented in the Spearmint system [1]. While GPs are the natural probabilistic models for BO, unfortunately, their complexity is cubic in the number of data points and they often do not gracefully scale to high dimensions [2]. Although alternative methods based on tree models [3, 4] or Bayesian linear regression using features from a neural network [5] exist, they obtain scalability by partially sacrificing a principled treatment of model uncertainties.

Here, we propose to use neural networks as a powerful and scalable parametric model, while staying as close to a truly Bayesian treatment as possible. Crucially, we aim to keep the well-calibrated uncertainty estimates of GPs, since BO relies on them to accurately determine promising hyperparameters. To this end we derive a more robust variant of the recent stochastic gradient Hamiltonian Monte Carlo (SGHMC) method [6]. After providing background (Section 2), we make the following contributions: (1) we derive a general formulation for both single-task and multi-task BO with Bayesian neural networks that leads to a robust, scalable, and parallel optimizer (Section 3); (2) we derive a scale adaptation technique to substantially improve the robustness of stochastic gradient HMC (Section 4); (3) finally, using our method, which we dub Bayesian Optimization with Hamiltonian Monte Carlo Artificial Neural Networks (BOHAMIANN), we demonstrate state-of-the-art performance for a wide range of optimization tasks. This includes multi-task BO, parallel optimization of deep residual networks, and deep reinforcement learning. An implementation of our method can be found at https://github.com/automl/RoBO.

2 Background

2.1 Bayesian optimization for single and multiple tasks

Let f : X → ℝ be an arbitrary function defined over a convex set X ⊂ ℝᵈ that can be evaluated at x ∈ X, yielding noisy observations y ∼ N(f(x), σ²_obs). We aim to find x* ∈ arg min_{x∈X} f(x).
To solve this problem, BO (see, e.g., Brochu et al. [7]) typically starts by observing the function at an initial design D = {(x₁, y₁), ..., (x_I, y_I)}. BO then repeatedly executes the following steps: (1) fit a regression model p(f | D) to the current data D; (2) use p(f | D) to select an input x_{t+1} at which to query f by maximizing an acquisition function (which trades off exploration and exploitation); (3) observe y_{t+1} ∼ N(f(x_{t+1}), σ²_obs) and add the result to the dataset: D := D ∪ {(x_{t+1}, y_{t+1})}.

In the generalized case of multi-task Bayesian optimization [8], there are K related black-box functions, F = {f₁, ..., f_K}, each with the same domain X; and the goal is to find x* ∈ arg min_{x∈X} f_t(x) for a given t.¹ In this case, the initial design is augmented with previous evaluations of the related functions. That is, D = D₁ ∪ ... ∪ D_K with D_k = {(x₁ᵏ, y₁ᵏ), ..., (x_{n_k}ᵏ, y_{n_k}ᵏ)}, where yᵢᵏ ∼ N(f_k(xᵢᵏ), σ²_obs) and n_k = |D_k| points have already been evaluated for function f_k. BO then requires a probabilistic model p(f | D) over the K functions, which can be used to transfer knowledge from related tasks to the target task t (and thus reduce the required number of function evaluations on t).

A concrete instantiation of BO is obtained by specifying the acquisition function and the probabilistic model. As acquisition function, here, we will use the popular expected improvement (EI) criterion [9]; other commonly used options, such as UCB [10], could be applied directly. EI is defined as
\[ \alpha_{EI}(x; D) = \sigma(f(x) \mid D)\,\big( \gamma(x)\,\Phi(\gamma(x)) + \phi(\gamma(x)) \big), \quad \text{with} \quad \gamma(x) = \frac{y^\star - \mu(f(x) \mid D)}{\sigma(f(x) \mid D)}, \tag{1} \]
where Φ(·) and φ(·) denote the cumulative distribution function and the probability density function of a standard normal distribution, respectively, and μ(f(x) | D) and σ(f(x) | D) denote the posterior mean and standard deviation of our probabilistic model based on data D. While the prototypical probabilistic model in BO is a GP [1], we will use a Bayesian neural network (BNN).

¹ The standard single-task case is recovered when K = t = 1.
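For concreteness, a minimal sketch of Equation (1) for a minimization problem (pure NumPy/SciPy; the posterior mean and standard deviation come from whatever model is used):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI of Eq. (1) for minimization, given posterior means mu and standard
    deviations sigma at candidate points, and the best observed value y_best."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    gamma = (y_best - mu) / sigma
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
```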
2.2 Bayesian methods for neural networks

The ability to combine the flexibility and scalability of (deep) neural networks with well-calibrated uncertainty estimates is highly desirable in many contexts. Not surprisingly, there thus exist many approaches for this problem, including early work on (non-scalable) Hamiltonian Monte Carlo [11], recent work on variational inference methods [12, 13] and expectation propagation [14], reinterpretations of dropout as approximate inference [15, 16], as well as stochastic gradient MCMC methods based on Hamiltonian Monte Carlo [6] and stochastic gradient Langevin MCMC [17]. While any of these methods could, in principle, be used for BO, we found most of them to result in suboptimal uncertainty estimates. Our preliminary experiments, presented in the supplementary material (Section B), suggest these methods often conservatively estimate the uncertainty for points far away from the data, particularly when based on little training data. This is problematic for BO, which crucially relies on well-calibrated uncertainty estimates based on few function evaluations. One family of methods that consistently resulted in good uncertainty estimates in our tests were Hamiltonian Monte Carlo (HMC) methods, which we will thus use throughout this paper. Concretely, we will build on the scalable stochastic MCMC method from Chen et al. [6].

3 Bayesian optimization with Bayesian neural networks

We now formalize the Bayesian neural network regression model we use as the basis of our Bayesian optimization approach. Formally, under the assumption that the observed function values (conditioned on x) are normally distributed (with unknown mean and variance), we start by defining our probabilistic function model as
\[ p(f_t(x) \mid x, \theta) = \mathcal{N}\big( \hat{f}(x, t; \theta_\mu),\; \theta_{\sigma^2} \big), \tag{2} \]
where θ = [θ_μ, θ_{σ²}]ᵀ, f̂(x, t; θ_μ) is the output of a parametric model with parameters θ_μ, and where we assume a homoscedastic noise model with zero mean and variance θ_{σ²}.² A single-task model can trivially be obtained from this definition:

Single-task model. In the single-task setting we simply model the function mean f̂(x, t; θ_μ) = h(x; θ_μ) using a neural network with output h (i.e., h implements a forward pass).

Multi-task model. For the multi-task model we use a slightly adapted network architecture. As additional input, the network is provided with a task-specific embedding vector. That is, we have f̂(x, t; θ_μ) = h([x; ψ_t]ᵀ, θ_h), where h(·), again, denotes the output of the neural network (here with parameters θ_h) and ψ_t is the t-th row of an embedding matrix Ψ ∈ ℝ^{K×L} (we choose L = 5 for our experiments). This embedding matrix is learned alongside all other parameters. Additionally, if information about the dataset (such as data-set size etc.) is available, it can be appended to this embedding vector. The full vector of the network parameters then becomes θ_μ = [θ_h, vec(Ψ)], where vec(·) denotes vectorization. Instead of using a learned embedding, we could have chosen to represent the tasks through a one-out-of-K encoding vector, which functionally would be equivalent but would induce a large number of additional parameters to be learned for large K.

With these definitions, the joint probability of the model parameters and the observed data is
\[ p(D, \theta) = p(\theta_\mu)\, p(\theta_{\sigma^2}) \prod_{k=1}^{K} \prod_{i=1}^{|D_k|} \mathcal{N}\big( y_i^k \mid \hat{f}(x_i^k, k; \theta_\mu),\; \theta_{\sigma^2} \big), \tag{3} \]
where p(θ_μ) and p(θ_{σ²}) are priors on the network parameters and on the variance, respectively.

For BO, we need to be able to compute the acquisition function at given candidate points x. For this we require the predictive posterior p(f_t(x) | x, D) (marginalized over the model parameters θ). Unfortunately, for our choice of modeling f_t with a neural network, evaluating this posterior exactly is intractable. Let us, for now, assume that we can generate samples θⁱ ∼ p(θ | D) from the posterior for the model parameters given the data; we will show how to do this with stochastic gradient Hamiltonian Monte Carlo (SGHMC) in Section 4. We can then use these samples to approximate the predictive posterior p(f_t(x) | x, D) as
\[ p(f_t(x) \mid x, D) = \int_\theta p(f_t(x) \mid x, \theta)\, p(\theta \mid D)\, d\theta \;\approx\; \frac{1}{M} \sum_{i=1}^{M} p(f_t(x) \mid x, \theta^i). \tag{4} \]
Using the same samples θⁱ ∼ p(θ | D), we make a Gaussian approximation to this predictive distribution, obtaining mean and variance to compute the EI value in Equation (1):
\[ \mu(f_t(x) \mid D) = \frac{1}{M} \sum_{i=1}^{M} \hat{f}(x, t; \theta_\mu^i), \qquad \sigma^2(f_t(x) \mid D) = \frac{1}{M} \sum_{i=1}^{M} \Big[ \big( \hat{f}(x, t; \theta_\mu^i) - \mu(f_t(x) \mid D) \big)^2 + \theta_{\sigma^2}^i \Big]. \tag{5} \]
Notably, we can compute partial derivatives of α_EI (with respect to x) via backpropagation through all functions f̂(x, t; θ_μⁱ), which allows gradient-based maximization of the acquisition function. We also extended this formulation to parallel asynchronous BO by sampling possible outcomes for currently-running function evaluations and using the acquisition function α_MCEI proposed by Snoek et al. [1]. Details are given in the supplementary material (Section A).

² We note that, if required, we could model heteroscedastic functions by defining the observation noise variance θ_{σ²} as a deterministic function of x (e.g., as the second output of the neural network).
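A sketch of Equations (4)-(5) combined with the EI computation above; predict(x, theta) stands in for a forward pass of one sampled network, and each sample is assumed to carry its noise variance as theta["sigma2"] (both names are hypothetical):

```python
import numpy as np

def predictive_mean_var(x, theta_samples, predict):
    """Gaussian approximation (Eq. 5) to the predictive posterior at a single
    point x, computed from M SGHMC samples of the network parameters."""
    preds = np.array([predict(x, th) for th in theta_samples])       # shape (M,)
    noise = np.array([th["sigma2"] for th in theta_samples])         # shape (M,)
    mu = preds.mean()
    var = np.mean((preds - mu) ** 2 + noise)
    return mu, var
```

The resulting mu and the square root of var feed directly into the expected_improvement sketch above.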
4 Robust stochastic gradient HMC via scale adaptation

In this section, we show how stochastic gradient Hamiltonian Monte Carlo (SGHMC) can be used to sample from the model defined by Equation (3). We first summarize the general formalism behind SGHMC [6] and then derive a more robust variant suitable for BO.

4.1 Stochastic gradient HMC

HMC introduces a set of auxiliary variables, r, and then samples from the joint distribution
\[ p(\theta, r \mid D) \;\propto\; \exp\Big( -U(\theta) - \tfrac{1}{2}\, r^\top M^{-1} r \Big), \quad \text{with} \quad U(\theta) = -\log p(D, \theta), \tag{6} \]
by simulating a fictitious physical system described by a set of differential equations, called Hamilton's equations. In this system, the negative log-likelihood U(θ) plays the role of a potential energy, r corresponds to the momentum of the system, and M represents the (arbitrary) mass matrix [18]. Classically, the dynamics for θ and r depend on the gradient ∇U(θ), whose evaluation is too expensive for our purposes, since it would involve evaluating the model on all data points. By introducing a user-defined friction matrix C, Chen et al. [6] showed how Hamiltonian dynamics can be modified to sample from the correct distribution if only a noisy estimate ∇Ũ(θ), e.g. computed from a mini-batch, is available. In particular, their discretized system of equations reads
\[ \Delta\theta = \epsilon\, M^{-1} r, \qquad \Delta r = -\epsilon \nabla \tilde{U}(\theta) - \epsilon\, C M^{-1} r + \mathcal{N}\big( 0,\; 2\epsilon (C - \hat{B}) \big), \tag{7} \]
where, in a suggestive notation, we write N(0, Σ) to represent the addition of a sample from a multivariate Gaussian with zero mean and covariance matrix Σ. Besides the estimate B̂ for the noise of the gradient evaluation and an undefined step length ε, all that is required for simulating the dynamics in Equation (7) is a mechanism for computing gradients of the log likelihood (and thus of our model) on small subsets (or batches) of the data. This makes SGHMC particularly appealing when working with large models and data-sets. Furthermore, Equation (7) can be seen as an MCMC analogue to stochastic gradient descent (with momentum) [6]. Following these update equations, the distribution of (θ, r) is the one in Equation (6), and θ is guaranteed to be distributed according to p(θ | D).
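A compact NumPy sketch of the discretized dynamics (7), for an identity mass matrix and scalar friction; grad_U_tilde is a placeholder returning a minibatch gradient of U, and B_hat is set to zero here, which is a common simplification rather than the paper's choice:

```python
import numpy as np

def sghmc_step(theta, r, grad_U_tilde, eps=1e-2, C=0.05, rng=None):
    """One SGHMC update of Eq. (7) with M = I, friction C*I and B_hat = 0,
    so the injected noise is N(0, 2*eps*C*I)."""
    rng = rng or np.random.default_rng()
    theta = theta + eps * r
    r = (r - eps * grad_U_tilde(theta) - eps * C * r
         + rng.normal(scale=np.sqrt(2 * eps * C), size=r.shape))
    return theta, r
```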
4.2 Scale adapted stochastic gradient HMC

Like many Monte Carlo methods, SGHMC does not come without caveats, namely the correct setting of the user-defined quantities: the friction term C, the estimate B̂ of the gradient noise, the mass matrix M, the number of MCMC steps, and, most importantly, the step size ε. We found the friction term and the step size to be highly model- and data-set-dependent³, which is unacceptable for BO, where robust estimates are required across many different functions F with as few parameter choices as possible. A closer look at Equation (7) shows why the step size crucially impacts the robustness of SGHMC. For the popular choice M = I, the change in the momentum is proportional to the gradient. If the gradient elements are on vastly different scales (and potentially correlated), then the update effectively assigns unequal importance to changes in different parameters of the model. This, in turn, can lead to slow exploration of the target density.

To correct for unequal parameter scales (and respect their correlation), we would ideally like to use M as a pre-conditioner, reflecting the metric underlying the model's parameters. This would lead to a stochastic gradient analogue of Riemann Manifold Hamiltonian Monte Carlo [19], which has been studied before by Ma et al. [20] and results in an algorithm called generalized stochastic gradient Riemann Hamiltonian Monte Carlo (gSGRHMC). Unfortunately, gSGRHMC requires computation (and storage) of the full Fisher information matrix of U and its gradient, which is prohibitively expensive for our purposes. As a pragmatic approach, we consider a pre-conditioning scheme increasing SGHMC's robustness with respect to ε and C, while avoiding the costly computations of gSGRHMC. We want to note that recently, and directly related to our approach, adaptive pre-conditioning using ideas from SGD methods has been combined with stochastic gradient Langevin dynamics in Li et al. [21] and used to derive a hybrid between SGD optimization and HMC sampling in Chen et al. [22]. These approaches, however, either come with additional hyperparameters that need to be set or do not guarantee unbiased sampling. The rest of this section shows how all remaining SGHMC parameters in our method are determined automatically.

Choosing M. For the mass matrix, we take inspiration from the connection between SGHMC and SGD. Specifically, the literature [23, 24] shows how normalizing the gradient by its magnitude (estimated over the whole dataset) improves the robustness of SGD. To perform the analogous operation in SGHMC, we propose to adapt the mass matrix during the burn-in phase. We set M⁻¹ = diag(V̂_θ^{−1/2}), where V̂_θ is an estimate of the (element-wise) uncentered variance of the gradient: V̂_θ ≈ E[(∇Ũ(θ))²].

³ We refer to Section 5 for a quantitative evaluation of this claim.
Finally, we can combine all parameter estimates to formulate our automatically scale adapted SGHMC method. Following Chen et al. [6], we introduce the variable ?1/2 substitution v = M?1 r = V?? r which leads us to the dynamical equations   ?1/2 ? (?) ? V? ?1/2 Cv + N 0, 23 V? ?1/2 CV? ?1/2 ? 4 I , (10) ?? = v , ?v = ?2 V?? ?U ? ? ? using the quantities estimated in Equations (8)-(9) during the burn-in phase, and then fixing the ? cancels with the square of our estimate choices for all parameters. Note that the approximation of B ?1 of M . In practice, we choose C = CI, i.e. the same independent noise for each element of ?. In this case, Equation (10) constrains the choices of C and , as we need them to fulfill the relation min(V??1 )C ? . For the remainder of the paper, we fix  = 10?2 (a robust choice in our experience) ?1/2 and chose C such that we have V?? C = 0.05I (intuitively this corresponds to a constant decay in momentum of 0.05 per time step) potentially increasing it to satisfy the mentioned constraint at the end of the burn-in phase. We want to emphasize that our estimation/adaptation of the parameters only changes the HMC procedure during the burn-in phase. After it, when actual samples are recorded, all parameters stay fixed. In particular, this entails that as long as our choice of  and C satisfies min(V???1 )C ? , our method samples from the correct distribution. Our choices are compatible with the constraints on the free parameters of the original SGHMC [6]. Further, we note that the scale adaptation technique is agnostic to the parametric form of the density we aim to sample from; and could therefore potentially also simplify SGHMC sampling for models beyond those considered in this paper. 5 Experiments on the effects of scale adptation First, to test the efficacy of the proposed scale adaptation technique, we performed an evaluation on four common regression datasets following the protocol from Hern?ndez-Lobato and Adams [14], presented in Table 1. The comparison shows that ? despite its guarantees for sampling from the correct distribution ? SGHMC (without our adaptation) required tuning for each dataset to obtain good uncertainty estimates. This effect can likely be attributed to the high dimensionality (and non-uniformity) of the parameter space (for which the standard SGHMC procedure might just require too many MCMC steps to sample from the target density). Our adaptation removed these problems. Additionally we found our method to faithfully represent model uncertainty even in regimes were only few data-points are available. This observation is qualitatively shown in Figure 1 (right) and further explored in the supplementary material. 5 Table 1: Log likelihood for regression benchmarks from the UCI repository. For comparison, we include results for VI (variational inference) and PBP (probabilistic backpropagation) taken from Hern?ndez-Lobato and Adams [14]. We report mean ? standard deviation across 10 runs. The first two SGHMC variants are the vanilla algorithm (without our modifications) optimized for best mean performance (best average), and best performance on each dataset (tuned per dataset) via grid search. Method/Dataset Boston Housing Yacht Hydrodynamics Concrete Wine Quality Red SGHMC (best average) -3.474 ? 0.511 SGHMC (tuned per dataset) -2.489 ? 0.151 SGHMC (scale-adapted) -2.536 ? 0.036 -13.579 ? 0.983 -1.753 ? 0.19 -1.107 ? 0.083 -4.871 ? 0.051 -4.165 ? 0.723 -3.384 ? 0.24 -1.825 ? 0.75 -1.287 ? 0.28 -1.041 ? 0.17 -2.903 ? 0.071 -2.574 ? 0.089 -3.439 ? 
5 Experiments on the effects of scale adaptation

First, to test the efficacy of the proposed scale adaptation technique, we performed an evaluation on four common regression datasets following the protocol from Hernández-Lobato and Adams [14], presented in Table 1. The comparison shows that, despite its guarantees for sampling from the correct distribution, SGHMC (without our adaptation) required tuning for each dataset to obtain good uncertainty estimates. This effect can likely be attributed to the high dimensionality (and non-uniformity) of the parameter space (for which the standard SGHMC procedure might just require too many MCMC steps to sample from the target density). Our adaptation removed these problems. Additionally, we found our method to faithfully represent model uncertainty even in regimes where only few data points are available. This observation is qualitatively shown in Figure 1 (right) and further explored in the supplementary material.

Table 1: Log likelihood for regression benchmarks from the UCI repository. For comparison, we include results for VI (variational inference) and PBP (probabilistic backpropagation) taken from Hernández-Lobato and Adams [14]. We report mean ± standard deviation across 10 runs. The first two SGHMC variants are the vanilla algorithm (without our modifications) optimized for best mean performance (best average), and best performance on each dataset (tuned per dataset) via grid search.

Method / Dataset          | Boston Housing  | Yacht Hydrodynamics | Concrete        | Wine Quality Red
SGHMC (best average)      | -3.474 ± 0.511  | -13.579 ± 0.983     | -4.871 ± 0.051  | -1.825 ± 0.75
SGHMC (tuned per dataset) | -2.489 ± 0.151  | -1.753 ± 0.19       | -4.165 ± 0.723  | -1.287 ± 0.28
SGHMC (scale-adapted)     | -2.536 ± 0.036  | -1.107 ± 0.083      | -3.384 ± 0.24   | -1.041 ± 0.17
VI                        | -2.903 ± 0.071  | -3.439 ± 0.163      | -3.391 ± 0.017  | -0.980 ± 0.013
PBP                       | -2.574 ± 0.089  | -1.634 ± 0.016      | -3.161 ± 0.019  | -0.968 ± 0.014

6 Bayesian optimization experiments

We now show Bayesian optimization experiments for BOHAMIANN. Unless noted otherwise, we used a three-layer neural network with 50 tanh units per layer for all experiments. For the priors we let p(θ_μ) = N(0, σ_θ²) be normally distributed and placed a Gamma hyperprior on σ_θ², which is periodically updated via Gibbs sampling. For p(θ_{σ²}) we chose a log-normal prior. To approximate EI we used 50 samples acquired via SGHMC sampling. Maximization of the acquisition function was performed via gradient ascent. Due to space constraints, full details on the experimental setup as well as the optimized hyperparameters for all experiments are given in the supplementary material (Section C), which also contains additional plots and evaluations for all experiments.

Figure 1: Evaluation on common benchmark problems. (Left) Immediate regret of various optimizers averaged over 30 runs on the Branin function. For DNGO and BOHAMIANN, we denote the layer sizes for the (three-layer) networks in parentheses. (Right) A fit of the sinOne function after 20 steps of BO using BOHAMIANN. We plot the mean of the predictive posterior and ±2 standard deviations, calculated based on 50 MCMC samples.

6.1 Common benchmark problems

As a first experiment, we compare BOHAMIANN to existing state-of-the-art BO methods on a set of synthetic functions and hyperparameter optimization tasks devised by Eggensperger et al. [2]. All optimizers achieved acceptable performance, but GP-based methods were found to perform best on these low-dimensional benchmarks, which we thus take as a point of reference. Overall, on the 5 benchmarks BOHAMIANN matched the performance of GP-based BO on 4 and performed worse on one, indicating that even in the low-data regime Bayesian neural networks (BNNs) are a feasible model class for BO. A detailed listing of the results is given in the supplementary material. We further compared to our re-implementation of the recently proposed DNGO method [5], which uses features extracted from a maximum likelihood fit of a neural network as the basis for a Bayesian linear regression fit (and was also proposed as a replacement of GPs for scalable BO). For the benchmark tasks we found both DNGO and BOHAMIANN to perform well, with BOHAMIANN being slightly more robust to different architecture choices. This behavior is illustrated in Figure 1 (left), where we compare DNGO with two different network architectures to BOHAMIANN.

Additionally, DNGO performed well for some high-dimensional problems (cf. Section 6.3), but it got stuck when we used it to optimize 13 hyperparameters of a deep RL agent (cf. Section 6.4).

6.2 Multi-task hyperparameter optimization

Next, we evaluated BOHAMIANN for multi-task hyperparameter optimization of a support vector machine (SVM) and a random forest (RF) over a range of different benchmarks. Concretely, we considered a set of 21 different classification datasets downloaded from the OpenML repository [26].
These were grouped into four groups of related tasks (as determined by a distance based on meta-features extracted from the datasets). Within each group (consisting of 3-6 datasets), we randomly designated the optimization of the algorithm's hyperparameters for one dataset as the target function f_t. The remaining datasets were used for collecting |D_k| = 30 additional training data points each, which were used as the initial design for BO. To allow for fast evaluation of this benchmark, we pre-computed the performance of different hyperparameter settings on all datasets following Feurer et al. [27]. The task for the optimizer then is to find an optimal hyperparameter setting for the target benchmark (for which it receives no initial data). We compared our method to the GP-based multi-task BO procedure from Swersky et al. [8], as well as to standard, single-task, GP-based BO. Overall, while all optimizers eventually found a solution close to the optimum, the multi-task version of BOHAMIANN was able to exploit the knowledge obtained from the related datasets, resulting in quicker convergence. On average over all four benchmarks, MT-BOHAMIANN was 12% faster than GP-based BO (to reach an immediate regret ≤ 0.25), whereas MTBO was only 5% faster. Plots showing the optimizer behavior are included in the supplementary material.

6.3 Parallel hyperparameter optimization for deep residual networks

Figure 2: (Left) DNGO vs. BOHAMIANN for optimizing the 8 hyperparameters of a deep residual network on CIFAR-10; we plot each function evaluation performed over time, as well as the current best; parallel random search is included as an additional baseline. (Right) DNGO vs. BOHAMIANN for optimizing the 12 hyperparameters of an RL agent.

Next, we optimized the hyperparameters of the recently proposed residual network (ResNet) architecture [28] for classification of CIFAR-10. We adopted a general parameterization of this architecture, tuning both the parameters of the stochastic gradient descent training as well as key architectural choices (such as the dimensionality-reduction strategy used between residual blocks). We kept the maximum number of parameters fixed at the number used by the 32-layer ResNet [28]. Training a single ResNet took up to 6 hours in our experiments, and we therefore used the parallel BO procedure described in Section 1 of the supplementary material (evaluating 8 ResNet configurations in parallel, for all of DNGO, random search, and BOHAMIANN). Interestingly, all methods quickly found good configurations of the hyperparameters, as shown in Figure 2 (left), with BOHAMIANN reaching the validation performance of the manually-tuned baseline ResNet after 104 function evaluations (or approximately 27 hours of total training time). When re-training this model on the full dataset, it obtained a classification error of 7.40% ± 0.3, matching the performance of the hand-tuned version from He et al. [28] (7.51%). Perhaps surprisingly, this result was reached with a different architecture than the one presented in He et al.
[28]: (1) it used max-pooling instead of strided convolutions for the spatial dimensionality reduction; (2) approximately 50% of the weights in all residual blocks were shared (thus reducing the number of parameters).

Table 2: Comparison between the original DDPG algorithm and a version optimized using BOHAMIANN on two control tasks. We show the number of episodes required to obtain successful performance in 10 consecutive test episodes (reward above -2 for Cartpole, above -6 for reaching) and the maximum reward achieved by the controller.

Cartpole:
Method            | Reward | Episodes
DDPG              | -1.18  | 470
DDPG + DNGO       | -1.39  | 507
DDPG + BOHAMIANN  | -1.46  | 405

2-link reaching task:
Method            | Reward | Episodes
DDPG              | -4.36  | 1512
DDPG + DNGO       | -4.39  | 1642
DDPG + BOHAMIANN  | -4.57  | 1102

Figure 3: Learning curve for DDPG on the Cartpole benchmark. We compare the original hyperparameter settings to an optimized version of DDPG. The plot shows the cumulative reward (over 100 test episodes) obtained by the DDPG algorithm after it obtained x episodes of data for training.

6.4 Hyperparameter optimization for deep reinforcement learning

Finally, we optimized a neural reinforcement learning (RL) algorithm on two control tasks: the Cartpole swing-up task and a two-link robot arm reaching task. We used a re-implementation of the DDPG algorithm by Lillicrap et al. [29] and aimed to minimize the interaction time with the simulated system required to achieve stable performance (defined as solving the task in 10 consecutive test episodes). This is a critical performance metric for data-efficient RL. The results of this experiment are given in Table 2. While the original DDPG hyperparameters were set to achieve robust performance on a large set of benchmarks (and out-of-the-box DDPG performed remarkably well on the considered problems), our experiments indicate that the number of samples required to achieve good performance can be substantially reduced for individual tasks by hyperparameter optimization with BOHAMIANN. In contrast, DNGO did not perform as well on this specific task, getting stuck during optimization; see Figure 2 (right). A comparison between the learning curves of the original and the optimized DDPG, depicted in Figure 3, confirms this observation. The parameters that had the most influence on this improved performance were (perhaps unsurprisingly) the learning rates of the Q- and policy networks and the number of SGD steps performed between collected episodes. This observation was already used by domain experts in a recent paper by Gu et al. [30], where they used 5 updates per sample (the hyperparameters found by our method correspond to 10 updates per sample).

7 Conclusion

We proposed BOHAMIANN, a scalable and flexible Bayesian optimization method. It natively supports multi-task optimization as well as parallel function evaluations, and scales to high dimensions and many function evaluations. At its heart lies Bayesian inference for neural networks via stochastic gradient Hamiltonian Monte Carlo, and we improved the robustness thereof by means of a scale adaptation technique. In future work, we plan to implement Freeze-Thaw Bayesian optimization [31] and Bayesian optimization across dataset sizes [32] in our framework, since both of these generate many cheap function evaluations and thus reach the scalability limit of GPs.
We thereby expect substantial speedups in the practical hyperparameter optimization of ML algorithms on big datasets.

Acknowledgements

This work has partly been supported by the European Commission under Grant no. H2020-ICT-645403-ROBDREAM, by the German Research Foundation (DFG) under Priority Programme Autonomous Learning (SPP 1527, grant HU 1900/3-1), under Emmy Noether grant HU 1900/2-1, and under the BrainLinks-BrainTools Cluster of Excellence (grant number EXC 1086).

References
[1] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proc. of NIPS'12, 2012.
[2] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown. Towards an empirical foundation for assessing Bayesian optimization of hyperparameters. In BayesOpt'13, 2013.
[3] F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION'11, 2011.
[4] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Proc. of NIPS'11, 2011.
[5] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. M. A. Patwary, Prabhat, and R. P. Adams. Scalable Bayesian optimization using deep neural networks. In Proc. of ICML'15, 2015.
[6] T. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proc. of ICML'14, 2014.
[7] E. Brochu, V. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. CoRR, 2010.
[8] K. Swersky, J. Snoek, and R. Adams. Multi-task Bayesian optimization. In Proc. of NIPS'13, 2013.
[9] D. Jones, M. Schonlau, and W. Welch. Efficient global optimization of expensive black box functions. JGO, 1998.
[10] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proc. of ICML'10, 2010.
[11] Radford M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1996.
[12] A. Graves. Practical variational inference for neural networks. In Proc. of ICML'11, 2011.
[13] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In Proc. of ICML'15, 2015.
[14] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proc. of ICML'15, 2015.
[15] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv:1506.02142, 2015.
[16] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Proc. of NIPS'15, 2015.
[17] A. Korattikara, V. Rathod, K. P. Murphy, and M. Welling. Bayesian dark knowledge. In Proc. of NIPS'15, 2015.
[18] S. Duane, A. D. Kennedy, Brian J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Phys. Lett. B, 1987.
[19] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society, Series B, 2011.
[20] Y. Ma, T. Chen, and E. B. Fox. A complete recipe for stochastic gradient MCMC. In Proc. of NIPS'15, 2015.
[21] Chunyuan Li, Changyou Chen, David E. Carlson, and Lawrence Carin. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In Proc. of AAAI'16, 2016.
[22] Changyou Chen, David E. Carlson, Zhe Gan, Chunyuan Li, and Lawrence Carin. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In Proc. of AISTATS, 2016.
[23] T. Tieleman and G. Hinton. RmsProp: Divide the gradient by a running average of its recent magnitude. In COURSERA: Neural Networks for Machine Learning, 2012.
[24] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.
[25] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In Proc. of ICML'13, 2013.
[26] J. Vanschoren, J. van Rijn, B. Bischl, and L. Torgo. OpenML: Networked science in machine learning. SIGKDD Explor. Newsl., (2), June 2014.
[27] M. Feurer, T. Springenberg, and F. Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In Proc. of AAAI'15, 2015.
[28] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. of CVPR'16, 2016.
[29] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In Proc. of ICLR, 2016.
[30] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In Proc. of ICML, 2016.
[31] K. Swersky, J. Snoek, and R. Adams. Freeze-thaw Bayesian optimization. CoRR, 2014.
[32] A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast Bayesian optimization of machine learning hyperparameters on large datasets. CoRR, 2016.
Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations

Kirthevasan Kandasamy (Carnegie Mellon University), Gautam Dasarathy (Rice University), Junier Oliva (Carnegie Mellon University), Jeff Schneider (Carnegie Mellon University), Barnabás Póczos (Carnegie Mellon University)
{kandasamy, joliva, schneide, bapoczos}@cs.cmu.edu, gautamd@rice.edu

Abstract

In many scientific and engineering applications, we are tasked with the optimisation of an expensive-to-evaluate black-box function f. Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.

1 Introduction

In stochastic bandit optimisation, we wish to optimise a payoff function $f : \mathcal{X} \to \mathbb{R}$ by sequentially querying it and obtaining bandit feedback, i.e. when we query at any $x \in \mathcal{X}$, we observe a possibly noisy evaluation of $f(x)$. f is typically expensive and the goal is to identify its maximum while keeping the number of queries as low as possible. Some applications are hyper-parameter tuning in expensive machine learning algorithms, optimal policy search in complex systems, and scientific experiments [20, 23, 27]. Historically, bandit problems were studied in settings where the goal is to maximise the cumulative reward of all queries to the payoff instead of just finding the maximum. Applications in this setting include clinical trials and online advertising. Conventional methods in these settings assume access to only this single expensive function of interest f. We will collectively refer to them as single fidelity methods. In many practical problems however, cheap approximations to f might be available. For instance, when tuning hyper-parameters of learning algorithms, the goal is to maximise a cross validation (CV) score on a training set, which can be expensive if the training set is large. However, CV curves tend to vary smoothly with training set size; therefore, we can train and cross validate on small subsets to approximate the CV accuracies of the entire dataset. For a concrete example, consider kernel density estimation (KDE), where we need to tune the bandwidth h of a kernel. Figure 1 shows the CV likelihood against h for a dataset of size n = 3000 and a smaller subset of size n = 300. The two maximisers are different, which is to be expected since optimal hyper-parameters are functions of the training set size. That said, the curve for n = 300 approximates the n = 3000 curve quite well. Since training/CV on small n is cheap, we can use it to eliminate bad values of the hyper-parameters and reserve the expensive experiments with the entire dataset for the promising candidates (e.g. boxed region in Fig. 1).
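To make the KDE illustration concrete, the following is a minimal Python sketch (our own illustrative code, not the authors'; the two-component Gaussian dataset and all function names here are hypothetical choices) of how a cheap CV curve on n = 300 points can approximate the expensive curve on n = 3000 points:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic stand-in dataset of size 3000.
data = np.concatenate([rng.normal(-2, 0.5, 1500), rng.normal(1, 1.0, 1500)])

def cv_log_likelihood(x, h, n_folds=5):
    """Average held-out log-likelihood of a Gaussian KDE with bandwidth h."""
    x = x.reshape(-1, 1)
    scores = []
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(x):
        kde = KernelDensity(bandwidth=h).fit(x[train])
        scores.append(kde.score(x[test]) / len(test))
    return np.mean(scores)

bandwidths = np.logspace(-2, 1, 20)
cheap = [cv_log_likelihood(data[:300], h) for h in bandwidths]    # fidelity 1
expensive = [cv_log_likelihood(data, h) for h in bandwidths]      # fidelity 2
# The cheap curve roughly tracks the expensive one, so it can be used to
# discard clearly bad bandwidths before spending full-data evaluations.
print(bandwidths[np.argmax(cheap)], bandwidths[np.argmax(expensive)])
```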
In online advertising, the goal is to maximise the cumulative number of clicks over a given period. In the conventional bandit treatment, each query to f is the display of an ad for a specific time, say one hour. However, we may display ads for shorter intervals, say a few minutes, to approximate its hourly performance. The estimate is biased, as displaying an ad for a longer interval changes user behaviour, but will nonetheless be useful in gauging its long run click through rate. In optimal policy search in robotics and automated driving, vastly cheaper computer simulations are used to approximate the expensive real world performance of the system. Scientific experiments can be approximated to varying degrees using less expensive data collection, analysis, and computational techniques. In this paper, we cast these tasks as multi-fidelity bandit optimisation problems assuming the availability of cheap approximate functions (fidelities) to the payoff f. Our contributions are:

1. We present a formalism for multi-fidelity bandit optimisation using Gaussian Process (GP) assumptions on f and its approximations. We develop a novel algorithm, Multi-Fidelity Gaussian Process Upper Confidence Bound (MF-GP-UCB), for this setting.
2. Our theoretical analysis proves that MF-GP-UCB explores the space at lower fidelities and uses the high fidelities in successively smaller regions to zero in on the optimum. As lower fidelity queries are cheaper, MF-GP-UCB has better regret than single fidelity strategies.
3. Empirically, we demonstrate that MF-GP-UCB outperforms single fidelity methods on a series of synthetic examples, three hyper-parameter tuning tasks and one inference problem in Astrophysics. Our matlab implementation and experiments are available at github.com/kirthevasank/mf-gp-ucb.

Related Work: Since the seminal work by Robbins [25], the multi-armed bandit problem has been studied extensively in the K-armed setting. Recently, there has been a surge of interest in the optimism under uncertainty principle for K-armed bandits, typified by upper confidence bound (UCB) methods [2, 4]. UCB strategies have also been used in bandit tasks with linear [6] and GP [28] payoffs. There is a plethora of work on single fidelity methods for global optimisation, both with noisy and noiseless evaluations. Some examples are branch and bound techniques such as dividing rectangles (DiRect) [12], simulated annealing, genetic algorithms and more [17, 18, 22]. A suite of single fidelity methods in the GP framework closely related to our work is Bayesian Optimisation (BO). While there are several techniques for BO [13, 21, 30], of particular interest to us is the Gaussian process upper confidence bound (GP-UCB) algorithm of Srinivas et al. [28]. Many applied domains of research such as aerodynamics, industrial design and hyper-parameter tuning have studied multi-fidelity methods [9, 11, 19, 29]; a plurality of them use BO techniques. However, none of these treatments formalises or analyses any notion of regret in the multi-fidelity setting. In contrast, MF-GP-UCB is an intuitive UCB idea with good theoretical properties. Some literature has analysed multi-fidelity methods in specific contexts such as hyper-parameter tuning, active learning and reinforcement learning [1, 5, 26, 33]. Their settings and assumptions are substantially different from ours.
Critically, none of them are in the more difficult bandit setting where there is a price for exploration. Due to space constraints we discuss them in detail in Appendix A.3. The multi-fidelity setting poses substantially new theoretical and algorithmic challenges. We build on GP-UCB and our recent work on multi-fidelity bandits in the K-armed setting [16]. Section 2 presents our formalism, including a notion of regret for multi-fidelity GP bandits. Section 3 presents our algorithm. The theoretical analysis is in Appendix C, with a synopsis for the 2-fidelity case in Section 4. Section 6 presents our experiments. Appendix A.1 tabulates the notation used in the manuscript.

2 Preliminaries

We wish to maximise a payoff function $f : \mathcal{X} \to \mathbb{R}$ where $\mathcal{X} \subset [0, r]^d$. We can interact with f only by querying at some $x \in \mathcal{X}$ and obtaining a noisy observation $y = f(x) + \epsilon$. Let $x_\star \in \mathrm{argmax}_{x \in \mathcal{X}} f(x)$ and $f_\star = f(x_\star)$. Let $x_t \in \mathcal{X}$ be the point queried at time t. The goal of a bandit strategy is to maximise the sum of rewards $\sum_{t=1}^{n} f(x_t)$, or equivalently minimise the cumulative regret $\sum_{t=1}^{n} f_\star - f(x_t)$ after n queries; i.e. we compete against an oracle which queries at $x_\star$ at all t. Our primary distinction from the classical setting is that we have access to $M - 1$ successively accurate approximations $f^{(1)}, f^{(2)}, \dots, f^{(M-1)}$ to the payoff $f = f^{(M)}$. We refer to these approximations as fidelities. We encode the fact that fidelity m approximates fidelity M via the assumption $\|f^{(M)} - f^{(m)}\|_\infty \leq \zeta^{(m)}$, where $\zeta^{(1)} > \zeta^{(2)} > \dots > \zeta^{(M)} = 0$. Each query at fidelity m expends a cost $\lambda^{(m)}$ of a resource, e.g. computational effort or advertising time, where $\lambda^{(1)} < \lambda^{(2)} < \dots < \lambda^{(M)}$.

Figure 1: Left: Average CV log likelihood on datasets of size 300, 3000 on a synthetic KDE task. The crosses are the maxima. Right: Illustration of GP-UCB at time t. The figure shows f(x) (solid black line), the UCB $\varphi_t(x)$ (dashed blue line) and queries until t - 1 (black crosses). We query at $x_t = \mathrm{argmax}_{x \in \mathcal{X}} \varphi_t(x)$ (red star).

A strategy for multi-fidelity bandits is a sequence of query-fidelity pairs $\{(x_t, m_t)\}_{t \geq 0}$, where $(x_n, m_n)$ could depend on the previous query-observation-fidelity tuples $\{(x_t, y_t, m_t)\}_{t=1}^{n-1}$. Here $y_t = f^{(m_t)}(x_t) + \epsilon$. After n steps we will have queried any of the M fidelities multiple times. Some smoothness assumptions on the $f^{(m)}$'s are needed to make the problem tractable. A standard in the Bayesian nonparametric literature is to use a Gaussian process (GP) prior [24] with covariance kernel $\kappa$. In this work we focus on the squared exponential (SE) kernel $\kappa_{\sigma,h}$ and the Matérn kernel $\kappa_{\nu,h}$, as they are popularly used in practice and their theoretical properties are well studied. Writing $z = \|x - x'\|_2$, they are defined as
\[ \kappa_{\sigma,h}(x, x') = \sigma \exp\!\left(-\frac{z^2}{2h^2}\right), \qquad \kappa_{\nu,h}(x, x') = \sigma \,\frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}\, z}{h}\right)^{\!\nu} B_\nu\!\left(\frac{\sqrt{2\nu}\, z}{h}\right), \]
where $\Gamma, B_\nu$ are the Gamma and modified Bessel functions. A convenience the GP framework offers is that posterior distributions are analytically tractable. If $f \sim GP(0, \kappa)$, and we have observations $D_n = \{(x_i, y_i)\}_{i=1}^{n}$, where $y_i = f(x_i) + \epsilon$ and $\epsilon \sim N(0, \eta^2)$ is Gaussian noise, the posterior distribution for $f(x) \,|\, D_n$ is also Gaussian $N(\mu_n(x), \sigma_n^2(x))$ with
\[ \mu_n(x) = k^\top \Delta^{-1} Y, \qquad \sigma_n^2(x) = \kappa(x, x) - k^\top \Delta^{-1} k. \quad (1) \]
Here $Y \in \mathbb{R}^n$ with $Y_i = y_i$, $k \in \mathbb{R}^n$ with $k_i = \kappa(x, x_i)$, and $\Delta = K + \eta^2 I \in \mathbb{R}^{n \times n}$ where $K_{i,j} = \kappa(x_i, x_j)$.
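Equation (1) is simple to implement. The following is a minimal numpy sketch (our own illustrative code, not the authors' implementation) of the posterior mean and variance for a GP with an SE kernel:

```python
import numpy as np

def se_kernel(A, B, sigma=1.0, h=0.2):
    """Squared exponential kernel: sigma * exp(-||x - x'||^2 / (2 h^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma * np.exp(-sq / (2 * h ** 2))

def gp_posterior(X, Y, Xq, eta=0.1, **kern):
    """Posterior mean/variance of eq. (1) at query points Xq given data (X, Y)."""
    K = se_kernel(X, X, **kern)
    Delta = K + eta ** 2 * np.eye(len(X))        # Delta = K + eta^2 I
    kq = se_kernel(Xq, X, **kern)                # each row is k(x, .)
    sol = np.linalg.solve(Delta, Y)
    mu = kq @ sol                                # mu_n(x) = k^T Delta^{-1} Y
    var = se_kernel(Xq, Xq, **kern).diagonal() - np.einsum(
        'ij,ij->i', kq, np.linalg.solve(Delta, kq.T).T)
    return mu, np.maximum(var, 0.0)              # clip tiny negative round-off
```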
In keeping with the above, we make the following assumptions on our problem.

Assumption 1. A1: The functions at all fidelities are sampled from GPs, $f^{(m)} \sim GP(0, \kappa)$ for all $m = 1, \dots, M$. A2: $\|f^{(M)} - f^{(m)}\|_\infty \leq \zeta^{(m)}$ for all $m = 1, \dots, M$. A3: $\|f^{(M)}\|_\infty \leq B$.

The purpose of A3 is primarily to define the regret. In Remark 7, Appendix A.4 we argue that these assumptions are probabilistically valid, i.e. the latter two events occur with nontrivial probability when we sample the $f^{(m)}$'s from a GP. So a generative mechanism would keep sampling the functions and deliver them when the conditions hold true. A point $x \in \mathcal{X}$ can be queried at any of the M fidelities. When we query at fidelity m, we observe $y = f^{(m)}(x) + \epsilon$ where $\epsilon \sim N(0, \eta^2)$. We now present our notion of cumulative regret $R(\Lambda)$ after spending capital $\Lambda$ of a resource in the multi-fidelity setting. $R(\Lambda)$ should reduce to the conventional definition of regret for any single fidelity strategy that queries only at the M-th fidelity. As only the optimum of $f = f^{(M)}$ is of interest to us, queries at fidelities less than M should yield the lowest possible reward, $(-B)$, according to A3. Accordingly, we set the instantaneous reward $q_t$ at time t to be $-B$ if $m_t \neq M$ and $f^{(M)}(x_t)$ if $m_t = M$. If we let $r_t = f_\star - q_t$ denote the instantaneous regret, we have $r_t = f_\star + B$ if $m_t \neq M$ and $f_\star - f(x_t)$ if $m_t = M$. $R(\Lambda)$ should also factor in the costs of the fidelity of each query. Finally, we should also receive $(-B)$ reward for any unused capital. Accordingly, we define $R(\Lambda)$ as
\[ R(\Lambda) \;=\; \Lambda f_\star - \left[ \sum_{t=1}^{N} \lambda^{(m_t)} q_t + \Lambda_{\mathrm{res}} \cdot (-B) \right] \;\leq\; 2B \Lambda_{\mathrm{res}} + \sum_{t=1}^{N} \lambda^{(m_t)} r_t, \quad (2) \]
where $\Lambda_{\mathrm{res}} = \Lambda - \sum_{t=1}^{N} \lambda^{(m_t)}$. Here, N is the (random) number of queries at all fidelities within capital $\Lambda$, i.e. the largest n such that $\sum_{t=1}^{n} \lambda^{(m_t)} \leq \Lambda$. According to (2) above, we wish to compete against an oracle that uses all its capital $\Lambda$ to query $x_\star$ at the M-th fidelity. $R(\Lambda)$ is at best 0, when we follow the oracle, and at most $2\Lambda B$. Our goal is a strategy that has small regret for all values of (sufficiently large) $\Lambda$, i.e. the equivalent of an anytime strategy, as opposed to a fixed time horizon strategy in the usual bandit setting. For the purpose of optimisation, we also define the simple regret as $S(\Lambda) = \min_t r_t = f_\star - \max_t q_t$. $S(\Lambda)$ is the difference between $f_\star$ and the best highest fidelity query (and $f_\star + B$ if we have never queried at fidelity M). Since $S(\Lambda) \leq \frac{1}{\Lambda} R(\Lambda)$, any strategy with asymptotic sublinear regret $\lim_{\Lambda \to \infty} \frac{1}{\Lambda} R(\Lambda) = 0$ also has vanishing simple regret. Since, to our knowledge, this is the first attempt to formalise regret for multi-fidelity problems, the definition of $R(\Lambda)$ in (2) necessitates justification. Consider a two-fidelity robot gold mining problem, where the second fidelity is a real world robot trial, costing $\lambda^{(2)}$ dollars, and the first fidelity is a computer simulation costing $\lambda^{(1)}$. A multi-fidelity algorithm queries the simulator to learn about the real world. But it does not collect any actual gold during a simulation; hence no reward, which according to our assumptions is $-B$. Meantime the oracle is investing this capital on the best experiment and collecting $\approx f_\star$ gold. Therefore, the regret at this time instant is $f_\star + B$. However, we weight this by the cost to account for the fact that the simulation costs only $\lambda^{(1)}$. Note that lower fidelities use up capital but yield the lowest reward. The goal, however, is to leverage information from these cheap queries to query prudently at the highest fidelity and obtain better regret. That said, other multi-fidelity settings might require different definitions for $R(\Lambda)$.
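As a sanity check on definition (2), here is a small Python sketch (our own hypothetical code; the trace, costs and values below are made up for illustration) that computes $R(\Lambda)$ and $S(\Lambda)$ from a trace of (fidelity, query value) pairs:

```python
def multi_fidelity_regret(trace, costs, Lambda, f_star, B, M):
    """trace: list of (m_t, f_value_t). Per eq. (2), reward q_t is -B
    unless the query was at the highest fidelity M."""
    spent, weighted_rewards, q_vals = 0.0, 0.0, []
    for m, f_val in trace:
        if spent + costs[m] > Lambda:     # capital exhausted after N queries
            break
        spent += costs[m]
        q = f_val if m == M else -B       # instantaneous reward q_t
        q_vals.append(q)
        weighted_rewards += costs[m] * q
    residual = Lambda - spent             # unused capital earns reward -B
    R = Lambda * f_star - (weighted_rewards + residual * (-B))
    S = f_star - max(q_vals) if q_vals else f_star + B
    return R, S

# Hypothetical 2-fidelity trace: fidelity 1 costs 1, fidelity 2 costs 10.
trace = [(1, 0.2), (1, 0.8), (2, 0.9), (2, 0.95)]
print(multi_fidelity_regret(trace, {1: 1.0, 2: 10.0},
                            Lambda=25, f_star=1.0, B=1.0, M=2))
```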
In online advertising, the lower fidelities (displaying ads for shorter periods) would still yield rewards. In clinical trials, the regret at the highest fidelity due to a bad treatment would be, say, a dead patient. However, a bad treatment on a simulation may not warrant a large penalty. We use the definition in (2) because it is more aligned with our optimisation experiments: lower fidelities are useful to the extent that they guide search on the expensive $f^{(M)}$, but there is no reward to finding the optimum of a cheap $f^{(m)}$. A crucial challenge for a multi-fidelity method is to not get stuck at the optimum of a lower fidelity, which is typically suboptimal for $f^{(M)}$. While exploiting information from the lower fidelities, it is also important to explore sufficiently at $f^{(M)}$. In our experiments we demonstrate that naive strategies which do not do so would get stuck at the optimum of a lower fidelity.

A note on GP-UCB: Sequential optimisation methods adopting UCB principles maintain a high probability upper bound $\varphi_t : \mathcal{X} \to \mathbb{R}$ for $f(x)$ for all $x \in \mathcal{X}$ [2]. For GP-UCB, $\varphi_t$ takes the form $\varphi_t(x) = \mu_{t-1}(x) + \beta_t^{1/2} \sigma_{t-1}(x)$, where $\mu_{t-1}, \sigma_{t-1}$ are the posterior mean and standard deviation of the GP conditioned on the previous $t - 1$ queries. The key intuition is that the mean $\mu_{t-1}$ encourages an exploitative strategy, in that we want to query where we know the function is high, and the confidence band $\beta_t^{1/2} \sigma_{t-1}$ encourages an explorative strategy, in that we want to query at regions we are uncertain about f lest we miss out on high valued regions. We have illustrated GP-UCB in Fig 1 and reviewed the algorithm and its theoretical properties in Appendix A.2.

3 MF-GP-UCB

The proposed algorithm, MF-GP-UCB, will also maintain a UCB for $f^{(M)}$, obtained via the previous queries at all fidelities. Denote the posterior GP mean and standard deviation of $f^{(m)}$, conditioned only on the previous queries at fidelity m, by $\mu_t^{(m)}, \sigma_t^{(m)}$ respectively (see (1)). Then define
\[ \varphi_t^{(m)}(x) = \mu_{t-1}^{(m)}(x) + \beta_t^{1/2} \sigma_{t-1}^{(m)}(x) + \zeta^{(m)} \;\;\forall m, \qquad \varphi_t(x) = \min_{m=1,\dots,M} \varphi_t^{(m)}(x). \quad (3) \]
For appropriately chosen $\beta_t$, $\mu_{t-1}^{(m)}(x) + \beta_t^{1/2} \sigma_{t-1}^{(m)}(x)$ will upper bound $f^{(m)}(x)$ with high probability. By A2, $\varphi_t^{(m)}(x)$ upper bounds $f^{(M)}(x)$ for all m. We have M such upper bounds, and their minimum $\varphi_t(x)$ gives the best bound. Our next query is at the maximiser of this UCB, $x_t = \mathrm{argmax}_{x \in \mathcal{X}} \varphi_t(x)$. Next we need to decide which fidelity to query at. Consider any $m < M$. The $\zeta^{(m)}$ conditions on $f^{(m)}$ constrain the value of $f^{(M)}$: the confidence band $\beta_t^{1/2} \sigma_{t-1}^{(m)}$ for $f^{(m)}$ is lengthened by $\zeta^{(m)}$ to obtain confidence on $f^{(M)}$. If $\beta_t^{1/2} \sigma_{t-1}^{(m)}(x_t)$ for $f^{(m)}$ is large, it means that we have not constrained $f^{(m)}$ sufficiently well at $x_t$ and should query at the m-th fidelity. On the other hand, querying indefinitely in the same region to reduce $\beta_t^{1/2} \sigma_{t-1}^{(m)}$ in that region will not help us much, as the $\zeta^{(m)}$ elongation caps off how much we can learn about $f^{(M)}$ from $f^{(m)}$; i.e. even if we knew $f^{(m)}$ perfectly, we will only have constrained $f^{(M)}$ to within a $\pm\zeta^{(m)}$ band. Our algorithm captures this simple intuition. Having selected $x_t$, we begin by checking at the first fidelity. If $\beta_t^{1/2} \sigma_{t-1}^{(1)}(x_t)$ is smaller than a threshold $\gamma^{(1)}$, we proceed to the second fidelity. If at any stage $\beta_t^{1/2} \sigma_{t-1}^{(m)}(x_t) \geq \gamma^{(m)}$ we query at fidelity $m_t = m$. If we proceed all the way to fidelity M, we query at $m_t = M$. We will discuss choices for $\gamma^{(m)}$ shortly. We summarise the resulting procedure in Algorithm 1; a Python sketch of one iteration follows below.
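To make steps 1 and 2 concrete, here is a minimal sketch of one MF-GP-UCB iteration over a finite candidate grid (our own illustrative Python, building on the gp_posterior helper above; this is not the released matlab code):

```python
import numpy as np

def mf_gp_ucb_step(data, grid, zetas, gammas, beta, eta=0.1):
    """One iteration of Algorithm 1. data[m] = (X_m, Y_m) are past queries at
    fidelity m+1 (0-indexed); grid is a finite set of candidate points;
    zetas/gammas are the bounds zeta^(m) and thresholds gamma^(m)."""
    M = len(zetas)
    mu = np.empty((M, len(grid)))
    sd = np.empty((M, len(grid)))
    for m in range(M):
        X, Y = data[m]
        mu[m], var = gp_posterior(X, Y, grid, eta=eta)
        sd[m] = np.sqrt(var)
    # Eq. (3): per-fidelity UCBs, combined by a minimum over fidelities.
    phi = (mu + np.sqrt(beta) * sd + np.asarray(zetas)[:, None]).min(axis=0)
    t_idx = int(np.argmax(phi))               # step 1: x_t = argmax phi_t
    for m in range(M - 1):                    # step 2: choose the fidelity
        if np.sqrt(beta) * sd[m, t_idx] >= gammas[m]:
            return grid[t_idx], m + 1         # query fidelity m+1 (1-based)
    return grid[t_idx], M                     # otherwise query at fidelity M
```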
Fig 2 illustrates MF-GP-UCB on a 2-fidelity problem. Initially, MF-GP-UCB is mostly exploring $\mathcal{X}$ in the first fidelity: $\beta_t^{1/2} \sigma_{t-1}^{(1)}$ is large and we are yet to constrain $f^{(1)}$ well enough to proceed to $f^{(2)}$. By t = 14, we have constrained $f^{(1)}$ around the optimum and have started querying at $f^{(2)}$ in this region.

Algorithm 1 MF-GP-UCB
Inputs: kernel $\kappa$, bounds $\{\zeta^{(m)}\}_{m=1}^{M}$, thresholds $\{\gamma^{(m)}\}_{m=1}^{M-1}$.
- For $m = 1, \dots, M$: $D_0^{(m)} \leftarrow \emptyset$, $(\mu_0^{(m)}, \sigma_0^{(m)}) \leftarrow (0, \kappa^{1/2})$.
- for $t = 1, 2, \dots$
  1. $x_t \leftarrow \mathrm{argmax}_{x \in \mathcal{X}} \varphi_t(x)$. (See Equation (3); see Appendix B, C for $\beta_t$.)
  2. $m_t = \min_m \{\, m : \beta_t^{1/2} \sigma_{t-1}^{(m)}(x_t) \geq \gamma^{(m)} \text{ or } m = M \,\}$.
  3. $y_t \leftarrow$ Query $f^{(m_t)}$ at $x_t$.
  4. Update $D_t^{(m_t)} \leftarrow D_{t-1}^{(m_t)} \cup \{(x_t, y_t)\}$. Obtain $\mu_t^{(m_t)}, \sigma_t^{(m_t)}$ conditioned on $D_t^{(m_t)}$ (see (1)).

Figure 2: Illustration of MF-GP-UCB for a 2-fidelity problem initialised with 5 random points at the first fidelity (panels shown at t = 6 and t = 14). In the top figures, the solid lines in brown and blue are $f^{(1)}, f^{(2)}$ respectively, and the dashed lines are $\varphi_t^{(1)}, \varphi_t^{(2)}$. The solid green line is $\varphi_t = \min(\varphi_t^{(1)}, \varphi_t^{(2)})$. The small crosses are queries from 1 to t - 1 and the red star is the maximiser of $\varphi_t$, i.e. the next query $x_t$. $x_\star$, the optimum of $f^{(2)}$, is shown in magenta. In the bottom figures, the solid orange line is $\beta_t^{1/2} \sigma_{t-1}^{(1)}$ and the dashed black line is $\gamma^{(1)}$. When $\beta_t^{1/2} \sigma_{t-1}^{(1)}(x_t) \leq \gamma^{(1)}$ we play at fidelity $m_t = 2$ and otherwise at $m_t = 1$. See Fig. 6 in Appendix B for an extended simulation.

Notice how $\varphi_t^{(2)}$ dips to change $\varphi_t$ in this region. MF-GP-UCB has identified the maximum with just 3 queries to $f^{(2)}$. In Appendix B we provide an extended simulation and discuss further insights. Finally, we make an essential observation. The posterior for any $f^{(m)}(x)$ conditioned on previous queries at all fidelities is not Gaussian due to the $\zeta^{(m)}$ constraints (A2). However, $|f^{(m)}(x) - \mu_{t-1}^{(m)}(x)| < \beta_t^{1/2} \sigma_{t-1}^{(m)}(x)$ holds with high probability, since, by conditioning only on queries at the m-th fidelity, we have Gaussianity for $f^{(m)}(x)$. Next we summarise our main theoretical contributions.

4 Summary of Theoretical Results

For pedagogical reasons we present our results for the M = 2 case. Appendix C contains statements and proofs for general M. We also ignore constants and polylog terms when they are dominated by other terms; $\lesssim, \asymp$ denote inequality and equality ignoring constants. We begin by defining the Maximum Information Gain (MIG), which characterises the statistical difficulty of GP bandits.

Definition 2 (Maximum Information Gain). Let $f \sim GP(0, \kappa)$. Consider any $A \subset \mathbb{R}^d$ and let $\tilde{A} = \{x_1, \dots, x_n\} \subset A$ be a finite subset. Let $f_{\tilde{A}}, \epsilon_{\tilde{A}} \in \mathbb{R}^n$ be such that $(f_{\tilde{A}})_i = f(x_i)$, $(\epsilon_{\tilde{A}})_i \sim N(0, \eta^2)$, and $y_{\tilde{A}} = f_{\tilde{A}} + \epsilon_{\tilde{A}}$. Let I denote the Shannon Mutual Information. The Maximum Information Gain of A is $\gamma_n(A) = \max_{\tilde{A} \subset A,\, |\tilde{A}| = n} I(y_{\tilde{A}}; f_{\tilde{A}})$.

The MIG, which depends on the kernel $\kappa$ and the set A, is an important quantity in our analysis. For a given $\kappa$, it typically scales with the volume of A; i.e. if $A = [0, r]^d$ then $\gamma_n(A) \in O(r^d \gamma_n([0, 1]^d))$. For the SE kernel, $\gamma_n([0, 1]^d) \in O((\log n)^{d+1})$, and for the Matérn kernel, $\gamma_n([0, 1]^d) \in O(n^{d(d+1)/(2\nu + d(d+1))})$ [28].
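For a GP with Gaussian observation noise, $I(y_{\tilde{A}}; f_{\tilde{A}}) = \frac{1}{2} \log\det(I + \eta^{-2} K_{\tilde{A}})$, so on a finite candidate set the MIG can be lower-bounded greedily (information gain is submodular, so the greedy value is within a constant factor of the maximum). A small sketch, again our own illustrative code, assuming the se_kernel helper above:

```python
import numpy as np

def greedy_mig(candidates, n, eta=0.1, **kern):
    """Greedy lower bound on gamma_n(A) over a finite candidate set A."""
    chosen = []
    best_val = 0.0
    for _ in range(n):
        best_val, best_j = -np.inf, None
        for j in range(len(candidates)):
            if j in chosen:
                continue
            idx = chosen + [j]
            K = se_kernel(candidates[idx], candidates[idx], **kern)
            # I(y; f) = 0.5 * logdet(I + K / eta^2) for the selected subset.
            _, logdet = np.linalg.slogdet(np.eye(len(idx)) + K / eta ** 2)
            if logdet > best_val:
                best_val, best_j = logdet, j
        chosen.append(best_j)
    return 0.5 * best_val, candidates[chosen]
```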
Recall, N is the (random) number of queries by a multi-fidelity strategy within capital $\Lambda$ at either fidelity. Let $n_\Lambda = \lfloor \Lambda / \lambda^{(2)} \rfloor$ be the (non-random) number of queries by a single fidelity method operating only at the second fidelity. As $\lambda^{(1)} < \lambda^{(2)}$, N could be large for an arbitrary multi-fidelity method. However, our analysis reveals that for MF-GP-UCB, N is on the order of $n_\Lambda$.

Fundamental to the 2-fidelity problem is the set $\mathcal{X}_g = \{x \in \mathcal{X} : f_\star - f^{(1)}(x) \leq \zeta^{(1)}\}$. $\mathcal{X}_g$ is a high valued region for $f^{(2)}(x)$: for all $x \in \mathcal{X}_g$, $f^{(2)}(x)$ is at most $2\zeta^{(1)}$ away from the optimum. More interestingly, when $\zeta^{(1)}$ is small, i.e. when $f^{(1)}$ is a good approximation to $f^{(2)}$, $\mathcal{X}_g$ will be much smaller than $\mathcal{X}$. This is precisely the target domain for this research. For instance, in the robot gold mining example, a cheap computer simulator can be used to eliminate several bad policies, and we could reserve the real world trials for the promising candidates. If a multi-fidelity strategy were to use the second fidelity queries only in $\mathcal{X}_g$, then the regret will only have $\gamma_n(\mathcal{X}_g)$ dependence after n high fidelity queries. In contrast, a strategy that only operates at the highest fidelity (e.g. GP-UCB) will have $\gamma_n(\mathcal{X})$ dependence. In the scenario described above $\gamma_n(\mathcal{X}_g) \ll \gamma_n(\mathcal{X})$, and the multi-fidelity strategy will have significantly better regret than a single fidelity strategy. MF-GP-UCB roughly achieves this goal. In particular, we consider a slightly inflated set $\tilde{\mathcal{X}}_{g,\rho} = \{x \in \mathcal{X} : f_\star - f^{(1)}(x) \leq \zeta^{(1)} + \rho \gamma^{(1)}\}$ of $\mathcal{X}_g$, where $\rho > 0$. The following result, which characterises the regret of MF-GP-UCB in terms of $\tilde{\mathcal{X}}_{g,\rho}$, is the main theorem of this paper.

Theorem 3 (Regret of MF-GP-UCB, Informal). Let $\mathcal{X} = [0, r]^d$ and $f^{(1)}, f^{(2)} \sim GP(0, \kappa)$ satisfy Assumption 1. Pick $\delta \in (0, 1)$ and run MF-GP-UCB with $\beta_t \asymp d \log(t/\delta)$. Then, with probability $> 1 - \delta$, for sufficiently large $\Lambda$ and for all $\alpha \in (0, 1)$, there exists $\rho$ depending on $\alpha$ such that
\[ R(\Lambda) \;\lesssim\; \lambda^{(2)} \sqrt{n_\Lambda \beta_{n_\Lambda} \gamma_{n_\Lambda}(\tilde{\mathcal{X}}_{g,\rho})} \;+\; \lambda^{(1)} \sqrt{n_\Lambda \beta_{n_\Lambda} \gamma_{n_\Lambda}(\mathcal{X})} \;+\; \lambda^{(2)} n_\Lambda^{\alpha} \sqrt{\beta_{n_\Lambda} \gamma_{n_\Lambda}(\mathcal{X})} \;+\; \lambda^{(1)} n^{(1)}_{\Lambda, \tilde{\mathcal{X}}_{g,\rho}}. \]

As we will explain shortly, the latter two terms are of lower order. It is instructive to compare the above rates against that for GP-UCB (see Theorem 4, Appendix A.2). By dropping the common and subdominant terms, the rate for MF-GP-UCB is $\lambda^{(2)} \gamma_{n_\Lambda}^{1/2}(\tilde{\mathcal{X}}_{g,\rho}) + \lambda^{(1)} \gamma_{n_\Lambda}^{1/2}(\mathcal{X})$, whereas for GP-UCB it is $\lambda^{(2)} \gamma_{n_\Lambda}^{1/2}(\mathcal{X})$. When $\lambda^{(1)} \ll \lambda^{(2)}$ and $\mathrm{vol}(\tilde{\mathcal{X}}_{g,\rho}) \ll \mathrm{vol}(\mathcal{X})$, the rates for MF-GP-UCB are very appealing. When the approximation worsens ($\mathcal{X}_g, \tilde{\mathcal{X}}_{g,\rho}$ become larger) and the costs $\lambda^{(1)}, \lambda^{(2)}$ become comparable, the bound for MF-GP-UCB decays gracefully. In the worst case, MF-GP-UCB is never worse than GP-UCB up to constant terms. Intuitively, the above result states that MF-GP-UCB explores the entire $\mathcal{X}$ using $f^{(1)}$, but uses "most" of its queries to $f^{(2)}$ inside $\tilde{\mathcal{X}}_{g,\rho}$. Now let us turn to the latter two terms in the bound. The third term is the regret due to the second fidelity queries outside $\tilde{\mathcal{X}}_{g,\rho}$. We are able to show that the number of such queries is $O(n_\Lambda^\alpha)$ for all $\alpha > 0$ for an appropriate $\rho$. This strong result is only possible in the multi-fidelity setting. For example, in GP-UCB the best bound you can achieve on the number of plays on a suboptimal set is $O(n^{1/2})$ for the SE kernel, and worse for the Matérn kernel. The last term is due to the first fidelity plays inside $\tilde{\mathcal{X}}_{g,\rho}$; it scales with $\mathrm{vol}(\tilde{\mathcal{X}}_{g,\rho})$ and polylogarithmically with n, both of which are small. However, it has a $1/\mathrm{poly}(\gamma^{(1)})$ dependence, which could be bad if $\gamma^{(1)}$ is too small: intuitively, if $\gamma^{(1)}$ is too small then you will wait for a long time in step 2 of Algorithm 1 for $\beta_t^{1/2} \sigma_{t-1}^{(1)}$ to decrease without proceeding to $f^{(2)}$, incurring large regret ($f_\star + B$) in the process.
Our analysis reveals that an optimal choice for the SE kernel scales as $\gamma^{(1)} \asymp (\lambda^{(1)} \zeta^{(1)} / (t \lambda^{(2)}))^{1/(d+2)}$ at time t. However, this is of little practical use, as the leading constant depends on several problem dependent quantities such as $\gamma_n(\mathcal{X}_g)$. In Section 5 we describe a heuristic to set $\gamma^{(m)}$ which worked well in our experiments. Theorem 3 can be generalised to cases where the kernels $\kappa^{(m)}$ and observation noises $\eta^{(m)}$ are different at each fidelity. The changes to the proofs are minimal. In fact, our practical implementation uses different kernels. As with any nonparametric method, our algorithm has exponential dependence on dimension. This can be alleviated by assuming additional structure in the problem [8, 15]. Finally, we note that the above rates translate to bounds on the simple regret $S(\Lambda)$ for optimisation.

5 Implementation Details

Our implementation uses some standard techniques in Bayesian optimisation to learn the kernel, such as initialisation with random queries and periodic marginal likelihood maximisation. The above techniques might be already known to a reader familiar with the BO literature. We have elaborated on these in Appendix B, but now focus on the $\zeta^{(m)}, \gamma^{(m)}$ parameters of our method. Algorithm 1 assumes that the $\zeta^{(m)}$'s are given with the problem description, which is hardly the case in practice. In our implementation, instead of having to deal with $M - 1$ different $\zeta^{(m)}$ values, we set $(\zeta^{(1)}, \zeta^{(2)}, \dots, \zeta^{(M-1)}) = ((M-1)\zeta, (M-2)\zeta, \dots, \zeta)$, so we only have one value $\zeta$. This, for instance, is satisfied if $\|f^{(m)} - f^{(m-1)}\|_\infty \leq \zeta$, which is stronger than Assumption A2. Initially, we start with small $\zeta$. Whenever we query at any fidelity $m > 1$ we also check the posterior mean of the $(m-1)$-th fidelity. If $|f^{(m)}(x_t) - \mu_{t-1}^{(m-1)}(x_t)| > \zeta$, we query again at $x_t$, but at the $(m-1)$-th fidelity. If $|f^{(m)}(x_t) - f^{(m-1)}(x_t)| > \zeta$, we update $\zeta$ to twice the violation. To set the $\gamma^{(m)}$'s we use the following intuition: if the algorithm is stuck at fidelity m for too long, then $\gamma^{(m)}$ is probably too small. We start with small values for $\gamma^{(m)}$. If the algorithm does not query above the m-th fidelity for more than $\lambda^{(m+1)}/\lambda^{(m)}$ iterations, we double $\gamma^{(m)}$. We found our implementation to be fairly robust, even recovering from fairly bad approximations at the lower fidelities (see Appendix D.3).

Figure 3: The simple regret $S(\Lambda)$ against the spent capital $\Lambda$ on synthetic functions (panels shown: Borehole-8D, M = 2, costs [1; 10]; Hartmann-3D, M = 3, costs [1; 10; 100]; and query frequencies for Hartmann-3D). The title states the function, its dimensionality, the number of fidelities and the costs we used for each fidelity in the experiment. All curves barring DiRect (which is deterministic) were produced by averaging over 20 experiments. The error bars indicate one standard error. See Figures 8, 9, 10 in Appendix D for more synthetic results. The last panel shows the number of queries at different function values at each fidelity for the Hartmann-3D example.

6 Experiments

We compare MF-GP-UCB to the following methods. Single fidelity methods: GP-UCB; EI: the expected improvement criterion for BO [13]; DiRect: the dividing rectangles method [12].
Multifidelity methods: MF-NAIVE: a naive baseline where we use GP-UCB to query at the first fidelity a large number of times and then query at the last fidelity at the points queried at $f^{(1)}$, in decreasing order of $f^{(1)}$-value; MF-SKO: the multi-fidelity sequential kriging method from [11]. Previous works on multi-fidelity methods (including MF-SKO) had not made their code available and were not straightforward to implement. Hence, we could not compare to all of them. We discuss this more in Appendix D, along with some other single and multi-fidelity baselines we tried but excluded in the comparison to avoid clutter in the figures. In addition, we also detail the design choices and hyper-parameters for all methods in Appendix D.

Synthetic Examples: We use the Currin exponential (d = 2), Park (d = 4) and Borehole (d = 8) functions in M = 2 fidelity experiments, and the Hartmann functions in d = 3 and 6 with M = 3 and 4 fidelities respectively. The first three are taken from previous multi-fidelity literature [32], while we tweaked the Hartmann functions to obtain the lower fidelities for the latter two cases. We show the simple regret $S(\Lambda)$ against capital $\Lambda$ for the Borehole and Hartmann-3D functions in Fig. 3, with the rest deferred to Appendix D due to space constraints. MF-GP-UCB outperforms other methods. Appendix D also contains results for the cumulative regret $R(\Lambda)$ and the formulae for these functions. A common occurrence with MF-NAIVE was that once we started querying at fidelity M, the regret barely decreased. The diagnosis in all cases was the same: it was stuck around the maximum of $f^{(1)}$, which is suboptimal for $f^{(M)}$. This suggests that while we have cheap approximations, the problem is by no means trivial. As explained previously, it is also important to "explore" at the higher fidelities to achieve good regret. The efficacy of MF-GP-UCB when compared to single fidelity methods is that it confines this exploration to a small set containing the optimum. In our experiments we found that MF-SKO did not consistently beat other single fidelity methods. Despite our best efforts to reproduce this (and another) multi-fidelity method, we found them to be quite brittle (Appendix D.1). The third panel of Fig. 3 shows a histogram of the number of queries at each fidelity after 184 queries of MF-GP-UCB, for different ranges of $f^{(3)}(x)$ for the Hartmann-3D function. Many of the queries at the low $f^{(3)}$ values are at fidelity 1, but as we progress they decrease and the second fidelity queries increase. The third fidelity dominates very close to the optimum but is used sparingly elsewhere. This corroborates the prediction in our analysis that MF-GP-UCB uses low fidelities to explore and successively higher fidelities at promising regions to zero in on $x_\star$. (Also see Fig. 6, Appendix B.)

Figure 4: Results on the hyper-parameter tuning experiments (panels shown: SVM-2D, M = 2, n_tr = [500, 2000]; SALSA-6D, M = 3, n_tr = [2000, 4000, 8000]; V&J-22D, M = 2, n_tr = [300, 3000]; the y-axes are CV classification or least squares error, the x-axes CPU time in seconds). The title states the experiment, dimensionality (number of hyperparameters) and training set size at each fidelity. All curves were produced by averaging over 10 experiments.
The error bars indicate one standard error. The lengths of the curves are different in time as we ran each method for a pre-specified number of iterations and they concluded at different times.

Real Experiments: We present results on three hyper-parameter tuning tasks (results in Fig. 4) and a maximum likelihood inference task in Astrophysics (Fig. 5). We compare methods on computation time, since that is the "cost" in all experiments. We include the processing time for each method in the comparison (i.e. the cost of determining the next query).

Classification using SVMs (SVM): We trained an SVM on the magic gamma dataset using the SMO algorithm to an accuracy of $10^{-12}$. The goal is to tune the kernel bandwidth and the soft margin coefficient in the ranges $(10^{-3}, 10^{1})$ and $(10^{-1}, 10^{5})$ respectively, on a dataset of size 2000. We set this up as an M = 2 fidelity experiment with the entire training set at the second fidelity and 500 points at the first. Each query was 5-fold cross validation on these training sets.

Regression using Additive Kernels (SALSA): We used the regression method from [14] on the 4-dimensional coal power plant dataset. We tuned the 6 hyper-parameters (the regularisation penalty, the kernel scale and the kernel bandwidth for each dimension), each in the range $(10^{-3}, 10^{4})$, using 5-fold cross validation. This experiment used M = 3 and 2000, 4000, 8000 points at each fidelity.

Viola & Jones face detection (V&J): The V&J classifier [31], which uses a cascade of weak classifiers, is a popular method for face detection. To classify an image, we pass it through each classifier. If at any point the classifier score falls below a threshold, the image is classified negative. If it passes through the cascade, then it is classified positive. One of the more popular implementations comes with OpenCV and uses a cascade of 22 weak classifiers. The threshold values in OpenCV are pre-set based on some heuristics, and there is no reason to think they are optimal for a given face detection task. The goal is to tune these 22 thresholds by optimising for them over a training set. We modified the OpenCV implementation to take in the thresholds as parameters. As our domain $\mathcal{X}$ we chose a neighbourhood around the configuration used in OpenCV. We set this up as an M = 2 fidelity experiment where the second fidelity used 3000 images from the V&J face database and the first used 300. Interestingly, on an independent test set, the configurations found by MF-GP-UCB consistently achieved over 90% accuracy while the OpenCV configuration achieved only 87.4% accuracy.

Type Ia Supernovae: We use Type Ia supernovae data [7] for maximum likelihood inference on 3 cosmological parameters: the Hubble constant $H_0 \in (60, 80)$, and the dark matter and dark energy fractions $\Omega_M, \Omega_\Lambda \in (0, 1)$. Unlike typical parametric maximum likelihood problems, the likelihood is only available as a black-box. It is computed using the Robertson-Walker metric, which requires a one dimensional numerical integration for each sample in the dataset. We set this up as an M = 3 fidelity task. The goal is to maximise the likelihood at the third fidelity, where the integration was performed using the trapezoidal rule on a grid of size $10^6$. For the first and second fidelities, we used grids of size $10^2$ and $10^4$ respectively. The results are given in Fig. 5.

Figure 5: Results on the supernova inference problem (Supernova-3D, M = 3, grid sizes [100, 10K, 1M]). The y-axis is the log likelihood, so higher is better. MF-NAIVE is not visible as it performed very poorly.
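As an illustration of how the two-fidelity SVM objective above could be set up, here is a minimal scikit-learn sketch (our own hypothetical code; the authors used SMO directly on the MAGIC gamma dataset, whereas here a synthetic stand-in is used, and SVC's RBF parameter gamma stands in for the kernel bandwidth):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for the MAGIC gamma telescope data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def cv_error(log_gamma, log_C, n):
    """5-fold CV classification error of an RBF SVM on the first n points.
    n = 500 is the cheap fidelity, n = 2000 the expensive one."""
    clf = SVC(kernel='rbf', gamma=10.0 ** log_gamma, C=10.0 ** log_C)
    return 1.0 - cross_val_score(clf, X[:n], y[:n], cv=5).mean()

# The 2-fidelity objective over (log kernel parameter, log soft-margin C):
f1 = lambda z: -cv_error(z[0], z[1], 500)     # fidelity 1 (cheap)
f2 = lambda z: -cv_error(z[0], z[1], 2000)    # fidelity 2 = f^(M)
print(f1([-1.0, 2.0]), f2([-1.0, 2.0]))
```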
Conclusion: We introduced and studied the multi-fidelity bandit under Gaussian Process assumptions. We present, to our knowledge, the first formalism of regret and the first theoretical results in this setting. They demonstrate that MF-GP-UCB explores the space via cheap lower fidelities, and leverages the higher fidelities on successively smaller regions, hence achieving better regret than single fidelity strategies. Experimental results demonstrate the efficacy of our method.

References

[1] Alekh Agarwal, John C. Duchi, Peter L. Bartlett, and Clement Levrard. Oracle inequalities for computationally budgeted model selection. In COLT, 2011.
[2] Peter Auer. Using Confidence Bounds for Exploitation-exploration Trade-offs. J. Mach. Learn. Res., 2003.
[3] E. Brochu, V. M. Cora, and N. de Freitas. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical RL. CoRR, 2010.
[4] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 2012.
[5] Mark Cutler, Thomas J. Walsh, and Jonathan P. How. Reinforcement Learning with Multi-Fidelity Simulators. In ICRA, 2014.
[6] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic Linear Optimization under Bandit Feedback. In COLT, 2008.
[7] T. M. Davis et al. Scrutinizing Exotic Cosmological Models Using ESSENCE Supernova Data Combined with Other Cosmological Probes. Astrophysical Journal, 2007.
[8] J. Djolonga, A. Krause, and V. Cevher. High-Dimensional Gaussian Process Bandits. In NIPS, 2013.
[9] Alexander I. J. Forrester, András Sóbester, and Andy J. Keane. Multi-fidelity optimization via surrogate modelling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 2007.
[10] Subhashis Ghosal and Anindya Roy. Posterior consistency of Gaussian process prior for nonparametric binary regression. Annals of Statistics, 2006.
[11] D. Huang, T. T. Allen, W. I. Notz, and R. A. Miller. Sequential kriging optimization using multiple-fidelity evaluations. Structural and Multidisciplinary Optimization, 2006.
[12] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian Optimization Without the Lipschitz Constant. J. Optim. Theory Appl., 1993.
[13] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. J. of Global Optimization, 1998.
[14] Kirthevasan Kandasamy and Yaoliang Yu. Additive Approximations in High Dimensional Nonparametric Regression via the SALSA. In ICML, 2016.
[15] Kirthevasan Kandasamy, Jeff Schneider, and Barnabás Póczos. High Dimensional Bayesian Optimisation and Bandits via Additive Models. In International Conference on Machine Learning, 2015.
[16] Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnabás Póczos. The Multi-fidelity Multi-armed Bandit. In NIPS, 2016.
[17] K. Kawaguchi, L. P. Kaelbling, and T. Lozano-Pérez. Bayesian Optimization with Exponential Convergence. In NIPS, 2015.
[18] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 1983.
[19] A. Klein, S. Bartels, S. Falkner, P. Hennig, and F. Hutter. Towards efficient Bayesian Optimization for Big Data. In BayesOpt, 2015.
[20] R. Martinez-Cantin, N. de Freitas, A. Doucet, and J. Castellanos. Active Policy Learning for Robot Planning and Exploration under Uncertainty.
In Proceedings of Robotics: Science and Systems, 2007.
[21] Jonas Mockus. Application of Bayesian approach to numerical methods of global and stochastic optimization. Journal of Global Optimization, 1994.
[22] R. Munos. Optimistic Optimization of Deterministic Functions without the Knowledge of its Smoothness. In NIPS, 2011.
[23] D. Parkinson, P. Mukherjee, and A. R. Liddle. A Bayesian model selection analysis of WMAP3. Physical Review, 2006.
[24] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. UPG Ltd, 2006.
[25] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 1952.
[26] A. Sabharwal, H. Samulowitz, and G. Tesauro. Selecting near-optimal learners via incremental data allocation. In AAAI, 2015.
[27] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. In NIPS, 2012.
[28] Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In ICML, 2010.
[29] Kevin Swersky, Jasper Snoek, and Ryan P. Adams. Multi-task Bayesian optimization. In NIPS, 2013.
[30] W. R. Thompson. On the Likelihood that one Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika, 1933.
[31] Paul A. Viola and Michael J. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. In Computer Vision and Pattern Recognition, 2001.
[32] Shifeng Xiong, Peter Z. G. Qian, and C. F. Jeff Wu. Sequential design and analysis of high-accuracy and low-accuracy computer codes. Technometrics, 2013.
[33] C. Zhang and K. Chaudhuri. Active Learning from Weak and Strong Labelers. In NIPS, 2015.
5,657
6,119
Maximizing Influence in an Ising Network: A Mean-Field Optimal Solution

Christopher W. Lynn, Department of Physics and Astronomy, University of Pennsylvania, chlynn@sas.upenn.edu
Daniel D. Lee, Department of Electrical and Systems Engineering, University of Pennsylvania, ddlee@seas.upenn.edu

Abstract

Influence maximization in social networks has typically been studied in the context of contagion models and irreversible processes. In this paper, we consider an alternate model that treats individual opinions as spins in an Ising system at dynamic equilibrium. We formalize the Ising influence maximization problem, which has a natural physical interpretation as maximizing the magnetization given a budget of external magnetic field. Under the mean-field (MF) approximation, we present a gradient ascent algorithm that uses the susceptibility to efficiently calculate local maxima of the magnetization, and we develop a number of sufficient conditions for when the MF magnetization is concave and our algorithm converges to a global optimum. We apply our algorithm on random and real-world networks, demonstrating, remarkably, that the MF optimal external fields (i.e., the external fields which maximize the MF magnetization) shift from focusing on high-degree individuals at high temperatures to focusing on low-degree individuals at low temperatures. We also establish a number of novel results about the structure of steady-states in the ferromagnetic MF Ising model on general graph topologies, which are of independent interest.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

With the proliferation of online social networks, the problem of optimally influencing the opinions of individuals in a population has garnered tremendous attention [1-3]. The prevailing paradigm treats marketing as a viral process, whereby the advertiser is given a budget of seed infections and chooses the subset of individuals to infect such that the spread of the ensuing contagion is maximized. The development of algorithmic methods for influence maximization under the viral paradigm has been the subject of vigorous study, resulting in a number of efficient techniques for identifying meaningful marketing strategies in real-world settings [4-6]. While the viral paradigm accurately describes out-of-equilibrium phenomena, such as the introduction of new ideas or products to a system, these models fail to capture reverberant opinion dynamics wherein repeated interactions between individuals in the network give rise to complex macroscopic opinion patterns, as, for example, is the case in the formation of political opinions [7-10]. In this context, rather than maximizing the spread of a viral advertisement, the marketer is interested in optimally shifting the equilibrium opinions of individuals in the network. To describe complex macroscopic opinion patterns resulting from repeated microscopic interactions, we naturally employ the language of statistical mechanics, treating individual opinions as spins in an Ising system at dynamic equilibrium and modeling marketing as the addition of an external magnetic field. The resulting problem, which we call Ising influence maximization (IIM), has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external field. While a number of models have been proposed for describing reverberant opinion dynamics [11], our
use of the Ising model follows a vibrant interdisciplinary literature [12, 13], and is closely related to models in game theory [14, 15] and sociophysics [16, 17]. Furthermore, complex Ising models have found widespread use in machine learning, and our model is formally equivalent to a pair-wise Markov random field or a Boltzmann machine [18-20]. Our main contributions are as follows:

1. We formalize the influence maximization problem in the context of the Ising model, which we call the Ising influence maximization (IIM) problem. We also propose the mean-field Ising influence maximization (MF-IIM) problem as an approximation to IIM (Section 2).

2. We find sufficient conditions under which the MF-IIM objective is smooth and concave, and we present a gradient ascent algorithm that guarantees an ε-approximation to MF-IIM (Section 4).

3. We present numerical simulations that probe the structure and performance of MF optimal marketing strategies. We find that at high temperatures, it is optimal to focus influence on high-degree individuals, while at low temperatures, it is optimal to spread influence among low-degree individuals (Sections 5 and 6).

4. Throughout the paper we present a number of novel results concerning the structure of steady-states in the ferromagnetic MF Ising model on general (weighted, directed) strongly-connected graphs, which are of independent interest. We name two highlights:

   - The well-known pitchfork bifurcation structure for the ferromagnetic MF Ising model on a lattice extends exactly to general strongly-connected graphs, and the critical temperature is equal to the spectral radius of the adjacency matrix (Theorem 3).

   - There can exist at most one stable steady-state with non-negative (non-positive) components, and it is smooth and concave (convex) in the external field (Theorem 4).

2 The Ising influence maximization problem

We consider a weighted, directed social network consisting of a set of individuals N = {1, . . . , n}, each of which is assigned an opinion σ_i ∈ {±1} that captures its current state. By analogy with the Ising model, we refer to σ = (σ_i) as a spin configuration of the system. Individuals in the network interact via a non-negative weighted coupling matrix J ∈ ℝ^{n×n}_{≥0}, where J_ij ≥ 0 represents the amount of influence that individual j holds over the opinion of individual i, and the non-negativity of J represents the assumption that opinions of neighboring individuals tend to align, known in physics as a ferromagnetic interaction. Each individual also interacts with forces external to the network via an external field h ∈ ℝ^n. For example, if the spins represent the political opinions of individuals in a social network, then J_ij represents the influence that j holds over i's opinion and h_i represents the political bias of node i due to external forces such as campaign advertisements and news articles. The opinions of individuals in the network evolve according to asynchronous Glauber dynamics. At each time t, an individual i is selected uniformly at random and her opinion is updated in response to the external field h and the opinions of others in the network σ(t) by sampling from

  P(σ_i(t+1) = 1 | σ(t)) = e^{β(Σ_j J_ij σ_j(t) + h_i)} / Σ_{σ_i' = ±1} e^{β σ_i'(Σ_j J_ij σ_j(t) + h_i)},    (1)

where β is the inverse temperature, which we refer to as the interaction strength, and unless otherwise specified, sums are assumed over N. Together, the quadruple (N, J, h, β) defines our system.
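To make the dynamics concrete, the following is a minimal sketch of one asynchronous Glauber update from Eq. (1). It is our own illustration, not code from the paper; the function and variable names are ours, and it assumes dense NumPy arrays for J and h.

```python
import numpy as np

def glauber_step(sigma, J, h, beta, rng):
    """One asynchronous Glauber update: resample one randomly chosen spin per Eq. (1)."""
    i = rng.integers(len(sigma))               # individual selected uniformly at random
    field = J[i] @ sigma + h[i]                # local field: sum_j J_ij * sigma_j(t) + h_i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))   # P(sigma_i(t+1) = +1 | sigma(t))
    sigma[i] = 1 if rng.random() < p_up else -1
    return sigma
```

Averaging σ over many such updates gives Monte Carlo estimates of the expected opinions, which is one way the exact magnetizations reported later (Section 6) could be approximated.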
We refer to the total expected opinion, M = Σ_i ⟨σ_i⟩, as the magnetization, where ⟨·⟩ denotes an average over the dynamics in Eq. (1), and we often consider the magnetization as a function of the external field, denoted M(h). Another important concept is the susceptibility matrix, χ_ij = ∂⟨σ_i⟩/∂h_j, which quantifies the response of individual i to a change in the external field on node j. We study the problem of maximizing the magnetization of an Ising system with respect to the external field. We assume that an external field h can be added to the system, subject to the constraints h ≥ 0 and Σ_i h_i ≤ H, where H > 0 is the external field budget, and we denote the set of feasible external fields by F_H = {h ∈ ℝ^n : h ≥ 0, Σ_i h_i = H}. In general, we also assume that the system experiences an initial external field b ∈ ℝ^n, which cannot be controlled.

Definition 1. (Ising influence maximization (IIM)) Given a system (N, J, b, β) and a budget H, find a feasible external field h ∈ F_H that maximizes the magnetization; that is, find an optimal external field h* such that

  h* = argmax_{h ∈ F_H} M(b + h).    (2)

Notation. Unless otherwise specified, bold symbols represent column vectors with the appropriate number of components, while non-bold symbols with subscripts represent individual components. We often abuse notation and write relations such as m ≥ 0 to mean m_i ≥ 0 for all components i.

2.1 The mean-field approximation

In general, calculating expectations over the dynamics in Eq. (1) requires Monte-Carlo simulations or other numerical approximation techniques. To make analytic progress, we employ the variational mean-field approximation, which has roots in statistical physics and has long been used to tackle inference problems in Boltzmann machines and Markov random fields [21-24]. The mean-field approximation replaces the intractable task of calculating exact averages over Eq. (1) with the problem of solving the following set of self-consistency equations:

  m_i = tanh[β(Σ_j J_ij m_j + h_i)],    (3)

for all i ∈ N, where m_i approximates ⟨σ_i⟩. We refer to the right-hand side of Eq. (3) as the mean-field map, f(m) = tanh[β(Jm + h)], where tanh(·) is applied component-wise. In this way, a fixed point of the mean-field map is a solution to Eq. (3), which we call a steady-state. In general, there may be many solutions to Eq. (3), and we denote by M_h the set of steady-states for a system (N, J, h, β). We say that a steady-state m is stable if ρ(f'(m)) < 1, where ρ(·) denotes the spectral radius and

  f'(m)_ij = ∂f_i/∂m_j = β(1 − m_i²) J_ij  ⟹  f'(m) = βD(m)J,    (4)

where D(m)_ij = (1 − m_i²) δ_ij. Furthermore, under the mean-field approximation, given a stable steady-state m, the susceptibility has a particularly nice form:

  χ^{MF}_ij = β(1 − m_i²)(Σ_k J_ik χ_kj + δ_ij)  ⟹  χ^{MF} = β(I − βD(m)J)^{−1} D(m),    (5)

where I is the n × n identity matrix. For the purpose of uniquely defining our objective, we optimistically choose to maximize the maximum magnetization among the set of steady-states, defined by

  M^{MF}(h) = max_{m ∈ M_h} Σ_i m_i(h).    (6)

We note that the pessimistic framework of maximizing the minimum magnetization yields an equally valid objective. We also note that simply choosing a steady-state to optimize does not yield a well-defined objective since, as h increases, steady-states can pop in and out of existence.

Definition 2. (Mean-field Ising influence maximization (MF-IIM)) Given a system (N, J, b, β) and a budget H, find an optimal external field h* such that

  h* = argmax_{h ∈ F_H} M^{MF}(b + h).    (7)
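The steady-states and the susceptibility above are straightforward to compute numerically. Below is a hedged sketch: plain fixed-point iteration of the mean-field map for Eq. (3) and a direct evaluation of Eq. (5). All names are our own, and convergence of naive iteration is only guaranteed in the contractive regime (e.g. β < β_c); this is an illustration, not the authors' implementation.

```python
import numpy as np

def mf_steady_state(J, h, beta, m0=None, tol=1e-10, max_iter=10000):
    """Iterate the mean-field map f(m) = tanh(beta*(J m + h)) to a fixed point of Eq. (3)."""
    m = np.zeros(len(h)) if m0 is None else np.array(m0, dtype=float)
    for _ in range(max_iter):
        m_new = np.tanh(beta * (J @ m + h))
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m  # may not have converged outside the contractive regime

def mf_susceptibility(J, m, beta):
    """Eq. (5): chi_MF = beta * (I - beta D(m) J)^(-1) D(m), with D(m) = diag(1 - m_i^2)."""
    D = np.diag(1.0 - m**2)
    n = len(m)
    return beta * np.linalg.solve(np.eye(n) - beta * D @ J, D)
```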
3 The structure of steady-states in the MF Ising model

Before proceeding further, we must prove an important result concerning the existence and structure of solutions to Eq. (3), for if there exists a system that does not admit a steady-state, then our objective is ill-defined. Furthermore, if there exists a unique steady-state m, then M^{MF} = Σ_i m_i, and there is no ambiguity in our choice of objective.

Theorem 3 establishes that every system admits a steady-state and that the well-known pitchfork bifurcation structure for steady-states of the ferromagnetic MF Ising model on a lattice extends exactly to general (weighted, directed) strongly-connected graphs. In particular, for any strongly-connected graph described by J, there is a critical interaction strength β_c below which there exists a unique and stable steady-state. For h = 0, as β crosses β_c from below, two new stable steady-states appear, one with all-positive components and one with all-negative components. Interestingly, the critical interaction strength is equal to the inverse of the spectral radius of J, denoted β_c = 1/ρ(J).

Theorem 3. Any system (N, J, h, β) exhibits a steady-state. Furthermore, if its network is strongly-connected, then, for β < β_c, there exists a unique and stable steady-state. For h = 0, as β crosses β_c from below, the unique steady-state gives rise to two stable steady-states, one with all-positive components and one with all-negative components.

Proof sketch. The existence of a steady-state follows directly by applying Brouwer's fixed-point theorem to f. For β < β_c, it can be shown that f is a contraction mapping, and hence admits a unique and stable steady-state by Banach's fixed point theorem. For h = 0 and β < β_c, m = 0 is the unique steady-state and f'(m) = βJ. Because J is strongly-connected, the Perron-Frobenius theorem guarantees a simple eigenvalue equal to ρ(J) and a corresponding all-positive eigenvector. Thus, when β crosses 1/ρ(J) from below, the Perron-Frobenius eigenvalue of f'(m) crosses 1 from below, giving rise to a supercritical pitchfork bifurcation with two new stable steady-states corresponding to the Perron-Frobenius eigenvector.

Remark. Some of our results assume J is strongly-connected in order to use the Perron-Frobenius theorem. We note that this assumption is not restrictive, since any graph can be efficiently decomposed into strongly-connected components on which our results apply independently.

Theorem 3 shows that the objective M^{MF}(b + h) is well-defined. Furthermore, for β < β_c, Theorem 3 guarantees a unique and stable steady-state m for all b + h. In this case, MF-IIM reduces to maximizing M^{MF} = Σ_i m_i, and because m is stable, M^{MF}(b + h) is smooth for all h by the implicit function theorem. Thus, for β < β_c, we can use standard gradient ascent techniques to efficiently calculate locally-optimal solutions to MF-IIM. In general, M^{MF} is not necessarily smooth in h since the topological structure of steady-states may change as h varies. However, in the next section we show that if there exists a stable and entry-wise non-negative steady-state, and if J is strongly-connected, then M^{MF}(b + h) is both smooth and concave in h, regardless of the interaction strength.

4 Sufficient conditions for when MF-IIM is concave

We consider conditions for which MF-IIM is smooth and concave, and hence exactly solvable by efficient techniques. The case under consideration is when J is strongly-connected and there exists a stable non-negative steady-state.
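Both quantities in play here, the critical strength β_c = 1/ρ(J) from Theorem 3 and the stability test ρ(βD(m)J) < 1 from Eq. (4), are cheap to check numerically. A small sketch with our own helper names, assuming dense matrices:

```python
import numpy as np

def spectral_radius(A):
    """Largest absolute eigenvalue of a square matrix."""
    return np.max(np.abs(np.linalg.eigvals(A)))

def critical_interaction_strength(J):
    """Theorem 3: beta_c = 1 / rho(J) for a strongly-connected coupling matrix J."""
    return 1.0 / spectral_radius(J)

def is_stable(m, J, beta):
    """Stability of a steady-state m: rho(f'(m)) = rho(beta * D(m) * J) < 1 (Eq. 4)."""
    D = np.diag(1.0 - m**2)
    return spectral_radius(beta * D @ J) < 1.0
```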
Theorem 4. Let (N, J, b, β) describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state m(b). Then, for any H, M^{MF}(b + h) = Σ_i m_i(b + h), M^{MF}(b + h) is smooth in h, and M^{MF}(b + h) is concave in h for all h ∈ F_H.

Proof sketch. Our argument follows in three steps. We first show that m(b) is the unique stable non-negative steady-state and that it attains the maximum total opinion among steady-states. This guarantees that M^{MF}(b) = Σ_i m_i(b). Furthermore, m(b) gives rise to a unique and smooth branch of stable non-negative steady-states for additional h, and hence M^{MF}(b + h) = Σ_i m_i(b + h) for all h > 0. Finally, one can directly show that M^{MF}(b + h) is concave in h.

Remark. By arguments similar to those in Theorem 4, it can be shown that any stable non-positive steady-state is unique, attains the minimum total opinion among steady-states, and is smooth and convex for decreasing h.

The above result paints a significantly simplified picture of the MF-IIM problem when J is strongly-connected and there exists a stable non-negative steady-state m(b). Given a budget H, for any feasible marketing strategy h ∈ F_H, m(b + h) is the unique stable non-negative steady-state, attains the maximum total opinion among steady-states, and is smooth in h. Thus, the objective M^{MF}(b + h) = Σ_i m_i(b + h) is smooth, allowing us to write down a gradient ascent algorithm that approximates a local maximum. Furthermore, since M^{MF}(b + h) is concave in h, any local maximum of M^{MF} on F_H is a global maximum, and we can apply efficient gradient ascent techniques to solve MF-IIM.

Algorithm 1: An ε-approximation to MF-IIM
Input: System (N, J, b, β) for which there exists a stable non-negative steady-state, budget H, accuracy parameter ε > 0
Output: External field h that approximates a MF optimal external field h*

  t = 0; h(0) ∈ F_H; η ∈ (0, 1/L);
  repeat
    ∂M^{MF}(b + h(t))/∂h_j = Σ_i χ^{MF}_ij(b + h(t));
    h(t+1) = P_{F_H}[h(t) + η ∇_h M^{MF}(b + h(t))];
    t++;
  until M^{MF}(b + h*) − M^{MF}(b + h(t)) ≤ ε;
  h = h(t);

Our algorithm, summarized in Algorithm 1, is initialized at a feasible external field. At each iteration, we calculate the gradient of the objective, namely ∂M^{MF}/∂h_j = Σ_i χ^{MF}_ij, and project this gradient onto F_H (the projection operator P_{F_H} is well-defined since F_H is convex). Stepping along the direction of the projected gradient with step size η ∈ (0, 1/L), where L is a Lipschitz constant of M^{MF}, Algorithm 1 converges to an ε-approximation to MF-IIM in O(1/ε) iterations [25].
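A sketch of Algorithm 1 in code form follows, reusing the mf_steady_state and mf_susceptibility helpers sketched above. Two liberties are taken: the Euclidean projection onto the simplex F_H uses the standard sort-based method, and since the stopping rule in Algorithm 1 references the unknown optimum h*, the sketch simply runs a fixed number of iterations. All names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def project_onto_F_H(v, H):
    """Euclidean projection onto F_H = {h : h >= 0, sum_i h_i = H} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - H
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def mf_iim_gradient_ascent(J, b, beta, H, eta, n_steps=200):
    """Projected gradient ascent on M_MF(b + h) over F_H (Algorithm 1, sketched)."""
    n = J.shape[0]
    h = np.full(n, H / n)                  # feasible initialization: uniform budget
    m = None
    for _ in range(n_steps):
        m = mf_steady_state(J, b + h, beta, m0=m)   # track the stable branch
        chi = mf_susceptibility(J, m, beta)
        grad = chi.sum(axis=0)             # dM_MF/dh_j = sum_i chi_ij
        h = project_onto_F_H(h + eta * grad, H)
    return h
```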
4.1 Sufficient conditions for the existence of a stable non-negative steady-state

In the previous section we found that MF-IIM is efficiently solvable if there exists a stable non-negative steady-state. While this assumption may seem restrictive, we show, to the contrary, that the appearance of a stable non-negative steady-state is a fairly general phenomenon. We first show, for J strongly-connected, that the existence of a stable non-negative steady-state is robust to increases in h and that the existence of a stable positive steady-state is robust to increases in β.

Theorem 5. Let (N, J, h, β) describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state m. If m ≥ 0, then as h increases, m gives rise to a unique and smooth branch of stable non-negative steady-states. If m > 0, then as β increases, m gives rise to a unique and smooth branch of stable positive steady-states.

Proof sketch. By the implicit function theorem, any stable steady-state can be locally defined as a function of both h and β. Using the susceptibility, one can directly show that any stable non-negative steady-state remains stable and non-negative as h increases and that any stable positive steady-state remains stable and positive as β increases.

The intuition behind Theorem 5 is that increasing the external field will never destroy a steady-state in which all of the opinions are already non-negative. Furthermore, as the interaction strength increases, each individual reacts more strongly to the positive influence of her neighbors, creating a positive feedback loop that results in an even more positive magnetization. We conclude by showing for J strongly-connected that if h ≥ 0, then there exists a stable non-negative steady-state.

Theorem 6. Let (N, J, h, β) describe any system with a strongly-connected network. If h ≥ 0, then there exists a stable non-negative steady-state.

Proof sketch. For h > 0 and β < β_c, it can be shown that the unique steady-state is positive, and hence Theorem 5 guarantees the result for all β' > β. For h = 0, Theorem 3 provides the result.

All together, the results of this section provide a number of sufficient conditions under which MF-IIM is exactly and efficiently solvable by Algorithm 1.

5 A shift in the structure of solutions to MF-IIM

The structure of solutions to MF-IIM is of fundamental theoretical and practical interest. We demonstrate, remarkably, that solutions to MF-IIM shift from focusing on nodes of high degree at low interaction strengths to focusing on nodes of low degree at high interaction strengths.

Consider an Ising system described by (N, J, h, β) in the limit β ≪ β_c. To first-order in β, the self-consistency equations (3) take the form:

  m = β(Jm + h)  ⟹  m = β(I − βJ)^{−1} h.    (8)

Since β < β_c, we have ρ(βJ) < 1, allowing us to expand (I − βJ)^{−1} in a geometric series:

  m = βh + β²Jh + O(β³)  ⟹  M^{MF}(h) = β Σ_i h_i + β² Σ_i d_i^{out} h_i + O(β³),    (9)

where d_i^{out} = Σ_j J_ji is the out-degree of node i. Thus, for low interaction strengths, the MF magnetization is maximized by focusing the external field on the nodes of highest out-degree in the network, independent of b and H.

To study the structure of solutions to MF-IIM at high interaction strengths, we make the simplifying assumptions that J is strongly-connected and b ≥ 0 so that Theorem 6 guarantees a stable non-negative steady state m. For large β and an additional external field h ∈ F_H, m takes the form

  m_i ≈ tanh[β(Σ_j J_ij + b_i + h_i)] ≈ 1 − 2e^{−2β(d_i^{in} + b_i + h_i)},    (10)

where d_i^{in} = Σ_j J_ij is the in-degree of node i. Thus, in the high-β limit, we have:

  M^{MF}(b + h) ≈ Σ_i (1 − 2e^{−2β(d_i^{in} + b_i + h_i)}) ≈ n − 2e^{−2β(d_{i*}^{in} + b_{i*} + h_{i*})},    (11)

where i* = argmin_i (d_i^{in} + b_i + h_i). Thus, for high interaction strengths, the solutions to MF-IIM for an external field budget H are given by:

  h* = argmax_{h ∈ F_H} [n − 2e^{−2β(d_{i*}^{in} + b_{i*} + h_{i*})}] = argmax_{h ∈ F_H} min_i (d_i^{in} + b_i + h_i).    (12)

Eq. (12) reveals that the high-β solutions to MF-IIM focus on the nodes for which d_i^{in} + b_i + h_i is smallest. Thus, if b is uniform, the MF magnetization is maximized by focusing the external field on the nodes of smallest in-degree in the network.

We emphasize the strength and novelty of the above results. In the context of reverberant opinion dynamics, the optimal control strategy has a highly non-trivial dependence on the strength of interactions in the system, a feature not captured by viral models. Thus, when controlling a social system, accurately determining the strength of interactions is of critical importance.
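The two limiting solutions can be written down directly. A sketch under our own naming, with the high-β case implemented as simple water-filling on d_i^{in} + b_i + h_i, which is one way to realize the argmax-min in Eq. (12); ties are broken arbitrarily:

```python
import numpy as np

def low_beta_field(J, H):
    """Eq. (9): place the entire budget on the node of highest out-degree d_out_i = sum_j J_ji."""
    d_out = J.sum(axis=0)
    h = np.zeros(J.shape[0])
    h[np.argmax(d_out)] = H
    return h

def high_beta_field(J, b, H, step=1e-3):
    """Eq. (12): maximize min_i (d_in_i + b_i + h_i) by repeatedly topping up the smallest value."""
    score = J.sum(axis=1) + b              # d_in_i + b_i
    h = np.zeros_like(score, dtype=float)
    budget = H
    while budget > 1e-12:
        i = np.argmin(score + h)
        amount = min(step, budget)
        h[i] += amount
        budget -= amount
    return h
```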
6 Numerical simulations

We present numerical experiments to probe the structure and performance of MF optimal external fields. We verify that the solutions to MF-IIM undergo a shift from focusing on high-degree nodes at low interaction strengths to focusing on low-degree nodes at high interaction strengths. We also find that for sufficiently high and low interaction strengths, the MF optimal external field achieves the maximum exact magnetization, while admitting performance losses near β_c. However, even at β_c, we demonstrate that solutions to MF-IIM significantly outperform common node-selection heuristics based on node degree and centrality.

We first consider an undirected hub-and-spoke network, shown in Figure 1, where J_ij ∈ {0, 1} and we set b = 0 for simplicity. Since b ≥ 0, Algorithm 1 is guaranteed to achieve a globally optimal MF magnetization. Furthermore, because the network is small, we can calculate exact solutions to IIM by brute force search. The left plot in Figure 1 compares the average degree of the MF and exact optimal external fields over a range of temperatures for an external field budget H = 1, verifying that the solution to MF-IIM shifts from focusing on high-degree nodes at low interaction strengths to low-degree nodes at high interaction strengths. Furthermore, we find that the shift in the MF optimal external field occurs near the critical interaction strength β_c = 0.5. The performance of the MF optimal strategy (measured as the ratio of the magnetization achieved by the MF solution to that achieved by the exact solution) is shown in the right plot in Figure 1. For low and high interaction strengths, the MF optimal external field achieves the maximum magnetization, while near β_c, it incurs significant performance losses, a phenomenon well-studied in the literature [21].

Figure 1: Left: A comparison of the structure of the MF and exact optimal external fields, denoted h*_{MF} and h*, in a hub-and-spoke network. Right: The relative performance of h*_{MF} compared to h*; i.e., M(h*_{MF})/M(h*), where M denotes the exact magnetization.

Figure 2: Left: A stochastic block network consisting of a highly-connected community (Block 1) and a sparsely-connected community (Block 2). Center: The solution to MF-IIM shifts from focusing on Block 1 to Block 2 as β increases. Right: Even at β_c, the MF solution outperforms common node-selection heuristics.

We now consider a stochastic block network consisting of 100 nodes split into two blocks of 50 nodes each, shown in Figure 2. An undirected edge of weight 1 is placed between each pair of nodes in Block 1 with probability .2, between each pair in Block 2 with probability .05, and between nodes in different blocks with probability .05, resulting in a highly-connected community (Block 1) surrounded by a sparsely-connected community (Block 2). For b = 0 and H = 20, the center plot in Figure 2 demonstrates that the solution to MF-IIM shifts from focusing on Block 1 at low β to focusing on Block 2 at high β and that the shift occurs near β_c. The stochastic block network is sufficiently large that exact calculation of the optimal external fields is infeasible. Thus, we resort to comparing the MF solutions with three node-selection heuristics: one that distributes the budget in amounts proportional to nodes' degrees, one that distributes the budget proportional to nodes'
centralities (the inverse of a node's average shortest path length to all other nodes), and one that distributes the budget randomly. The magnetizations are approximated via Monte Carlo simulations of the Glauber dynamics, and we consider the system at β = β_c to represent the worst-case scenario for the MF optimal external fields. The right plot in Figure 2 shows that, even at β_c, the solutions to MF-IIM outperform common node-selection heuristics.

We consider a real-world collaboration network (Figure 3) composed of 904 individuals, where each edge is unweighted and represents the co-authorship of a paper on the arXiv [26]. We note that co-authorship networks are known to capture many of the key structural features of social networks [27].

Figure 3: Left: A collaboration network of 904 physicists where each edge represents the co-authorship of a paper on the arXiv. Center: The solution to MF-IIM shifts from high- to low-degree nodes as β increases. Right: The MF solution out-performs common node-selection heuristics, even at β_c.

For b = 0 and H = 40, the center plot in Figure 3 illustrates the sharp shift in the solution to MF-IIM at β_c = 0.05 from high- to low-degree nodes. Furthermore, the right plot in Figure 3 compares the performance of the MF optimal external field with the node-selection heuristics described above, where we again consider the system at β_c as a worst-case scenario, demonstrating that Algorithm 1 is scalable and performs well on real-world networks.

7 Conclusions

We study influence maximization, one of the fundamental problems in network science, in the context of the Ising model, wherein repeated interactions between individuals give rise to complex macroscopic patterns. The resulting problem, which we call Ising influence maximization, has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Under the mean-field approximation, we develop a number of sufficient conditions for when the problem is concave, and we provide a gradient ascent algorithm that uses the susceptibility to efficiently calculate locally-optimal external fields. Furthermore, we demonstrate that the MF optimal external fields shift from focusing on high-degree individuals at low interaction strengths to focusing on low-degree individuals at high interaction strengths, a phenomenon not observed in viral models. We apply our algorithm on random and real-world networks, numerically demonstrating shifts in the solution structure and showing that our algorithm out-performs common node-selection heuristics.

It would be interesting to study the exact Ising model on an undirected network, in which case the spin statistics are governed by the Boltzmann distribution. Using this elegant steady-state description, one might be able to derive analytic results for the exact IIM problem. Our work establishes a fruitful connection between influence maximization and statistical physics, paving the way for exciting cross-disciplinary research. For example, one could apply advanced mean-field techniques, such as those in [21], to generate efficient algorithms of increasing accuracy. Furthermore, because our model is equivalent to a Boltzmann machine, one could propose a framework for data-based influence maximization based on well-known Boltzmann machine learning techniques.

Acknowledgements. We thank Michael Kearns and Eric Horsley for enlightening discussions, and we acknowledge support from the U.S.
National Science Foundation, the Air Force Office of Scientific Research, and the Department of Transportation.

References
[1] P. Domingos and M. Richardson. Mining the network value of customers. KDD, pages 57-66, 2001.
[2] M. Richardson and P. Domingos. Mining knowledge-sharing sites for viral marketing. KDD'02. ACM, pages 61-70, 2002.
[3] D. Kempe, J. M. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. KDD'03. ACM, pages 137-146, 2003.
[4] E. Mossel and S. Roch. On the submodularity of influence in social networks. In STOC'07, pages 128-134. ACM, 2007.
[5] S. Goyal, H. Heidari, and M. Kearns. Competitive contagion in networks. GEB, 2014.
[6] M. Gomez Rodriguez and B. Schölkopf. Influence maximization in continuous time diffusion networks. In ICML, 2012.
[7] S. Galam and S. Moscovici. Towards a theory of collective phenomena: consensus and attitude changes in groups. European Journal of Social Psychology, 21(1):49-74, 1991.
[8] D. J. Isenberg. Group polarization: A critical review and meta-analysis. Journal of Personality and Social Psychology, 50(6):1141, 1986.
[9] M. Mäs, A. Flache, and D. Helbing. Individualization as driving force of clustering phenomena in humans. PLoS Comput Biol, 6(10), 2010.
[10] M. Moussaïd, J. E. Kämmer, P. P. Analytis, and H. Neth. Social influence and the collective dynamics of opinion formation. PLoS One, 8(11), 2013.
[11] A. De, I. Valera, N. Ganguly, S. Bhattacharya, et al. Learning opinion dynamics in social networks. arXiv preprint arXiv:1506.05474, 2015.
[12] A. Montanari and A. Saberi. The spread of innovations in social networks. PNAS, 107(47), 2010.
[13] C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. Rev. Mod. Phys., 81:591-646, 2009.
[14] L. Blume. The statistical mechanics of strategic interaction. GEB, 5:387-424, 1993.
[15] R. McKelvey and T. Palfrey. Quantal response equilibria for normal form games. GEB, 7:6-38, 1995.
[16] S. Galam. Sociophysics: a review of Galam models. Int. J. Mod. Phys. C, 19(3):409-440, 2008.
[17] K. Sznajd-Weron and J. Sznajd. Opinion evolution in closed community. International Journal of Modern Physics C, 11(06), 2000.
[18] R. Kindermann and J. Snell. Markov random fields and their applications. AMS, Providence, RI, 1980.
[19] T. Tanaka. Mean-field theory of Boltzmann machine learning. PRE, pages 2302-2310, 1998.
[20] H. Nishimori and K. M. Wong. Statistical mechanics of image restoration and error-correcting codes. PRE, 60(1):132, 1999.
[21] J. Yedidia. An idiosyncratic journey beyond mean field theory. Advanced mean field methods: Theory and practice, pages 21-36, 2001.
[22] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[23] M. Opper and D. Saad. Advanced mean field methods: Theory and practice. MIT Press, 2001.
[24] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4(1):61-76, 1996.
[25] M. Teboulle. First order algorithms for convex minimization. IPAM, 2010. Tutorials.
[26] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection, June 2014.
[27] M. Newman. The structure of scientific collaboration networks. PNAS, 98, 2001.
Combining Neural and Symbolic Learning to Revise Probabilistic Rule Bases

J. Jeffrey Mahoney and Raymond J. Mooney
Dept. of Computer Sciences
University of Texas
Austin, TX 78712
mahoney@cs.utexas.edu, mooney@cs.utexas.edu

Abstract

This paper describes RAPTURE - a system for revising probabilistic knowledge bases that combines neural and symbolic learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a MYCIN-style rule base and uses ID3's information gain heuristic to add new rules. Results on refining two actual expert knowledge bases demonstrate that this combined approach performs better than previous methods.

1 Introduction

In complex domains, learning needs to be biased with prior knowledge in order to produce satisfactory results from limited training data. Recently, both connectionist and symbolic methods have been developed for biasing learning with prior knowledge [Fu, 1989; Towell et al., 1990; Ourston and Mooney, 1990]. Most of these methods revise an imperfect knowledge base (usually obtained from a domain expert) to fit a set of empirical data. Some of these methods have been successfully applied to real-world tasks, such as recognizing promoter sequences in DNA [Towell et al., 1990; Ourston and Mooney, 1990]. The results demonstrate that revising an expert-given knowledge base produces more accurate results than learning from training data alone. In this paper, we describe the RAPTURE system (Revising Approximate Probabilistic Theories Using Repositories of Examples), which combines connectionist and symbolic methods to revise both the parameters and structure of a certainty-factor rule base.

2 The Rapture Algorithm

The RAPTURE algorithm breaks down into three main phases. First, an initial rule-base (created by a human expert) is converted into a RAPTURE network. The result is then trained using certainty-factor backpropagation (CFBP). The theory is further revised through network architecture modification. Once the network is fully trained, the solution is at hand; there is no need for retranslation. Each of these steps is outlined in full below.

2.1 The Initial Rule-Base

RAPTURE uses propositional certainty factor rules to represent its theories. These rules have the form A →(0.8) D, which expresses the idea that belief in proposition A gives a 0.8 measure of belief in proposition D [Shafer and Pearl, 1990]. Certainty factors can range in value from -1 to +1, and indicate a degree of confidence in a particular proposition. Certainty factor rules allow updating of these beliefs based upon new observed evidence. Rules combine evidence via probabilistic sum, which is defined as a ⊕ b ≡ a + b − ab. In general, all positive evidence is combined to determine the measure of belief (MB) for a given proposition, and all negative evidence is combined to obtain a measure of disbelief (MD). The certainty factor is then calculated using CF = MB + MD.

RAPTURE uses this formalism to represent its rule base for a variety of reasons. First, it is perhaps the simplest method that retains the desired evidence-summing aspect of uncertain reasoning. As each rule fires, additional evidence is contributed towards belief in the rule's consequent. The use of probabilistic sum enables many small pieces of evidence to add up to significant evidence. This is lacking in formalisms that use only MIN or MAX for combining evidence [Valtorta, 1988]. Second, probabilistic sum is a simple, differentiable, non-linear function. This is crucial for implementing gradient descent using backpropagation. Finally, and perhaps most significantly, is the widespread use of certainty factors. Numerous knowledge bases have been implemented using this formalism, which immediately gives our approach a large base of applicability.
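As a concrete illustration of this evidence-combination scheme, here is a minimal sketch in Python. The function names are ours and the handling of signs is one reasonable reading of the MB/MD convention described above (MB contributions in [0, 1], MD contributions in [-1, 0]); it is not code from the paper.

```python
def prob_sum(a, b):
    """Probabilistic sum: a (+) b = a + b - a*b."""
    return a + b - a * b

def certainty_factor(mb_contributions, md_contributions):
    """Combine rule contributions into CF = MB + MD, as described in Section 2.1."""
    mb = 0.0
    for e in mb_contributions:          # positive evidence, each in [0, 1]
        mb = prob_sum(mb, e)
    md = 0.0
    for e in md_contributions:          # negative evidence, each in [-1, 0]
        md = -prob_sum(-md, -e)         # combine magnitudes, keep the result negative
    return mb + md
```

For example, two positive pieces of evidence of 0.5 each combine to MB = 0.75 rather than 1.0, which is exactly the evidence-summing behavior motivated above.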
2.2 Converting the Rule Base into a Network

Once the initial theory is obtained, it is converted into a RAPTURE-network. Building the network begins by mapping all identical propositions in the rule-base to the same node in the network. Input features (those only appearing as rule-antecedents) become input nodes, and output symbols (those only appearing as rule-consequents) become output nodes. The certainty factors of the rules become the weights on the links that connect nodes. Networks for classification problems contain one output for each category. When an example is presented, the certainty factor for each of the categories is computed and the example is assigned to the category with the highest value.

Figure 1: A RAPTURE network

Figure 1 illustrates the following set of rules:

  A ∧ B ∧ C → D
  E → D
  C → G
  E ∧ F → G
  H ∧ I → C

As shown in the network, conjuncts must first pass through a MIN node before any activation reaches the consequent node. Note that each of the conjuncts is connected to the corresponding MIN node with a solid line. This represents the fact that the link is non-adjustable, and simply passes its full activation value onto the MIN node. The standard (certainty-factor) links are drawn as dotted lines indicating that their values are adjustable. This construction shows how easily a RAPTURE-network can model a MYCIN rule base. Each representation can be converted into the other, without loss or corruption of information. They are two equivalent representations of the same set of rules.

2.3 Certainty Factor Backpropagation

Using the constructed RAPTURE-network, we desire to maximize its predictive accuracy over a set of training examples. Cycling through the examples one at a time, and slightly adjusting all relevant network weights in a direction that will minimize the output error, results in hill-climbing to a local minimum. This is the idea behind gradient descent [Rumelhart et al., 1986], which RAPTURE accomplishes with Certainty Factor Backpropagation (CFBP), using the following equations:

  Δ_p w_ji = η δ_pj o_pi (1 ∓ ⊕_{k≠i} w_jk o_pk)    (1)

  δ_pj = t_pj − o_pj    if u_j is an output unit    (2)

  δ_pj = ⊕_{k_min} δ_pk w_kj (1 ∓ ⊕_{i≠j} w_ki o_pi)    if u_j is not an output unit    (3)

The "Sigma with circle" notation is meant to represent probabilistic sum over the index, and the ∓ notation is shorthand for two separate cases. If w_ji o_pi ≥ 0, then − is used, otherwise + is used. The k_min subscript refers to the fact that we do not perform this summation for every unit k (as in standard backpropagation), but only those units that received some contribution from unit j. Since a unit j may be required to pass through a min or max-node before reaching the next layer (k), it is possible that its value may not reach k. RAPTURE deems a classification correct when the output value for the correct category is greater than that of any other category. No error propagation takes place in this case (δ_pj = 0). CFBP terminates when overall error reaches a minimal value.
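The factor (1 ∓ ⊕_{k≠i} w_jk o_pk) in Eq. (1) is just the derivative of a probabilistic sum with respect to one of its terms: for x ⊕ r with r held fixed, d/dx (x + r − xr) = 1 − r. A small sketch of this piece of CFBP follows, restricted for simplicity to the non-negative-evidence case; the names and the decomposition are our own, not RAPTURE's actual code.

```python
def prob_sum_all(values):
    """Probabilistic sum of a sequence of contributions (assumed in [0, 1] here)."""
    total = 0.0
    for v in values:
        total = total + v - total * v
    return total

def cfbp_weight_update(eta, delta_pj, o_pi, other_contribs):
    """Eq. (1) for non-negative evidence:
    delta_p w_ji = eta * delta_pj * o_pi * (1 - (+)_{k != i} w_jk * o_pk)."""
    rest = prob_sum_all(other_contribs)   # combined activation from the other inputs to unit j
    return eta * delta_pj * o_pi * (1.0 - rest)
```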
2.4 Changing the Network Architecture

Whenever training accuracy fails to reach 100% through CFBP, it may be an indication that the network architecture is inappropriate for the current classification task. To date, RAPTURE has been given two ways of changing network architecture. First, whenever the weight of a link in the network approaches zero, it is removed from the network along with all of the nodes and links that become detached due to this removal. Further, whenever an intermediate node loses all of its input links due to link deletion, it too is removed from the network, along with its output link. This link/node deletion is performed immediately after CFBP, and before anything new is introduced into the network.

RAPTURE also has a method for adding new nodes into the network. Specific nodes are added in an attempt to maximize the number of training examples that are classified correctly. The simple solution employed by RAPTURE is to create new input nodes that connect directly, either positively or negatively, to one or more output nodes. These new nodes are created in a way that will best help the network distinguish among training examples that are being misclassified. Specifically, RAPTURE attempts to distinguish for each output category, those examples of that category that are being misclassified (i.e. being classified into a different output category), from those examples that do belong in these different output categories. Quinlan's ID3 information gain metric [Quinlan, 1986] has been adopted by RAPTURE to select this new node, which becomes positive evidence for the correct category, and negative evidence for mistaken categories.

With these new nodes in place, we can now return to CFBP, where hopefully more training examples will be successfully classified. This entire process (CFBP followed by deleting links and adding new nodes) repeats until all training examples are correctly classified. Once this has occurred, the network is considered trained, and testing may begin.
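The node-addition step relies on Quinlan's information gain, which is simple to state in code. A generic sketch follows (binary or multi-valued features; the names are ours), not RAPTURE's actual implementation; in RAPTURE this score would be computed over the currently misclassified examples to pick the new input node.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """ID3 gain [Quinlan, 1986] of splitting the labels on one candidate feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain
```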
Figure 2: RAPTURE Testing Accuracy

3 Experimental Results

To date, RAPTURE has been tested on two real-world domains. The first of these is a domain for recognizing promoter sequences in strings of DNA-nucleotides. The second uses a theory for diagnosing soybean diseases. These datasets are discussed in detail in the following sections.

3.1 Promoter Recognition Results

A prokaryotic promoter is a short DNA sequence that precedes the beginnings of genes, and promoters are locations where the protein RNA polymerase binds to the DNA structure [Towell et al., 1990]. A set of propositional Horn-clause rules for recognizing promoters, along with 106 labelled examples (53 promoters, 53 non-promoters), was provided as the initial theory. In order for this theory to be used by RAPTURE it had to be modified into a certainty factor format. This was done by breaking up rules with multiple antecedents into several rules. In this fashion, each antecedent is able to contribute some evidence towards belief in the consequent. Initial certainty factors were assigned in such a way that if every antecedent (from the original rule) were true, a certainty factor of 0.9 would result for the consequent.

To test RAPTURE using this dataset, standard training and test runs were performed, which resulted in the learning curve of Figure 2a. This graph is a plot of average performance in accuracy at classifying DNA strings over 25 independent trials. A single trial consists of providing each system with increasing numbers of examples to use for training, and then seeing how well it can classify unseen test examples. This graph clearly demonstrates the advantages of an evidence-summing system like RAPTURE over a pure Horn-clause system such as EITHER, a pure inductive system such as ID3, or a pure connectionist system, like backprop. Also plotted in the graph is KBANN [Towell et al., 1990], a symbolic-connectionist system that uses standard backpropagation, and RAPTURE-O, which is simply RAPTURE given no initial theory, emphasizing the importance of the expert knowledge. For this dataset, CFBP alone was all that was required in order to train the network. The node addition module was never called.

3.2 Soybean Disease Diagnosis Results

The Soybean Data comes from [Michalski and Chilausky, 1980] and is a dataset of 562 examples of diseased soybean plants. Examples were described by a string of 35 features including the condition of the stem, the roots, and the seeds, as well as information such as the time of year, temperature, and features of the soil. An expert classified each example into one of 15 soybean diseases. This dataset has been used as a benchmark for a number of learning systems. Figure 2b is a learning curve on this data comparing RAPTURE, RAPTURE-O, backpropagation, ID3, and EITHER. The headstart given to RAPTURE does not last throughout testing in this domain. RAPTURE maintains a statistically significant lead over the other systems (except RAPTURE-O) through 80 examples, but by 150 examples, all systems are performing at statistically equivalent levels. A likely explanation for this is that the expert-provided theory is more helpful on the easier-to-diagnose diseases than on those that are more difficult. But these easy ones are also easy to learn via pure induction, and good rules can be created after seeing only a few examples. Trials have actually been run out to 300 examples, though all systems are performing at equivalent levels of accuracy.

4 Related Work

The SEEK system [Ginsberg et al., 1988] revises rule bases containing M-of-N rules, though it cannot modify real-valued weights and contains no means for adding new rules. Valtorta [Valtorta, 1988] has examined the computational complexity of various refinement tasks for probabilistic knowledge bases, and shows that refining the weights to fit a set of training data is an NP-Hard problem. Ma and Wilkins [Ma and Wilkins, 1991] have developed methods for improving the accuracy of a certainty-factor knowledge base by deleting rules, and they report modest improvements in the accuracy of a MYCIN rule base. Gallant [Gallant, 1988] designed and implemented a system that combines expert domain knowledge with connectionist learning, though it is not suitable for multi-layer networks or for combination functions like probabilistic sum.
KBANN [Towell et al., 1990] uses standard backpropagation to refine a symbolic rule base, though the mapping between the symbolic rules and the network is only an approximation. Fu [Fu, 1989] and Lacher [Lacher, 1992] have also used backpropagation techniques to revise certainty factors on rules. However, the current publications on these two projects do not address the problem of altering the network architecture (i.e. adding new rules) and do not present results on revising actual expert knowledge bases.

5 Future Work

The current method for changing network architecture in RAPTURE is restricted to adding new input units that directly feed the outputs. We hope to incorporate newer techniques for creating and linking to hidden nodes, in order to improve the range of architectural changes that it can make. Another area requiring further research concerns the differences between certainty-factor networks and traditional connectionist networks. Further comparisons of the RAPTURE and KBANN approaches to knowledge-base refinement are also indicated. Finally, in recent years, certainty factors have been the subject of considerable criticism from researchers in uncertain reasoning [Shafer and Pearl, 1990]. However, the basic revision framework in RAPTURE should be applicable to other uncertain reasoning formalisms such as Bayesian networks, Dempster-Shafer theory, or fuzzy logic [Shafer and Pearl, 1990]. As long as the activation functions in the corresponding network implementations of these methods are differentiable, backpropagation techniques should be employable.

6 Conclusions

Automatic refinement of probabilistic rule bases is an under-studied problem with important applications to the development of intelligent systems. This paper has described and evaluated an approach to refining certainty-factor rule bases that integrates connectionist and symbolic learning. The approach is implemented in a system called RAPTURE, which uses a revised backpropagation algorithm to modify certainty factors and ID3's information gain criterion to determine new rules to add to the network. In other words, connectionist methods are used to adjust parameters and symbolic methods are used to make structural changes to the knowledge base. In domains with limited training data or domains requiring meaningful explanations for conclusions, refining existing expert knowledge has clear advantages. Results on revising real-world knowledge bases indicate that RAPTURE generally performs better than purely inductive systems (ID3 and backpropagation), a purely symbolic revision system (EITHER), and a purely connectionist revision system (KBANN).

The certainty-factor networks used in RAPTURE blur the distinction between connectionist and symbolic representations. They can be viewed either as connectionist networks or symbolic rule bases. RAPTURE demonstrates the utility of applying connectionist learning methods to "symbolic" knowledge bases and employing symbolic methods to modify "connectionist" networks. Hopefully these results will encourage others to explore similar opportunities for cross-fertilization of ideas between connectionist and symbolic learning.

Acknowledgements

This research was supported by the National Science Foundation under grant IRI-9102926, the NASA Ames Research Center under grant NCC 2-629, and the Texas Advanced Research Program under grant 003658114. We wish to thank R.S.
Michalski for furnishing the soybean data, and M. Noordewier, G.G. Towell, and J.W. Shavlik for supplying the DNA data and the KBANN results.

References

[Fu, 1989] Li-Min Fu. Integration of neural heuristics into knowledge-based inference. Connection Science, 1(3):325-339, 1989.
[Gallant, 1988] S. I. Gallant. Connectionist expert systems. Communications of the Association for Computing Machinery, 31:152-169, 1988.
[Ginsberg et al., 1988] A. Ginsberg, S. M. Weiss, and P. Politakis. Automatic knowledge base refinement for classification systems. Artificial Intelligence, 35:197-226, 1988.
[Lacher, 1992] R. C. Lacher. Expert networks: Paradigmatic conflict, technological rapprochement. Neuroprose FTP Archive, 1992.
[Ma and Wilkins, 1991] Y. Ma and D. C. Wilkins. Improving the performance of inconsistent knowledge bases via combined optimization method. In Proceedings of the Eighth International Workshop on Machine Learning, pages 23-27, Evanston, IL, June 1991.
[Michalski and Chilausky, 1980] R. S. Michalski and S. Chilausky. Learning by being told and learning from examples: An experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. Journal of Policy Analysis and Information Systems, 4(2):126-161, 1980.
[Ourston and Mooney, 1990] D. Ourston and R. Mooney. Changing the rules: a comprehensive approach to theory refinement. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 815-820, Detroit, MI, July 1990.
[Quinlan, 1986] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[Rumelhart et al., 1986] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, Vol. I, pages 318-362. MIT Press, Cambridge, MA, 1986.
[Shafer and Pearl, 1990] G. Shafer and J. Pearl, editors. Readings in Uncertain Reasoning. Morgan Kaufmann, Inc., San Mateo, CA, 1990.
[Towell et al., 1990] G. G. Towell, J. W. Shavlik, and Michiel O. Noordewier. Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 861-866, Boston, MA, July 1990.
[Valtorta, 1988] M. Valtorta. Some results on the complexity of knowledge-base refinement. In Proceedings of the Sixth International Workshop on Machine Learning, pages 326-331, Ithaca, NY, June 1988.
An urn model for majority voting in classification ensembles

Victor Soto
Computer Science Department, Columbia University, New York, NY, USA
vsoto@cs.columbia.edu

Alberto Suárez and Gonzalo Martínez-Muñoz
Computer Science Department, Universidad Autónoma de Madrid, Madrid, Spain
{gonzalo.martinez,alberto.suarez}@uam.es

Abstract

In this work we analyze the class prediction of parallel randomized ensembles by majority voting as an urn model. For a given test instance, the ensemble can be viewed as an urn of marbles of different colors. A marble represents an individual classifier. Its color represents the class label prediction of the corresponding classifier. The sequential querying of classifiers in the ensemble can be seen as draws without replacement from the urn. An analysis of this classical urn model based on the hypergeometric distribution makes it possible to estimate the confidence on the outcome of majority voting when only a fraction of the individual predictions is known. These estimates can be used to speed up the prediction by the ensemble. Specifically, the aggregation of votes can be halted when the confidence in the final prediction is sufficiently high. If one assumes a uniform prior for the distribution of possible votes, the analysis is shown to be equivalent to a previous one based on Dirichlet distributions. The advantage of the current approach is that prior knowledge on the possible vote outcomes can be readily incorporated in a Bayesian framework. We show how incorporating this type of problem-specific knowledge into the statistical analysis of majority voting leads to faster classification by the ensemble and allows us to estimate the expected average speed-up beforehand.

1 Introduction

Combining the outputs of multiple predictors is in many cases of interest a successful strategy to improve the capabilities of artificial intelligence systems, ranging from agent architectures [19] to committee learning [13, 15, 8, 9]. A common approach is to build a collection of individual subsystems and then integrate their outputs into a final decision by means of a voting process. Specifically, in the machine learning literature, there is extensive empirical evidence on the improvements in generalization capacity that can be obtained using ensembles of learners [7, 11]. However, one of the drawbacks of these types of systems is the linear memory and time costs incurred in the computation of the final ensemble prediction by combination of the individual predictions. There are various strategies that alleviate these shortcomings. These techniques are grouped into static (or off-line) and dynamic (or online). In static pruning techniques, only a subset of complementary predictors from the original ensemble is kept [16, 21, 6]. By contrast, in dynamic pruning, the whole ensemble is retained. The prediction of the class label of a particular instance is accelerated by halting the sequential querying process when it is unlikely that the remaining (unknown) votes would change the output prediction [10, 20, 14, 12, 2, 3, 17]. These techniques are online in the sense that, as new individual predictions become known, the algorithm dynamically updates the estimated probability of having a stable prediction, i.e., a prediction that coincides with that of the complete ensemble. This is the basis of the Statistical Instance-Based Algorithm (SIBA) proposed in [14]. In a similar
approach, albeit with a different objective, Reyzin proposes to randomly sample hypotheses from the original AdaBoost ensemble. The goal is to minimize the number of features that are used for prediction, with a limited loss of accuracy [18]. This feature-efficient prediction is beneficial when access to the features of a new instance at test time is costly (e.g., in some medical problems). A different approach is followed in [3]. In this work, a policy is learned to decide which classifiers should be queried and which discarded in the prediction of the class label of a given instance.

The dynamic ensemble pruning method proposed in this work is closely related to SIBA [14]. In SIBA, the members of a committee are queried sequentially. At each step in the querying process, the votes recorded are used to estimate the probability that the majority decision of the classifiers queried up to that moment coincides with that of the complete ensemble. If this probability exceeds a specified confidence level, α, the voting process is halted. To compute this estimate, the probability that a single predictor outputs a given decision for the particular instance considered is modeled as a random variable. Starting from a uniform prior, Bayes' theorem is used to update the distribution of this variable with the information provided by the actual votes, as they become known. In most of the problems analyzed in [14], the assumption that the prior is uniform leads to conservative estimates of the confidence on the stability of the predictions when only a fraction of the classifiers have been queried. Analyzing the results of those experiments, it is apparent that the actual disagreement percentages between the dynamic decision output and the decision made by the complete committee are significantly lower than the specified target, 1 − α. As a consequence, more queries are made than are actually needed.

The present work has two objectives. First, we propose an intuitive mathematical modeling of the voting process in ensembles of classifiers based on the hypergeometric distribution. Under the assumption that the distribution of possible vote outcomes is uniform, we prove that this derivation is equivalent to the one presented in [14]. However, the vote distribution is, in general, not uniform. Its shape depends on the classification task considered and on the base learning algorithm used to generate the predictors. Second, to take this dependence into account, we propose to approximate this distribution using a non-parametric prior. The use of this problem-specific prior knowledge leads to more accurate estimations of the disagreement rates between the dynamic sub-committee prediction and the complete committee, which are closer to the specified target, 1 − α. In this manner, faster classification can be achieved with minimal loss of accuracy. In addition, the use of priors allows us to estimate quite precisely the expected average number of trees that would be necessary to query.

2 Modeling ensemble voting processes as a classical urn problem

Consider the following process, modeled as a classical urn model. Let us suppose we have marbles of $l$ different colors in an urn. The number of marbles of color $y_k$ in the urn is $T_k$, with $k = 1, \dots, l$. The total number of marbles in the urn is $T = \sum_{k=1}^{l} T_k$. The contents of the urn can therefore be described by a vector $\mathbf{T} = \langle T_1, T_2, \dots, T_l \rangle$. Assume that $t < T$ marbles are extracted from the urn without replacement.
This extraction process can be characterized by a vector $\mathbf{t} = \langle t_1, t_2, \dots, t_l \rangle$, where $t_k$ is the number of marbles of color $y_k$ extracted, with $t = \sum_{k=1}^{l} t_k$. The probability of extracting a color distribution of marbles $\mathbf{t}$, given the initial color distribution of the urn $\mathbf{T}$, is described by the multivariate hypergeometric distribution

$$P(\mathbf{t}|\mathbf{T}) = \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}}{\binom{T}{t}} = \frac{\prod_{i=1}^{l} \binom{T_i}{t_i}}{\binom{T}{t}}. \qquad (1)$$

Consider the case in which the total number of marbles in the urn, $T$, is known but the color distribution, $\mathbf{T}$, is unknown. In this case, the color distribution of the extracted marbles, $\mathbf{t}$, can be used to estimate the contents of the urn by applying Bayes' theorem

$$P(\mathbf{T}|\mathbf{t}) = \frac{P(\mathbf{t}|\mathbf{T})\,P(\mathbf{T})}{P(\mathbf{t})} = \frac{P(\mathbf{t}|\mathbf{T})\,P(\mathbf{T})}{\sum_{\mathbf{T}' \in \Omega_{\mathbf{t}}} P(\mathbf{t}|\mathbf{T}')\,P(\mathbf{T}')} = \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}\, P(\mathbf{T})}{\sum_{\mathbf{T}' \in \Omega_{\mathbf{t}}} \binom{T_1'}{t_1} \cdots \binom{T_l'}{t_l}\, P(\mathbf{T}')} \qquad (2)$$

where $\Omega_{\mathbf{t}}$ is the set of vectors $\mathbf{T}'$ such that $T_i' \geq t_i \;\forall i$ and $\sum_{i=1}^{l} T_i' = T$. This problem is equivalent to the voting process in an ensemble of classifiers: suppose we want to predict the class label of an instance by combining the individual predictions of the ensemble classifiers (marbles). Assuming that the individual predictions are deterministic, the class (color) that each classifier (marble) would output if queried is fixed, but unknown before the query. Therefore, for each instance considered we have a different "bag of colored marbles" with an unknown class distribution. After a partial count of votes of the ensemble is known, Eq. 2 provides an estimate of the distribution of votes for the complete ensemble. This estimate can be used to compute the probability that the decision obtained using only a partial tally of votes, $\mathbf{t}$, of size $t < T$, and the final decision using all $T$ votes coincide

$$P^*(\mathbf{t}, T) = \sum_{\mathbf{T} \in \mathcal{T}_{\mathbf{t}}} \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}\, P(\mathbf{T})}{\sum_{\mathbf{T}' \in \Omega_{\mathbf{t}}} \binom{T_1'}{t_1} \cdots \binom{T_l'}{t_l}\, P(\mathbf{T}')}, \qquad (3)$$

where $\mathcal{T}_{\mathbf{t}}$ is the set of vectors of votes for the complete ensemble $\mathbf{T} = \{T_1, T_2, \dots, T_l\}$ such that the class predicted by the subensemble of size $t$ and the class predicted by the complete committee coincide, with $T_i \geq t_i$ and $\sum_{i=1}^{l} T_i = T$. If $P^*(\mathbf{t}, T) = 1$, then the classification given by the partial ensemble and the full ensemble coincide. This case happens when the difference between the number of votes for the first and second class in $\mathbf{t}$ is greater than the number of remaining votes in the urn. In such a case, the voting process can be halted with full confidence that the decision of the partial ensemble will not change when the predictions of the remaining classifiers are considered. In addition, if it is acceptable that, with a small probability $1 - \alpha$, the prediction of the partially polled ensemble and that of the complete ensemble disagree, then the voting process can be stopped when $P^*(\mathbf{t}, T)$ exceeds the specified confidence level $\alpha$. The final classification would be given as the combined decisions of the classifiers that have been polled up to that point only.

2.1 Uniform prior

Assuming a uniform prior for the distribution of possible $\mathbf{T}$ vectors, $P(\mathbf{T}) = 1/\|\mathbf{T}\|$, where $\|\mathbf{T}\|$ stands for the number of possible $\mathbf{T}$ vectors, this derivation is equivalent to the one presented in [14]. That formulation assumes that the base classifiers of the ensemble are independent realizations from a pool of all possible classifiers given the training dataset. Assuming that an unlimited number of realizations can be performed, the distribution of class votes in the ensemble converges to a Dirichlet distribution in the limit of infinite ensemble size.
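As a rough illustration (our sketch, not the authors' code), Eq. 3 can be evaluated by direct enumeration in the binary case:

```python
from math import comb

def p_stable(t1, t2, T, prior=None):
    """Eq. 3 for a binary problem: probability that the majority after
    observing t1 + t2 votes matches the majority of the full ensemble of
    size T (assumes no ties, e.g. odd T). prior[T1] is P(T1 votes for
    class 1); uniform over the T + 1 possible splits by default."""
    if prior is None:
        prior = [1.0 / (T + 1)] * (T + 1)
    # Posterior weight of each admissible full-ensemble split (Eq. 2).
    weights = {T1: comb(T1, t1) * comb(T - T1, t2) * prior[T1]
               for T1 in range(t1, T - t2 + 1)}
    z = sum(weights.values())
    partial = 1 if t1 > t2 else 2            # current partial majority
    agree = sum(w for T1, w in weights.items()
                if (1 if T1 > T - T1 else 2) == partial)
    return agree / z

print(p_stable(1, 7, T=101))  # confidence that class 2 also wins the full vote
```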
Then, assuming a partial tally of $t$ votes, the probability that the ensemble's decision will change if the predictions of the remaining $T - t$ classifiers are considered can be estimated. In order to prove the equivalence between both formulations, we first need to introduce three results, presented in the theorem and propositions below.

Theorem (Chu-Vandermonde identity). Let $s, t, r \in \mathbb{N}$. Then
$$\binom{s+t}{r} = \sum_{k=0}^{r} \binom{s}{k} \binom{t}{r-k}. \qquad (4)$$

Proposition 1 (Upper negation). Let $r \in \mathbb{C}$ and $k \in \mathbb{Z}$. Then
$$\binom{r}{k} = (-1)^k \binom{k-r-1}{k}. \qquad (5)$$

The previous theorem and proposition are used in the following proposition, which is the key to prove the equivalence between the two formulations:

Proposition 2. Let $n_1$ and $n_2$ be positive integers such that $n_1 + n_2 = n$ and $n \leq N$. Then
$$\sum_{i=n_1}^{N-n_2} \binom{i}{n_1} \binom{N-i}{n_2} = \binom{N+1}{N-n}. \qquad (6)$$

Proof. First the symmetry property of the binomial (i.e., $\binom{n}{k} = \binom{n}{n-k}$) is used to bring down the indices:
$$\sum_{i=n_1}^{N-n_2} \binom{i}{n_1} \binom{N-i}{n_2} = \sum_{i=n_1}^{N-n_2} \binom{i}{i-n_1} \binom{N-i}{N-i-n_2}.$$
The upper indices are removed by applying the upper-negation property of Proposition 1:
$$\sum_{i=n_1}^{N-n_2} (-1)^{i-n_1} \binom{-n_1-1}{i-n_1} (-1)^{N-i-n_2} \binom{-n_2-1}{N-i-n_2}.$$
Now the Chu-Vandermonde identity can be applied with $r = N - n_1 - n_2$ and $k = i - n_1$:
$$\sum_{i=n_1}^{N-n_2} (-1)^{i-n_1} \binom{-n_1-1}{i-n_1} (-1)^{N-i-n_2} \binom{-n_2-1}{N-i-n_2} = (-1)^{N-n} \binom{-n-2}{N-n}.$$
Finally the upper negation is applied again:
$$(-1)^{N-n} \binom{-n-2}{N-n} = \binom{N+1}{N-n}.$$

Proposition 3. Following the hypergeometric reformulation given by Equation 2 and assuming that $P(\mathbf{T})$ follows a uniform distribution $1/\|\mathbf{T}\|$, where $\|\mathbf{T}\|$ stands for the number of possible $\mathbf{T}$ vectors, then
$$P(\mathbf{T}|\mathbf{t}) = \frac{(T-t)!}{(t+l)_{T-t}} \prod_{i=1}^{l} \frac{(t_i+1)_{T_i-t_i}}{(T_i-t_i)!}$$
where $(x)_n = x(x+1)\cdots(x+n-1)$ is the Pochhammer symbol. This formulation is equivalent to the one proposed in [14].

Proof. Equation 2 can be simplified by taking into account the uniform prior $P(\mathbf{T}) = 1/\|\mathbf{T}\|$:
$$P(\mathbf{T}|\mathbf{t}) = \frac{P(\mathbf{t}|\mathbf{T})\,P(\mathbf{T})}{P(\mathbf{t})} = \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}}{\sum_{\mathbf{T}' \in \Omega_{\mathbf{t}}} \binom{T_1'}{t_1} \cdots \binom{T_l'}{t_l}}. \qquad (7)$$
The index set of the summation, $\Omega_{\mathbf{t}}$, is the set of vectors $\mathbf{T}'$ such that $T_i' \geq t_i \;\forall i$ and $\sum_{i=1}^{l} T_i' = T$. For $l$ classes the sum can be rewritten as
$$P(\mathbf{T}|\mathbf{t}) = \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}}{\sum_{T_1'=t_1}^{\hat{T}_1} \sum_{T_2'=t_2}^{\hat{T}_2} \cdots \sum_{T_{l-1}'=t_{l-1}}^{\hat{T}_{l-1}} \binom{T_1'}{t_1} \cdots \binom{T_l'}{t_l}} \qquad (8)$$
where $\hat{T}_k$, for $k = 1, \dots, l-1$, are the maximum values of $T_k'$ in the summations. Note that the summation over $T_l'$ is unnecessary, since the value of $T_l'$ becomes fixed once the values of $T_1', \dots, T_{l-1}'$ are fixed, because $\sum_{i=1}^{l} T_i' = T$. In this sense, the values of $\hat{T}_k$ depend on $T_i'$ for $i < k$ as $\hat{T}_k = T - t + t_k - \sum_{i=1}^{k-1} (T_i' - t_i)$, $k = 1, \dots, l-1$. The summations in the denominator of Eq. 8 can be rearranged as
$$\sum_{T_1'=t_1}^{\hat{T}_1} \binom{T_1'}{t_1} \sum_{T_2'=t_2}^{\hat{T}_2} \binom{T_2'}{t_2} \cdots \sum_{T_{l-1}'=t_{l-1}}^{\hat{T}_{l-1}} \binom{T_{l-1}'}{t_{l-1}} \binom{T_l'}{t_l}.$$
Proposition 2 (Eq. 6) can be used, together with $N = T - \sum_{i=1}^{l-2} T_i'$, to express the innermost summation (over $T_{l-1}'$) in closed form:
$$\sum_{T_{l-1}'=t_{l-1}}^{\hat{T}_{l-1}} \binom{T_{l-1}'}{t_{l-1}} \binom{T - \sum_{i=1}^{l-2} T_i' - T_{l-1}'}{t_l} = \binom{T - \sum_{i=1}^{l-2} T_i' + 1}{T - \sum_{i=1}^{l-2} T_i' - t_{l-1} - t_l} = \binom{T - \sum_{i=1}^{l-2} T_i' + 1}{t_{l-1} + t_l + 1},$$
where the symmetry property of the binomial has been used in the last step. The subsequent summations are carried out in the same manner. The summation over $T_k'$ requires the application of Eq. 6 with $N = T - \sum_{i=1}^{k-1} T_i' + (l-k-1)$, $n_1 = t_k$ and $n_2 = \sum_{i=k+1}^{l} t_i + (l-k-1)$. Carrying out all the summations yields
$$\sum_{T_1'=t_1}^{\hat{T}_1} \binom{T_1'}{t_1} \cdots \sum_{T_{l-2}'=t_{l-2}}^{\hat{T}_{l-2}} \binom{T_{l-2}'}{t_{l-2}} \binom{T - \sum_{i=1}^{l-2} T_i' + 1}{t_{l-1} + t_l + 1} = \cdots = \binom{T+l-1}{t+l-1}.$$
Employing this result in Eq. 8, one obtains
$$P(\mathbf{T}|\mathbf{t}) = \frac{\binom{T_1}{t_1} \cdots \binom{T_l}{t_l}}{\binom{T+l-1}{t+l-1}} = \frac{\frac{T_1!}{t_1!(T_1-t_1)!} \cdots \frac{T_l!}{t_l!(T_l-t_l)!}}{\frac{(T+l-1)!}{(t+l-1)!(T-t)!}} = \frac{(T-t)!}{(t+l)_{T-t}} \prod_{i=1}^{l} \frac{(t_i+1)_{T_i-t_i}}{(T_i-t_i)!}.$$
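Proposition 3 can be checked numerically; the following sketch (ours, with a small illustrative example) compares the closed form against brute-force normalization of Eq. 7:

```python
from math import comb, factorial, prod
from itertools import product

def pochhammer(x, n):
    """Rising factorial (x)_n = x (x + 1) ... (x + n - 1)."""
    out = 1
    for k in range(n):
        out *= x + k
    return out

def posterior_closed_form(T_vec, t_vec):
    """P(T | t) under a uniform prior, Proposition 3."""
    T, t, l = sum(T_vec), sum(t_vec), len(T_vec)
    p = factorial(T - t) / pochhammer(t + l, T - t)
    for Ti, ti in zip(T_vec, t_vec):
        p *= pochhammer(ti + 1, Ti - ti) / factorial(Ti - ti)
    return p

def posterior_enumeration(T_vec, t_vec):
    """P(T | t) by direct normalization over all admissible vote vectors."""
    T, l = sum(T_vec), len(T_vec)
    def w(Tv):
        if any(Ti < ti for Ti, ti in zip(Tv, t_vec)):
            return 0
        return prod(comb(Ti, ti) for Ti, ti in zip(Tv, t_vec))
    grid = [Tv + (T - sum(Tv),) for Tv in product(range(T + 1), repeat=l - 1)
            if sum(Tv) <= T]
    return w(tuple(T_vec)) / sum(w(Tv) for Tv in grid)

print(posterior_closed_form((6, 3, 2), (2, 1, 1)),
      posterior_enumeration((6, 3, 2), (2, 1, 1)))   # the two values agree
```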
2.2 Non-uniform prior

The distribution $P(\mathbf{T})$ can be modeled using a non-parametric, non-uniform prior. The values of this prior can be obtained from the training data by some form of validation, e.g., out-of-bag or cross validation. Out-of-bag validation is faster because it does not require multiple generations of the ensemble. Therefore, it will be the validation method used in our implementation of the method. To compute the out-of-bag error, each training instance, $x_n$, is classified by the ensemble predictors that do not have that particular instance in their training set. Let $\hat{T}^n = \hat{T}_1^n + \dots + \hat{T}_l^n$ be the number of such classifiers, where $\hat{T}_i^n$ is the number of out-of-bag votes for class $i$, with $i = 1, \dots, l$, assigned to instance $x_n$. The number of votes for each class for an ensemble of size $T$ is estimated as $T_i^n \approx \mathrm{round}(T \hat{T}_i^n / \hat{T}^n)$. To mitigate the influence of the random fluctuations that appear because of the finite size of the training set, and to avoid spurious numeric artifacts, the prior is subsequently smoothed using a sliding window of size 5 over the vote distribution.

As shown in Section 2, the response time of the ensemble can be reduced by using Eq. 3, if we allow that a small fraction, $1 - \alpha$, of the predictions given by ensembles of size $t$ and $T$ do not coincide. Assuming this tolerance, when $P^*(\mathbf{t}, T) > \alpha$, the voting process can be halted and the ensemble will output the decision given by the $t \leq T$ queried classifiers. However, the computation of Eq. 3 is costly and should be performed off-line. In the SIBA formulation, a lookup table is used, indexed by the number of votes of the minority class (for binary problems), whose values are the minimum number of votes of the majority class such that $P^*(\mathbf{t}, T) > \alpha$. Using a precomputed lookup table to halt the voting process does not entail a significant overhead during classification: a single lookup operation in the table is needed for each vote. The consequence of using a uniform prior is that all classes are considered equivalent. Hence, it is sufficient to compute one lookup table and use the minority class for indexing. When prior knowledge is taken into account, the probability $P^*(t_1 = n, t_2 = m, T)$ is not necessarily equal to $P^*(t_1 = m, t_2 = n, T)$ for $n \neq m$. Therefore, a different lookup table per class will be necessary. In addition, it is necessary to compute a different set of tables for each dataset. In the original formulation, the lookup table values depend only on $T$ and $\alpha$; therefore, they are independent of the particular classification problem considered. In our case, the prior distribution is estimated from the training data and is hence problem dependent. However, the querying process is similar to SIBA. For instance, if we have 1 vote for class 1 and 7 for class 2, one determines whether the value in position 1 (the minority class at this moment) of the lookup table for class 1 is greater than or equal to 7. If it is, the querying process stops. As a side effect, for the experimental comparison, it is necessary to recompute the lookup tables for each realization of the data.
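A minimal sketch of the resulting halting procedure (ours; it reuses p_stable from the earlier sketch and, for simplicity, builds a single symmetric table, whereas with a non-uniform prior one table per class would be needed, as discussed above):

```python
def build_lookup_table(T, alpha, prior=None):
    """For each minority count m, the smallest majority count M that makes
    the partial decision stable with confidence alpha (Eq. 3), or None."""
    table = {}
    for m in range(T // 2 + 1):
        table[m] = next((M for M in range(m + 1, T - m + 1)
                         if p_stable(M, m, T, prior) > alpha), None)
    return table

def classify_with_halting(votes, T, table):
    """Query the classifiers one at a time; stop as soon as the partial
    majority is stable. Returns (predicted class, number of queries)."""
    counts = [0, 0]
    for k, v in enumerate(votes, start=1):    # each v is 0 or 1
        counts[v] += 1
        needed = table.get(min(counts))
        if needed is not None and max(counts) >= needed:
            return counts.index(max(counts)), k
    return counts.index(max(counts)), T

table = build_lookup_table(T=101, alpha=0.99)
```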
Notwithstanding, in a real setting these tables need to be computed only once. This can be done offline. Therefore, the speed improvements in the classification phase are independent of the size of the training set. The lookup table and the estimated non-parametric prior can also be used to estimate the average number of classifiers that are expected to be queried during test. This estimation can be made using Monte Carlo simulation. To this end one would perform the following experiment repeatedly and compute the average number of queries: extract a random vector $\mathbf{T}$ from the prior distribution; generate a vector of votes of size $T$ that contains exactly $T_i$ votes for class $i$, with $i = 1, \dots, l$; finally, query a random permutation of this vector of votes until the process can be halted as given by the lookup table, and record the number of queries. A sketch of this procedure is given below.

3 Experiments

In this section we present the results of an extensive empirical evaluation of the dynamic ensemble pruning method described in the previous section. The experiments are performed on a series of benchmark classification problems from the UCI Repository [1] and synthetic data [4], using Random Forests [5]. The code is available at: https://github.com/vsoto/majority-ibp-prior. The protocol for the experiments is as follows: for each problem, 100 partitions are created by 10 × 10-fold cross-validation for real datasets and by random sampling in the synthetic datasets. All the classification tasks considered are binary, except for New-thyroid, Waveform and Wine, which have three classes. For each partition, the following steps are carried out: (i) a Random Forest ensemble of size T = 101 is built; (ii) we compute the generalization error rate of the complete ensemble on the test set and record the mean number of trees that are queried to determine the final prediction. Note that this number need not be T: the voting process can be halted when the remaining votes (i.e. the predictions of classifiers that have not been queried up to that point) cannot modify the partial ensemble decision. This is the case when the number of remaining votes is below the difference between the majority class and the second most voted class; (iii) the SIBA algorithm [14] is applied to dynamically select the number of classifiers that are needed for each instance in the test set to achieve a level of confidence in the prediction above α = 0.99. We use SIBA as the benchmark for comparison since in previous studies it has been shown to provide the best overall results, especially for T < 500 [2]; (iv) the process is repeated using the proposed method with non-uniform priors for the class vote distribution, with the same confidence threshold, α = 0.99. The prior distribution P(T) is estimated on the training set using out-of-bag data. This prior is also used to estimate the expected number of trees to be queried in the testing phase. In addition, for steps (iii) and (iv) we compute the test error rate, the average number of queried trees, and the disagreement rates between the predictions of the partially queried ensembles and the complete ones.
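The Monte Carlo estimation mentioned above (reported as MC Estim in Table 2) could be sketched as follows, reusing classify_with_halting from the earlier sketch; the number of runs and the random seed are arbitrary choices of ours:

```python
import random

def expected_queries_mc(prior, T, table, n_runs=10000, seed=0):
    """Monte Carlo estimate of the mean number of queried classifiers,
    for a binary problem. prior[T1] is the estimated out-of-bag
    probability of the full-ensemble split with T1 votes for class 1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        T1 = rng.choices(range(T + 1), weights=prior)[0]  # draw a full split
        votes = [1] * T1 + [0] * (T - T1)
        rng.shuffle(votes)                                # random query order
        _, used = classify_with_halting(votes, T, table)
        total += used
    return total / n_runs
```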
Table 1: Error rates (left) and disagreement % (right). The statistically significant differences, using paired t-tests at a significance level α = 0.05, are highlighted in boldface.

                               Error rates                            Disagreement %
Problem          RF           SIBA         HYPER            SIBA        HYPER
Australian       13.00±3.7    13.09±3.7    13.25±3.8        0.3±0.6     0.9±1.1
Breast            3.22±2.1     3.23±2.1     3.76±2.3        0.1±0.4     1.0±1.1
Diabetes         24.34±4.2    24.25±4.1    24.23±4.0        0.6±0.9     0.8±1.0
Echocardiogram   22.18±14.3   22.05±14.7   22.18±14.1       0.7±3.1     1.4±4.6
German           23.43±3.5    23.65±3.3    23.62±3.3        0.8±0.8     0.8±0.9
Heart            18.30±6.9    18.37±7.0    18.37±7.2        0.8±1.8     1.0±2.1
Horse-colic      15.47±5.6    15.44±5.4    15.44±5.4        0.4±0.9     0.7±1.3
Ionosphere        6.44±4.1     6.44±4.1     6.52±3.9        0.1±0.6     0.7±1.3
Labor             6.33±8.9     6.17±8.8     6.43±9.1        0.2±1.7     1.2±4.5
Liver            27.10±6.7    27.09±7.0    27.01±6.9        1.0±1.7     0.9±1.5
Mushroom          0.00±0.0     0.00±0.0     0.08±0.2        0.0±0.0     0.1±0.2
New-thyroid       4.29±4.0     4.38±4.0     4.66±4.2        0.1±0.7     0.7±2.0
Ringnorm          7.60±1.3     7.72±1.2     7.82±1.2        0.5±0.2     0.8±0.3
Sonar            16.25±8.7    16.45±8.7    16.45±8.8        0.9±2.0     0.8±1.9
Spam              4.59±1.5     4.63±1.5     4.86±1.4        0.1±0.2     0.7±0.4
Threenorm        17.85±1.1    18.04±1.1    17.97±1.1        1.0±0.2     0.8±0.2
Tic-tac-toe       1.05±1.1     1.16±1.1     1.72±1.5        0.1±0.4     0.7±1.0
Twonorm           4.66±0.6     4.77±0.6     4.90±0.6        0.4±0.1     0.7±0.2
Votes             4.05±2.9     4.12±2.9     4.30±2.9        0.1±0.4     1.0±1.8
Waveform         17.30±0.9    17.36±0.8    17.45±0.8        0.6±0.1     1.0±0.3
Wine              1.69±2.8     1.74±2.8     2.30±3.5        0.1±0.6     1.1±2.5

In Table 1, we compare the error rates of Random Forest (RF) and of the dynamically pruned ensembles using the halting rule derived from assuming uniform priors (SIBA) and using non-uniform priors (HYPER), as well as the disagreement rates. The values displayed are averages over 100 realizations of the datasets. The standard deviation is given after the ± symbol.

[Figure 1 appears here: two pairs of plots, for Sonar (left) and Votes (right). Each pair shows the vote distribution P(t1) for t1 = 0 to 100, and the disagreement rate as a function of 1 − α for HYPER, SIBA, and FIXED.]
Figure 1: Vote distribution, P(T), and disagreement rates for Sonar (left) and Votes (right).

From Table 1, one observes that the mean error rates of the pruned ensembles using SIBA and HYPER are only slightly worse than the rates obtained by the complete ensemble (RF). These differences should be expected, since we are allowing a small disagreement of 1 − α = 1% between the decisions of the partial and the complete ensemble. In any case, the differences in generalization error can be made arbitrarily small by increasing α. By design, the disagreement rates are expected to be below, but close to, 1%. From Table 1, one observes that the disagreement percentages of the proposed method (HYPER) are closer to the specified threshold (1 − α = 1%) than those of SIBA, except for Liver, Sonar and Threenorm, where the differences are small. In these problems (and in general in the problems where SIBA obtains disagreement rates closer to 1 − α), the distribution of T is closer to a uniform distribution (see Figure 1, left histogram). In consequence, the assumption of a uniform prior made by SIBA is closer to the real one. However, when P(T) differs from the uniform distribution (see for instance Votes in Figure 1, right histogram), the results of SIBA are rather different from the expected disagreement rates.

Table 2: Number of queried trees and speed-up rate with respect to the full ensemble of 101 trees. The statistically significant differences between SIBA and HYPER, using paired t-tests at a significance level α = 0.05, are highlighted in boldface.
                            Number of queried trees                    Speed-up rate
Problem          RF*        SIBA       HYPER      MC Estim       RF*    SIBA    HYPER
Australian       62.2±1.4   16.1±2.1   12.8±2.3   12.9±0.9       1.6     6.3     7.9
Breast           54.2±0.9    8.9±1.4    4.0±1.0    4.0±0.4       1.9    11.3    25.3
Diabetes         68.8±1.8   24.9±3.2   24.0±3.2   23.8±1.1       1.5     4.1     4.2
Echocardiogram   68.0±4.6   22.6±8.2   20.0±8.0   21.6±3.2       1.5     4.5     5.1
German           71.8±1.3   28.4±2.8   27.7±2.9   30.1±1.0       1.4     3.6     3.6
Heart            67.2±2.5   22.5±4.2   20.9±4.2   20.7±1.7       1.5     4.5     4.8
Horse-colic      66.2±2.1   20.2±3.5   17.5±3.7   18.6±1.5       1.5     5.0     5.8
Ionosphere       57.9±1.5   11.9±2.3    7.8±2.1    7.8±0.6       1.7     8.5    12.9
Labor            61.6±4.0   14.1±6.0    9.7±5.3   10.2±2.0       1.6     7.2    10.4
Liver            74.5±2.3   31.8±4.5   31.7±4.5   31.6±2.0       1.4     3.2     3.2
Mushroom         51.0±0.0    6.0±0.0    1.0±0.0    1.0±0.0       2.0    16.8   101.0
New-thyroid      55.2±1.8   10.7±2.6    6.0±2.3    6.2±1.4       1.8     9.4    16.8
Ringnorm         68.6±0.8   22.9±1.1   20.4±1.5   19.7±2.3       1.5     4.4     5.0
Sonar            73.9±3.0   32.1±6.6   32.6±6.8   31.8±2.4       1.4     3.1     3.1
Spam             57.1±0.3   11.1±0.5    7.2±0.6    7.1±0.5       1.8     9.1    14.0
Threenorm        76.6±0.5   34.8±1.0   35.8±1.6   33.4±2.5       1.3     2.9     2.8
Tic-tac-toe      60.7±0.9   12.8±1.4    7.8±1.2    8.6±0.7       1.7     7.9    12.9
Twonorm          67.2±0.2   21.0±0.5   18.4±0.9   18.8±1.7       1.5     4.8     5.5
Votes            54.5±1.2    8.8±1.8    4.1±1.4    4.0±0.7       1.9    11.5    24.6
Waveform         72.3±0.7   29.3±1.1   27.8±1.7   28.6±2.8       1.4     3.4     3.6
Wine             57.3±2.1   11.4±2.7    5.8±1.8    6.7±1.4       1.8     8.9    17.5

In order to analyze this aspect in more detail, we have computed the disagreement rates for different values of α (0.999, 0.995, 0.99, 0.95). In Figure 1 the relation between the target 1 − α and the actual disagreement rate is presented. A diagonal solid line marks the expected upper limit for the disagreement. The results for SIBA, HYPER, and for the case of using a fixed number of trees for all instances (FIXED), equal to the average number of trees used by HYPER in those tasks,
In addition, using only training data, the Monte Carlo estimations of the average number of trees are very precise. The largest average difference between this estimation and HYPER is 2.4 trees for German and Threenorm. The speed-up rate of HYPER with respect to the full ensemble is remarkable: from 2.8 times faster for Threenorm to 101 times faster in Mushroom. This dataset can be used to illustrate the benefits of using the prior distribution. For this problem, most classifiers agree in their predictions. HYPER takes advantage of this prior knowledge and queries only one classifier to cast the final decision. In this problem, the chances that the prediction of a single classifier, and the prediction of the complete ensemble are different, are below 1%. Similar behavior (but not as extreme) is observed in Breast and Votes. 4 Conclusions In this work, we present an intuitive, rigorous mathematical description of the voting process in an ensemble of classifiers: For a given an instance, the process is equivalent to extracting marbles (the individual classifiers), without replacement, from a bag that contains a known number of marbles, but whose color (class label prediction) distribution is unknown. In addition, we show that for the specific case of a uniform prior distribution of class votes this process is equivalent to the one developed in [14]. In the current description, which does not assume a uniform distribution prior for the class votes, the hypergeometric distribution plays a central role. The results of this statistical description are then used to design a dynamic ensemble pruning method, with the goal of speeding up predictions in the test phase. For a given instance, it is possible to compute the probability that the the partial decision made on the basis of the known votes (i.e., the class label predictions of the subset of classifiers that have been queried) and the final ensemble decision coincide. If this probability is above a specified threshold, sufficiently close to 1, a reliable estimate of the class label that the complete ensemble would predict can be made on the basis of the known votes. The effectiveness of this dynamic ensemble pruning method is illustrated using random forests. The prior distribution of class votes is estimated using out-of-bag data. As a result of incorporating this problem-specific knowledge in the statistical analysis of the voting process, the differences between the predictions of the dynamically pruned ensemble and the complete ensemble are closer to the specified threshold than when a uniform distribution is assumed, as in SIBA [14]. In the empirical evaluation performed, this dynamic ensemble pruning algorithm consistently yields improvements of classification speed over SIBA without a significant deterioration of accuracy. Finally, the statistical model proposed is used to provide an accurate estimate of the average number of individual classifier predictions that are needed to reach a stable ensemble prediction. Acknowledgments The authors acknowledge financial support from the Comunidad de Madrid (project CASI-CAMCM S2013/ICE-2845), and from the Spanish Ministerio de Econom?a y Competitividad (projects TIN2013-42351-P and TIN2015-70308-REDT). 8 References [1] A. Asuncion and D. Newman. UCI machine learning repository, 2007. [2] J. Basilico, M. Munson, T. Kolda, K. Dixon, and W. Kegelmeyer. Comet: A recipe for learning and using large ensembles on massive data. 
In Proceedings - IEEE International Conference on Data Mining, ICDM, pages 41-50, 2011.
[3] D. Benbouzid, R. Busa-Fekete, and B. Kégl. Fast classification using sparse decision DAGs. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, volume 1, pages 951-958, 2012.
[4] L. Breiman. Bias, variance, and arcing classifiers. Technical Report 460, Statistics Department, University of California, 1996.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[6] R. Caruana and A. Niculescu-Mizil. Ensemble selection from libraries of models. In Proc. of the 21st International Conference on Machine Learning (ICML'04), 2004.
[7] R. Caruana and A. Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proc. of the 23rd International Conference on Machine Learning, pages 161-168, New York, NY, USA, 2006. ACM Press.
[8] T. G. Dietterich. Ensemble methods in machine learning. In Multiple Classifier Systems: First International Workshop, pages 1-15, 2000.
[9] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2):139-157, 2000.
[10] W. Fan, F. Chu, H. Wang, and P. S. Yu. Pruning and dynamic scheduling of cost-sensitive ensembles. In Proc. of the 18th National Conference on Artificial Intelligence, pages 146-151. American Association for Artificial Intelligence, 2002.
[11] M. Fernández-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15:3133-3181, 2014.
[12] T. Gao and D. Koller. Active classification based on value of classifier. In NIPS, 2011.
[13] L. K. Hansen and P. Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:993-1001, 1990.
[14] D. Hernández-Lobato, G. Martínez-Muñoz, and A. Suárez. Statistical instance-based pruning in ensembles of independent classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):364-369, 2009.
[15] T. K. Ho, J. J. Hull, and S. N. Srihari. Decision combination in multiple classifier systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1):66-75, 1994.
[16] D. D. Margineantu and T. G. Dietterich. Pruning adaptive boosting. In Proc. of the 14th International Conference on Machine Learning, pages 211-218. Morgan Kaufmann, 1997.
[17] F. Markatopoulou, G. Tsoumakas, and I. Vlahavas. Dynamic ensemble pruning based on multi-label classification. Neurocomputing, 150(PB):501-512, 2015.
[18] L. Reyzin. Boosting on a budget: Sampling for feature-efficient prediction. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), ICML '11, pages 529-536, New York, NY, USA, June 2011. ACM.
[19] R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In S. Tumer, Yolum and Stone, editors, Proc. of 10th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2011), pages 761-768, Taipei, Taiwan, 2011.
[20] H. Wang, W. Fan, P. S. Yu, and J. Han. Mining concept-drifting data streams using ensemble classifiers. In KDD '03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 226-235, New York, NY, USA, 2003. ACM Press.
[21] Y. Zhang, S. Burer, and W. N.
Street. Ensemble pruning via semi-definite programming. Journal of Machine Learning Research, 7:1315-1338, 2006.
Dense Associative Memory for Pattern Recognition

Dmitry Krotov
Simons Center for Systems Biology, Institute for Advanced Study, Princeton, USA
krotov@ias.edu

John J. Hopfield
Princeton Neuroscience Institute, Princeton University, Princeton, USA
hopfield@princeton.edu

Abstract

A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions, the higher rectified polynomials which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set.

1 Introduction

Pattern recognition and models of associative memory [1] are closely related. Consider image classification as an example of pattern recognition. In this problem, the network is presented with an image and the task is to label the image. In the case of associative memory the network stores a set of memory vectors. In a typical query the network is presented with an incomplete pattern resembling, but not identical to, one of the stored memories and the task is to recover the full memory. Pixel intensities of the image can be combined together with the label of that image into one vector [2], which will serve as a memory for the associative memory. Then the image itself can be thought of as a partial memory cue. The task of identifying an appropriate label is a subpart of the associative memory reconstruction. There is a limitation in using this idea to do pattern recognition. The standard model of associative memory works well in the limit when the number of stored patterns is much smaller than the number of neurons [1], or equivalently the number of pixels in an image. In order to do pattern recognition with small error rate one would need to store many more memories than the typical number of pixels in the presented images. This is a serious problem. It can be solved by modifying the standard energy function of associative memory, quadratic in interactions between the neurons, by including in it higher order interactions. By properly designing the energy function (or Hamiltonian) for these models with higher order interactions one can store and reliably retrieve many more memories than the number of neurons in the network. Deep neural networks have proven to be useful for a broad range of problems in machine learning including image classification, speech recognition, object detection, etc. These models are composed of several layers of neurons, so that the output of one layer serves as the input to the next layer.
Each neuron calculates a weighted sum of the inputs and passes the result through a non-linear activation function. Traditionally, deep neural networks used activation functions such as hyperbolic tangents or logistics. Learning the weights in such networks, using a backpropagation algorithm, faced serious problems in the 1980s and 1990s. These issues were largely resolved by introducing unsupervised pre-training, which made it possible to initialize the weights in such a way that the subsequent backpropagation could only gently move boundaries between the classes without destroying the feature detectors [3, 4]. More recently, it was realized that the use of rectified linear units (ReLU) instead of the logistic functions speeds up learning and improves generalization [5, 6, 7]. Rectified linear functions are usually interpreted as firing rates of biological neurons. These rates are equal to zero if the input is below a certain threshold and linearly grow with the input if it is above the threshold. To mimic biology the output should be small or zero if the input is below the threshold, but it is much less clear what the behavior of the activation function should be for inputs exceeding the threshold. Should it grow linearly, sub-linearly, or faster than linearly? How does this choice affect the computational properties of the neural network? Are there other functions that would work even better than the rectified linear units? These questions to the best of our knowledge remain open.

This paper examines these questions through the lens of associative memory. We start by discussing a family of models of associative memory with large capacity. These models use higher order (higher than quadratic) interactions between the neurons in the energy function. The associative memory description is then mapped onto a neural network with one hidden layer and an unusual activation function, related to the Hamiltonian. We show that by varying the power of the interaction vertex in the energy function (or equivalently by changing the activation function of the neural network) one can force the model to learn representations of the data either in terms of features or in terms of prototypes.

2 Associative memory with large capacity

The standard model of associative memory [1] uses a system of $N$ binary neurons, with values $\pm 1$. A configuration of all the neurons is denoted by a vector $\sigma_i$. The model stores $K$ memories, denoted by $\xi_i^\mu$, which for the moment are also assumed to be binary. The model is defined by an energy function, which is given by

$$E = -\frac{1}{2} \sum_{i,j=1}^{N} \sigma_i T_{ij} \sigma_j, \qquad T_{ij} = \sum_{\mu=1}^{K} \xi_i^\mu \xi_j^\mu, \qquad (1)$$

and a dynamical update rule that decreases the energy at every update. The basic problem is the following: when presented with a new pattern the network should respond with a stored memory which most closely resembles the input. There has been a large amount of work in the community of statistical physicists investigating the capacity of this model, which is the maximal number of memories that the network can store and reliably retrieve. It has been demonstrated [1, 8, 9] that in the case of random memories this maximal value is of the order of $K^{\max} \approx 0.14 N$. If one tries to store more patterns, several neighboring memories in the configuration space will merge together, producing a ground state of the Hamiltonian (1) which has nothing to do with any of the stored memories.
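The standard model (1) is easy to exercise in a few lines. The following minimal numpy sketch (ours, not the authors' code) builds the Hebbian weights of Eq. 1 and runs energy-decreasing asynchronous updates; with K well below 0.14 N a corrupted memory is recovered:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 10                               # K well below the 0.14 N limit
xi = rng.choice([-1, 1], size=(K, N))        # random binary memories xi[mu, i]
T = xi.T @ xi                                # Hebbian couplings T_ij of Eq. 1
np.fill_diagonal(T, 0)                       # drop self-couplings

def recall(sigma, steps=5):
    """Asynchronous updates; each flip can only decrease the energy of Eq. 1."""
    sigma = sigma.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            sigma[i] = 1 if T[i] @ sigma >= 0 else -1
    return sigma

probe = xi[0] * np.where(rng.random(N) < 0.1, -1, 1)  # corrupt ~10% of the bits
print(np.mean(recall(probe) == xi[0]))                # ~1.0: the memory is recovered
```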
By modifying the Hamiltonian (1) in a way that removes second order correlations between the stored memories, it is possible [10] to improve the capacity to $K^{\max} = N$.

The mathematical reason why the model (1) gets confused when many memories are stored is that several memories produce contributions to the energy which are of the same order. In other words the energy decreases too slowly as the pattern approaches a memory in the configuration space. In order to take care of this problem, consider a modification of the standard energy

$$E = -\sum_{\mu=1}^{K} F\bigl(\xi_i^\mu \sigma_i\bigr) \qquad (2)$$

In this formula $F(x)$ is some smooth function (summation over index $i$ is assumed). The computational capabilities of the model will be illustrated for two cases. First, when $F(x) = x^n$ ($n$ is an integer number), which is referred to as a polynomial energy function. Second, when $F(x)$ is a rectified polynomial energy function

$$F(x) = \begin{cases} x^n, & x \geq 0 \\ 0, & x < 0 \end{cases} \qquad (3)$$
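As a rough illustration (our sketch, not the authors' code), the energy (2) with the rectified polynomial (3) can be written in a few lines; the stored pattern sits in a much deeper energy minimum than a random configuration:

```python
import numpy as np

def F(x, n=3, rectified=True):
    """Interaction function: Eq. 3 when rectified, plain x**n otherwise."""
    return np.where(x > 0.0, x, 0.0) ** n if rectified else x ** n

def energy(sigma, xi, n=3):
    """Dense associative memory energy, Eq. 2: E = -sum_mu F(xi^mu . sigma)."""
    return -F(xi @ sigma, n).sum()

rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(50, 30))      # K = 50 memories, N = 30 neurons
print(energy(xi[0], xi), energy(rng.choice([-1, 1], 30), xi))
```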
How many memories can model (4) store and reliably retrieve? Consider the case of random patterns, so that each element of the memories is equal to $\pm 1$ with equal probability. Imagine that the system is initialized in a state equal to one of the memories (pattern number $\nu$). One can derive a stability criterion, i.e. the upper bound on the number of memories such that the network stays in that initial state. Define the energy difference between the initial state and the state with spin $i$ flipped,

$$\Delta E = \sum_{\mu=1}^{K}\Big(\xi^\mu_i\xi^\nu_i + \sum_{j\ne i}\xi^\mu_j\xi^\nu_j\Big)^n - \sum_{\mu=1}^{K}\Big(-\xi^\mu_i\xi^\nu_i + \sum_{j\ne i}\xi^\mu_j\xi^\nu_j\Big)^n,$$

where the polynomial energy function is used. This quantity has a mean $\langle\Delta E\rangle = N^n - (N-2)^n \approx 2nN^{n-1}$, which comes from the term with $\mu = \nu$, and a variance (in the limit of large $N$)

$$\sigma^2 = \Omega_n (K-1) N^{n-1}, \qquad \Omega_n = 4n^2(2n-3)!!\,.$$

The $i$-th bit becomes unstable when the magnitude of the fluctuation exceeds the energy gap $\langle\Delta E\rangle$ and the sign of the fluctuation is opposite to the sign of the energy gap. Thus the probability that the state of a single neuron is unstable (in the limit when both $N$ and $K$ are large, so that the noise is effectively Gaussian) is equal to

$$P_{\mathrm{error}} = \int_{\langle\Delta E\rangle}^{\infty}\frac{dx}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{x^2}{2\sigma^2}} \approx \sqrt{\frac{(2n-3)!!\,K}{2\pi N^{n-1}}}\;e^{-\frac{N^{n-1}}{2K(2n-3)!!}}.$$

Requiring that this probability is less than a small value, say 0.5%, one can find the upper limit on the number of patterns that the network can store,

$$K^{max} = \alpha_n N^{n-1}, \quad (5)$$

where $\alpha_n$ is a numerical constant, which depends on the (arbitrary) threshold of 0.5%. The case $n = 2$ corresponds to the standard model of associative memory and gives the well known result $K^{max} = 0.14N$. For the perfect recovery of a memory ($P_{\mathrm{error}} < 1/N$) one obtains

$$K^{max}_{\mathrm{no\ errors}} \approx \frac{1}{2(2n-3)!!}\,\frac{N^{n-1}}{\ln(N)}. \quad (6)$$

For higher powers $n$ the capacity rapidly grows with $N$ in a non-linear way, allowing the network to store and reliably retrieve many more patterns than the number of neurons that it has, in accord¹ with [13, 14, 15, 16]. This non-linear scaling relationship between the capacity and the size of the network is the phenomenon that we exploit.

¹The $n$-dependent coefficient in (6) depends on the exact form of the Hamiltonian and the update rule. References [13, 14, 15] do not allow repeated indices in the products over neurons in the energy function, and therefore obtain a different coefficient. In [16] the Hamiltonian coincides with ours, but the update rule is different, which, however, results in exactly the same coefficient as in (6).

We study a family of models of this kind as a function of $n$. At small $n$ many terms contribute to the sum over $\mu$ in (2) approximately equally. In the limit $n \to \infty$ the dominant contribution to the sum comes from a single memory, which has the largest overlap with the input. It turns out that optimal computation occurs in the intermediate range.

3 The case of XOR

The case of XOR is elementary, yet instructive. It is presented here for three reasons. First, it illustrates the construction (2) in this simplest case. Second, it shows that as $n$ increases, the computational capabilities of the network also increase. Third, it provides the simplest example of a situation in which the number of memories is larger than the number of neurons, yet the network works reliably.

The problem is the following: given two inputs $x$ and $y$, produce an output $z$ such that the truth table

    x: -1  -1   1   1
    y: -1   1  -1   1
    z: -1   1   1  -1

is satisfied. We will treat this task as an associative memory problem and will simply embed the four examples of the input-output triplets $(x, y, z)$ in the memory. Therefore the network has $N = 3$ identical units, two of which will be used for the inputs and one for the output, and $K = 4$ memories $\xi^\mu_i$, which are the four columns of the truth table above. Thus, the energy (2) is equal to

$$E_n(x, y, z) = -(-x-y-z)^n - (-x+y+z)^n - (x-y+z)^n - (x+y-z)^n, \quad (7)$$

where the energy function is chosen to be a polynomial of degree $n$. For odd $n$, energy (7) is an odd function of each of its arguments, e.g. $E_n(-x, y, z) = -E_n(x, y, z)$; for even $n$, it is an even function. For $n = 1$ it is equal to zero. Thus, if evaluated on the corners of the cube $x, y, z = \pm 1$, it reduces to

$$E_n(x, y, z) = \begin{cases} 0, & n = 1 \\ -C_n, & n = 2, 4, 6, \dots \\ C_n\, xyz, & n = 3, 5, 7, \dots, \end{cases} \quad (8)$$

where the coefficients $C_n$ denote positive numerical constants. In order to solve the XOR problem one can present to the network an 'incomplete pattern' of inputs $(x, y)$ and let the output $z$ adjust to minimize the energy of the three-spin configuration, while holding the inputs fixed. The network clearly cannot solve this problem for $n = 1$ and $n = 2$, since the energy does not depend on the spin configuration; a numerical check of this completion procedure is sketched below. The case $n = 2$ is the standard model of associative memory.
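A small sketch (ours) of the completion procedure: embed the four triplets as memories and pick the output $z$ that minimizes the energy (7). For $n \le 2$ the energy is flat in $z$, so the tie-broken output is constant and the task fails; for $n = 3$ the completion reproduces XOR.

```python
import numpy as np

# the four rows are the memories xi^mu = (x, y, z) from the truth table
memories = np.array([[-1, -1, -1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]])

def E(state, n):
    # energy (7): E_n = -sum_mu (xi^mu . state)^n
    return -np.sum((memories @ state) ** n)

for n in (1, 2, 3):
    for x in (-1, 1):
        for y in (-1, 1):
            # complete the pattern: choose z minimizing the energy
            z = min((-1, 1), key=lambda z: E(np.array([x, y, z]), n))
            print(f"n={n}: x={x:+d} y={y:+d} -> z={z:+d}")
```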
The $n = 2$ case can also be thought of as a linear perceptron, and the inability to solve this problem reflects the well known statement [17] that linear perceptrons cannot compute XOR without hidden neurons. The case of odd $n \ge 3$ provides an interesting solution. Given two inputs, $x$ and $y$, one can choose the output $z$ that minimizes the energy. This leads to the update rule

$$z = \mathrm{Sign}\big[E_n(x, y, -1) - E_n(x, y, +1)\big] = \mathrm{Sign}\big[-xy\big].$$

Thus, in this simple case the network is capable of solving the problem for higher odd values of $n$, while it cannot do so for $n = 1$ and $n = 2$. In the case of rectified polynomials, a similar construction solves the problem for any $n \ge 2$. The network works well in spite of the fact that $K > N$.

4 An example of a pattern recognition problem, the case of MNIST

The MNIST data set is a collection of handwritten digits, which has 60000 training examples and 10000 test images. The goal is to classify the digits into 10 classes. The visible neurons, one for each pixel, are combined together with 10 classification neurons in one vector that defines the state of the network. The visible part of this vector is treated as an 'incomplete' pattern, and the associative memory is allowed to calculate a completion of that pattern, which is the label of the image.

Dense associative memory (2) is a recurrent network in which every neuron can be updated multiple times. For the purposes of digit classification, however, this model will be used in a very limited capacity, allowing it to perform only one update of the classification neurons. The network is initialized in the state in which the visible units $v_i$ are clamped to the intensities of a given image and the classification neurons are in the off state $x_\alpha = -1$ (see Fig. 1A). The network is allowed to make one update of the classification neurons, while keeping the visible units clamped, to produce the output $c_\alpha$. The update rule is similar to (4), except that the sign is replaced by the continuous function $g(x) = \tanh(x)$:

$$c_\alpha = g\Bigg[\beta\sum_{\mu=1}^{K}\bigg(F\Big(\xi^\mu_\alpha + \sum_{\gamma\ne\alpha}\xi^\mu_\gamma x_\gamma + \sum_{i=1}^{N}\xi^\mu_i v_i\Big) - F\Big(-\xi^\mu_\alpha + \sum_{\gamma\ne\alpha}\xi^\mu_\gamma x_\gamma + \sum_{i=1}^{N}\xi^\mu_i v_i\Big)\bigg)\Bigg], \quad (9)$$

where the parameter $\beta$ regulates the slope of $g(x)$. The proposed digit class is given by the number of the classification neuron producing the maximal output. Throughout this section the rectified polynomials (3) are used as the functions $F$. To learn effective memories for use in pattern classification, an objective function is defined (see Appendix A in the Supplemental) which penalizes the discrepancy between the output $c_\alpha$ and the target output. This objective function is then minimized using a backpropagation algorithm.

Figure 1: (A) The network has $N = 28 \times 28 = 784$ visible neurons and $N_c = 10$ classification neurons. The visible units are clamped to the intensities of the pixels (mapped onto the segment $[-1, 1]$), while the classification neurons are initialized in the state $x_\alpha$ and then updated once to the state $c_\alpha$. (B) Behavior of the error on the test set as training progresses. Each curve corresponds to a different combination of hyperparameters from the optimal window, which was determined on the validation set. The arrows show the first time when the error falls below a 2% threshold (179–312 epochs for $n = 2$, 158–262 epochs for $n = 3$). All models have $K = 2000$ memories (hidden units).
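The one-step update (9) is straightforward to implement. Below is a minimal sketch (ours) with random, untrained memories, so the prediction is meaningless; in the paper the memories are learned by backpropagation. The split of each memory into a visible part and a recognition part, and the value of $\beta$, follow the description above.

```python
import numpy as np

def rep(x, n):
    # rectified polynomial F from (3)
    return np.where(x > 0, x ** n, 0.0)

def classify(v, xi_vis, xi_cls, n=3, beta=0.01):
    # One update (9) of the Nc classification neurons, visible units clamped to v.
    # xi_vis: (K, N) visible parts of the memories; xi_cls: (K, Nc) recognition parts.
    K, Nc = xi_cls.shape
    base = xi_vis @ v - xi_cls.sum(axis=1)   # all x_gamma start in the off state -1
    c = np.empty(Nc)
    for a in range(Nc):
        rest = base + xi_cls[:, a]           # remove neuron a's own -1 contribution
        c[a] = np.tanh(beta * np.sum(rep(xi_cls[:, a] + rest, n)
                                     - rep(-xi_cls[:, a] + rest, n)))
    return np.argmax(c)                      # proposed digit class

rng = np.random.default_rng(0)
xi_vis = rng.uniform(-1, 1, size=(2000, 784))
xi_cls = rng.choice([-1.0, 1.0], size=(2000, 10))
print(classify(rng.uniform(-1, 1, 784), xi_vis, xi_cls))
```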
The learning starts with random memories drawn from a Gaussian distribution. The backpropagation algorithm then finds a collection of $K$ memories $\xi^\mu_{i,\alpha}$ which minimize the classification error on the training set. The memories are normalized to stay within the $-1 \le \xi^\mu_{i,\alpha} \le 1$ range, absorbing their overall scale into the definition of the parameter $\beta$.

The performance of the proposed classification framework is studied as a function of the power $n$. The next section shows that a rectified polynomial of power $n$ in the energy function is equivalent to a rectified polynomial of power $n - 1$ used as an activation function in a feedforward neural network with one hidden layer of neurons. Currently, the most common choice of activation function for training deep neural networks is the ReLU, which in our language corresponds to $n = 2$ for the energy function. Although not currently used to train deep networks, the case $n = 3$ would correspond to a rectified parabola as an activation function. We start by comparing the performances of the dense memories in these two cases.

The performance of the network depends on $n$ and on the remaining hyperparameters, thus the hyperparameters should be optimized for each value of $n$. In order to test the variability of performances for various choices of hyperparameters at a given $n$, a window of hyperparameters for which the network works well on the validation set (see Appendix A in the Supplemental) was determined. Then many networks were trained for various choices of the hyperparameters from this window to evaluate the performance on the test set. The test errors as training progresses are shown in Fig. 1B. While there is substantial variability among these samples, on average the cluster of trajectories for $n = 3$ achieves better results on the test set than that for $n = 2$. These error rates should be compared with error rates for backpropagation alone without the use of generative pretraining, various kinds of regularizations (for example dropout) or adversarial training, all of which could be added to our construction if necessary. In this class of models the best published results are all² in the 1.6% range [18], see also controls in [19, 20]. This agrees with our results for $n = 2$. The $n = 3$ case does slightly better than that, as is clear from Fig. 1B, with all the samples performing better than 1.6%.

²Although there are better results on the pixel permutation invariant task, see for example [19, 20, 21, 22].

Higher rectified polynomials are also faster in training compared to ReLU. For the $n = 2$ case, the error crosses the 2% threshold for the first time during training in the range of 179–312 epochs. For the $n = 3$ case, this happens earlier on average, between 158–262 epochs. For higher powers $n$ this speed-up is larger. This is not a huge effect for a small dataset such as MNIST. However, this speed-up might be very helpful for training large networks on large datasets, such as ImageNet. A similar effect was reported earlier for the transition from saturating units, such as logistics or hyperbolic tangents, to ReLU [7]. In our family of models that result corresponds to moving from $n = 1$ to $n = 2$.

Feature to prototype transition. How does the computation performed by the neural network change as $n$ varies? There are two extreme classes of theories of pattern recognition: feature-matching and formation of a prototype. According to the former, an input is decomposed into a set of features, which are compared with those stored in the memory.
The subset of the stored features activated by the presented input is then interpreted as an object. One object has many features; features can also appear in more than one object. The prototype theory provides an alternative approach, in which objects are recognized as a whole. The prototypes do not necessarily match the object exactly, but rather are blurred abstract representations which include all the features that an object has.

Figure 2: We show 25 randomly selected memories (feature detectors) for four networks, which use rectified polynomials of degrees $n = 2, 3, 20, 30$ as the energy function. The magnitude of a memory element corresponding to each pixel is plotted in the location of that pixel; the color bar explains the color code. The histograms at the bottom are explained in the text. The error rates (1.51%, 1.44%, 1.61% and 1.80% for $n = 2, 3, 20, 30$) refer to the particular four samples used in this figure. RU stands for recognition unit.

We argue that the computational models proposed here describe the feature-matching mode of pattern recognition for small $n$ and the prototype regime for large $n$. This can be anticipated from the sharpness of contributions that each memory makes to the total energy (2). For large $n$ the function $F(x)$ peaks much more sharply around each memory compared to the case of small $n$. Thus, at large $n$ all the information about a digit must be written in only one memory, while at small $n$ this information can be distributed among several memories. In the case of intermediate $n$ some learned memories behave like features while others behave like prototypes. These two classes of memories work together to model the data in an efficient way.

The feature to prototype transition is clearly seen in the memories shown in Fig. 2. For $n = 2$ or 3 each memory does not look like a digit, but resembles a pattern of activity that might be useful for recognizing several different digits. For $n = 20$ many of the memories can be recognized as digits, which are surrounded by white margins representing elements of memories having approximately zero values. These margins describe the variability of thicknesses of lines of different training examples and mathematically mean that the energy (2) does not depend on whether the corresponding pixel is on or off. For $n = 30$ most of the memories represent prototypes of whole digits or large portions of digits, with a small admixture of feature memories that do not resemble any digit.

The feature to prototype transition can be visualized by showing the feature detectors in situations when there is a natural ordering of pixels. Such an ordering exists in images, for example. In general situations, however, there is no preferred permutation of visible neurons that would reveal this structure (e.g. in the case of genomic data). It is therefore useful to develop a measure that permits a distinction to be made between features and prototypes in the absence of such a visual space.
Towards the end of training most of the recognition connections $\xi^\mu_\alpha$ are approximately equal to $\pm 1$. One can choose an arbitrary cutoff, and count the number of recognition connections that are in the 'on' state ($\xi^\mu_\alpha = +1$) for each memory. The distribution function of this number is shown in the left histogram of Fig. 2. Intuitively, this quantity corresponds to the number of different digit classes that a particular memory votes for. At small $n$, most of the memories vote for three to five different digit classes, a behavior characteristic of features. As $n$ increases, each memory specializes and votes for only a single class. In the case $n = 30$, for example, more than 40% of memories vote for only one class, a behavior characteristic of prototypes.

A second way to see the feature to prototype transition is to look at the number of memories which make large contributions to the classification decision (right histogram in Fig. 2). For each test image one can find the memory that makes the largest contribution to the energy gap, which is the sum over $\mu$ in (9). Then one can count the number of memories that contribute to the gap by more than 0.9 of this largest contribution. For small $n$, there are many memories that satisfy this criterion and the distribution function has a long tail. In this regime several memories are cooperating with each other to make a classification decision. For $n = 30$, however, more than 8000 of the 10000 test images do not have a single other memory that would make a contribution comparable with the largest one. This result is not sensitive to the arbitrary choice (0.9) of the cutoff. Interestingly, the performance remains competitive even for very large $n \sim 20$–30 (see Fig. 2) in spite of the fact that these networks are doing a very different kind of computation compared with that at small $n$.

5 Relationship to a neural network with one hidden layer

In this section we derive a simple duality between the dense associative memory and a feedforward neural network with one layer of hidden neurons. In other words, we show that the same computational model has two very different descriptions: one in terms of associative memory, the other one in terms of a network with one layer of hidden units.

Figure 3: On the left, a feedforward neural network with one layer of hidden neurons. The states of the visible units are transformed to the hidden neurons using a non-linear function $f$; the states of the hidden units are transformed to the output layer using a non-linear function $g$. On the right, the model of dense associative memory with one step update (9). The two models are equivalent.

Using this correspondence one can transform the family of dense memories, constructed for different values of the power $n$, to the language of models used in deep learning. The resulting neural networks are guaranteed to inherit computational properties of the dense memories, such as the feature to prototype transition.

The construction is very similar to (9), except that the classification neurons are initialized in the state in which all of them are equal to $-\varepsilon$, see Fig. 3. In the limit $\varepsilon \to 0$ one can expand the function $F$ in (9) so that the dominant contribution comes from the term linear in $\varepsilon$. Then

$$c_\alpha \approx g\Bigg[\beta\sum_{\mu=1}^{K} F'\Big(\sum_{i=1}^{N}\xi^\mu_i v_i\Big)\big(-2\,\xi^\mu_\alpha x_\alpha\big)\Bigg] = g\Bigg[\sum_{\mu=1}^{K}\xi^\mu_\alpha\, F'\Big(\sum_{i=1}^{N}\xi^\mu_i v_i\Big)\Bigg] = g\Bigg[\sum_{\mu=1}^{K}\xi^\mu_\alpha\, f\Big(\sum_{i=1}^{N}\xi^\mu_i v_i\Big)\Bigg], \quad (10)$$

where the parameter $\beta$ is set to $\beta = 1/(2\varepsilon)$ (summation over the visible index $i$ is assumed).
Thus, the model of associative memory with one step update is equivalent to a conventional feedforward neural network with one hidden layer, provided that the activation function from the visible layer to the hidden layer is equal to the derivative of the energy function,

$$f(x) = F'(x). \quad (11)$$

The visible part of each memory serves as an incoming weight to the hidden layer, and the recognition part of the memory serves as an outgoing weight from the hidden layer. The expansion used in (10) is justified by the condition $\sum_{i=1}^{N}\xi^\mu_i v_i \gg \sum_{\alpha=1}^{N_c}\xi^\mu_\alpha x_\alpha$, which is satisfied for most common problems, and is simply a statement that the labels contain far less information than the data itself³.

From the point of view of associative memory, the dominant contribution shaping the basins of attraction comes from the low energy states. Therefore mathematically it is determined by the asymptotics of the activation function $f(x)$, or the energy function $F(x)$, at $x \to \infty$. Thus different activation functions having similar asymptotics at $x \to \infty$ should fall into the same universality class and should have similar computational properties. In the table below we list some common activation functions used in models of deep learning, their associative memory counterparts, and the power $n$ which determines the asymptotic behavior of the energy function at $x \to \infty$.

    activation function             energy function                             n
    f(x) = tanh(x)                  F(x) = ln(cosh(x)) ~ x    at x -> inf       1
    f(x) = logistic function        F(x) = ln(1 + e^x) ~ x    at x -> inf       1
    f(x) = ReLU                     F(x) ~ x^2                at x -> inf       2
    f(x) = ReP_{n-1}                F(x) = ReP_n                                n

The results of section 4 suggest that for not too large $n$ the speed of learning should improve as $n$ increases. This is consistent with the previous observation that ReLUs are faster in training than hyperbolic tangents and logistics [5, 6, 7]. The last row of the table corresponds to rectified polynomials of higher degrees. To the best of our knowledge these activation functions have not been used in neural networks. Our results suggest that for some problems these higher power activation functions should have even better computational properties than the rectified linear units.
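To make the duality concrete, here is a minimal sketch (ours) of the feedforward form (10) with a rectified polynomial activation. Note that for the rectified $F(x) = x^n$ the derivative is $n\,x^{n-1}$ (rectified); in the sketch the constant $n$ is absorbed into $\beta$.

```python
import numpy as np

def rep(x, n):
    # rectified polynomial activation ReP_n: max(x, 0)^n (ReP_1 is the ReLU)
    return np.where(x > 0, x ** n, 0.0)

def forward(v, xi_vis, xi_cls, n=3, beta=1.0):
    # Feedforward form (10): hidden units h_mu = f(xi_vis^mu . v), f = F' = ReP_{n-1};
    # outputs c_alpha = g(beta * sum_mu xi_cls^mu_alpha h_mu), with g = tanh.
    h = rep(xi_vis @ v, n - 1)           # visible memory parts act as incoming weights
    return np.tanh(beta * xi_cls.T @ h)  # recognition parts act as outgoing weights
```

Running `forward` with the same memories as the one-step associative update reproduces the latter's outputs in the regime where the expansion behind (10) is valid.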
6 Discussion and conclusions

What is the relationship between the capacity of the dense associative memory, calculated in section 2, and the neural network with one step update that is used for digit classification? Consider the limit of very large $\beta$ in (9), so that the hyperbolic tangent is approximately equal to the sign function, as in (4). In the limit of sufficiently large $n$ the network is operating in the prototype regime. The presented image places the initial state of the network close to a local minimum of energy, which corresponds to one of the prototypes. In most cases the one step update of the classification neurons is sufficient to bring this initial state to the nearest local minimum, thus completing the memory recovery. This is true, however, only if the stored patterns are stable and have basins of attraction around them of at least the size of one neuron flip, which is exactly (in the case of random patterns) the condition given by (6). For correlated patterns the maximal number of stored memories might be different from (6); however, it still rapidly increases with $n$. The associative memory with one step update (or the feedforward neural network) is exactly equivalent to the full associative memory with multiple updates in this limit. The calculation with random patterns thus theoretically justifies the expectation of a good performance in the prototype regime.

To summarize, this paper contains three main results. First, it is shown how to use the general framework of associative memory for pattern recognition. Second, a family of models is constructed that can learn representations of the data in terms of features or in terms of prototypes, and that smoothly interpolates between these two extreme regimes by varying the power of the interaction vertex. Third, there exists a simple duality between a one step update version of the associative memory model and a feedforward neural network with one layer of hidden units and an unusual activation function. This duality makes it possible to propose a class of activation functions that encourages the network to learn representations of the data with various proportions of features and prototypes. These activation functions can be used in models of deep learning and should be more effective than the standard choices. They allow the networks to train faster. We have also observed an improvement of the generalization ability in networks trained with the rectified parabola activation function compared to the ReLU for the case of MNIST. While these ideas were illustrated using the simplest architecture of the neural network with one layer of hidden units, the proposed activation functions can also be used in multilayer architectures. We did not study various regularizations (weight decay, dropout, etc.), which can be added to our construction. The performance of the model supplemented with these regularizations, as well as performance on other common benchmarks, will be reported elsewhere.

³A relationship similar to (11) was discussed in [23, 24] in the context of autoencoders.

References
[1] Hopfield, J.J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), pp. 2554–2558.
[2] LeCun, Y., Chopra, S., Hadsell, R., Ranzato, M. and Huang, F., 2006. A tutorial on energy-based learning. Predicting Structured Data, 1, p. 0.
[3] Hinton, G.E., Osindero, S. and Teh, Y.W., 2006. A fast learning algorithm for deep belief nets. Neural Computation, 18(7), pp. 1527–1554.
[4] Hinton, G.E. and Salakhutdinov, R.R., 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786), pp. 504–507.
[5] Nair, V. and Hinton, G.E., 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814.
[6] Glorot, X., Bordes, A. and Bengio, Y., 2011. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 315–323.
[7] Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
[8] Amit, D.J., Gutfreund, H. and Sompolinsky, H., 1985. Storing infinite numbers of patterns in a spin-glass model of neural networks. Physical Review Letters, 55(14), p. 1530.
[9] McEliece, R.J., Posner, E.C., Rodemich, E.R. and Venkatesh, S.S., 1987. The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, 33(4), pp. 461–482.
[10] Kanter, I. and Sompolinsky, H., 1987. Associative recall of memory without errors. Physical Review A, 35(1), p. 380.
[11] Chen, H.H., Lee, Y.C., Sun, G.Z., Lee, H.Y., Maxwell, T.
and Giles, C.L., 1986. High order correlation model for associative memory. In Neural Networks for Computing (Vol. 151, No. 1), pp. 86–99. AIP Publishing.
[12] Psaltis, D. and Park, C.H., 1986. Nonlinear discriminant functions and associative memories. In Neural Networks for Computing (Vol. 151, No. 1), pp. 370–375. AIP Publishing.
[13] Baldi, P. and Venkatesh, S.S., 1987. Number of stable points for spin-glasses and neural networks of higher orders. Physical Review Letters, 58(9), p. 913.
[14] Gardner, E., 1987. Multiconnected neural network models. Journal of Physics A: Mathematical and General, 20(11), p. 3453.
[15] Abbott, L.F. and Arian, Y., 1987. Storage capacity of generalized networks. Physical Review A, 36(10), p. 5091.
[16] Horn, D. and Usher, M., 1988. Capacities of multiconnected memory models. Journal de Physique, 49(3), pp. 389–395.
[17] Minsky, M. and Papert, S., 1969. Perceptrons: an introduction to computational geometry. The MIT Press, Cambridge, expanded edition, 19(88), p. 2.
[18] Simard, P.Y., Steinkraus, D. and Platt, J.C., 2003. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, p. 958. IEEE.
[19] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), pp. 1929–1958.
[20] Wan, L., Zeiler, M., Zhang, S., LeCun, Y. and Fergus, R., 2013. Regularization of neural networks using DropConnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1058–1066.
[21] Goodfellow, I.J., Shlens, J. and Szegedy, C., 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
[22] Rasmus, A., Berglund, M., Honkala, M., Valpola, H. and Raiko, T., 2015. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554.
[23] Kamyshanska, H. and Memisevic, R., 2013. On autoencoder scoring. In ICML (3), pp. 720–728.
[24] Kamyshanska, H. and Memisevic, R., 2015. The potential energy of an autoencoder. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(6), pp. 1261–1273.
Cooperative Graphical Models

Josip Djolonga, Dept. of Computer Science, ETH Zürich, josipd@inf.ethz.ch
Stefanie Jegelka, CSAIL, MIT, stefje@mit.edu
Sebastian Tschiatschek, Dept. of Computer Science, ETH Zürich, stschia@inf.ethz.ch
Andreas Krause, Dept. of Computer Science, ETH Zürich, krausea@inf.ethz.ch

Abstract

We study a rich family of distributions that capture variable interactions significantly more expressive than those representable with low-treewidth or pairwise graphical models, or log-supermodular models. We call these cooperative graphical models. Yet, this family retains structure, which we carefully exploit for efficient inference techniques. Our algorithms combine the polyhedral structure of submodular functions in new ways with variational inference methods to obtain both lower and upper bounds on the partition function. While our fully convex upper bound is minimized as an SDP or via tree-reweighted belief propagation, our lower bound is tightened via belief propagation or mean-field algorithms. The resulting algorithms are easy to implement and, as our experiments show, effectively obtain good bounds and marginals for synthetic and real-world examples.

1 Introduction

Probabilistic inference in high-order discrete graphical models has been an ongoing computational challenge, and all existing methods rely on exploiting specific structure: either low-treewidth or pairwise graphical models, or functional properties of the distribution such as log-submodularity. Here, we aim to compute approximate marginal probabilities in complex models with long-range variable interactions that do not possess any of these properties. Instead, we exploit a combination of structural and functional properties in new ways.

Figure 1: Example cooperative model on a 3 × 4 grid of variables $X_{1,1}, \dots, X_{3,4}$. Edge colors indicate the edge cluster. Dotted edges are cut under the current assignment.

The classical example of image segmentation may serve to motivate our family of models: we would like to estimate a posterior marginal distribution over $k$ labels for each pixel in an image. A common approach uses Conditional Random Fields on a pixel neighborhood graph with pairwise potentials that encourage neighboring pixels to take on the same label. From the perspective of the graph, this model prefers configurations with few edges cut, where an edge is said to be cut if its endpoints have different labels. Such cut-based models, however, short-cut elongated structures (e.g. tree branches), a problem known as shrinking bias. Jegelka and Bilmes [1] hence replace the bias towards short cuts (boundaries) by a bias towards configurations with certain higher-order structure: the cut edges occur at similar-looking pixel pairs. They group the graph edges into clusters (based on, say, color gradients across the endpoints), observing that the true object boundary is captured by few of these clusters. To encourage cutting edges from few clusters, the cost of cutting an edge decreases as more edges in its cluster are cut. In short, the edges 'cooperate'. In Figure 1, each pixel takes on one of two labels (colors), and cut edges are indicated by dotted lines. The current configuration cuts three red edges and one blue edge, and has lower probability than the configuration that swaps $X_{3,1}$ to gray, cutting only red edges. Such a model can be implemented by an energy (cost) $h(\#\text{red edges cut}) + h(\#\text{blue edges cut})$, where e.g. $h(u) = \sqrt{u}$. Similar cooperative models can express a preference for shapes [2].
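As an illustration, the following is a minimal sketch (ours; all names are illustrative) of such a cooperative-cut energy on a toy chain. Since $h$ is concave in the number of cut edges, cutting two edges within one cluster is cheaper than cutting two edges from different clusters; the unary and pairwise potentials of the full model defined below are omitted for brevity.

```python
import numpy as np

def log_potential(x, edges, clusters, h=np.sqrt):
    # Unnormalized log-probability of a labelling x under the cooperative-cut
    # energy: -sum over clusters of h(#edges of the cluster that are cut).
    cut = np.array([x[i] != x[j] for i, j in edges])   # disagreement vector y(x)
    return -sum(h(float(cut[list(c)].sum())) for c in clusters)

# a 1x4 chain with one "red" cluster (edges 0 and 1) and one "blue" cluster (edge 2)
edges = [(0, 1), (1, 2), (2, 3)]
clusters = [[0, 1], [2]]
print(log_potential([0, 1, 0, 0], edges, clusters))  # two red cuts: -sqrt(2) ~ -1.41
print(log_potential([1, 0, 0, 1], edges, clusters))  # one red, one blue cut: -2.0
```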
While being expressive, such models are computationally very challenging: the nonlinear function on pairs of variables (edges) is equivalent to a graphical model of extremely high order (up to the number of variables). Previous work hence addressed only MAP inference [3, 4]; the computation of marginals and partition functions was left as an open problem. In this paper, we close this gap, even for a larger family of models.

We address models, which we call cooperative graphical models, that are specified by an undirected graph $G = (V, E)$: each node $i \in V$ is associated with a random variable $X_i$ that takes values in $\mathcal{X} = \{1, 2, \dots, k\}$. To each vertex $i \in V$ and edge $\{i, j\} \in E$, we attach potential functions $\theta_i : \mathcal{X} \to \mathbb{R}$ and $\theta_{i,j} : \mathcal{X}^2 \to \mathbb{R}$, respectively. Our distribution is then

$$P(x) = \frac{1}{Z}\exp\Big(-\Big(\sum_{i\in V}\theta_i(x_i) + \sum_{\{i,j\}\in E}\theta_{i,j}(x_i, x_j) + f(y(x))\Big)\Big)\,\nu(x), \quad (1)$$

where we call $y : \mathcal{X}^n \to \{0,1\}^E$ the disagreement variable¹, defined as $y_{i,j} = [\![x_i \ne x_j]\!]$. The term $\nu : \mathcal{X}^n \to \mathbb{R}_{\ge 0}$ is the base measure and allows us to encode constraints, e.g., conditioning on some variables. With $f \equiv 0$ we obtain a Markov random field.

Probabilistic inference in our model class (1) is very challenging, since we make no factorization assumption about $f$. One solution would be to encode $P(x)$ as a log-linear model via a new variable $z \in \{0,1\}^E$ and constraints $\nu(x, z) = [\![y(x) = z]\!]$, but this in general requires computing exponential-sized sufficient statistics from $z$. In contrast, we make one additional key assumption that will enable the development of efficiently computable variational lower and upper bounds: we henceforth assume that $f : \{0,1\}^E \to \mathbb{R}$ is submodular, i.e., that it satisfies $f(\min(y, y')) + f(\max(y, y')) \le f(y) + f(y')$ for all $y, y' \in \{0,1\}^E$, where the min and max operations are taken element-wise. For example, the pairwise potentials $\theta_{i,j}$ are submodular if $\theta_{i,j}(0,0) + \theta_{i,j}(1,1) \le \theta_{i,j}(0,1) + \theta_{i,j}(1,0)$. In our introductory example, $f$ is submodular if $h$ is concave. As opposed to [3], we do not assume that $f$ is monotone increasing. Importantly, even if $f$ is submodular, $P(x)$ neither has low treewidth, nor is its logarithm sub- or supermodular in $x$, properties that have commonly been exploited for inference.

Contributions. We make the following contributions: (1) We introduce a new family of probabilistic models that can capture rich non-submodular interactions, while still admitting efficient inference. This family includes pairwise and certain higher-order graphical models, cooperative cuts [1], and other, new models. We develop new inference methods for these models; in particular, (2) upper bounds that are amenable to convex optimization, and (3) lower bounds that we optimize with traditional variational methods. Finally, we demonstrate the efficacy of our methods empirically.

1.1 Related work

Maximum-a-posteriori (MAP). Computing the mode of (1) for binary models is also known as the cooperative cut problem, and has been analyzed for the case when both the pairwise interactions $\theta_{i,j}$ are submodular and $f$ is monotone [1]. While the general problem is NP-hard, it can be solved if $f$ is defined by a piecewise linear concave function [4].

Variational inference. Since computing marginal probabilities for (1) is #P-hard even for pairwise models (when $f \equiv 0$) [5, 6], we revert to approximate inference. Variational inference methods for discrete pairwise models have been studied extensively; a comprehensive overview may be found in [7]. We will build on a selection of techniques that we discuss in the next section. Most existing methods focus on pairwise models ($f \equiv 0$), and many scale exponentially with the size of the largest factor, which is infeasible for our cooperative models. Some specialized tractable inference methods exist for higher-order models [8, 9], but they do not apply to our family of models (1).

¹The results presented in this paper can be easily extended to arbitrary binary-valued functions $y(x)$.
Variational inference methods for discrete pairwise models have been studied extensively; a comprehensive overview may be found in [7]. We will build on a selection of techniques that we discuss in the next section. Most existing methods focus on pairwise models (f ? 0), and many scale exponentially with the size of the largest factor, which is infeasible for our cooperative models. Some specialized tractable inference methods exist for higher-order models [8, 9], but they do not apply to our family of models (1). 1 The results presented in this paper can be easily extended to arbitrary binary-valued functions y(x). 2 Log-supermodular models. A related class of relatively tractable models are distributions P (x) = 1 Z exp(?g(x)) for some submodular function g; Djolonga and Krause [10] showed variational inference methods for those models. However, our models are not log-supermodular. While [10] also obtain upper and lower bounds, we need different optimization techniques, and also different polytopes. In fact, submodular and multi-class submodular [11] settings are a strict subset of ours: the function g(x) can be expressed via an auxiliary variable z ? {0, 1} that is fixed to zero using ?(x, z) = Jz = 0K. We then set f (y(x, z)) = g(x1 6= z, x2 6= z, . . . , xn 6= z). 2 Notation and Background Throughout this paper, we have n variables in a graph of m edges, and the potentials ?i and ?i,j are stored in a vector ?. The characteristic vector (or indicator vector) 1A of a set A is the binary vector which contains 1 in the positions corresponding to elements in A, and zeros elsewhere. Moreover, the vector of all ones is 1, and the neighbours of i ? V are denoted by ?(i) ? V . Submodularity. We assume that f in Eqn. (1) is submodular. Occasionally (in Sec. 4 and 5, where stated), we assume that f is monotone: for any y and y0 in {0, 1}E such that y ? y0 coordinate-wise, it holds that f (y) ? f (y0 ). When defining the inference schemes, we make use of two polytopes associated with f . First, the base polytope of a submodular function f is B(f ) = {g ? Rm | ?y ? {0, 1}E : gT y ? f (y)} ? {g ? Rm | gT 1 = f (1)}. Although B(f ) is defined by exponentially many inequalities, an influential result [12] states that it is tractable: we can optimize linear functions over B(f ) in time O(m log m + mF ), where F is the time complexity of evaluating f . This algorithm is part of our scheme in Figure 2. Moreover, as a result of this (linear) tractability, it is possible to compute orthogonal projections onto B(f ). Projection is equivalent to the minimum norm point problem [13]. While the general projection problem has a high degree polynomial time complexity, there are many very commonly used models that admit practically fast projections [14, 15, 16]. The second polytope is the upper submodular polyhedron of f [17], defined as U(f ) = {(g, c) ? Rm+1 | ?y ? {0, 1}E : gT y + c ? f (y)}. Unfortunately, U(f ) is not as tractable as B(f ): even checking membership in U(f ) is hard [17]. However, we can still succinctly describe specific elements of U(f ). In ?4, we show how to efficiently optimize over those elements. Variational inference. We briefly summarize key results for variational inference for pairwise models, following Wainwright and Jordan [7]. We write pairwise models as2 ? ? X  X P (x) = exp ?? ?i (xi ) + (gi,j Jxi 6= xj K + ?i,j (xi , xj ) ? A(g)? ?(x), i?V {i,j}?E where g ? RE is an arbitrary vector and A(g) is the log-partition function. 
Variational inference. We briefly summarize key results for variational inference in pairwise models, following Wainwright and Jordan [7]. We write pairwise models as²

$$P(x) = \exp\Big(-\Big(\sum_{i\in V}\theta_i(x_i) + \sum_{\{i,j\}\in E}\big(g_{i,j}[\![x_i \ne x_j]\!] + \theta_{i,j}(x_i, x_j)\big)\Big) - A(g)\Big)\,\nu(x),$$

where $g \in \mathbb{R}^E$ is an arbitrary vector and $A(g)$ is the log-partition function. For any choice of parameters $(\theta, g)$, there is a resulting vector of marginals $\mu \in [0,1]^{k|V| + k^2|E|}$. Specifically, for every $i \in V$, $\mu$ has $k$ elements $\mu_{i,x_i} = P(X_i = x_i)$, one for each $x_i \in \mathcal{X}$. Similarly, for each $\{i,j\} \in E$, there are $k^2$ elements $\mu_{ij,x_ix_j}$ so that $\mu_{ij,x_ix_j} = P(X_i = x_i, X_j = x_j)$. The marginal polytope $\mathbb{M}$ is now the set of all such vectors $\mu$ that are realizable under some distribution $P(x)$, and the partition function can equally be expressed in terms of the marginals [7]:

$$A(g) = \sup_{\mu\in\mathbb{M}}\ \underbrace{\Big(-\sum_{i\in V,\,x_i}\mu_{i,x_i}\theta_i(x_i) - \sum_{\{i,j\}\in E}\sum_{x_i,x_j}\mu_{ij,x_ix_j}\theta_{i,j}(x_i, x_j) - \delta(\mu)^T g\Big)}_{\langle\mathrm{stack}(\theta, g),\,\mu\rangle} + H(\mu), \quad (2)$$

where $H(\mu)$ is the entropy of the distribution, $\delta(\mu)$ is the vector of disagreement probabilities with entries $\delta(\mu)_{i,j} = \sum_{x_i\ne x_j}\mu_{ij,x_ix_j}$, and $\mathrm{stack}(\theta, g)$ collects the elements of $\theta$ and $g$ into a single vector so that the sum can be written as an inner product.

²This formulation is slightly nonstandard, but will be very useful for the subsequent discussion in §3.

Alas, neither $\mathbb{M}$ nor $H(\mu)$ have succinct descriptions, and we will have to approximate them. Because the vectors in the approximation of $\mathbb{M}$ are in general not correct marginals, they are called pseudo-marginals and will be denoted by $\tau$ instead of $\mu$. Different approximations of $\mathbb{M}$ and $H$ yield various methods, e.g. mean-field [7], the semidefinite programming (SDP) relaxation of Wainwright and Jordan [18], tree-reweighted belief propagation (TRWBP) [19], or the family of weighted entropies [20, 21]. Due to space constraints, we only discuss the latter. They approximate $\mathbb{M}$ with the local polytope

$$\mathbb{L} = \Big\{\tau \ge 0 \;\Big|\; (\forall i \in V)\ \sum_{x_i}\tau_{i,x_i} = 1 \ \text{ and }\ (\forall j \in \partial(i))\ \sum_{x_j}\tau_{ij,x_ix_j} = \tau_{i,x_i}\Big\}.$$

The approximations $\tilde H$ to the entropy $H$ are parametrized by one weight $\rho_{i,j}$ per edge and one $\rho_i$ per vertex $i$, all collected in a vector $\rho \in \mathbb{R}^{|V|+|E|}$. Then, they take the following form:

$$\tilde H(\tau, \rho) = \sum_{i\in V}\rho_i H_i(\tau_i) + \sum_{\{i,j\}\in E}\rho_{i,j}H_{i,j}(\tau_{i,j}),$$

where $H_i(\tau_i) = -\sum_{x_i}\tau_{i,x_i}\log\tau_{i,x_i}$ and $H_{i,j}(\tau_{i,j}) = -\sum_{x_i,x_j}\tau_{ij,x_ix_j}\log\tau_{ij,x_ix_j}$. The most prominent example is traditional belief propagation, i.e., using the Bethe entropy, which sets $\rho_e = 1$ for all $e \in E$ and assigns to each vertex $i \in V$ a weight of $\rho_i = 1 - |\partial(i)|$.

3 Convex upper bounds

The above variational methods do not directly generalize to our cooperative models: the vectors of marginals could be exponentially large. Hence, we derive a different approach that relies on the submodularity of $f$. Our first step is to approximate $f(y(x))$ by a linear lower bound, $f(y(x)) \ge g^T y(x)$, so that the resulting (pairwise) linearized model will have a partition function upper bounding that of the original model. Ensuring that $g$ indeed remains a lower bound means satisfying an exponential number of constraints $f(y(x)) \ge g^T y(x)$, one for each $x$. While this is hard in general, the submodularity of $f$ implies that these constraints are easily satisfied if $g \in B(f)$, a very tractable constraint. For $g \in B(f)$, we have

$$\log Z = \log\sum_{x}\exp\Big(-\Big(\sum_{i\in V}\theta_i(x_i) + \sum_{\{i,j\}\in E}\theta_{i,j}(x_i, x_j) + f(y(x))\Big)\Big) \le \log\sum_{x}\exp\Big(-\Big(\sum_{i\in V}\theta_i(x_i) + \sum_{\{i,j\}\in E}\big(\theta_{i,j}(x_i, x_j) + g_{i,j}[\![x_i \ne x_j]\!]\big)\Big)\Big) = A(g).$$

Unfortunately, $A(g)$ is still very hard to compute and we need to approximate it. If we use an approximation $\bar A(g)$ that upper bounds $A(g)$, then the above inequality will still hold when we replace $A$ by $\bar A$. Such approximations can be obtained by relaxing the marginal polytope $\mathbb{M}$ to an outer bound
$\bar{\mathbb{M}} \supseteq \mathbb{M}$, and using a concave entropy surrogate $\bar H$ that upper bounds the true entropy $H$. TRWBP [19] and the SDP formulation [18] implement this approach. Our central optimization problem is now to find the tightest upper bound, an optimization problem³ in $g$:

$$\underset{g\in B(f)}{\text{minimize}}\ \sup_{\tau\in\bar{\mathbb{M}}}\ \langle\mathrm{stack}(\theta, g), \tau\rangle + \bar H(\tau). \quad (3)$$

Because the inner problem is linear in $g$, this is a convex optimization problem over the base polytope. To obtain the gradient with respect to $g$ (equal to the negative disagreement probabilities $-\delta(\tau)$), we have to solve the inner problem. This subproblem corresponds to performing variational inference in a pairwise model, e.g. via TRWBP or an SDP. The optimization properties of the problem depend on the Lipschitz continuity of its gradients (smoothness). Informally, the inferred pseudo-marginals should not drastically change if we perturb the linearization $g$. The formal condition is that there exists some $\ell > 0$ so that $\|\delta(\tau) - \delta(\tau')\| \le \ell\,\|\tau - \tau'\|$ for all $\tau, \tau' \in \bar{\mathbb{M}}$. We discuss below when this condition holds. Before that, we discuss two different algorithms for solving problem (3), and how their convergence depends on $\ell$.

³If we compute the Fenchel dual, we obtain a special case of the problem considered in [22], with the Lovász extension acting as a non-smooth non-local energy function (in the terminology introduced therein).

Frank-Wolfe. Given that we can efficiently solve linear programs over $B(f)$, the Frank-Wolfe algorithm [23] is a natural candidate for solving the problem. We present it in Figure 2. It iteratively moves towards the minimizer of a linearization of the objective around the current iterate. The method has a convergence rate of $O(\ell/t)$ [24], where $\ell$ is the assumed smoothness parameter. One can either use a fixed step size $\gamma = 2/(t+2)$, or determine it using line search. In each iteration, the algorithm calls the procedure LINEAR-ORACLE, which finds the vector $s \in B(f)$ that minimizes the linearization of the objective function in (3) over the base polytope $B(f)$. The linearization is given by the (approximate) gradient $-\delta(\tau)$, determined by the computed approximate marginals $\tau$. When taking a step towards $s$, the weight of edge $e_i$ is changed by $s_{e_i} = f(\{e_1, e_2, \dots, e_i\}) - f(\{e_1, e_2, \dots, e_{i-1}\})$. Due to the submodularity⁴ of $f$, an edge will obtain a higher weight if it appears earlier in the order determined by the disagreement probabilities $\delta(\tau)$. Hence, in every iteration the algorithm re-adjusts the pairwise potentials by encouraging the variables to agree more as a function of their (approximate) disagreement probability.

1: procedure FW-INFERENCE(f, θ)
2:   g ← LINEAR-ORACLE(f, 0)
3:   for t = 0, 1, ..., max_steps do
4:     τ ← VAR-INFERENCE(θ, g)
5:     s ← LINEAR-ORACLE(f, τ)
6:     γ ← COMPUTE-STEP-SIZE(g, s)
7:     g ← (1 − γ) g + γ s
8:   return τ, Ā

1: procedure LINEAR-ORACLE(f, τ)
2:   let e_1, e_2, ..., e_|E| be the edges E sorted so that δ(τ)_{e_1} ≥ δ(τ)_{e_2} ≥ ... ≥ δ(τ)_{e_|E|}
3:   for i = 1, 2, ..., |E| do
4:     f−_i ← f({e_1, e_2, ..., e_{i−1}})
5:     f+_i ← f({e_1, e_2, ..., e_i})
6:     s_{e_i} ← f+_i − f−_i
7:   return s

Figure 2: Inference with Frank-Wolfe, assuming that VAR-INFERENCE guarantees an upper bound.

Projected gradient descent (PGD). Since it is possible to compute projections onto $B(f)$, and practically so for many submodular functions $f$, we can alternatively use projected gradient or subgradient descent (PGD). Without smoothness, PGD converges at a rate of $O(1/\sqrt{t})$. If the objective is smooth, we can use an accelerated method like FISTA [25], which has both a much better $O(\ell/t^2)$ rate and seems to converge faster than many Frank-Wolfe variants in our experiments.
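For concreteness, here is a schematic Python rendering (ours) of FW-INFERENCE from Figure 2. It reuses greedy_vertex from the sketch in §2 and assumes a hypothetical black-box var_inference(theta, g) that solves the inner problem and returns the pseudo-marginals together with their disagreement probabilities $\delta(\tau)$; the fixed step size $\gamma = 2/(t+2)$ is used.

```python
def fw_inference(f, theta, var_inference, m, steps=100):
    # Frank-Wolfe on problem (3): g starts at a vertex of B(f) and moves toward
    # the greedy vertex aligned with the current disagreement probabilities.
    g = greedy_vertex(f, np.zeros(m))          # any vertex of B(f) to start
    for t in range(steps):
        tau, delta = var_inference(theta, g)   # inner problem, e.g. via TRWBP
        s = greedy_vertex(f, delta)            # LINEAR-ORACLE: maximizes <delta, s>
        gamma = 2.0 / (t + 2.0)                # fixed Frank-Wolfe step size
        g = (1.0 - gamma) * g + gamma * s
    return var_inference(theta, g)             # final pseudo-marginals
```

In practice the inner solver would be warm-started with the previous pseudo-marginals, as discussed next.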
Smoothness and convergence. The final question that remains to be answered is under which conditions problem (3) is smooth (the proof can be found in the appendix).

Theorem 1. Problem (3) is $k^2\lambda$-smooth over $B(f)$ if the entropy surrogate $-\bar H$ is $1/\lambda$-strongly convex.

This result follows from the duality between smoothness and strong convexity for convex conjugates, see e.g. [26]. It implies that the convergence rates of the proposed algorithms depend on the strong convexity of the entropy approximation $-\bar H$. The benefits of strongly convex entropy approximations are known. For instance, the tree-reweighted entropy approximation is strongly convex with a modulus that depends on the size of the graph; similarly, the SDP relaxation is strongly convex [27]. London et al. [28] provide an even sharper bound for the tree-reweighted entropy, and show how one can strong-convexify any weighted entropy by solving a QP over the weights $\rho$. In practice, because the inner problem is typically solved using an iterative algorithm and because the problem is smooth, we obtain speedups by warm-starting the solver with the solution at the previous iterate. We can moreover easily obtain duality certificates using the results in [24].

Joint optimization. When using weighted entropy approximations, it makes sense to optimize over both the linearization $g$ and the weights $\rho$ jointly. Specifically, let $T$ be some set of weights that yield an entropy approximation $\tilde H$ that upper bounds $H$. Then, if we expand $\tilde H$ in problem (3), we obtain

$$\underset{g\in B(f),\,\rho\in T}{\text{minimize}}\ \sup_{\tau\in\mathbb{L}}\ \langle\mathrm{stack}(\theta, g), \tau\rangle + \sum_{i\in V}\rho_i H_i(\tau_i) + \sum_{\{i,j\}\in E}\rho_{i,j}H_{i,j}(\tau_{i,j}).$$

Note that inside the supremum both $g$ and $\rho$ appear only linearly, and there is no summand that has terms from both of them. Thus, the problem is convex in $(g, \rho)$, and we can optimize jointly over both variables. As a final remark, if we already perform inference in a pairwise model and repeatedly tighten the approximation by optimizing over $\rho$ via Frank-Wolfe (as suggested in [19]), then the complexity per iteration remains the same even if we use the higher-order term $f$.

⁴This is also known as the diminishing returns property.

4 Submodular lower bounds

While we just derived variational upper bounds, we next develop lower bounds on the partition function. Specifically, analogously to the linearization for the upper bound, if we pick an element $(g, c)$ of $U(f)$, the partition function of the resulting pairwise approximation always lower bounds the partition function of (1). Formally,

$$\log Z \ge \log\sum_{x}\exp\Big(-\Big(\sum_{i\in V}\theta_i(x_i) + \sum_{\{i,j\}\in E}\theta_{i,j}(x_i, x_j) + \sum_{\{i,j\}\in E}g_{i,j}[\![x_i \ne x_j]\!] + c\Big)\Big) = A(g) - c.$$

As before, after plugging in a lower bound estimate of $A$, we obtain a variational lower bound on the partition function, which takes the form

$$\log Z \ge \sup_{(g,c)\in U(f),\ \tau\in\mathbb{M}} -c + \langle\mathrm{stack}(\theta, g), \tau\rangle + H(\tau), \quad (4)$$

for any pair of approximations of $\mathbb{M}$ and $H$ that guarantee a lower bound for the pairwise model. We propose to optimize this lower bound in a block-coordinate-wise manner: first with respect to the pseudo-marginals $\tau$ (which amounts to approximate inference in the linearized model), and then with respect to the supergradient $(g, c) \in U(f)$. As already noted, this step is in general intractable. However, it is well known [29] that for any $Y \subseteq E$ we can construct a point (a so-called bar supergradient) in $U(f)$ as follows. First, define the vectors $a_{i,j} = f(\mathbf{1}_{\{i,j\}})$ and $b_{i,j} = f(\mathbf{1}) - f(\mathbf{1} - \mathbf{1}_{\{i,j\}})$. Then the vector $(g, c)$ with $g = b \odot \mathbf{1}_Y + (\mathbf{1} - \mathbf{1}_Y) \odot a$ and $c = f(Y) - b^T \mathbf{1}_Y$ belongs to $U(f)$, where $\odot$ denotes element-wise multiplication.
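A small sketch (ours) of this construction, with a brute-force check of membership in $U(f)$ on a toy function; as before, $f$ takes a 0/1 indicator vector of length $m$.

```python
import numpy as np
from itertools import product

def bar_supergradient(f, Y, m):
    # Bar supergradient (g, c) in U(f) induced by a subset Y [29]:
    # g_e = b_e for e in Y and a_e otherwise, c = f(Y) - b^T 1_Y.
    I = np.eye(m)
    a = np.array([f(I[e]) for e in range(m)])                     # a_e = f(1_{e})
    b = np.array([f(np.ones(m)) - f(np.ones(m) - I[e]) for e in range(m)])
    ind = np.zeros(m)
    ind[list(Y)] = 1.0                                            # indicator 1_Y
    g = b * ind + a * (1.0 - ind)
    c = f(ind) - b @ ind
    return g, c

# sanity check: g^T y + c >= f(y) for all binary y, with f(y) = sqrt(sum(y))
f = lambda y: np.sqrt(y.sum())
g, c = bar_supergradient(f, {0, 2}, 4)
print(all(g @ np.array(y) + c >= f(np.array(y, float)) - 1e-9
          for y in product([0, 1], repeat=4)))
```

Note that the bound is tight at $y = \mathbf{1}_Y$, which is what makes the subsequent optimization over $Y$ meaningful.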
Theorem 2. Optimizing problem (4) for a fixed $\tau$ over all bar supergradients is equal to the following submodular minimization problem: $\min_{Y\subseteq E}\ f(Y) + \big(\delta(\tau)\odot(b - a) - b\big)^T\mathbf{1}_Y$.

In contrast to computing the MAP, the above problem has no constraints and can be easily solved using existing algorithms. As the approximation algorithm for the linearized pairwise model, one can always use mean-field [7]. Moreover, if (i) the problem is binary with submodular pairwise potentials $\theta_{i,j}$ and (ii) $f$ is monotone, we can also use belief propagation. This is an implication of the result of Ruozzi [30], who shows that traditional belief propagation yields a lower bound on the partition function for binary pairwise log-supermodular models. It is easy to see that the above conditions are sufficient for the log-supermodularity of the linearized model, as $g \ge 0$ when $f$ is monotone (because both $a$ and $b$ have non-negative components). Moreover, in this setting both the mean-field and belief propagation objectives (i.e. computing $\tau$) can be cast as instances of continuous submodular minimization (see e.g. [31]), which means that they can be solved to arbitrary precision in polynomial time. Unfortunately, problem (4) will not be jointly submodular, so we still need to use the block-coordinate ascent method we have just outlined.

5 Approximate inference via MAP perturbations

For binary models with submodular pairwise potentials and monotone $f$ we can (approximately) solve the MAP problem using the techniques in [1, 4]. Hence, this opens as an alternative approach the perturb-and-MAP method of Papandreou and Yuille [32]. This method relies on a set of tractable first order perturbations: for any $i \in V$ define $\theta'_i(x_i) = \theta_i(x_i) - \gamma_{i,x_i}$, where $\gamma = (\gamma_{i,x_i})_{i\in V, x_i\in\mathcal{X}}$ is a set of independently drawn Gumbel random variables. The optimizer $\mathrm{argmin}_x\, G_\gamma(x)$ of the perturbed model energy $G_\gamma(x) = \sum_{i\in V}\theta'_i(x_i) + \sum_{\{i,j\}\in E}\theta_{i,j}(x_i, x_j) + f(y(x))$ is then a sample from (an approximation to) the true distribution. If this MAP problem can be solved exactly (which is not always the case here), then it is possible to obtain an upper bound on the partition function [33].

6 Experiments

Synthetic experiments. Our first set of experiments uses a complete graph on $n$ variables. The unary potentials were sampled as $\theta_i(x_i) \sim \mathrm{Uniform}(-\beta, \beta)$. The edges $E$ were randomly split into five disjoint buckets $E_1, E_2, \dots, E_5$, and we used $f(y) = \sum_{j=1}^{5}h_j(y_{E_j})$, where $y_{E_j}$ are the coordinates of $y$ corresponding to that group, and the functions $\{h_j\}$ will be defined below. To perform inference in the linearized pairwise models, we used: trwbp, jtree+ (exact inference, upper bound), jtree- (same, lower bound), sdp (SDP), mf (mean-field), bp (belief propagation), pmap (perturb-and-MAP with approximate MAP) and epmap (perturb-and-MAP with exact MAP). We used libDAI [34] and implemented sdp using cvxpy [35] and SCS [36]. As a maxflow solver we used [37]. Error bars denote three standard errors.

Figure 3 shows the results for $h_i(y_{E_i}) = w_i\sqrt{\sum_{e\in E_i} y_e / |E_i|}$, with weights $w_i \sim \mathrm{Uniform}(0, \lambda)$. In panel (c) we use mixed (attractive and repulsive) pairwise potentials, chosen as $\theta_{i,j}(x_i, x_j) = w_{i,j}[\![x_i \ne x_j]\!]$, where $w_{i,j} \sim \mathrm{Uniform}(-\lambda, \lambda)$.
First, the results imply that the methods optimizing the fully convex upper bound yield very good marginal probabilities over a large set of parameter configurations. The estimate of the log-partition function from trwbp is also very good, while sdp is much worse, which we believe can be attributed to the very loose entropy bound used in the relaxation. The lower bounds (bp and mf) work well in settings where the pairwise strength $\beta$ is small compared to the unary strength $\alpha$. Otherwise, both the bound and the marginals become worse, while jtree- still performs very well. This could be explained by the hardness of the pairwise models obtained after linearizing $f$. Finally, pmap (when applicable) seems very promising for small $\beta$. To better understand the regimes when one should use trwbp or pmap, we compare their marginal errors in Figure 5. We see that for most parameter configurations, trwbp performs better, and significantly so when the edge interactions are strong.

Finally, we evaluate the effects of the approximate MAP solver for pmap in Figure 4. To be able to solve the MAP problem exactly (see [4]), we used $h(y_{E_j}) = \max\{\sum_{e \in E_j} y_e v_e,\ \sum_{e \in E_j} v_e / 2\}$, where $v_e \sim \mathrm{Uniform}(0, \beta)$. As evident from the figure, the gains from the exact solver seem minimal, and it seems that solving the MAP problem approximately does not strongly affect the results.

An example from computer vision. To demonstrate the scalability of our method and obtain a better qualitative understanding of the resulting marginals, we ran trwbp and pmap on a real-world image segmentation task. We use the same setting, data and models as [1], as implemented in the pycoop package (https://github.com/shelhamer/coop-cut). Because libDAI was too slow, we wrote our own TRWBP implementation. Figure 6 shows the results for two specific images (size 305 × 398 and 214 × 320). The example in the first row is particularly difficult for pairwise models, but the rich higher-order model has no problem capturing the details even in the challenging shaded regions of the image. The second row shows results for two different model parameters. The second model uses a function $f$ that is closer to being linear, while the first one is more curved (see the appendix for details). We observe that trwbp requires lower temperature parameters (i.e. relatively larger functions $\theta_i$, $\theta_{i,j}$ and $f$) than pmap, and that the bottleneck of the complete inference procedure is running the trwbp updates. In other words, the added complexity from our method is minimal and the runtime is dominated by the message passing updates of TRWBP. Hence, any algorithm that speeds up TRWBP (e.g., by parallelization or better message scheduling) will result in a direct improvement on the proposed inference procedure.

7 Conclusion

We developed new inference techniques for a new, broad family of discrete probabilistic models by exploiting the (indirect) submodularity in the model, and carefully combining it with ideas from classical variational inference in graphical models. The result is a set of inference schemes that optimize rigorous bounds on the partition function. For example, our upper bounds lead to convex variational inference problems. Our experiments indicate the scalability, efficacy and quality of these schemes.

Acknowledgements. This research was supported in part by SNSF grant CRSII2 147633, ERC StG 307036, a Microsoft Research Faculty Fellowship, a Google European Doctoral Fellowship, and NSF CAREER 1553284.
[Figure 3: Results on several synthetic models. The methods that optimize the convex upper bound (trwbp, sdp) obtain very good marginals for a large set of parameter settings. Those maximizing the lower bound (bp, mf) fail when there is strong coupling between the edges. In the strong coupling regime the results of pmap also deteriorate, but not as strongly. In (c) bp, pmap, sdp are not applicable. Panels: (a) α = 2, binary, K15; (b) α = 0.1, binary, K15; (c) α = 0.1, mixed, 4 labels, K10. Axes show the mean absolute error in marginals and the error in the estimate log Ẑ − log Z versus the pairwise strength β.]

[Figure 4: α = 2, K15, model where epmap is applicable. Solving the MAP problem exactly only marginally improves over pmap. The other observations are similar to those in Fig. 3b.]

[Figure 5: error_pmap − error_trwbp on K15, as a function of the unary strength α and the pairwise strength β. Missing entries were not significant at the 0.05 level.]

[Figure 6: Inferred marginals on an image segmentation task. (a) Original image, (b) trwbp, pairwise, (c) pmap, pairwise, (d) trwbp, coop., (e) pmap, coop.; (f) Original image, (g) trwbp, model 1, (h) pmap, model 1, (i) trwbp, model 2, (j) pmap, model 2. The first row showcases an example that is particularly hard for pairwise models. In the second row we show the results for two different models (the cooperative function f is more curved for model 1).]

References

[1] S. Jegelka and J. Bilmes. "Submodularity beyond submodular energies: coupling edges in graph cuts". CVPR. 2011.
[2] N. Silberman, L. Shapira, R. Gal, and P. Kohli. "A Contour Completion Model for Augmenting Surface Reconstructions". ECCV. 2014.
[3] S. Jegelka and J. Bilmes. "Approximation Bounds for Inference using Cooperative Cuts". ICML. 2011.
[4] P. Kohli, A. Osokin, and S. Jegelka. "A principled deep random field model for image segmentation". CVPR. 2013.
[5] M. Jerrum and A. Sinclair. "Polynomial-time approximation algorithms for the Ising model". SIAM Journal on Computing 22.5 (1993), pp. 1087-1116.
[6] L. A. Goldberg and M. Jerrum. "The complexity of ferromagnetic Ising with local fields". Combinatorics, Probability and Computing 16.01 (2007), pp. 43-61.
[7] M. J. Wainwright and M. I. Jordan. "Graphical models, exponential families, and variational inference". Foundations and Trends in Machine Learning 1.1-2 (2008).
[8] D. Tarlow, K. Swersky, R. S. Zemel, R. P. Adams, and B. J. Frey. "Fast Exact Inference for Recursive Cardinality Models". UAI. 2012.
[9] V. Vineet, J. Warrell, and P. H. Torr. "Filter-based mean-field inference for random fields with higher-order terms and product label-spaces". IJCV 110 (2014).
[10] J. Djolonga and A. Krause. "From MAP to Marginals: Variational Inference in Bayesian Submodular Models". NIPS. 2014.
[11] J. Zhang, J. Djolonga, and A. Krause. "Higher-Order Inference for Multi-class Log-supermodular Models". ICCV. 2015.
[12] J. Edmonds. "Submodular functions, matroids, and certain polyhedra". Combinatorial structures and their applications (1970), pp. 69-87.
[13] S. Fujishige and S. Isotani. "A submodular function minimization algorithm based on the minimum-norm base". Pacific Journal of Optimization 7.1 (2011), pp. 3-17.
[14] P. Stobbe and A. Krause. "Efficient Minimization of Decomposable Submodular Functions". NIPS. 2010.
[15] S. Jegelka, F. Bach, and S. Sra. "Reflection methods for user-friendly submodular optimization". NIPS. 2013.
[16] F. Bach. "Learning with submodular functions: a convex optimization perspective". Foundations and Trends in Machine Learning 6.2-3 (2013).
[17] R. Iyer and J. Bilmes. "Polyhedral aspects of Submodularity, Convexity and Concavity". arXiv:1506.07329 (2015).
[18] M. J. Wainwright and M. I. Jordan. "Log-determinant relaxation for approximate inference in discrete Markov random fields". Signal Processing, IEEE Trans. on 54.6 (2006).
[19] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. "A new class of upper bounds on the log partition function". UAI. 2002.
[20] T. Heskes. "Convexity Arguments for Efficient Minimization of the Bethe and Kikuchi Free Energies." JAIR 26 (2006).
[21] O. Meshi, A. Jaimovich, A. Globerson, and N. Friedman. "Convexifying the Bethe free energy". UAI. 2009.
[22] L. Vilnis, D. Belanger, D. Sheldon, and A. McCallum. "Bethe Projections for Non-Local Inference". UAI. 2015.
[23] M. Frank and P. Wolfe. "An algorithm for quadratic programming". Naval Res. Logist. Quart. (1956).
[24] M. Jaggi. "Revisiting Frank-Wolfe: Projection-free sparse convex optimization". ICML. 2013.
[25] A. Beck and M. Teboulle. "A fast iterative shrinkage-thresholding algorithm for linear inverse problems". SIAM Journal on Imaging Sciences 2.1 (2009), pp. 183-202.
[26] S. Kakade, S. Shalev-Shwartz, and A. Tewari. "On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization". Technical Report (2009).
[27] M. J. Wainwright. "Estimating the wrong graphical model: Benefits in the computation-limited setting". JMLR 7 (2006).
[28] B. London, B. Huang, and L. Getoor. "The benefits of learning with strongly convex approximate inference". ICML. 2015.
[29] R. Iyer, S. Jegelka, and J. Bilmes. "Fast Semidifferential-based Submodular Function Optimization". ICML. 2013.
[30] N. Ruozzi. "The Bethe partition function of log-supermodular graphical models". NIPS. 2012.
[31] A. Weller and T. Jebara. "Approximating the Bethe Partition Function". UAI. 2014.
[32] G. Papandreou and A. L. Yuille. "Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models". ICCV. 2011.
[33] T. Hazan and T. Jaakkola. "On the partition function and random maximum a-posteriori perturbations". ICML (2012).
[34] J. M. Mooij. "libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models". Journal of Machine Learning Research (2010), pp. 2169-2173.
[35] S. Diamond and S. Boyd. "CVXPY: A Python-Embedded Modeling Language for Convex Optimization". JMLR (2016). To appear.
[36] B. O'Donoghue, E. Chu, N. Parikh, and S. Boyd. "Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding". Journal of Optimization Theory and Applications (2016).
[37] Y. Boykov and V. Kolmogorov. "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision". Pattern Analysis and Machine Intelligence, IEEE Trans. on 26 (2004).
Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

Shashank Singh, Statistics & Machine Learning Departments, Carnegie Mellon University, sss1@andrew.cmu.edu
Barnabás Póczos, Machine Learning Department, Carnegie Mellon University, bapoczos@cs.cmu.edu

Abstract

We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees.

1 Introduction

Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in $\mathbb{R}^D$ to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25].

This paper considers the more general problem of estimating functionals of the form
$$F(P) := \mathbb{E}_{X \sim P}\left[f(p(X))\right], \qquad (1)$$
using n IID samples from P, where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k → ∞ as n → ∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence.

As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20].¹ The estimators in Table 1 are known to be weakly consistent,² but, except for Shannon entropy, no finite-sample bounds are known.

¹ MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48].
² Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39].
Table 1: Functionals with known bias-corrected k-NN estimators, their bias corrections, and references. All expectations are over $X \sim P$. $\Gamma(t) = \int_0^\infty x^{t-1} e^{-x}\,dx$ is the gamma function, and $\psi(x) = \frac{d}{dx} \log \Gamma(x)$ is the digamma function. $\alpha \in \mathbb{R} \setminus \{1\}$ is a free parameter. (*) For KL divergence, the bias corrections for p and q cancel.

  Shannon Entropy | $\mathbb{E}[\log p(X)]$ | additive constant: $\psi(n) - \psi(k) + \log(k/n)$ | [20][13]
  Rényi-α Entropy | $\mathbb{E}\left[p^{\alpha-1}(X)\right]$ | multiplicative constant: $\frac{\Gamma(k)}{\Gamma(k+1-\alpha)}$ | [25, 24]
  KL Divergence | $\mathbb{E}\left[\log \frac{p(X)}{q(X)}\right]$ | none (*) | [50]
  α-Divergence | $\mathbb{E}\left[\left(\frac{p(X)}{q(X)}\right)^{\alpha-1}\right]$ | multiplicative constant: $\frac{\Gamma^2(k)}{\Gamma(k-\alpha+1)\,\Gamma(k+\alpha-1)}$ | [39]

The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. Specifically, we show conditions under which, for α-Hölder continuous (α ∈ (0, 2]) densities on D-dimensional space, the bias of fixed-k estimators decays as $O\left(n^{-\alpha/D}\right)$ and the variance decays as $O\left(n^{-1}\right)$, giving a mean squared error of $O\left(n^{-2\alpha/D} + n^{-1}\right)$. Hence, the estimators converge at the parametric $O(n^{-1})$ rate when α ≥ D/2, and at the slower rate $O\left(n^{-2\alpha/D}\right)$ otherwise. A modification of the estimators would be necessary to leverage additional smoothness for α > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest.

We present our results for distributions P supported on the unit cube in $\mathbb{R}^D$ because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here.

2 Problem statement and notation

Let $\mathcal{X} := [0,1]^D$ denote the unit cube in $\mathbb{R}^D$, and let λ denote the Lebesgue measure. Suppose P is an unknown λ-absolutely continuous Borel probability measure supported on $\mathcal{X}$, and let $p : \mathcal{X} \to [0, \infty)$ denote the density of P. Consider a (known) differentiable function $f : (0, \infty) \to \mathbb{R}$. Given n samples $X_1, \ldots, X_n$ drawn IID from P, we are interested in estimating the functional
$$F(P) := \mathbb{E}_{X \sim P}\left[f(p(X))\right].$$
Somewhat more generally (as in divergence estimation), we may have a function $f : (0, \infty)^2 \to \mathbb{R}$ of two variables and a second unknown probability measure Q, with density q and n IID samples $Y_1, \ldots, Y_n$. Then, we are interested in estimating
$$F(P, Q) := \mathbb{E}_{X \sim P}\left[f(p(X), q(X))\right].$$
Fix $r \in [1, \infty]$ and a positive integer k. We will work with distances induced by the r-norm
$$\|x\|_r := \left(\sum_{i=1}^{D} |x_i|^r\right)^{1/r} \quad\text{and define}\quad c_{D,r} := \frac{\left(2\Gamma(1 + 1/r)\right)^D}{\Gamma(1 + D/r)} = \lambda(B(0, 1)),$$
where $B(x, \varepsilon) := \{y \in \mathbb{R}^D : \|x - y\|_r < \varepsilon\}$ denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances:

Definition 1. (k-NN distance): Given n IID samples $X_1, \ldots, X_n$ from P, for $x \in \mathbb{R}^D$, we define the k-NN distance $\rho_k(x)$ by $\rho_k(x) = \|x - X_i\|_r$, where $X_i$ is the $k^{\mathrm{th}}$-nearest element (in $\|\cdot\|_r$) of the set $\{X_1, \ldots, X_n\}$ to x. For divergence estimation, given n samples $Y_1, \ldots, Y_n$ from Q, we similarly define $\rho_k(x) = \|x - Y_i\|_r$, where $Y_i$ is the $k^{\mathrm{th}}$-nearest element of $\{Y_1, \ldots, Y_n\}$ to x.
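Definition 1 is straightforward to realize with a k-d tree. A minimal sketch in Python (the function name is ours; scipy's cKDTree supports the r-norms used here via its p argument):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_distances(X, k, query=None, r=2.0):
    """rho_k: distance (in the r-norm) to the k-th nearest element of X.
    With query=None, queries are the samples themselves and the trivial
    zero distance of each point to itself is skipped (hence k + 1)."""
    tree = cKDTree(X)
    if query is None:
        d, _ = tree.query(X, k=[k + 1], p=r)
        return d[:, 0]
    d, _ = tree.query(query, k=[k], p=r)   # e.g. a tree on Y for divergences
    return d[:, 0]
```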
λ-absolute continuity of P precludes the existence of atoms (i.e., $\forall x \in \mathbb{R}^D$, $P(\{x\}) = \lambda(\{x\}) = 0$). Hence, each $\rho_k(x) > 0$ a.s. We will require this to study quantities such as $\log \rho_k(x)$ and $1/\rho_k(x)$.

3 Estimator

3.1 k-NN density estimation and plug-in functional estimators

The k-NN density estimator
$$\hat{p}_k(x) = \frac{k/n}{\lambda(B(x, \rho_k(x)))} = \frac{k/n}{c_D\, \rho_k^D(x)}$$
is a well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0,
$$p(x) \approx \frac{P(B(x, \varepsilon))}{\lambda(B(x, \varepsilon))},$$
and that $P(B(x, \rho_k(x))) \approx k/n$. One can show that, for $x \in \mathbb{R}^D$ at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then $\hat{p}_k(x) \to p(x)$ in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F(P) is the plug-in estimator
$$\hat{F}_{PI} := \frac{1}{n} \sum_{i=1}^{n} f(\hat{p}_k(X_i)). \qquad (2)$$
Since $\hat{p}_k \to p$ in probability pointwise as k, n → ∞ and f is smooth, one can show $\hat{F}_{PI}$ is consistent, and in fact derive finite-sample convergence rates (depending on how k → ∞). For example, [44] show a convergence rate of $O\big(n^{-\min\{2\alpha/(\alpha+D),\,1\}}\big)$ for α-Hölder continuous densities (after sample splitting and boundary correction) by setting $k \asymp n^{\alpha/(\alpha+D)}$. Unfortunately, while necessary to ensure $\mathbb{V}[\hat{p}_k(x)] \to 0$, the requirement k → ∞ is computationally burdensome. Furthermore, increasing k can increase the bias of $\hat{p}_k$ due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F(P). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing.

3.2 Fixed-k functional estimators

An alternative approach is to fix k as n → ∞. Since $\hat{F}_{PI}$ is itself an empirical mean, unlike $\mathbb{V}[\hat{p}_k(x)]$, $\mathbb{V}[\hat{F}_{PI}] \to 0$ as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of $\hat{p}_k$ translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that
$$\mathbb{E}_{X_1,\ldots,X_n}\left[B\left(f\left(\frac{k/n}{\lambda(B(x, \rho_k(x)))}\right)\right)\right] = \mathbb{E}_{X_1,\ldots,X_n}\left[f\left(\frac{P(B(x, \rho_k(x)))}{\lambda(B(x, \rho_k(x)))}\right)\right]. \qquad (3)$$
For continuous p, the quantity
$$\tilde{p}_k(x) := \frac{P(B(x, \rho_k(x)))}{\lambda(B(x, \rho_k(x)))} \qquad (4)$$
is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator
$$\hat{F}_B(P) := \frac{1}{n} \sum_{i=1}^{n} B\left(f(\hat{p}_k(X_i))\right) = \frac{1}{n} \sum_{i=1}^{n} B\left(f\left(\frac{k/n}{\lambda(B(X_i, \rho_k(X_i)))}\right)\right)$$
that uses k/n in place of $P(B(x, \rho_k(x)))$. This estimate extends naturally to divergences:
$$\hat{F}_B(P, Q) := \frac{1}{n} \sum_{i=1}^{n} B\left(f(\hat{p}_k(X_i), \hat{q}_k(X_i))\right).$$
As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p,
$$\mathbb{E}\left[\log P(B(x, \rho_k(x)))\right] = \psi(k) - \psi(n).$$
Hence, for $B_{n,k} := \psi(k) - \psi(n) + \log(n) - \log(k)$,
$$\mathbb{E}_{X_1,\ldots,X_n}\left[f\left(\frac{k/n}{\lambda(B(x, \rho_k(x)))}\right) + B_{n,k}\right] = \mathbb{E}_{X_1,\ldots,X_n}\left[f\left(\frac{P(B(x, \rho_k(x)))}{\lambda(B(x, \rho_k(x)))}\right)\right],$$
giving the estimator of [20].
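For f = log, the additive correction just derived yields the estimator of [20] in a few lines. A minimal sketch, reusing knn_distances from above and assuming the Euclidean norm (r = 2), so that $c_D = \pi^{D/2} / \Gamma(D/2 + 1)$:

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_entropy(X, k=1):
    """Fixed-k (Kozachenko-Leonenko) estimate of H(p) = -E[log p(X)], in nats:
    the plug-in -mean(log p_hat_k) shifted by the additive correction
    B_{n,k} = (psi(k) - log k) - (psi(n) - log n) derived above."""
    n, D = X.shape
    rho = knn_distances(X, k)                              # sketch above
    log_cD = (D / 2) * np.log(np.pi) - gammaln(D / 2 + 1)  # log lambda(B(0,1))
    return digamma(n) - digamma(k) + log_cD + D * np.mean(np.log(rho))
```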
4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k ? ? as n ? ?). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a H?lder smoothness condition parametrized by ? ? (0, ?), the minimax mean squared error functionals of the form  rate for8?estimating  R f (p(x)) dx has been known since [6] to be O n? min{ 4?+D ,1} . [22] recently derived identical minimax rates for divergence estimation.   2? Most of the above estimators have been shown to converge at the rate O n? min{ ?+D ,1} . Only the von Mises approach [22] is known to achieve the minimax rate for general ? and D, but due to its computational demand (O(2D n3 )), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for ? ? (0, 2], bias-corrected fixed-k  ? min{ 2? D ,1} . For ? > 2, modifications are estimators converge at the relatively fast rate O n needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown ?; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F?B (P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n?1 ). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n?? ), under the assumptions that p is ?-H?lder continuous (? ? (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n?? ) under these conditions; we require additional conditions at the boundary of X .   Fixed-k estimators can be computed in O Dn2 time, or O 2D n log n using k-d trees for small D. 4 Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. 3 4 ? 
[49] studied a closely related entropy estimator for which they prove n-consistency. Their estimator ? is identical to?the KL estimator, except that it truncates k-NN distances at n, replacing ?k (x) with min{?k (x), n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k  log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f (z) = z ? , ? < 0) have singularities at 0. 2. The k-NN estimate p?k (x) of p(x) is highly biased when p(x) is small. For example, for p ?-H?lder continuous (? ? (0, 2]), one has ([29], Theorem 2)  ?/D k Bias(? pk (x))  . (5) np(x) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p? := inf x?X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ?X of X ). 5 The boundary bias of the density estimate p?k (x) does vanish at x in the interior X ? of X as n ? ?, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ? but also on ?X (i.e., p(x) ? 0 as dist(x, ?X ) ? 0). (A3) p is supported on all of RD . That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating ?k (x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ? > 0 and a function p? : X ? (0, ?) such that, for all x ? X , r ? (0, ?], (B(x,r)) p? (x) ? P?(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as ? long as p is continuous on X , such a p? exists for any desired ? > 0. For simplicity, we will use ? = D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form " # Z 1 p(x) = d?(x) E ?/D ?/D X?P (p? (X)) X (p? (x)) (with an additional f 0 (p? (x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ?X . 
For many functionals, Lemma 6 gives a simple sufficient condition.

6 Preliminary lemmas

Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section.

Lemma 2. (Existence of Local Bounds) If p is continuous on $\mathcal{X}$ and strictly positive on the interior $\mathcal{X}^\circ$ of $\mathcal{X}$, then, for $\rho := \sqrt{D} = \mathrm{diam}(\mathcal{X})$, there exists a continuous function $p_* : \mathcal{X}^\circ \to (0, \infty)$ and a constant $p^* \in (0, \infty)$ such that
$$0 < p_*(x) \le \frac{P(B(x, r))}{\lambda(B(x, r))} \le p^* < \infty, \qquad \forall x \in \mathcal{X},\ r \in (0, \rho].$$

We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order $(k/(n p(x)))^{1/D}$. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves:

Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on $\mathcal{X}$ and strictly positive on $\mathcal{X}^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2. Then, for any $x \in \mathcal{X}^\circ$,

1. if $r > \left(\frac{k}{p_*(x)\,n}\right)^{1/D}$, then $\Pr[\rho_k(x) > r] \le e^{-p_*(x) r^D n} \left(\frac{e\, p_*(x)\, r^D n}{k}\right)^k$;

2. if $r \in \left(0, \left(\frac{k}{p^*\, n}\right)^{1/D}\right]$, then $\Pr[\rho_k(x) < r] \le e^{-p_*(x) r^D n} \left(\frac{e\, p^*\, r^D n}{k}\right)^{k\, p_*(x)/p^*}$.

It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on $p^*$. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio $\tilde{p}_k / p_*$. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to the consistency proofs of [25] and [39] for α-entropy and divergence estimators.

Lemma 4. Let p be continuous on $\mathcal{X}$ and strictly positive on $\mathcal{X}^\circ$. Define $p_*$ and $p^*$ as in Lemma 2. Suppose $f : (0, \infty) \to \mathbb{R}$ is continuously differentiable and $f' > 0$. Then, we have the upper bound⁶
$$\sup_{x \in \mathcal{X}^\circ} \mathbb{E}\left[f_+\!\left(\frac{p_*(x)}{\tilde{p}_k(x)}\right)\right] \le f_+(1) + e\sqrt{k} \int_k^\infty \frac{e^{-y} y^k}{\Gamma(k+1)}\, f_+\!\left(\frac{y}{k}\right) dy, \qquad (6)$$
and, for all $x \in \mathcal{X}^\circ$, for $\kappa(x) := k\, p_*(x)/p^*$, the lower bound
$$\mathbb{E}\left[f_-\!\left(\frac{p_*(x)}{\tilde{p}_k(x)}\right)\right] \le f_-(1) + e\sqrt{\frac{k}{\kappa(x)}} \int_0^{\kappa(x)} \frac{e^{-y} y^{\kappa(x)}}{\Gamma(\kappa(x)+1)}\, f_-\!\left(\frac{y}{k}\right) dy. \qquad (7)$$

Note that plugging the function $z \mapsto f\!\left(\left(\frac{k z}{c_{D,r}\, n\, p_*(x)}\right)^{1/D}\right)$ into Lemma 4 gives bounds on $\mathbb{E}[f(\rho_k(x))]$. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order $\left(\frac{k}{n p(x)}\right)^{1/D}$. For example, for any β > 0, a simple calculation from (6) gives
$$\mathbb{E}\left[\rho_k^\beta(x)\right] \le \left(1 + \frac{\beta}{D}\right) \left(\frac{k}{c_{D,r}\, n\, p_*(x)}\right)^{\beta/D}. \qquad (8)$$
(8) is used for our bias bound, and more direct applications of Lemma 4 are used in the variance bound.

⁶ $f_+(x) = \max\{0, f(x)\}$ and $f_-(x) = -\min\{0, f(x)\}$ denote the positive and negative parts of f. Recall that $\mathbb{E}[f(X)] = \mathbb{E}[f_+(X)] - \mathbb{E}[f_-(X)]$.
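The Erlang behavior underlying Lemma 4 is easy to check numerically: for the uniform density on [0, 1] (D = 1) and an interior query point, the normalized ball mass $n\,P(B(x, \rho_k(x)))$ should have approximately the mean and variance (both equal to k) of an Erlang(k) variable. A small simulation sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials, x = 1000, 3, 2000, 0.5
u = np.empty(trials)
for t in range(trials):
    X = rng.uniform(0.0, 1.0, n)
    r = np.sort(np.abs(X - x))[k - 1]        # rho_k(x) for this sample
    # p is Uniform[0,1], so P(B(x, r)) is just the interval length:
    u[t] = n * (min(x + r, 1.0) - max(x - r, 0.0))
print(u.mean(), u.var())   # both should be close to k (Erlang(k) moments)
```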
7 Main results

Here, we present our main results on the bias and variance of $\hat{F}_B(P)$. Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias:

Theorem 5. (Bias Bound) Suppose that, for some α ∈ (0, 2], p is α-Hölder continuous with constant L > 0 on $\mathcal{X}$, and p is strictly positive on $\mathcal{X}^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2. Let $f : (0, \infty) \to \mathbb{R}$ be differentiable, and define $M_{f,p} : \mathcal{X} \to [0, \infty)$ by
$$M_{f,p}(x) := \sup_{z \in [p_*(x),\, p^*]} \left|\frac{d}{dz} f(z)\right|.$$
Assume
$$C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\alpha/D}}\right] < \infty. \quad\text{Then,}\quad \left|\mathbb{E}\,\hat{F}_B(P) - F(P)\right| \le C_f L \left(\frac{k}{n}\right)^{\alpha/D}.$$

The statement for divergences is similar, assuming that q is also α-Hölder continuous with constant L and strictly positive on $\mathcal{X}^\circ$. Specifically, we get the same bound if we replace $M_{f,p}$ with
$$M_{f,p}(x) := \sup_{(w,z) \in [p_*(x),\, p^*] \times [q_*(x),\, q^*]} \left|\frac{\partial}{\partial w} f(w, z)\right|$$
and define $M_{f,q}$ similarly (i.e., with $\frac{\partial}{\partial z}$), and we assume that
$$C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\alpha/D}}\right] + \mathbb{E}_{X \sim p}\left[\frac{M_{f,q}(X)}{(q_*(X))^{\alpha/D}}\right] < \infty.$$

As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then $f(z) = \log(z)$, and so we need $C_f = \int_{\mathcal{X}} (p_*(x))^{-\alpha/D}\, d\lambda(x) < \infty$.

The assumption $C_f < \infty$ is not immediately transparent. For the functionals in Table 1, $C_f$ has the form $\int_{\mathcal{X}} (p(x))^{-c}\, dx$, for some c > 0, and hence $C_f < \infty$ intuitively means p(x) cannot approach zero too quickly as $\mathrm{dist}(x, \partial\mathcal{X}) \to 0$. The following lemma gives a formal sufficient condition:

Lemma 6. (Boundary Condition) Let c > 0. Suppose there exist $b_* \in (0, \frac{1}{c})$, $c_*, \rho_* > 0$ such that, for all $x \in \mathcal{X}$ with $\delta(x) := \mathrm{dist}(x, \partial\mathcal{X}) < \rho_*$, $p(x) \ge c_*\, \delta^{b_*}(x)$. Then $\int_{\mathcal{X}} (p_*(x))^{-c}\, d\lambda(x) < \infty$.

In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to $x^{b_*}$ near $\partial\mathcal{X}$ (i.e., those with at least $b_*$ nonzero one-sided derivatives on the boundary).

We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in $(\mathbb{R}^D, \|\cdot\|_p)$, there exists a constant $N_{k,D}$ (independent of n) such that any sample $X_i$ can be amongst the k-nearest neighbors of at most $N_{k,D}$ other samples. Hence, at most $N_{k,D} + 1$ of the terms in (2) can change when a single $X_i$ is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbor statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study.

Theorem 7. (Variance Bound) Suppose $B \circ f$ is continuously differentiable and strictly monotone. Assume $C_{f,p} := \mathbb{E}_{X \sim P}\left[B^2(f(p_*(X)))\right] < \infty$, and $C_f := \int_0^\infty e^{-y} y^k f(y)\, dy < \infty$. Then, for
$$C_V := 2\,(1 + N_{k,D})(3 + 4k)(C_{f,p} + C_f), \quad\text{we have}\quad \mathbb{V}\left[\hat{F}_B(P)\right] \le \frac{C_V}{n}.$$

As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require $\int_{\mathcal{X}} p(x) \log^2(p_*(x))\, d\lambda(x) < \infty$. In general, $N_{k,D}$ is of the order $k\,2^{cD}$, for some c > 0. Our bound is likely quite loose in k; in practice, $\mathbb{V}[\hat{F}_B(P)]$ typically decreases somewhat with k.
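As a second instance covered by Theorems 5 and 7, the Rényi-α functional from Table 1 admits an equally short implementation. A sketch under the conventions of [25, 24]; we assume the (n-1)-sample normalization of [25], and other normalizations differ only by constants absorbed into the multiplicative correction:

```python
import numpy as np
from scipy.special import gammaln

def renyi_functional(X, alpha, k=3):
    """Fixed-k estimate of F(P) = E[p^{alpha-1}(X)] with the multiplicative
    correction Gamma(k)/Gamma(k+1-alpha) from Table 1 [25, 24].
    Requires alpha < k + 1 so that the gamma argument stays positive."""
    n, D = X.shape
    rho = knn_distances(X, k)                              # sketch above, r = 2
    log_cD = (D / 2) * np.log(np.pi) - gammaln(D / 2 + 1)
    log_zeta = np.log(n - 1) + log_cD + D * np.log(rho)    # log((n-1) c_D rho^D)
    est = np.mean(np.exp((1 - alpha) * log_zeta))
    return np.exp(gammaln(k) - gammaln(k + 1 - alpha)) * est

# Renyi entropy: H_alpha(p) = log(renyi_functional(X, alpha)) / (1 - alpha)
```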
8 Conclusions and discussion

In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance:

Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7,
$$\mathbb{E}\left[\left(\hat{H}_k(X) - H(X)\right)^2\right] \le C_f^2 L^2 \left(\frac{k}{n}\right)^{2\alpha/D} + \frac{C_V}{n}. \qquad (9)$$

Choosing k: Contrary to the name, fixing k is not required for "fixed-k" estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F(P). However, there has been no theoretical justification for fixing k. Assuming tightness of our bias bound in k, we provide this in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improve the asymptotic variance of the estimator, with the rate $k \asymp \log^5 n$ leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test).

Acknowledgments

This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.

References

[1] C. Adami. Information theory in molecular biology. Physics of Life Reviews, 1:3-22, 2004.
[2] M. Aghagolzadeh, H. Soltanian-Zadeh, B. Araabi, and A. Aghagolzadeh. A hierarchical clustering based on mutual information maximization. In Proc. of IEEE International Conf. on Image Processing, 2007.
[3] P. A. Alemany and D. H. Zanette. Fractal random walks from a variational formalism for Tsallis entropies. Phys. Rev. E, 49(2):R956-R958, Feb 1994. doi: 10.1103/PhysRevE.49.R956.
[4] Thomas B. Berrett, Richard J. Samworth, and Ming Yuan. Efficient multivariate entropy estimation via k-nearest neighbour distances. arXiv preprint arXiv:1606.00304, 2016.
[5] Gérard Biau and Luc Devroye. Entropy estimation. In Lectures on the Nearest Neighbor Method, pages 75-91. Springer, 2015.
[6] L. Birge and P. Massart. Estimation of integral functions of a density. Annals of Statistics, 23:11-29, 1995.
[7] B. Chai, D. B. Walther, D. M. Beck, and L. Fei-Fei. Exploring functional connectivity of the human brain using multivariate information analysis. In NIPS, 2009.
[8] Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 3437-3445, 2014.
[9] Kamalika Chaudhuri, Sanjoy Dasgupta, Samory Kpotufe, and Ulrike von Luxburg. Consistent procedures for cluster tree estimation and pruning. IEEE Trans. on Information Theory, 60(12):7900-7912, 2014.
[10] Bradley Efron and Charles Stein. The jackknife estimate of variance. Ann. of Stat., pages 586-596, 1981.
[11] D. Evans. A law of large numbers for nearest neighbor statistics. In Proceedings of the Royal Society, volume 464, pages 3175-3192, 2008.
[12] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016.
[13] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. J. Nonparametric Stat., 17:277-297, 2005.
[14] A. O. Hero, B. Ma, O. Michel, and J. Gorman. Alpha-divergence for classification, indexing and retrieval, 2002. Communications and Signal Processing Laboratory Technical Report CSPL-328.
[15] A. O. Hero, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85-95, 2002.
[16] K. Hlaváčková-Schindler, M. Paluš, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441:1-46, 2007.
[17] M. M. Van Hulle. Constrained subspace ICA based on mutual information optimization directly. Neural Computation, 20:964-973, 2008.
[18] Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, et al. Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, pages 397-405, 2015.
[19] Aryeh Kontorovich and Roi Weiss. A Bayes consistent 1-NN classifier. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 480-488, 2015.
[20] L. F. Kozachenko and N. N. Leonenko. A statistical estimate for the entropy of a random vector. Problems of Information Transmission, 23:9-16, 1987.
[21] Samory Kpotufe and Ulrike V. Luxburg. Pruning nearest neighbor cluster trees. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 225-232, 2011.
[22] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. Wasserman. Nonparametric estimation of Renyi divergence and friends. In International Conference on Machine Learning (ICML), 2014.
[23] E. G. Learned-Miller and J. W. Fisher. ICA using spacings estimates of entropy. J. Machine Learning Research, 4:1271-1295, 2003.
[24] N. Leonenko and L. Pronzato. Correction of "A class of Rényi information estimators for multidimensional densities". Ann. Statist., 36 (2008) 2153-2182, 2010.
[25] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
[26] J. Lewi, R. Butera, and L. Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In Advances in Neural Information Processing Systems, volume 19, 2007.
[27] H. Liu, J. Lafferty, and L. Wasserman. Exponential concentration inequality for mutual information estimation. In Neural Information Processing Systems (NIPS), 2012.
[28] D. O. Loftsgaarden and C. P. Quesenberry. A nonparametric estimate of a multivariate density function. Ann. Math. Statist., 36:1049-1051, 1965.
[29] Y. P. Mack and M. Rosenblatt. Multivariate k-nearest neighbor density estimates. J. Multivar. Analysis, 1979.
[30] Kevin Moon and Alfred Hero. Multivariate f-divergence estimation with confidence. In Advances in Neural Information Processing Systems, pages 2420-2428, 2014.
[31] Kevin R. Moon and Alfred O. Hero. Ensemble estimation of multivariate f-divergence. In Information Theory (ISIT), 2014 IEEE International Symposium on, pages 356-360. IEEE, 2014.
[32] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, to appear, 2010.
[33] J. Oliva, B. Poczos, and J. Schneider. Distribution to distribution regression. In International Conference on Machine Learning (ICML), 2013.
[34] D. Pál, B. Póczos, and Cs. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Proceedings of the Neural Information Processing Systems, 2010.
[35] H. Peng and C. Ding. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. on Pattern Analysis and Machine Intelligence, 27, 2005.
[36] F. Pérez-Cruz. Estimation of information theoretic measures for continuous random variables. In Advances in Neural Information Processing Systems 21, 2008.
[37] B. Póczos and A. Lőrincz. Independent subspace analysis using geodesic spanning trees. In ICML, 2005.
[38] B. Póczos and A. Lőrincz. Identification of recurrent neural networks by Bayesian interrogation techniques. J. Machine Learning Research, 10:515-554, 2009.
[39] B. Poczos and J. Schneider. On the estimation of alpha-divergences. In International Conference on AI and Statistics (AISTATS), volume 15 of JMLR Workshop and Conference Proceedings, pages 609-617, 2011.
[40] B. Poczos, L. Xiong, D. Sutherland, and J. Schneider. Nonparametric kernel estimators for image classification. In 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[41] C. Shan, S. Gong, and P. W. McOwan. Conditional mutual information based boosting for facial expression recognition. In British Machine Vision Conference (BMVC), 2005.
[42] S. Singh and B. Poczos. Exponential concentration of a density functional estimator. In Neural Information Processing Systems (NIPS), 2014.
[43] S. Singh and B. Poczos. Generalized exponential concentration inequality for Rényi divergence estimation. In International Conference on Machine Learning (ICML), 2014.
[44] Kumar Sricharan, Raviv Raich, and Alfred O. Hero. k-nearest neighbor estimation of entropies with confidence. In IEEE International Symposium on Information Theory, pages 1205-1209. IEEE, 2011.
[45] Kumar Sricharan, Raviv Raich, and Alfred O. Hero III. Estimation of nonlinear functionals of densities with confidence. Information Theory, IEEE Transactions on, 58(7):4135-4159, 2012.
[46] Kumar Sricharan, Dennis Wei, and Alfred O. Hero. Ensemble estimators for multivariate entropy estimation. IEEE Transactions on Information Theory, 59(7):4374-4388, 2013.
[47] Z. Szabó, B. Póczos, and A. Lőrincz. Undercomplete blind subspace deconvolution. J. Machine Learning Research, 8:1063-1095, 2007.
[48] Zoltán Szabó. Information theoretical estimators toolbox. Journal of Machine Learning Research, 15:283-287, 2014. (https://bitbucket.org/szzoli/ite/).
[49] A. B. Tsybakov and E. C. van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian J. Statistics, 23:75-83, 1996.
[50] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5), 2009.
[51] E. Wolsztynski, E. Thierry, and L. Pronzato. Minimum-entropy estimation in semi-parametric models. Signal Process., 85(5):937-949, 2005. ISSN 0165-1684.
5,663
6,124
Achieving Budget-Optimality with Adaptive Schemes in Crowdsourcing

Ashish Khetan and Sewoong Oh
Department of ISE, University of Illinois at Urbana-Champaign
Email: {khetan2,swoh}@illinois.edu

Abstract
Adaptive schemes, where tasks are assigned based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently allocate the budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal. To bridge this gap, we investigate this question under a strictly more general probabilistic model, which has been recently introduced to model practical crowdsourcing datasets. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy. We introduce a novel adaptive scheme that matches this fundamental limit. A given budget is allocated over multiple rounds. In each round, a subset of tasks with high enough confidence are classified, and increasing budget is allocated on the remaining ones that are potentially more difficult. On each round, decisions are made based on the leading eigenvector of the (weighted) non-backtracking operator corresponding to the bipartite assignment graph. We further quantify the gain of adaptivity by comparing this trade-off with the one for non-adaptive schemes, and confirm that the gain is significant and can be made arbitrarily large depending on the distribution of the difficulty levels of the tasks at hand.

1 Introduction
Crowdsourcing platforms provide labor markets in which pieces of micro-tasks are electronically distributed to a pool of workers. In typical crowdsourcing scenarios, such as those on Amazon's Mechanical Turk, a requester posts a collection of tasks, and a batch is picked up by any worker who is willing to complete it. The worker is subsequently rewarded for each task he/she completes. However, some workers are spammers trying to make easy money, and since the reward is small and the tasks are tedious, errors are common even among those who try. To correct for the errors, a common approach is to introduce redundancy by assigning each task to multiple workers and aggregating their responses using a scheme such as majority voting. A fundamental problem of interest is how to maximize the accuracy of the inferred solutions while using as small a number of repetitions as possible. There are two challenges in achieving such an optimal trade-off between accuracy and budget: (a) we need a scheme for deciding which tasks to assign to which workers; and (b) at the same time, we must infer the true solutions from the workers' responses. Since the workers are fleeting, the requester has no control over who gets to work on which tasks, and it is impossible to build a trust relationship with the workers. In particular, it does not make sense to explore reliable workers and exploit them in subsequent steps: each arriving worker is completely new, and you may never get him back. Nevertheless, by comparing responses from multiple workers, we can estimate the true answer to a task and use it in subsequent steps to learn the reliability of the workers. Our beliefs on the true answers, as well as on the difficulty of the tasks and the reliability of the workers, can be iteratively refined, and one can potentially choose to assign more workers to the more difficult tasks. We would like to understand this intricate interplay of task assignment and inference.
Setup. We have m binary classification tasks to be completed by workers. To model the responses, we assume a recent generalization of the Dawid-Skene model, introduced in [22], which captures heterogeneity in the tasks as well as in the workers. Precisely, the j-th arriving worker is parametrized by a quality parameter p_j ∈ [0, 1], drawn i.i.d. from a prior distribution F. The i-th task is parametrized by a difficulty parameter q_i ∈ [0, 1], drawn i.i.d. from a prior distribution G. When a worker is assigned task i, the task is perceived as a positive task with probability q_i, and as a negative task otherwise. Hence, a task with q_i close to one half is confusing and difficult to classify correctly, and easy if q_i is close to zero or one. When task i is assigned to worker j, the response is a noisy perception of the task:

A_ij = { +1, w.p. q_i p_j + q̄_i p̄_j ;  −1, w.p. q̄_i p_j + q_i p̄_j },   (1)

where q̄_i = 1 − q_i and p̄_j = 1 − p_j. With probability p_j the worker answers truthfully as he perceives the task, and otherwise he gives the opposite answer. Hence, if p_j is close to one he tells the truth (in his opinion), and if it is close to one half he gives random answers. If p_j is zero, he is also reliable, in the sense that a requester who can correctly decode his reliability can extract the truth exactly. We define the ground truth of a task as what the majority of the workers would agree on, had we asked all the workers. Accordingly, we assume that E_F[p_j] > 1/2 and define the true labels as t_i = I{q_i > 1/2} − I{q_i < 1/2}. Otherwise, we impose no condition on the distribution of the p_j's. We do, however, assume that the q_i's are discrete random variables supported on K points. Our results do not directly depend on this support size K, so K can be arbitrarily large. Note that we focus on binary tasks with two classes, and that the workers are assumed to be symmetric, i.e. the error probability does not depend on the perceived label of the task. The original Dawid-Skene model, introduced in [3] and analyzed in [9], is the special case in which all tasks are equally easy, i.e. each q_i is either one or zero. This makes inference easier, as every task is perceived as its true class; the only source of error is then the workers' noisy responses.
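As a concrete illustration of the response model in Eq. (1), the following is a minimal Python sketch (not from the paper; the spammer-hammer prior F and the three difficulty levels for G are assumed here for illustration, matching the synthetic experiments later in the paper):

```python
import numpy as np

def sample_responses(m, n, rng):
    """Sample an m x n response matrix under the generalized Dawid-Skene
    model of Eq. (1). Assumed priors: worker quality p_j is spammer-hammer
    (hammer w.p. 0.3), task difficulty q_i is uniform on {0.6, 0.8, 1.0}."""
    p = np.where(rng.random(n) < 0.3, 1.0, 0.5)   # hammer (always truthful) or spammer
    q = rng.choice([0.6, 0.8, 1.0], size=m)       # task difficulty parameters
    # Worker j perceives task i as positive w.p. q_i, then tells the truth w.p. p_j,
    # so P(A_ij = +1) = q_i p_j + (1 - q_i)(1 - p_j), exactly Eq. (1).
    perceived = np.where(rng.random((m, n)) < q[:, None], 1, -1)
    truthful = rng.random((m, n)) < p[None, :]
    A = np.where(truthful, perceived, -perceived)
    t = np.where(q > 0.5, 1, -1)                  # ground-truth labels
    return A, t, p, q

rng = np.random.default_rng(0)
A, t, p, q = sample_responses(m=1000, n=1000, rng=rng)
```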
We assume the following task assignment scenario to model practical crowdsourcing systems. It is a discrete-time system: at the beginning of each time step, the requester can create a batch of tasks; this batch is picked up by a new arriving worker, and his/her responses are collected. To model real-world constraints, we assume there is a limit on how many tasks a single worker can complete, which we denote by r. The requester (also called the task master) has no control over who arrives next, but he has control over which of the m tasks are to be solved by the next arriving worker. This allows for adaptive task assignment schemes, where the requester can choose to include those tasks he is most uncertain about, based on the whole history of responses collected thus far. We consider all randomized task assignment schemes whose expected number of assignments per task is ℓ, and all inference algorithms. We study the minimax rate when nature chooses the worst-case priors F and G (from a family of priors parametrized by the average worker reliability β and the average task difficulty λ defined in (2)), and we choose the best possible adaptive task assignment together with the best possible inference algorithm. We further propose a novel adaptive approach that achieves this minimax rate up to a constant factor. Our approach differs from the existing adaptive scheme of [5], where there are multiple types of tasks and the main source of uncertainty is which type the next arriving worker is expert on; there, golden tasks with known answers are used to explore expertise, and tasks are assigned accordingly.

Related work. Existing work on crowdsourcing systems studies the standard Dawid-Skene (DS) model [3], in which all tasks are equally difficult and hence q_i ∈ {0, 1} for all tasks. Several inference algorithms have been proposed [3, 17, 6, 16, 4, 7, 11, 23, 10, 21, 2, 8, 14], and the question of task assignment is addressed in [9], where the minimax rate on the probability of error is characterized and a matching task assignment scheme and inference algorithm are proposed. Perhaps surprisingly, for the standard DS model a non-adaptive task assignment scheme achieves the fundamental limit. Namely, given m tasks and a total budget of mℓ responses, the requester first constructs a bipartite task-assignment graph with m task nodes, n = mℓ/r worker nodes, and edges drawn uniformly at random with degree ℓ for the task nodes and degree r for the worker nodes. The j-th arriving worker is then assigned the batch of r tasks adjacent to the j-th worker node. Together with an inference algorithm explained in detail in Section 2, this achieves near-optimal performance: to achieve an average probability of error ε, it suffices to have total budget O((m/β) log(1/ε)), where β = E_F[(2p_j − 1)²] is the quality of the workers defined in (2). Perhaps surprisingly, no adaptive assignment can improve upon this. Even the best adaptive scheme with the best inference algorithm still requires Ω((m/β) log(1/ε)) total budget, so there is no gain in adaptivity. This negative result relies crucially on the fact that, under the standard DS model, all tasks are inherently equally difficult, so adaptively assigning more workers to relatively more ambiguous tasks yields only a marginal gain. However, simple adaptive schemes are widely used in practice, where significant gains are achieved; in real-world systems, tasks are widely heterogeneous. To capture such varying difficulties in the tasks, generalizations of the DS model were proposed in [19, 18, 22, 15], and significant improvements have been reported on inference problems for real datasets. The generalized DS model serves as the missing piece in bridging the gap between the practical gains of adaptivity and its theoretical limitations. We investigate the fundamental question of "do adaptive task assignments improve accuracy?" under this generalized Dawid-Skene model of Eq. (1).
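The (ℓ, r)-regular random assignment graph described above can be generated with a standard configuration-model construction. Here is a minimal sketch (assuming, for simplicity, that mℓ is divisible by r; this divisibility assumption is ours, not the paper's):

```python
import numpy as np

def regular_bipartite_assignment(m, ell, r, rng):
    """Configuration model: m task nodes of degree ell, n = m*ell/r worker
    nodes of degree r. Returns a list of (task, worker) assignment edges.
    Multi-edges are possible but rare, and are typically ignored in the analysis."""
    assert (m * ell) % r == 0, "total budget m*ell must be divisible by worker load r"
    n = (m * ell) // r
    task_stubs = np.repeat(np.arange(m), ell)     # each task node appears ell times
    rng.shuffle(task_stubs)                        # random matching of stubs
    worker_stubs = np.repeat(np.arange(n), r)      # each worker node appears r times
    return list(zip(task_stubs, worker_stubs))

rng = np.random.default_rng(0)
edges = regular_bipartite_assignment(m=12, ell=3, r=4, rng=rng)
```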
Contributions. To investigate the gain of adaptivity, we first characterize the fundamental lower bound on the budget required to achieve a target accuracy. To match this fundamental limit, we introduce a novel adaptive task assignment scheme. Our approach consists of multiple rounds of non-adaptive schemes, and we provide sharp analyses of the performance at each round, which guide the design of the task assignment in each round, adaptively, using the data from previous rounds. The proposed adaptive task assignment is simple to apply in practice, and numerical simulations confirm its superiority over state-of-the-art non-adaptive schemes. Under a certain assumption on the choice of parameters in the algorithm, which requires moderate access to an oracle, we can prove that the performance of the proposed adaptive scheme matches the fundamental limit up to a constant factor. Finally, we quantify the gain of adaptivity by proving a strictly larger lower bound on the budget required by any non-adaptive scheme. Precisely, we show that the minimax rate on the budget required to achieve a target average error rate ε scales as Θ((m/λβ) log(1/ε)). The dependence on the priors F and G is captured solely by β (the quality of the crowd as a whole) and λ (the quality of the tasks as a whole). We show that the fundamental trade-off for non-adaptive schemes is Θ((m/λ_min β) log(1/ε)), requiring a factor of λ/λ_min more budget for non-adaptive schemes. This factor λ/λ_min is precisely how much we gain by adaptivity, and it can be made arbitrarily large under a worst-case distribution G.

2 Main Results
The following quantities are fundamental in capturing the dependence of the minimax rate on the distribution of task difficulties and worker reliabilities:

λ ≡ (E_G[1/(2q_i − 1)²])^(−1),   ρ ≡ E_G[(2q_i − 1)²],   and   β ≡ E_F[(2p_j − 1)²].   (2)

Let n denote the total number of workers used, let T_j denote the set of all tasks assigned to worker j ∈ [n], and let W_i denote the set of all workers assigned to task i ∈ [m] by the time the adaptive task assignment scheme has terminated. We consider discrete distributions G with K types of tasks of varying difficulty levels. Define the effective difficulty level of task i as λ_i ≡ (2q_i − 1)², and let λ_min = min_{i∈[m]} λ_i. A task with small λ_i is more difficult, since q_i close to 1/2 means the task is more ambiguous. Let δ_a denote the fraction of tasks having difficulty level λ_a for a ∈ [K], such that Σ_{a∈[K]} δ_a = 1, and let δ_max ≡ max_{a∈[K]} δ_a and δ_min ≡ min_{a∈[K]} δ_a.

2.1 Fundamental limit under the adaptive scenario
We prove a lower bound on the minimax error rate: the error achieved by the best inference algorithm t̂ using the best adaptive task assignment scheme τ, under a worst-case worker distribution F and worst-case true answers t for the given distribution of the difficulty levels λ_i. Note that given λ_i, either q_i = (1 + √λ_i)/2, in which case t_i = 1, or q_i = (1 − √λ_i)/2, in which case t_i = −1. Let T_ℓ be the set of all task assignment schemes that use at most mℓ queries in total, and let F_β be the set of all worker distributions with expected worker quality β, i.e. F_β ≡ {F | E_F[(2p_j − 1)²] = β}. We can then show the following lower bound on the minimax probability of error; a proof of this theorem is provided in Section 4 of the supplementary material.

Theorem 2.1. When β < 1, there exists a positive constant C′ such that for each task i ∈ [m],

min_{τ∈T_ℓ, t̂}  max_{t∈{±1}^m, F∈F_β}  P[t_i ≠ t̂_i | λ_i]  ≥  (1/2) e^{−C′ λ_i β E[|W_i| | λ_i]}.
a6=a0 (?a0 /?a0 ) log(?a /?a0 ) ?a e , e 2 a=1 where the equality follows from solving the optimization problem. Note that the summand in the bound does not depend upon the budget `, and it is lower bounded by ?min > 0. The error scales as 0 e?C `?? , where ? = 1/(E[1/?i ]) as defined in (2), and captures how difficult the set of tasks are collectively. This gives a lower bound on the budget ? required to achieve error ?; there exists a constant C 00 such that if   ?min 00 m log , (4) ?? ? C ?? ? then no task assignment scheme (adaptive or not) with any inference algorithm can achieve error less than . Intuitively, ? captures the (collective) quality of the workers as specified by F and ? captures the (collective) difficulty of the tasks as specified by G . This recovers the known fundamental limit  1 for standard DS model where all tasks have ?i = 1 and hence ? = 1 in [9]: ?? > C 000 m ? log  . 2.2 Upper bound on the achievable error rate We present an adaptive task assignment scheme and an iterative inference algorithm that asymptotically achieve an error rate of C1 e?(C? /4)`?? , when m grows large and ` = ?(log m) where C1 = log2 (2?max /?min ) log2 (?1 /?K ). This matches the lower bound in (3) and the expected number of queries (or task-worker assignments) is bounded by m`. Comparing it to a fundamental lower bound in Theorem 2.1 establishes the near-optimality of our approach, and the sufficient condition to achieve average error ? is for the average total budget to be larger than, C  m 1 ?? ? C 0 log . (5) ?? ? 2.2.1 Adaptive algorithm Since difficulty level is varying across the tasks, it is intuitive to assign fewer workers to easy tasks and more workers to hard tasks. Suppose we know the difficulty levels, then optimizing the lower bound (3) over `?i ?s, it suggests to assign `?i ' `(?/?i ) workers to the task i with difficulty ?i , when given a fixed budget of ` workers per task on average. However, the difficulty levels are not known. Non-adaptive schemes can be arbitrarily worse (see Theorem 2.4). We propose a novel approach of adaptively assigning workers in multiple rounds, refining our belief on ?i , and making decisions on the tasks with higher confidence. The main algorithmic component is the sub-routine in line 8-13 of Algorithm 1. For a choice of the (per task) budget `t , we collect responses according to a (`t , rt = `t ) regular random graph on |M | tasks and |M | workers. The leading eigen-vector of the non-backtracking operator on this bipartite graph, weighted by the ?1 responses reveals a noisy observation of the true class and the difficulty levels of the tasks. Let x ? R|M | denote the top left eigenvector, computed as per Algorithm 2. Then the i-th entry xi asymptotically converges in the large number of tasks m limit to a Gaussian random variable with mean proportional to the difficulty level (2qi ? 1), with mean and variance specified in Lemma 5.1 in the the supplementary material. This non-backtracking operator approach to crowdsourcing was first introduced in [7] for the standard DS model, is a single-round non-adaptive scheme, and uses a threshold of zero to classify tasks based on the sign of xi ?s. We generalize their analysis to this generalized DS model in Theorem 2.3 for finite sample regime, and further give a sharper characterization based on central limit theorem in the asymptotic regime (Lemma 5.1 in the supplementary material). 4 This provides us a sub-routine that reveals (2qi ? 1)?s we want, corrupted by additive Gaussian noise. 
This Gaussian-observation structure resembles the setting of racing algorithms, introduced in [12], where the goal is to choose the variable (i.e. task) with the largest mean (i.e. the easiest one) with minimal budget. Our goal, however, is to identify the sign of the mean of each variable (i.e. its class) with sufficient accuracy. The key idea is to classify the easier tasks first with minimal budget, and then classify the remaining, more difficult tasks with more budget allocated per task. We can set a threshold X_{t,u} at each round and make a permanent decision on the subset of tasks whose x_i's are large in absolute value, since those are the tasks whose class, i.e. sign(2q_i − 1), we are most confident about. We are then left to choose the budget ℓ_t and the threshold X_{t,u} for each round. We prescribe a choice using the following notation. Assume the λ_a's are indexed such that λ₁ > λ₂ > ... > λ_K. For simplicity, assume that λ_K = λ₁ 2^{−(T−1)} for some T ∈ Z₊ \ {1}. Given the distribution {λ_a, δ_a}_{a∈[K]}, we first bin it to get another distribution {λ̃_a, δ̃_a}_{a∈[T]} supported on at most T points. We take λ̃₁ = λ₁ and λ̃_{a+1} = λ̃_a 2^{−1} for each a ∈ [T−1]. δ̃_a is the total fraction of tasks whose difficulty λ_i is smaller than λ₁ 2^{−(a−2)} and no smaller than λ₁ 2^{−(a−1)}. Precisely,

δ̃_a = Σ_{a′∈[K]} δ_{a′} I{λ₁/2^{a−1} ≤ λ_{a′} < λ₁/2^{a−2}},   for a ∈ [T].

The choice of 2 for the ratio of the λ̃_a's is arbitrary and can be further optimized for a given distribution of λ_i's. For ease of notation in writing the algorithm, we re-index the binned distribution to get {λ̃_a, δ̃_a}_{a∈[T̃]}, for T̃ ≤ T, such that δ̃_a ≠ 0 for all a ∈ [T̃]. Note that T̃ ≤ ⌈log₂(λ₁/λ_K)⌉.

We start with the set of all tasks M = [m]. A fraction of the tasks is classified in each round, and the unclassified ones are carried to the next round. At round t ∈ {1, ..., T̃}, our goal is to classify a sufficient fraction of the tasks in the difficulty group {i ∈ M : λ_i = λ̃_t} with the desired level of accuracy. If ℓ_t is too low and/or the threshold X_{t,u} too small, the misclassification rate will be too large. If ℓ_t is too large, we waste budget unnecessarily. If X_{t,u} is too large, not enough tasks will be classified. We choose ℓ_t = ℓ C_δ λ/λ̃_t and an appropriate X_{t,u} to ensure that the misclassification probability is at most C₁ e^{−(C_δ/4) ℓ λ β}, based on the central limit theorem for the leading eigenvector (see (21) in the supplementary material). We run this sub-routine s_t = max{0, ⌈log₂(δ̃_t(1 + γ_t)/(δ̃_{t+1} γ_{t+1}))⌉} times to ensure that a sufficient fraction of the t-th group is classified. We make sure that the expected number of unclassified tasks is at most the number of tasks in the next group, i.e., those with difficulty level λ_i = λ̃_{t+1}. We provide a near-optimal performance guarantee for γ_t = 1 for all t ∈ [T̃]; γ_t provides an extra degree of freedom for practitioners to further optimize efficiency.

Note that, statistically, the fraction of the t-th group (i.e. tasks with difficulty λ̃_t) that gets classified before the t-th round is very small, as the threshold set in those rounds exceeds the absolute mean message of those tasks; most tasks with difficulty λ̃_t get classified in round t. Further, binning the original distribution into {λ̃_a, δ̃_a} ensures that ℓ_{t+1} ≤ 2ℓ_t. It also ensures that the total extraneous budget spent on the λ̃_t tasks is at most a constant times their allocated budget, and this constant can be made one by changing the initial choice of ℓ₁ by a constant factor.
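The binning step above can be written down directly; the following is a minimal sketch (symbols follow the text; the input distribution is assumed for illustration):

```python
import numpy as np

def bin_difficulties(lam, delta):
    """Bin a discrete difficulty distribution {lambda_a, delta_a} (lam sorted
    in decreasing order) into ratio-2 levels lam_tilde_t = lam[0] * 2**(-(t-1)),
    where bin t collects mass with lam[0]*2**(-(t-1)) <= lambda < lam[0]*2**(-(t-2)).
    Empty bins are dropped, which is the re-indexing step in the text."""
    T = int(np.ceil(np.log2(lam[0] / lam[-1]))) + 1
    levels, masses = [], []
    for t in range(1, T + 1):
        lo = lam[0] * 2.0 ** (-(t - 1))
        hi = lam[0] * 2.0 ** (-(t - 2))
        mass = delta[(lam >= lo) & (lam < hi)].sum()
        if mass > 0:                    # drop empty bins (re-indexing)
            levels.append(lo)
            masses.append(mass)
    return np.array(levels), np.array(masses)

lam = np.array([1.0, 1/4, 1/16])        # assumed toy distribution
delta = np.array([1/3, 1/3, 1/3])
lam_tilde, delta_tilde = bin_difficulties(lam, delta)
# Here the levels already sit on powers of 2, so the binned distribution
# equals the original one: lam_tilde = [1, 1/4, 1/16], delta_tilde = [1/3]*3.
```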
2.2.2 Performance Guarantee
Since we do not waste budget on any of the tasks, with the right choice of the constant C_δ we are guaranteed that this algorithm uses at most mℓ assignments in expectation. One caveat is that the threshold X_{t,u} depends on ρ_{t,u} = (1/|M|) Σ_{i∈M} λ_i, the average difficulty of the remaining tasks. As the set of remaining tasks changes over the course of the algorithm, we need to estimate this value in each sub-routine. We provide an estimator of ρ_{t,u} in Algorithm 3 (in the supplementary material) that uses only the responses already collected; all numerical results are based on this estimator. However, analyzing the sensitivity of the performance to the estimation error in ρ_{t,u} is quite challenging, and for the theoretical analysis we assume access to an oracle that provides the exact value of ρ_{t,u}, replacing Algorithm 3.

Theorem 2.2. Suppose Algorithm 3 returns the exact value of ρ_{t,u} = (1/|M|) Σ_{i∈M} λ_i. With the choice of γ_a = 1 for all a ∈ [T̃] and C_δ = (4 + ⌈log₂(2δ_max/δ_min)⌉)^{−1}, for any given distribution of task difficulties {λ_a, δ_a}_{a∈[K]} over m tasks and an average number of workers per task ℓ = Θ(log m), the expected number of queries made by Algorithm 1 is asymptotically bounded by

lim_{m→∞} Σ_{t∈[T̃], u∈[s_t]} ℓ_t E[|M_{t,u}|] / (mℓ) ≤ 1,

where M_{t,u} is the set of tasks remaining at round (t, u). Further, Algorithm 1 returns estimates {t̂_i}_{i∈[m]} that asymptotically achieve

lim_{m→∞} (1/m) Σ_{i=1}^m P[t_i ≠ t̂_i] ≤ C₁ e^{−(C_δ/4) ℓ λ β},   (6)

where C₁ = log₂(2δ_max/δ_min) log₂(λ₁/λ_K), for λβ scaling as 1/ℓ such that ℓλβ = Θ(1).

A proof of this theorem is provided in Section 5 of the supplementary material. This shows the near-optimal sufficient condition of our approach in (5). The constant C_δ can be improved by optimizing over the choice of the γ_a's, minimizing the expected number of queries the algorithm makes.

Algorithm 1 Adaptive Task Assignment and Inference
Require: m, {λ̃_a, δ̃_a}_{a∈[T̃]}, ℓ, C_δ, {γ_a}_{a∈[T̃]}, λ, β, μ = E[2p_j − 1]
Ensure: estimates {t̂_i}_{i∈[m]}
1: M ← {1, 2, ..., m}, λ ← (Σ_{a∈[T̃]} δ̃_a/λ̃_a)^{−1}
2: for t = 1, 2, ..., T̃ do
3:   ℓ_t ← ℓ C_δ λ / λ̃_t, r_t ← ℓ_t
4:   s_t ← max{0, ⌈log₂(δ̃_t(1 + γ_t)/(δ̃_{t+1} γ_{t+1}))⌉} I{t < T̃} + 1 · I{t = T̃}
5:   for u = 1, 2, ..., s_t do
6:     if M ≠ ∅ then
7:       n ← |M|, k ← √(log |M|)
8:       Draw E ∈ {0, 1}^{|M|×n} ∼ (ℓ_t, r_t)-regular random graph
9:       Collect answers {A_ij ∈ {1, −1}}_{(i,j)∈E}
10:      {x_i}_{i∈M} ← Algorithm 2(E, {A_ij}_{(i,j)∈E}, k)
11:      ρ_{t,u} ← Algorithm 3(E, {A_ij}_{(i,j)∈E}, ℓ_t, r_t)
12:      X_{t,u} ← √(λ̃_t ℓ_t (ℓ_t − 1)(r_t − 1) ρ_{t,u}) μ^{k−1} I{t < T̃} + 0 · I{t = T̃}
13:      {t̂_i ← I{x_i > X_{t,u}} − I{x_i < −X_{t,u}}}_{i∈M}, M ← {i ∈ M : |x_i| ≤ X_{t,u}}
14:    end if
15:  end for
16: end for

Algorithm 2 Message-Passing Algorithm
Require: E ∈ {0, 1}^{|M|×n}, {A_ij ∈ {1, −1}}_{(i,j)∈E}, k_max
Ensure: {x_i ∈ R}_{i∈[|M|]}
1: for all (i, j) ∈ E do
2:   Initialize y_{j→i}^{(0)} with random Z_{j→i} ∼ N(1, 1)
3: end for
4: for k = 1, 2, ..., k_max do
5:   for all (i, j) ∈ E do
6:     x_{i→j}^{(k)} ← Σ_{j′∈W_i\j} A_{ij′} y_{j′→i}^{(k−1)}
7:   end for
8:   for all (i, j) ∈ E do
9:     y_{j→i}^{(k)} ← Σ_{i′∈T_j\i} A_{i′j} x_{i′→j}^{(k)}
10:  end for
11: end for
12: for all i ∈ [m] do
13:   x_i ← Σ_{j∈W_i} A_ij y_{j→i}^{(k_max−1)}
14: end for
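For concreteness, here is a minimal Python sketch of the message-passing inference in Algorithm 2 (a dense-matrix version for readability; a production implementation would use the sparse edge list, and the exact iteration bookkeeping here is a simplification of the pseudocode above):

```python
import numpy as np

def message_passing(A, E, k_max, rng):
    """Non-backtracking message passing in the spirit of Algorithm 2.
    A: m x n response matrix (+1/-1; entries outside E are ignored).
    E: m x n 0/1 assignment mask. Returns the task messages x_i."""
    B = A * E                                   # zero out non-edges
    y = rng.normal(1.0, 1.0, size=A.shape) * E  # y[i, j]: message worker j -> task i
    for _ in range(k_max):
        # Task -> worker: sum over task i's workers, excluding worker j itself.
        x = (B * y).sum(axis=1, keepdims=True) - B * y
        # Worker -> task: sum over worker j's tasks, excluding task i itself.
        y = ((B * x).sum(axis=0, keepdims=True) - B * x) * E
    return (B * y).sum(axis=1)                  # final x_i aggregates all neighbors

rng = np.random.default_rng(0)
m, n = 6, 6
E = np.ones((m, n))                             # toy: complete assignment
A = np.where(rng.random((m, n)) < 0.8, 1, -1)
x = message_passing(A, E, k_max=10, rng=rng)
```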
In Figure 1, we compare the performance of our algorithm with majority voting and with the non-adaptive version of Algorithm 1, in which each task is assigned ℓ workers (the given budget) in one round and the classification threshold is set to X_{t,u} = 0 so as to classify all the tasks. This non-adaptive special case was introduced for the standard DS model in [9]. We make a slight modification to Algorithm 1: in the final round, when the classification threshold is set to zero, we include all the responses collected thus far when running the message-passing Algorithm 2, not just the fresh samples collected in that round. This creates dependencies between rounds, which makes the analysis challenging; in practice, however, we see improved performance, and it lets us use the given fixed budget efficiently. We run synthetic experiments with m = 1800, fixing n = 1800 for the non-adaptive version. The crowds are generated from the spammer-hammer model with hammer probability 0.3. In the left panel, we take the difficulty level λ_a to be uniformly distributed over {1, 1/4, 1/16}, which gives λ = 1/7. In the right panel, we take λ_a = 1 with probability 3/4, and otherwise 1/4 or 1/16 with equal probability, which gives λ = 4/13. As predicted by the theoretical analysis, our adaptive algorithm improves significantly over its non-adaptive version. In particular, in the left panel the non-adaptive algorithm's error scales with the smallest λ_i, which is 1/16, while for the adaptive algorithm it scales with λ = 1/7. In the figure, the adaptive algorithm requires approximately (7/16)ℓ queries to achieve the same error as the non-adaptive one achieves using ℓ queries. This gap widens to approximately (13/64) in the right panel, as predicted, and the adaptive algorithm reaches zero error as the number of queries increases. For a fair comparison with the non-adaptive version, we fix the total budget to mℓ and assign workers in each round until the budget is exhausted. C_δ is set to 1 and s_t = 1 for t ∈ {1, 2, 3}.

Figure 1: Algorithm 1 improves significantly over its non-adaptive version and majority voting. (Plots: probability of error on a log scale vs. number of queries per task ℓ, for majority voting, the non-adaptive scheme, and the adaptive scheme, under the two difficulty distributions described above.)

2.3 Achievable error rate under the non-adaptive scenario
Consider a non-adaptive version of our approach, applied for one round on an (ℓ, r)-regular random graph, where ℓ is the given budget. Naturally, the classification threshold is set to X_{t,u} = 0 so as to classify all the tasks. We provide a sharp upper bound on the achieved error that holds in all (non-asymptotic) regimes of m. Define σ_k² as

σ_k² ≡ (2/(ℓ̂ρβ)) (ℓ̂r̂(ρβ)²)^{−(k−1)} + 3(1 + (r̂ρβ)^{−1}) · (1 − (ℓ̂r̂(ρβ)²)^{−(k−1)}) / (1 − (ℓ̂r̂(ρβ)²)^{−1}),   (7)

where ℓ̂ ≡ ℓ − 1 and r̂ ≡ r − 1. This captures the effective variance in the sub-Gaussian tail of the messages x_i after k iterations of the inference algorithm (Algorithm 2), as shown in the proof of the following theorem (see Section 6 of the supplementary material).

Theorem 2.3. For any ℓ > 1 and r > 1, suppose m tasks are assigned according to a random (ℓ, r)-regular graph drawn from the configuration model. If β > 0, ℓ̂r̂(ρβ)² > 1, and r̂ρβ > 1, then for any t ∈ {±1}^m, the estimate t̂_i = sign(x_i^{(k)}) after k iterations of Algorithm 2 achieves

P[t_i ≠ t̂_i | λ_i] ≤ e^{−ℓβλ_i/(2σ_k²)} + (3ℓr/m) (ℓ̂r̂)^{2k−2}.   (8)

Therefore, the average error rate is bounded by

(1/m) Σ_{i=1}^m P[t_i ≠ t̂_i] ≤ E_G[e^{−ℓβλ_i/(2σ_k²)}] + (3ℓr/m) (ℓ̂r̂)^{2k−2}.   (9)
The second term, which is the probability that the resulting (ℓ, r)-regular random graph is not locally tree-like, can be made small for large m as long as k = O(√(log m)) (which is the choice we make in Algorithm 1). Hence, the dominant term in the error bound is the first term. Further, when we run the algorithm for a large enough number of iterations, σ_k² converges linearly to a finite limit σ_∞² ≡ lim_{k→∞} σ_k², with σ_∞² = 3(1 + (r̂ρβ)^{−1}) ℓ̂r̂(ρβ)² / (ℓ̂r̂(ρβ)² − 1), which for large enough r̂ρβ and ℓ̂r̂ is upper bounded by a constant. Hence, for a wide range of parameters, the average error in (9) is dominated by E_G[e^{−ℓβλ_i/(2σ_k²)}] = Σ_a δ_a e^{−Cℓβλ_a}. When all the λ_a's are strictly positive, the error is dominated by the difficult tasks with λ_min = min_a λ_a, as illustrated in Figure 2. It is therefore sufficient to have budget Γ ≥ C″ (m/(λ_min β)) log(1/ε) to achieve an average error of ε > 0. Such a scaling is also necessary, as we show in the next section. This is further illustrated in Figure 2: the error decays exponentially in ℓ and β, as predicted, but the rate of decay crucially hinges on the difficulty level. We run synthetic experiments with m = n = 1000; the crowds are generated from the spammer-hammer model, where p_j = 1 with probability β and 1/2 otherwise. We fix β = 0.3 and vary ℓ in the left figure, and fix ℓ = 30 and vary β in the right figure. We let the q_i's take values in {0.6, 0.8, 1} with equal probability, so that ρ = 1.4/3. The error rate of each task, grouped by difficulty, is plotted in dashed lines, matching the predicted e^{−Θ(ℓβ(2q_i−1)²)}. The average error rates, in solid lines, are dominated by those of the difficult tasks, which is a universal drawback of all non-adaptive schemes.

Figure 2: Non-adaptive schemes suffer, as the average error is dominated by the difficult tasks. (Plots: probability of error on a log scale vs. number of queries per task ℓ at β = 0.3, left, and vs. crowd quality β at ℓ = 30, right, with per-difficulty curves for q = 1.0, 0.8, 0.6 and the mean error.)

2.4 Fundamental limit under the non-adaptive scenario
Theorem 2.3 implies that it suffices to assign ℓ ≥ (c/(βλ_i)) log(1/ε) workers to achieve an error smaller than ε on task i. We show in the following theorem that this scaling is also necessary. Hence, applying one round of Algorithm 1 is near-optimal in the non-adaptive scenario, in the minimax sense where nature chooses the worst distribution of worker p_j's among the set of distributions with the same β. A proof of the theorem is provided in Section 7 of the supplementary material.

Theorem 2.4. There exist a positive constant C′ and a distribution F of workers with average reliability E[(2p_j − 1)²] = β such that, when λ_i < 1, if the number of workers assigned to task i by any non-adaptive task assignment scheme is less than (C′/(βλ_i)) log(1/ε), then no algorithm can achieve a conditional probability of error on task i less than ε, for any m and r.

Since in a non-adaptive scheme the task assignments are done a priori, on average ℓ workers are assigned to any set of tasks of the same difficulty. Hence, if the total budget is less than

Γ ≤ C′ (m/(λ_min β)) log(δ_min/ε),   (10)

then no algorithm can achieve average error less than ε, where λ_min = min_a λ_a. Compared to the adaptive case in (4) (nearly achieved in (5)), the gain of adaptivity is a factor of λ/λ_min. The right-hand side of (10) is negative when δ_min < ε, and the bound can be tightened to C′ (m/(λ_a β)) log(Σ_{b=1}^a δ_b / ε), where a is the smallest integer such that Σ_{b=1}^a δ_b > ε.
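To illustrate the gain of adaptivity quantified above, here is a small sketch comparing the adaptive budget scaling m/(λβ) of (4)-(5) with the non-adaptive scaling m/(λ_min β) of (10), with constants dropped (the toy distribution is the one used in the left panel of Figure 1):

```python
import numpy as np

beta, m, eps = 0.3, 1800, 1e-3
lam = np.array([1.0, 1/4, 1/16])
delta = np.array([1/3, 1/3, 1/3])

lam_coll = 1.0 / np.sum(delta / lam)   # collective difficulty lambda, here 1/7
lam_min = lam.min()                    # hardest difficulty level, here 1/16

adaptive = m / (lam_coll * beta) * np.log(1 / eps)      # ~ (4)-(5), constants dropped
non_adaptive = m / (lam_min * beta) * np.log(1 / eps)   # ~ (10), constants dropped
print(non_adaptive / adaptive)         # gain of adaptivity lambda/lambda_min = 16/7
```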
Acknowledgements
This work is supported by NSF SaTC award CNS-1527754 and NSF CISE award CCF-1553452.

References
[1] N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley and Sons, 2004.
[2] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In Proceedings of the 22nd International Conference on World Wide Web, pages 285–294, 2013.
[3] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20–28, 1979.
[4] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 167–176. ACM, 2011.
[5] C. Ho, S. Jabbari, and J. W. Vaughan. Adaptive task assignment for crowdsourced classification. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 534–542, 2013.
[6] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems, pages 921–928, 2003.
[7] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In Advances in Neural Information Processing Systems, pages 1953–1961, 2011.
[8] D. R. Karger, S. Oh, and D. Shah. Efficient crowdsourcing for multi-class labeling. In Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, pages 81–92, 2013.
[9] D. R. Karger, S. Oh, and D. Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62:1–24, 2014.
[10] H. Li and B. Yu. Error rate bounds and iterative weighted majority voting for crowdsourcing. arXiv preprint arXiv:1411.4086, 2014.
[11] Q. Liu, J. Peng, and A. Ihler. Variational inference for crowdsourcing. In Advances in Neural Information Processing Systems 25, pages 701–709, 2012.
[12] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. Robotics Institute, page 263, 1993.
[13] M. Mezard and A. Montanari. Information, Physics, and Computation. Oxford University Press, 2009.
[14] J. Ok, S. Oh, J. Shin, and Y. Yi. Optimality of belief propagation for crowdsourced classification. In International Conference on Machine Learning, 2016.
[15] N. B. Shah, S. Balakrishnan, and M. J. Wainwright. A permutation-based model for crowd labeling: Optimal estimation and robustness. arXiv preprint arXiv:1606.09632, 2016.
[16] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD, pages 614–622. ACM, 2008.
[17] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In NIPS, pages 1085–1092, 1995.
[18] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems, pages 2424–2432, 2010.
[19] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, volume 22, pages 2035–2043, 2009.
[20] D. Williams. Probability with Martingales. Cambridge University Press, 1991.
[21] Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing.
In Advances in Neural Information Processing Systems, pages 1260–1268, 2014.
[22] D. Zhou, Q. Liu, J. C. Platt, C. Meek, and N. B. Shah. Regularized minimax conditional entropy for crowdsourcing. arXiv preprint arXiv:1503.07240, 2015.
[23] D. Zhou, J. Platt, S. Basu, and Y. Mao. Learning from the wisdom of crowds by minimax entropy. In Advances in Neural Information Processing Systems 25, pages 2204–2212, 2012.
Improved Techniques for Training GANs

Tim Salimans (tim@openai.com), Ian Goodfellow (ian@openai.com), Wojciech Zaremba (woj@openai.com), Alec Radford (alec@openai.com), Vicki Cheung (vicki@openai.com), Xi Chen (peter@openai.com)

Abstract
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.

1 Introduction
Generative adversarial networks [1] (GANs) are a class of methods for learning generative models based on game theory. The goal of GANs is to train a generator network G(z; θ^(G)) that produces samples from the data distribution, p_data(x), by transforming vectors of noise z as x = G(z; θ^(G)). The training signal for G is provided by a discriminator network D(x) that is trained to distinguish samples from the generator distribution p_model(x) from real data. The generator network G in turn is then trained to fool the discriminator into accepting its outputs as being real.

Recent applications of GANs have shown that they can produce excellent samples [2, 3]. However, training GANs requires finding a Nash equilibrium of a non-convex game with continuous, high-dimensional parameters. GANs are typically trained using gradient descent techniques that are designed to find a low value of a cost function, rather than to find the Nash equilibrium of a game. When used to seek a Nash equilibrium, these algorithms may fail to converge [4].

In this work, we introduce several techniques intended to encourage convergence of the GANs game. These techniques are motivated by a heuristic understanding of the non-convergence problem. They lead to improved semi-supervised learning performance and improved sample generation. We hope that some of them may form the basis for future work, providing formal guarantees of convergence. All code and hyperparameters may be found at https://github.com/openai/improved-gan.

2 Related work
Several recent papers focus on improving the stability of training and the resulting perceptual quality of GAN samples [2, 3, 5, 6]. We build on some of these techniques in this work. For instance, we use some of the "DCGAN" architectural innovations proposed in Radford et al. [3], as discussed below. One of our proposed techniques, feature matching, discussed in Sec. 3.1, is similar in spirit to approaches that use maximum mean discrepancy [7, 8, 9] to train generator networks [10, 11]. Another of our proposed techniques, minibatch features, is based in part on ideas used for batch normalization [12], while our proposed virtual batch normalization is a direct extension of batch normalization.

One of the primary goals of this work is to improve the effectiveness of generative adversarial networks for semi-supervised learning (improving the performance of a supervised task, in this case classification, by learning on additional unlabeled examples).
Like many deep generative models, GANs have previously been applied to semi-supervised learning [13, 14], and our work can be seen as a continuation and refinement of this effort. In concurrent work, Odena [15] proposes to extend GANs to predict image labels as we do in Section 5, but without our feature matching extension (Section 3.1), which we found to be critical for obtaining state-of-the-art performance.

3 Toward Convergent GAN Training
Training GANs consists in finding a Nash equilibrium of a two-player non-cooperative game. Each player wishes to minimize its own cost function, J^(D)(θ^(D), θ^(G)) for the discriminator and J^(G)(θ^(D), θ^(G)) for the generator. A Nash equilibrium is a point (θ^(D), θ^(G)) such that J^(D) is at a minimum with respect to θ^(D) and J^(G) is at a minimum with respect to θ^(G). Unfortunately, finding Nash equilibria is a very difficult problem. Algorithms exist for specialized cases, but we are not aware of any that are feasible to apply to the GAN game, where the cost functions are non-convex, the parameters are continuous, and the parameter space is extremely high-dimensional.

The idea that a Nash equilibrium occurs when each player has minimal cost seems to intuitively motivate using traditional gradient-based minimization techniques to minimize each player's cost simultaneously. Unfortunately, a modification to θ^(D) that reduces J^(D) can increase J^(G), and a modification to θ^(G) that reduces J^(G) can increase J^(D). Gradient descent thus fails to converge for many games. For example, when one player minimizes xy with respect to x and another player minimizes −xy with respect to y, gradient descent enters a stable orbit rather than converging to x = y = 0, the desired equilibrium point [16]. Previous approaches to GAN training have thus applied gradient descent on each player's cost simultaneously, despite the lack of a guarantee that this procedure will converge. We introduce the following techniques, heuristically motivated to encourage convergence:

3.1 Feature matching
Feature matching addresses the instability of GANs by specifying a new objective for the generator that prevents it from overtraining on the current discriminator. Instead of directly maximizing the output of the discriminator, the new objective requires the generator to generate data that matches the statistics of the real data, where we use the discriminator only to specify the statistics that we think are worth matching. Specifically, we train the generator to match the expected value of the features on an intermediate layer of the discriminator. This is a natural choice of statistics for the generator to match, since by training the discriminator we ask it to find the features that are most discriminative between real data and data generated by the current model.

Letting f(x) denote activations on an intermediate layer of the discriminator, our new objective for the generator is

‖E_{x∼p_data} f(x) − E_{z∼p_z(z)} f(G(z))‖₂².

The discriminator, and hence f(x), are trained in the usual way. As with regular GAN training, the objective has a fixed point where G exactly matches the distribution of training data. We have no guarantee of reaching this fixed point in practice, but our empirical results indicate that feature matching is indeed effective in situations where regular GAN training becomes unstable.
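A minimal NumPy sketch of the feature matching objective, estimated on minibatches (the random linear-ReLU feature extractor f is an assumed stand-in for an intermediate discriminator layer, used here only to make the snippet self-contained):

```python
import numpy as np

def feature_matching_loss(f, x_real, x_fake):
    """Squared L2 distance between mean features on real and generated data,
    ||E f(x) - E f(G(z))||_2^2, estimated on minibatches. In training, the
    gradient of this loss would flow to the generator only; f is fixed here."""
    mu_real = f(x_real).mean(axis=0)
    mu_fake = f(x_fake).mean(axis=0)
    return float(np.sum((mu_real - mu_fake) ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 128))                  # assumed toy feature layer
f = lambda x: np.maximum(x @ W, 0.0)             # ReLU activations
loss = feature_matching_loss(f, rng.normal(size=(64, 784)),
                             rng.normal(size=(64, 784)))
```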
3.2 Minibatch discrimination
One of the main failure modes for GANs is for the generator to collapse to a parameter setting where it always emits the same point. When collapse to a single mode is imminent, the gradient of the discriminator may point in similar directions for many similar points. Because the discriminator processes each example independently, there is no coordination between its gradients, and thus no mechanism to tell the outputs of the generator to become more dissimilar to each other. Instead, all outputs race toward a single point that the discriminator currently believes is highly realistic. After collapse has occurred, the discriminator learns that this single point comes from the generator, but gradient descent is unable to separate the identical outputs. The gradients of the discriminator then push the single point produced by the generator around space forever, and the algorithm cannot converge to a distribution with the correct amount of entropy. An obvious strategy to avoid this type of failure is to allow the discriminator to look at multiple data examples in combination, and perform what we call minibatch discrimination.

The concept of minibatch discrimination is quite general: any discriminator model that looks at multiple examples in combination, rather than in isolation, could potentially help avoid collapse of the generator. In fact, the successful application of batch normalization in the discriminator by Radford et al. [3] is well explained from this perspective. So far, however, we have restricted our experiments to models that explicitly aim to identify generator samples that are particularly close together. One successful specification for modelling the closeness between examples in a minibatch is as follows. Let f(x_i) ∈ R^A denote a vector of features for input x_i, produced by some intermediate layer in the discriminator. We then multiply the vector f(x_i) by a tensor T ∈ R^{A×B×C}, which results in a matrix M_i ∈ R^{B×C}. We then compute the L1-distance between the rows of the resulting matrices M_i across samples i ∈ {1, 2, ..., n} and apply a negative exponential (Fig. 1):

c_b(x_i, x_j) = exp(−‖M_{i,b} − M_{j,b}‖_{L1}) ∈ R.

The output o(x_i) of this minibatch layer for a sample x_i is then defined as the sum of the c_b(x_i, x_j)'s to all other samples:

o(x_i)_b = Σ_{j=1}^n c_b(x_i, x_j) ∈ R,
o(x_i) = [o(x_i)₁, o(x_i)₂, ..., o(x_i)_B] ∈ R^B,
o(X) ∈ R^{n×B}.

Figure 1: Sketch of how minibatch discrimination works. Features f(x_i) from sample x_i are multiplied through a tensor T, and cross-sample distance is computed.

Next, we concatenate the output o(x_i) of the minibatch layer with the intermediate features f(x_i) that were its input, and we feed the result into the next layer of the discriminator. We compute these minibatch features separately for samples from the generator and from the training data. As before, the discriminator is still required to output a single number for each example, indicating how likely it is to come from the training data: the task of the discriminator is thus effectively still to classify single examples as real or generated, but it can now use the other examples in the minibatch as side information. Minibatch discrimination allows us to generate visually appealing samples very quickly, and in this regard it is superior to feature matching (Section 6). Interestingly, however, feature matching was found to work much better if the goal is to obtain a strong classifier using the approach to semi-supervised learning described in Section 5.
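Here is a minimal NumPy sketch of the minibatch-discrimination features defined above (tensor shapes follow the text; the random T is an assumed stand-in for a learned parameter):

```python
import numpy as np

def minibatch_features(F, T):
    """F: n x A matrix of intermediate features f(x_i).
    T: A x B x C tensor (learned in practice; random here for illustration).
    Returns the n x B matrix o(X) of minibatch-discrimination features."""
    M = np.einsum('ia,abc->ibc', F, T)            # M_i in R^{B x C}
    # L1 distance between rows M_{i,b} and M_{j,b} for every pair (i, j) and b.
    L1 = np.abs(M[:, None, :, :] - M[None, :, :, :]).sum(axis=-1)  # n x n x B
    c = np.exp(-L1)                               # c_b(x_i, x_j)
    return c.sum(axis=1)                          # o(x_i)_b = sum_j c_b(x_i, x_j)

rng = np.random.default_rng(0)
F = rng.normal(size=(16, 32))                     # n = 16 samples, A = 32 features
T = rng.normal(size=(32, 8, 4))                   # B = 8, C = 4
o = minibatch_features(F, T)                      # shape (16, 8)
```

As in the formula above, the sum over j includes j = i, which contributes a constant c_b(x_i, x_i) = 1 to each output.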
3.3 Historical averaging
When applying this technique, we modify each player's cost to include a term ‖θ − (1/t) Σ_{i=1}^t θ[i]‖², where θ[i] is the value of the parameters at past time i. The historical average of the parameters can be updated in an online fashion, so this learning rule scales well to long time series. This approach is loosely inspired by the fictitious play algorithm [17] that can find equilibria in other kinds of games. We found that our approach was able to find equilibria of low-dimensional, continuous, non-convex games, such as the minimax game with one player controlling x, the other player controlling y, and value function (f(x) − 1)(y − 1), where f(x) = x for x < 0 and f(x) = x² otherwise. For these same toy games, gradient descent fails by going into extended orbits that do not approach the equilibrium point.

3.4 One-sided label smoothing
Label smoothing, a technique from the 1980s recently independently re-discovered by Szegedy et al. [18], replaces the 0 and 1 targets for a classifier with smoothed values, like .9 or .1, and was recently shown to reduce the vulnerability of neural networks to adversarial examples [19]. Replacing positive classification targets with α and negative targets with β, the optimal discriminator becomes

D(x) = (α p_data(x) + β p_model(x)) / (p_data(x) + p_model(x)).

The presence of p_model in the numerator is problematic because, in areas where p_data is approximately zero and p_model is large, erroneous samples from p_model have no incentive to move nearer to the data. We therefore smooth only the positive labels to α, leaving negative labels set to 0.
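A minimal sketch of the historical-averaging penalty, with the running parameter mean updated online (the penalty coefficient is an assumed hyperparameter, not specified in the text):

```python
import numpy as np

class HistoricalAverage:
    """Online running mean of a parameter vector, plus the penalty
    ||theta - mean||^2 added to a player's cost, as in Section 3.3."""
    def __init__(self, theta0, weight=1.0):
        self.mean = theta0.copy()
        self.t = 1
        self.weight = weight                        # assumed penalty coefficient

    def penalty_and_grad(self, theta):
        diff = theta - self.mean
        return self.weight * float(np.sum(diff ** 2)), 2.0 * self.weight * diff

    def update(self, theta):
        self.t += 1
        self.mean += (theta - self.mean) / self.t   # incremental mean update

rng = np.random.default_rng(0)
theta = rng.normal(size=10)
hist = HistoricalAverage(theta)
penalty, grad = hist.penalty_and_grad(theta + 0.1)  # added to the player's loss
hist.update(theta + 0.1)
```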
We also find that results change drastically when we give annotators feedback about their mistakes: by learning from such feedback, annotators are better able to point out the flaws in generated images, giving a more pessimistic quality assessment. The left column of Fig. 2 presents a screen from the annotation process, while the right column shows how we inform annotators about their mistakes.

As an alternative to human annotators, we propose an automatic method to evaluate samples, which we find to correlate well with human evaluation: we apply the Inception model¹ [20] to every generated image to get the conditional label distribution $p(y|x)$. Images that contain meaningful objects should have a conditional label distribution $p(y|x)$ with low entropy. Moreover, we expect the model to generate varied images, so the marginal $\int p(y|x = G(z))\,dz$ should have high entropy. Combining these two requirements, the metric that we propose is $\exp(\mathbb{E}_x\, \mathrm{KL}(p(y|x)\,\|\,p(y)))$, where we exponentiate results so the values are easier to compare. Our Inception score is closely related to the objective used for training generative models in CatGAN [14]: although we had less success using such an objective for training, we find it is a good metric for evaluation that correlates very well with human judgment. We find that it is important to evaluate the metric on a large enough number of samples (i.e., 50k), as part of this metric measures diversity.

¹ We use the pretrained Inception model from http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz. Code to compute the Inception score with this model will be made available by the time of publication.
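While the released scoring code is referenced above, the formula itself is simple enough to sketch; the following is a plain NumPy illustration, assuming the class posteriors have already been computed by a pretrained classifier (the eps smoothing constant is our own addition for numerical safety):

```python
import numpy as np

def inception_score(p_yx, eps=1e-16):
    """p_yx: (n, K) matrix whose i-th row is p(y | x_i) for generated image x_i."""
    p_y = p_yx.mean(axis=0, keepdims=True)        # marginal label distribution p(y)
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))               # exp(E_x KL(p(y|x) || p(y)))

# Sanity check: confident and perfectly varied predictions give the maximum
# score K, while identical rows (a collapsed generator) give a score of 1.
print(inception_score(np.eye(10)))                # close to 10
print(inception_score(np.full((100, 10), 0.1)))   # close to 1
```

The two extreme cases illustrate the intuition in the text: low per-image entropy rewards "objectness", while a high-entropy marginal rewards diversity.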
5 Semi-supervised learning

Consider a standard classifier for classifying a data point $x$ into one of $K$ possible classes. Such a model takes in $x$ as input and outputs a $K$-dimensional vector of logits $\{l_1, \ldots, l_K\}$ that can be turned into class probabilities by applying the softmax: $p_{\text{model}}(y = j|x) = \frac{\exp(l_j)}{\sum_{k=1}^{K} \exp(l_k)}$. In supervised learning, such a model is then trained by minimizing the cross-entropy between the observed labels and the model predictive distribution $p_{\text{model}}(y|x)$.

We can do semi-supervised learning with any standard classifier by simply adding samples from the GAN generator $G$ to our data set, labeling them with a new "generated" class $y = K + 1$, and correspondingly increasing the dimension of our classifier output from $K$ to $K + 1$. We may then use $p_{\text{model}}(y = K + 1 \mid x)$ to supply the probability that $x$ is fake, corresponding to $1 - D(x)$ in the original GAN framework. We can now also learn from unlabeled data, as long as we know that it corresponds to one of the $K$ classes of real data, by maximizing $\log p_{\text{model}}(y \in \{1, \ldots, K\}|x)$. Assuming half of our data set consists of real data and half of it is generated (this is arbitrary), our loss function for training the classifier then becomes

$$L = -\mathbb{E}_{x,y \sim p_{\text{data}}(x,y)}[\log p_{\text{model}}(y|x)] - \mathbb{E}_{x \sim G}[\log p_{\text{model}}(y = K + 1|x)] = L_{\text{supervised}} + L_{\text{unsupervised}},$$

where

$$L_{\text{supervised}} = -\mathbb{E}_{x,y \sim p_{\text{data}}(x,y)} \log p_{\text{model}}(y|x,\, y < K + 1),$$
$$L_{\text{unsupervised}} = -\big\{\mathbb{E}_{x \sim p_{\text{data}}(x)} \log[1 - p_{\text{model}}(y = K + 1|x)] + \mathbb{E}_{x \sim G} \log p_{\text{model}}(y = K + 1|x)\big\}.$$

Here we have decomposed the total cross-entropy loss into our standard supervised loss function $L_{\text{supervised}}$ (the negative log probability of the label, given that the data is real) and an unsupervised loss $L_{\text{unsupervised}}$, which is in fact the standard GAN game-value, as becomes evident when we substitute $D(x) = 1 - p_{\text{model}}(y = K + 1|x)$ into the expression:

$$L_{\text{unsupervised}} = -\big\{\mathbb{E}_{x \sim p_{\text{data}}(x)} \log D(x) + \mathbb{E}_{z \sim \text{noise}} \log(1 - D(G(z)))\big\}.$$

The optimal solution for minimizing both $L_{\text{supervised}}$ and $L_{\text{unsupervised}}$ is to have $\exp[l_j(x)] = c(x)\, p(y = j, x)\ \forall j < K + 1$ and $\exp[l_{K+1}(x)] = c(x)\, p_G(x)$ for some undetermined scaling function $c(x)$. The unsupervised loss is thus consistent with the supervised loss in the sense of Sutskever et al. [13], and we can hope to better estimate this optimal solution from the data by minimizing these two loss functions jointly. In practice, $L_{\text{unsupervised}}$ will only help if it is not trivial to minimize for our classifier, and we thus need to train $G$ to approximate the data distribution. One way to do this is by training $G$ to minimize the GAN game-value, using the discriminator $D$ defined by our classifier. This approach introduces an interaction between $G$ and our classifier that we do not fully understand yet, but empirically we find that optimizing $G$ using feature matching GAN works very well for semi-supervised learning, while training $G$ using GAN with minibatch discrimination does not work at all. Here we present our empirical results using this approach; developing a full theoretical understanding of the interaction between $D$ and $G$ using this approach is left for future work.

Finally, note that our classifier with $K + 1$ outputs is over-parameterized: subtracting a general function $f(x)$ from each output logit, i.e. setting $l_j(x) \leftarrow l_j(x) - f(x)\ \forall j$, does not change the output of the softmax. This means we may equivalently fix $l_{K+1}(x) = 0\ \forall x$, in which case $L_{\text{supervised}}$ becomes the standard supervised loss function of our original classifier with $K$ classes, and our discriminator $D$ is given by $D(x) = \frac{Z(x)}{Z(x) + 1}$, where $Z(x) = \sum_{k=1}^{K} \exp[l_k(x)]$.
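The following is a minimal NumPy sketch of the two losses under the $l_{K+1}(x) = 0$ parameterization just described; it is a didactic restatement of the equations above, not the authors' training code, and the array names are illustrative:

```python
import numpy as np

def log_softmax(l):
    z = l - l.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def log_z(l):
    # log Z(x) = log sum_k exp(l_k(x)), computed stably.
    m = l.max(axis=1)
    return m + np.log(np.exp(l - m[:, None]).sum(axis=1))

def semi_supervised_losses(logits_lab, labels, logits_unl, logits_gen):
    """K+1-class losses with the generated-class logit fixed at zero, so that
    D(x) = Z(x) / (Z(x) + 1). All logits arrays have shape (n, K)."""
    # Supervised part: usual cross-entropy over the K real classes.
    l_sup = -log_softmax(logits_lab)[np.arange(len(labels)), labels].mean()
    lz_unl, lz_gen = log_z(logits_unl), log_z(logits_gen)
    # log D(x) = log Z - log(Z + 1);  log(1 - D(x)) = -log(Z + 1).
    l_unsup = (-(lz_unl - np.logaddexp(0.0, lz_unl)).mean()
               + np.logaddexp(0.0, lz_gen).mean())
    return l_sup, l_unsup
```

The `logaddexp` calls compute $\log(Z(x) + 1)$ stably, which is all that is needed once the $(K+1)$-th logit is pinned to zero.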
5.1 Importance of labels for image quality

Besides achieving state-of-the-art results in semi-supervised learning, the approach described above also has the surprising effect of improving the quality of generated images as judged by human annotators. The reason appears to be that the human visual system is strongly attuned to image statistics that can help infer what class of object an image represents, while it is presumably less sensitive to local statistics that are less important for interpretation of the image. This is supported by the high correlation we find between the quality reported by human annotators and the Inception score we developed in Section 4, which is explicitly constructed to measure the "objectness" of a generated image. By having the discriminator $D$ classify the object shown in the image, we bias it to develop an internal representation that puts emphasis on the same features humans emphasize. This effect can be understood as a method for transfer learning, and could potentially be applied much more broadly. We leave further exploration of this possibility for future work.

6 Experiments

We performed semi-supervised experiments on MNIST, CIFAR-10 and SVHN, and sample generation experiments on MNIST, CIFAR-10, SVHN and ImageNet. We provide code to reproduce the majority of our experiments.

6.1 MNIST

The MNIST dataset contains 60,000 labeled images of digits. We perform semi-supervised training with a small randomly picked fraction of these, considering setups with 20, 50, 100, and 200 labeled examples. Results are averaged over 10 random subsets of labeled data, each chosen to have a balanced number of examples from each class. The remaining training images are provided without labels. Our networks have 5 hidden layers each. We use weight normalization [21] and add Gaussian noise to the output of each layer of the discriminator. Table 1 summarizes our results.

Samples generated by the generator during semi-supervised learning using feature matching (Section 3.1) do not look visually appealing (left of Fig. 3). By using minibatch discrimination instead (Section 3.2) we can improve their visual quality. On MTurk, annotators were able to distinguish samples in 52.4% of cases (2000 votes total), where 50% would be obtained by random guessing. Similarly, researchers in our institution were not able to find any artifacts that would allow them to distinguish samples. However, semi-supervised learning with minibatch discrimination does not produce as good a classifier as does feature matching.

Figure 3: (Left) Samples generated by the model during semi-supervised training with feature matching. Samples can be clearly distinguished from images coming from the MNIST dataset. (Right) Samples generated with minibatch discrimination. Samples are completely indistinguishable from dataset images.

Table 1: Number of incorrectly classified test examples for the semi-supervised setting on permutation-invariant MNIST, for a given number of labeled samples. Results are averaged over 10 seeds.

Model                                   20           50          100         200
DGN [22]                                --           --          333 ± 14    --
Virtual Adversarial [23]                --           --          212         --
CatGAN [14]                             --           --          191 ± 10    --
Skip Deep Generative Model [24]         --           --          132 ± 7     --
Ladder network [25]                     --           --          106 ± 37    --
Auxiliary Deep Generative Model [24]    --           --          96 ± 2      --
Our model                               1677 ± 452   221 ± 136   93 ± 6.5    90 ± 4.2
Ensemble of 10 of our models            1134 ± 445   142 ± 96    86 ± 5.6    81 ± 4.3

6.2 CIFAR-10

CIFAR-10 is a small, well-studied dataset of 32 × 32 natural images. We use this data set to study semi-supervised learning, as well as to examine the visual quality of samples that can be achieved. For the discriminator in our GAN we use a 9-layer deep convolutional network with dropout and weight normalization. The generator is a 4-layer deep CNN with batch normalization. Table 2 summarizes our results on the semi-supervised learning task.

Table 2: Test error on semi-supervised CIFAR-10 for a given number of labeled samples. Results are averaged over 10 splits of the data.

Model                           1000           2000           4000           8000
Ladder network [25]             --             --             20.40 ± 0.47   --
CatGAN [14]                     --             --             19.58 ± 0.46   --
Our model                       21.83 ± 2.01   19.61 ± 2.09   18.63 ± 2.32   17.72 ± 1.82
Ensemble of 10 of our models    19.22 ± 0.54   17.25 ± 0.66   15.59 ± 0.47   14.87 ± 0.89

Figure 4: Samples generated during semi-supervised training on CIFAR-10 with feature matching (Section 3.1, left) and minibatch discrimination (Section 3.2, right).

When presented with 50% real and 50% fake data generated by our best CIFAR-10 model, MTurk users correctly categorized 78.7% of the images. However, MTurk users may not be sufficiently familiar with CIFAR-10 images or sufficiently motivated; we ourselves were able to categorize images with > 95% accuracy. We validated the Inception score described above by observing that MTurk accuracy drops to 71.4% when the data is filtered using only the top 1% of samples according to the Inception score. We performed a series of ablation experiments to demonstrate that our proposed techniques improve the Inception score, presented in Table 3. We also present images for these ablation experiments; in our opinion, the Inception score correlates well with our subjective judgment of image quality. Samples from the dataset achieve the highest value. All the models that even partially collapse have relatively low scores.
We caution that the Inception score should be used as a rough guide to evaluate models that were trained via some independent criterion; directly optimizing the Inception score will lead to the generation of adversarial examples [26].

Table 3: Inception scores for samples generated by various models, each computed over 50,000 images. The score correlates highly with human judgment, and the best score is achieved for natural images. Models that generate collapsed samples have relatively low scores. This metric allows us to avoid relying on human evaluations. "Our methods" includes all the techniques described in this work, except for feature matching and historical averaging. The remaining rows are ablation experiments showing that our techniques are effective. "-VBN+BN" replaces the VBN in the generator with BN, as in DCGANs; this causes a small decrease in sample quality on CIFAR, and VBN is more important for ImageNet. "-L+HA" removes the labels from the training process and adds historical averaging to compensate; HA makes it possible to still generate some recognizable objects, and without HA, sample quality is considerably reduced (see "-L"). "-LS" removes label smoothing and incurs a noticeable drop in performance relative to "our methods". "-MBF" removes the minibatch features and incurs a very large drop in performance, greater even than the drop resulting from removing the labels; adding HA cannot prevent this problem.

Model         Score ± std.
Real data     11.24 ± .12
Our methods    8.09 ± .07
-VBN+BN        7.54 ± .07
-L+HA          6.86 ± .06
-LS            6.83 ± .06
-L             4.36 ± .04
-MBF           3.87 ± .03

6.3 SVHN

For the SVHN data set, we used the same architecture and experimental setup as for CIFAR-10. Figure 5 compares against the previous state of the art, where it should be noted that the model of [24] is not convolutional, but does use an additional data set of 531,131 unlabeled examples. The other methods, including ours, are convolutional and do not use this data.

Figure 5: (Left) Error rate on SVHN, i.e. the percentage of incorrectly predicted test examples for a given number of labeled samples. (Right) Samples from the generator for SVHN.

Model                                   500           1000           2000
Virtual Adversarial [23]                --            24.63          --
Stacked What-Where Auto-Encoder [27]    --            23.56          --
DCGAN [3]                               --            22.48          --
Skip Deep Generative Model [24]         --            16.61 ± 0.24   --
Our model                               18.44 ± 4.8    8.11 ± 1.3     6.16 ± 0.58
Ensemble of 10 of our models            --             5.88 ± 1.0    --

6.4 ImageNet

We tested our techniques on a dataset of unprecedented scale: 128 × 128 images from the ILSVRC2012 dataset with 1,000 categories. To our knowledge, no previous publication has applied a generative model to a dataset with both this large a resolution and this large a number of object classes. The large number of object classes is particularly challenging for GANs due to their tendency to underestimate the entropy in the distribution. We extensively modified a publicly available implementation of DCGANs² using TensorFlow [28] to achieve high performance, using a multi-GPU implementation. DCGANs without modification learn some basic image statistics and generate contiguous shapes with somewhat natural color and texture, but do not learn any objects. Using the techniques described in this paper, GANs learn to generate objects that resemble animals, but with incorrect anatomy. Results are shown in Fig. 6.

Figure 6: Samples generated from the ImageNet dataset. (Left) Samples generated by a DCGAN. (Right) Samples generated using the techniques proposed in this work.
The new techniques enable GANs to learn recognizable features of animals, such as fur, eyes, and noses, but these features are not correctly combined to form an animal with realistic anatomical structure.

7 Conclusion

Generative adversarial networks are a promising class of generative models that has so far been held back by unstable training and by the lack of a proper evaluation metric. This work presents partial solutions to both of these problems. We propose several techniques to stabilize training that allow us to train models that were previously untrainable. Moreover, our proposed evaluation metric (the Inception score) gives us a basis for comparing the quality of these models. We apply our techniques to the problem of semi-supervised learning, achieving state-of-the-art results on a number of different data sets in computer vision. The contributions made in this work are of a practical nature; we hope to develop a more rigorous theoretical understanding in future work.

² https://github.com/carpedm20/DCGAN-tensorflow

References

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, et al. Generative adversarial nets. In NIPS, 2014.
[2] Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. arXiv preprint arXiv:1506.05751, 2015.
[3] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[4] Ian J. Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.
[5] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[6] Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S. Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.
[7] Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63-77. Springer, 2005.
[8] Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel measures of conditional dependence. In NIPS, volume 20, pages 489-496, 2007.
[9] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory, pages 13-31. Springer, 2007.
[10] Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. CoRR, abs/1502.02761, 2015.
[11] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906, 2015.
[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[13] Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, et al. Towards principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015.
[14] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[15] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
[16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
[17] George W. Brown.
Iterative solution of games by fictitious play. Activity Analysis of Production and Allocation, 13(1):374-376, 1951.
[18] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. arXiv e-prints, December 2015.
[19] David Warde-Farley and Ian Goodfellow. Adversarial perturbations of deep neural networks. In Tamir Hazan, George Papandreou, and Daniel Tarlow, editors, Perturbations, Optimization, and Statistics, chapter 11. MIT Press, 2016. Book in preparation.
[20] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[21] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
[22] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Neural Information Processing Systems, 2014.
[23] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing by virtual adversarial examples. arXiv preprint arXiv:1507.00677, 2015.
[24] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[25] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, 2015.
[26] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, et al. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[27] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
[28] Martín Abadi, Ashish Agarwal, Paul Barham, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
Robust k-means: a Theoretical Revisit

Alexandros Georgogiannis
School of Electrical and Computer Engineering
Technical University of Crete, Greece
alexandrosgeorgogiannis at gmail.com

Abstract

Over the last years, many variations of the quadratic k-means clustering procedure have been proposed, all aiming to robustify the performance of the algorithm in the presence of outliers. In general terms, two main approaches have been developed: one based on penalized regularization methods, and one based on trimming functions. In this work, we present a theoretical analysis of the robustness and consistency properties of a variant of the classical quadratic k-means algorithm, the robust k-means, which borrows ideas from outlier detection in regression. We show that two outliers in a dataset are enough to break down this clustering procedure. However, if we focus on "well-structured" datasets, then robust k-means can recover the underlying cluster structure in spite of the outliers. Finally, we show that, with slight modifications, the most general non-asymptotic results for consistency of quadratic k-means remain valid for this robust variant.

1 Introduction

Let $\phi : \mathbb{R} \to \mathbb{R}_+$ be a lower semi-continuous (lsc) and symmetric function with minimum value $\phi(0)$. Given a set of points $X^n = \{x_1, \ldots, x_n\} \subset \mathbb{R}^p$, consider the generalized k-means problem (GKM) [7]:

$$\min_{c_1, \ldots, c_k}\; R_n(c_1, \ldots, c_k) = \sum_{i=1}^{n} \min_{1 \le l \le k} \phi(\|x_i - c_l\|_2) \quad \text{subject to } c_l \in \mathbb{R}^p,\ l \in \{1, \ldots, k\}. \tag{GKM}$$

Our aim is to find a set of $k$ centers $\{c_1, \ldots, c_k\}$ that minimize the clustering risk $R_n$. These centers define a partition of $X^n$ into $k$ clusters $A = \{A_1, \ldots, A_k\}$, defined as

$$A_l = \big\{ x \in X^n : l = \mathrm{argmin}_{1 \le j \le k}\, \phi(\|x - c_j\|_2) \big\}, \tag{1}$$

where ties are broken randomly. Varying $\phi$ beyond the usual quadratic function ($\phi(t) = t^2$), we expect to gain some robustness against outliers [9]. When $\phi$ is upper bounded by a constant $\beta$, the clusters are defined as follows. For $l \le k$, let

$$A_l = \big\{ x \in X^n : l = \mathrm{argmin}_{1 \le j \le k}\, \phi(\|x - c_j\|_2) \text{ and } \phi(\|x - c_l\|_2) \le \beta \big\}, \tag{2}$$

and define the extra cluster

$$A_{k+1} = \big\{ x \in X^n : \min_{1 \le j \le k} \phi(\|x - c_j\|_2) > \beta \big\}. \tag{3}$$

This extra cluster contains points whose distance from their closest center, when measured according to $\phi(\|x - c_l\|_2)$, is larger than $\beta$ and, as will become clear later, it represents the set of outliers. From now on, given a set of centers $\{c_1, \ldots, c_k\}$, we write just $A = \{A_1, \ldots, A_k\}$ and implicitly mean $A \cup A_{k+1}$ when $\phi$ is bounded.¹

¹ For a similar definition of the set of clusters induced by a bounded $\phi$, see also Section 4 in [2].

Now, consider the following instance of (GKM), for the same set of points $X^n$:

$$\min_{c_1, \ldots, c_k}\; R_n^\lambda(c_1, \ldots, c_k) = \sum_{i=1}^{n} \min_{1 \le l \le k} \underbrace{\min_{o_i} \Big\{ \tfrac{1}{2}\|x_i - c_l - o_i\|_2^2 + f_\lambda(\|o_i\|_2) \Big\}}_{\phi(\|x_i - c_l\|_2)} \quad \text{subject to } c_l \in \mathbb{R}^p,\ o_i \in \mathbb{R}^p, \tag{RKM}$$

where $f_\lambda : \mathbb{R} \to \mathbb{R}_+$ is a symmetric, lsc, proper² and bounded from below function with minimum value $f_\lambda(0)$, and $\lambda$ is a non-negative parameter. This problem is called robust k-means (RKM) and, as we show later, it takes the form of (GKM) when $\phi$ equals the Moreau envelope of $f_\lambda$. The problem (RKM) [5, 24] describes the following simple model: we allow each observation $x_i$ to take on an "error" term $o_i$ and we penalize the errors, using a group penalty, in order to encourage most of the observations' errors to be equal to zero. We consider functions $f_\lambda$ where the parameter $\lambda \ge 0$ has the following effect:
for $\lambda = 0$, all $o_i$'s may become arbitrarily large (all observations are outliers), while, for $\lambda \to \infty$, all $o_i$'s become zero (no outliers); non-trivial cases occur for intermediate values $0 < \lambda < \infty$. Our interest is in understanding the robustness and consistency properties of (RKM).

Robustness: Although robustness is an important notion, it has not been given a standard technical definition in the literature. Here, we focus on the finite sample breakdown point [18], which counts how many outliers a dataset may contain without causing significant damage to the estimates of the centers. Such damage is reflected in an arbitrarily large magnitude of at least one center. In Section 3, we show that two outliers in a dataset are enough to break down some centers. On the other hand, if we restrict our focus to some "well-structured" datasets, then (RKM) has some remarkable robustness properties even if there is a considerable amount of contamination.

Consistency: Much is known about the consistency of (GKM) when the function $\phi$ is lsc and increasing [11, 15]. It turns out that this case also includes the case of (RKM) when $f_\lambda$ is convex (see Section 3.1 for details). In Section 4, we show that the known non-asymptotic results about consistency of quadratic k-means may remain valid even when $f_\lambda$ is non-convex.

2 Preliminaries and some technical remarks

We start our analysis with a few technical tools from variational analysis [19]. Here, we introduce the necessary notation and a lemma (the proofs are in the appendix). The Moreau envelope $e_f^\gamma(x)$ with parameter $\gamma > 0$ (Definition 1.22 in [19]) of an lsc, proper, and bounded from below function $f : \mathbb{R}^p \to \mathbb{R}$ and the associated (possibly multivalued) proximal map $P_f^\gamma : \mathbb{R}^p \rightrightarrows \mathbb{R}^p$ are

$$e_f^\gamma(x) = \min_{z \in \mathbb{R}^p} \frac{1}{2\gamma}\|x - z\|_2^2 + f(z) \quad \text{and} \quad P_f^\gamma(x) = \mathrm{argmin}_{z \in \mathbb{R}^p} \frac{1}{2\gamma}\|x - z\|_2^2 + f(z), \tag{4}$$

respectively. In order to simplify the notation, in the following we fix $\gamma$ to 1 and suppress the superscript. The Moreau envelope is a continuous approximation from below of $f$ having the same set of minimizers, while the proximal map gives the (possibly non-unique) minimizing arguments in (4). For (GKM), we define $\Phi : \mathbb{R}^p \to \mathbb{R}$ as $\Phi(x) := \phi(\|x\|_2)$. Accordingly, for (RKM), we define $F_\lambda : \mathbb{R}^p \to \mathbb{R}$ as $F_\lambda(x) := f_\lambda(\|x\|_2)$. Thus, we obtain the following pairs:

$$e_{f_\lambda}(x) := \min_{o \in \mathbb{R}} \tfrac{1}{2}(x - o)^2 + f_\lambda(o), \qquad P_{f_\lambda}(x) := \mathrm{argmin}_{o \in \mathbb{R}} \tfrac{1}{2}(x - o)^2 + f_\lambda(o), \quad x \in \mathbb{R}, \tag{5a}$$

$$e_{F_\lambda}(x) := \min_{o \in \mathbb{R}^p} \tfrac{1}{2}\|x - o\|_2^2 + F_\lambda(o), \qquad P_{F_\lambda}(x) := \mathrm{argmin}_{o \in \mathbb{R}^p} \tfrac{1}{2}\|x - o\|_2^2 + F_\lambda(o), \quad x \in \mathbb{R}^p. \tag{5b}$$

Obviously, (RKM) is equivalent to (GKM) when $\Phi(x) = e_{F_\lambda}(x)$. Every map $P : \mathbb{R} \rightrightarrows \mathbb{R}$ throughout the text is assumed to be i) odd, i.e., $P(-x) = -P(x)$, ii) compact-valued, iii) non-decreasing, and iv) to have a closed graph. We know that for any such map there exists at least one function $f_\lambda$ such that $P = P_{f_\lambda}$ (Proposition 3 in [26]).³ Finally, for our purposes (outlier detection), it is natural to require that v) $P$ is a shrinkage rule, i.e., $P(x) \le x$, $\forall x \ge 0$.

² We call $f$ proper if $f(x) < \infty$ for at least one $x \in \mathbb{R}^n$, and $f(x) > -\infty$ for all $x \in \mathbb{R}^n$; in words, if the domain of $f$ is a nonempty set on which $f$ is finite (see page 5 in [19]).

³ Accordingly, for a general function $\phi : \mathbb{R} \to [0, \infty)$ to be a Moreau envelope, i.e., $\phi(\cdot) = e_{f_\lambda}(\cdot)$ as defined in (5a) for some function $f_\lambda$, we require that $\phi(\cdot) - \frac{1}{2}|\cdot|^2$ be a concave function (Proposition 1 in [26]).
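To make definitions (4)-(5a) concrete, here is a small brute-force NumPy sketch (an illustration only, not taken from the paper; the grid and its resolution are arbitrary choices):

```python
import numpy as np

def moreau_envelope_1d(f, x, z_grid):
    """Numerically evaluate e_f(x) and one point of P_f(x), with gamma = 1.

    f      : vectorized function R -> R, lsc and bounded below.
    x      : scalar query point.
    z_grid : 1-D array of candidate minimizers z.
    """
    vals = 0.5 * (x - z_grid) ** 2 + f(z_grid)
    j = np.argmin(vals)
    return vals[j], z_grid[j]      # (envelope value, one proximal point)

# Example: f(z) = |z| (the l1 penalty); its Moreau envelope is the Huber loss
# and its proximal map is soft-thresholding, so P_f(2.0) should be about 1.0
# and e_f(2.0) should be about |2.0| - 1/2 = 1.5.
z = np.linspace(-10, 10, 200001)
env, prox = moreau_envelope_1d(np.abs, x=2.0, z_grid=z)
print(env, prox)                   # approx. 1.5, 1.0
```

The l1 example anticipates the discussion in Section 3.1, where this penalty appears as the limiting convex case.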
The following corollary is quite straightforward and useful in the sequel.

Corollary 1. Using the notation in definitions (5a) and (5b), we have

$$P_{F_\lambda}(x) = \frac{x}{\|x\|_2}\, P_{f_\lambda}(\|x\|_2) \quad \text{and} \quad e_{F_\lambda}(x) = e_{f_\lambda}(\|x\|_2). \tag{6}$$

Passing from a model of minimization in terms of a single problem, like (GKM), to a model in which a problem is expressed in a particular parametric form, like (RKM) with the Moreau envelope, the description of optimality conditions is opened to the incorporation of the multivalued map $P_{F_\lambda}$. The next lemma describes the necessary conditions for a center $c_l$ to be (locally) optimal for (RKM). Since we deal with the general case, well-known results, such as smoothness of the Moreau envelope or convexity of its subgradients, can no longer be taken for granted.

Remark 1. Let $\Phi(\cdot) = e_{F_\lambda}(\cdot)$. The usual subgradient, denoted $\hat{\partial}\Phi(x)$, is not sufficient to characterize the differentiability properties of $R_n^\lambda$ in (RKM). Instead, we use the (generalized) subdifferential $\partial\Phi(x)$ (Definition 8.3 in [19]). For all $x$, we have $\hat{\partial}\Phi(x) \subset \partial\Phi(x)$. Usually, the previous two sets coincide at a point $x$; in this case, $\Phi$ is called regular at $x$. However, it is common in practice that the sets $\hat{\partial}\Phi(x)$ and $\partial\Phi(x)$ are different (for a detailed exposition on subgradients see Chapter 8 in [19]; see also Example 1 in Appendix A.9).

Lemma 1. Let $P_{F_\lambda} : \mathbb{R}^p \rightrightarrows \mathbb{R}^p$ be a proximal map and set $\Phi(\cdot) = e_{F_\lambda}(\cdot)$. The necessary (generalized) first-order conditions for the centers $\{c_1, \ldots, c_k\} \subset \mathbb{R}^p$ to be optimal for (RKM) are

$$0 \in \partial\Big(\sum_{i \in A_l} \Phi(x_i - c_l)\Big) \subset \sum_{i \in A_l} \partial\Phi(x_i - c_l) \subset \sum_{i \in A_l} \big(c_l - x_i + P_{F_\lambda}(x_i - c_l)\big), \qquad l \in \{1, \ldots, k\}. \tag{7}$$

The interpretation of the set inclusion above is the following: for any center $c_l \in \mathbb{R}^p$, every subgradient vector in $\partial\Phi(x_i - c_l)$ must be a vector associated with a vector in $P_{F_\lambda}(x_i - c_l)$ (Theorem 10.13 in [19]). However, in general, the converse does not hold. We note that when the proximal map is single-valued and continuous, which happens for example not only when $f_\lambda$ is convex, but also for many popular non-convex penalties, both set inclusions become equalities and the converse holds, i.e., every vector in $P_{F_\lambda}(x_i - c_l)$ is a vector associated with a subgradient in $\partial\Phi(x_i - c_l)$ (Theorem 10.13 in [19] and Proposition 7 in [26]).

We close this section with some useful remarks on the properties of the Moreau envelope as a map between two spaces of functions. There exist cases where two different functions, $f_\lambda \neq f'_\lambda$, have equal Moreau envelopes, $e_{f_\lambda} = e_{f'_\lambda}$ (Proposition 1 in [26]), implying that two different forms of (RKM) correspond to the same $\phi$ in (GKM). For example, the proximal hull of $f_\lambda$, defined as $h_{f_\lambda}(x) := -e_{(-e_{f_\lambda})}(x)$, is a function different from $f_\lambda$ but has the same Moreau envelope as $f_\lambda$ (see also Example 1.44 in [19], Proposition 2 and Example 3 in [26]). This is the main reason we preferred the proximal map, instead of the penalty function, point of view for the analysis of (RKM).

3 On the breakdown properties of robust k-means

In this section, we study the finite sample breakdown point of (RKM) and, more specifically, its universal breakdown point. Loosely speaking, the breakdown point measures the minimum fraction of outliers that can cause excessive damage to the estimates of the centers. Here, it will become clear how the interplay between the two forms, (GKM) and (RKM), helps the analysis. Given a dataset $X^n = \{x_1, \ldots, x_n\}$ and a nonnegative integer $m \le n$, we say that $X_m^n$ is an $m$-modification if it arises from $X^n$ after replacing $m$ of its elements by arbitrary elements $x'_i \in \mathbb{R}^p$ [6]. Denote by $r(\lambda)$ the number of non-outlier samples, as counted after solving (RKM), for a dataset $X^n$ and some $\lambda \ge 0$, i.e.,⁴
$$r(\lambda) := \big|\{x_i \in X^n : \|o_i\|_2 = 0,\ i = 1, \ldots, n\}\big|. \tag{8}$$

Then, the number of estimated outliers is $q(\lambda) = n - r(\lambda)$. In order to simplify notation, we drop the dependence of $r$ and $q$ on $\lambda$. With this notation, we proceed to the following definition.

⁴ More than one $\lambda$ can yield the same $r$, but this does not affect our analysis.

Definition 1 (universal breakdown point for the centers [6]). Let $n, r, k$ be such that $n \ge r \ge k + 1$. Given a dataset $X_m^n$ in $\mathbb{R}^p$, let $\{c_1, \ldots, c_k\}$ denote the (global) optimal set of centers for (RKM). The universal breakdown value of (RKM) is

$$\varepsilon(n, r, k) := \min_{X^n} \min_{1 \le m \le n} \Big\{ \frac{m}{n} : \sup_{X_m^n} \max_{1 \le l \le k} \|c_l\|_2 = \infty \Big\}. \tag{9}$$

Here, $X^n = \{x_1, \ldots, x_n\} \subset \mathbb{R}^p$, while $X_m^n \subset \mathbb{R}^p$ runs over all $m$-modifications of $X^n$. According to the concept of universal breakdown point, (RKM) breaks down at the first integer $m$ for which there exists a set $X^n$ such that the estimates of the cluster centers become arbitrarily bad for a suitable modification $X_m^n$. Our analysis is based on $P_{f_\lambda}$ and considers two cases: those of biased and unbiased proximal maps. The former corresponds to the class of convex functions $f_\lambda$, while the latter corresponds to a class of non-convex $f_\lambda$.

3.1 Biased proximal maps: the case of convex $f_\lambda$

If $f_\lambda$ is convex, then $\Phi = e_{F_\lambda}$ is also convex, while $P_{F_\lambda}$ is continuous, single-valued, and satisfies [19]

$$\|x - P_{F_\lambda}(x)\|_2 \to \infty \quad \text{as } \|x\|_2 \to \infty. \tag{10}$$

Proximal maps with this property are called biased since, as the $l_2$-norm of $x$ increases, so does the norm of the difference in (10). In this case, for each $x_i \in A_l$, from Lemma 1 and expression (10), we have

$$\|\partial\Phi(x_i - c_l)\|_2 = \|\nabla e_{F_\lambda}(x_i - c_l)\|_2 = \|c_l - x_i + P_{F_\lambda}(x_i - c_l)\|_2 \to \infty \quad \text{as } \|x_i - c_l\|_2 \to \infty. \tag{11}$$

The supremum value of $\|\partial\Phi(x - c_l)\|_2$ is closely related to the gross error sensitivity of an estimator [9]. It is interpreted as the worst possible influence which a sample $x$ can have on $c_l$ [7]. In view of (11) and the definition of the clusters in (1), (RKM) is extremely sensitive. Although it can detect an outlier, i.e., a sample $x_i$ with a nonzero estimate for $\|o_i\|_2$, it does not reject it, since the influence of $x_i$ on its closest center never vanishes.⁵ The $l_1$-norm, $f_\lambda(x) = \lambda|x|$, which has Moreau envelope equal to the Huber loss function [24], is the limiting case for the class of convex penalty functions: although it keeps the difference $\|x - P_{F_\lambda}(x)\|_2$ in (10) constant and equal to $\lambda$, it introduces a bias term proportional to $\lambda$ in the estimate $c_l$. The following proposition shows that (RKM) with a biased $P_{F_\lambda}$ has breakdown point equal to $\frac{1}{n}$, i.e., one outlier suffices to break down a center.

Proposition 1. Assume $k \ge 2$, $k + 1 < r \le n$. Given a biased proximal map, there exist a dataset $X^n$ and a modification $X_1^n$ such that (RKM) breaks down.

3.2 Unbiased proximal maps: the case of non-convex $f_\lambda$

Consider now the $l_0$-(pseudo)norm on $\mathbb{R}$, $f_\lambda(z) := \lambda|z|_0 = \frac{\lambda^2}{2}\,\mathbb{1}_{\{z \neq 0\}}$, and the associated hard-thresholding proximal operator $P_{\lambda|\cdot|_0} : \mathbb{R} \rightrightarrows \mathbb{R}$,

$$P_{\lambda|\cdot|_0}(x) = \mathrm{argmin}_{z \in \mathbb{R}}\ \tfrac{1}{2}(x - z)^2 + f_\lambda(z) = \begin{cases} 0, & |x| < \lambda, \\ \{0, x\}, & |x| = \lambda, \\ x, & |x| > \lambda. \end{cases} \tag{12}$$

According to Lemma 1, for $p = 1$ (the scalar case), we have

$$\partial\Phi(x_i - c_l) \subset c_l - x_i + P_{\lambda|\cdot|_0}(x_i - c_l) = \{0\} \quad \text{for } |x_i - c_l| > \lambda,\ x_i \in A_l, \tag{13}$$

implying that $\Phi(x_i - c_l)$, as a function of $c_l$, remains constant for $|x_i - c_l| > \lambda$. As a consequence of (13), if $c_l$ is locally optimal, then $0 \in \partial\big\{\sum_{i \in A_l} \Phi(x_i - c_l)\big\}$ and

$$0 \in \sum_{\substack{i \in A_l,\\ |x_i - c_l| < \lambda}} (c_l - x_i) \;+\; \sum_{\substack{i \in A_l,\\ |x_i - c_l| = \lambda}} \big(c_l - x_i + P_{\lambda|\cdot|_0}(x_i - c_l)\big). \tag{14}$$

Depending on the value of $\lambda$, (RKM) with the $l_0$-norm is able to ignore samples whose distance from their closest center is larger than $\lambda$. This is done since $P_{\lambda|\cdot|_0}(x_i - c_l) = x_i - c_l$ whenever $|x_i - c_l| > \lambda$, and the influence of $x_i$ vanishes.⁵

⁵ See the analysis in [7] about the influence function of (GKM) when $\phi$ is convex.
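A minimal sketch of the hard-thresholding operator in (12) and its Moreau envelope follows; note that the tie at $|x| = \lambda$, where the map is multivalued, is broken arbitrarily here (an implementation detail, not part of the paper):

```python
import numpy as np

def prox_l0(x, lam):
    # Hard-thresholding map of f_lam(z) = (lam^2 / 2) * 1{z != 0}; at
    # |x| = lam both 0 and x are minimizers, and we arbitrarily return x.
    return np.where(np.abs(x) < lam, 0.0, x)

def envelope_l0(x, lam):
    # Moreau envelope: min over o of 0.5*(x - o)^2 + f_lam(o).
    # Choosing o = 0 costs x^2/2, choosing o = x costs lam^2/2,
    # hence the envelope is min(x^2, lam^2) / 2.
    return np.minimum(x**2, lam**2) / 2.0

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(prox_l0(x, 1.0))      # [-3.  0.  0.  0.  3.]  distant points untouched
print(envelope_l0(x, 1.0))  # [0.5  0.125  0.  0.125  0.5]  flat beyond lam
```

The flat envelope beyond $\lambda$ is exactly the mechanism behind (13)-(14): a sample farther than $\lambda$ from its center contributes a constant to the risk and exerts no pull on $c_l$.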
Depending on the value of ?, (RKM) with the l0 -norm is able to ignore samples with distance from their closest center larger than ?. This is done since P?|?|0 (xi ? cl ) = xi ? cl whenever |xi ? cl | > ? 5 See the analysis in [7] about the influence function of (GKM) when ? is convex. 4 and the influence of xi vanishes. In fact, there is a whole family of non-convex f? ?s whose proximal map Pf? satisfies Pf? (x) = x, for all |x| > ?, (15) for some ? > 0. These are called unbiased proximal maps [13, 20] and have the useful property that, as one observation is arbitrarily modified, all estimated cluster centers remain bounded by a constant that depends only on the remaining unmodified samples. Under certain circumstances, the proof of the following proposition reveals that, if there exists one outlier in the dataset, then robust k-means will reject it. Proposition 2. Assume k ? 2, k + 1 < r ? n, and consider the dataset X n = {x1 , . . . , xn } along with its modification by one replacement y, X1n = {x1 , . . . , xn?1 , y}. If we solve (RKM) with X1n and an unbiased proximal map satisfying (15), then all estimates for the cluster centers remain bounded by a constant that depends only on the unmodified samples of X n . Next, we show that, even for this class of maps, there always exists a dataset that causes one of the estimated centers to breakdown as two particular observations are suitably replaced. Theorem 1 (Universal breakdown point for (RKM)). Assume k ? 2 and n ? r ? k + 2. Given an unbiased proximal map satisfying (15), there exist a dataset X n and a modification X2n , such that (RKM) breaks down. Hence, the universal breakdown point of (RKM) with an unbiased proximal map is n2 . In Figure 1, we give a visual interpretation of Theorem 1. The top subfigure depicts the unmodified initial dataset X 9 = {x1 , . . . , x9 } (black circles) with a clear two-cluster structure; the bottom subfigure shows the modification X29 (dashed line arrows). Theorem 1 states that (RKM) on X29 fails to be robust since, every subset of X29 with r = 8 points has a cluster containing an outlier. 3.3 Figure 1: The top subfigure is the unmodified dataset X 9 . Theorem 1 states that every subset of the modification X29 (bottom subfigure) with size 8 contains an outlier. Restricted robustness of robust k-means for well-clustered data The result of Theorem 1 is disappointing but it is not (RKM) to be blamed for the poor performance but the tight notion of the definition about the breakdown point [6, 7]; allowing any kind of contamination in a dataset is a very general assumption. In this section, we place two restrictions: i) we consider datasets where inlier samples can be covered by unions of balls with centers that are ?far apart? each other, and ii) we ask a question different from the finite sample breakdown point. We want to exploit as much as possible the results of [2] concerning a new quantitative measure of noise robustness which compares the output of (RKM) on a contaminated dataset to its output on the uncontaminated version of the dataset. Our aim is to show that (RKM), with a certain class of proximal maps and datasets that are well-structured ignores the influence of outliers when grouping the inliers. First, we introduce Corollary 2 which states the form that Pf? should have in order the results of [2] to apply to (RKM) and, second, we give details about the datasets which we consider as wellstructured. 
Using this corollary we are able to design proximal maps for which Theorems 3, 4, and 5 in [2] apply; otherwise, it is not clear how the analysis of [2] is valid for (RKM). Let h : R ? R be a continuous function with the following properties: 1. h is odd and non-decreasing (h+ (?) is used to denote its restriction on [0, ?)); 2. h is a shrinkage rule: 0 ? h+ (x) ? x, ?x ? [0, ?); 3. the difference x ? h+ (x) is non-decreasing, i.e., for 0 ? x1 ? x2 we have x1 ? h+ (x1 ) ? x2 ? h+ (x2 ). 5 Define the map ? ?h(x), Pf? (x) := {h(x), x}, ? x, |x| < ?, |x| = ?, |x| > ?. (16) Multivaluedness of Pf? at |x| = ? signals that ef? is non-smooth at these points. An immediate consequence for the Moreau envelope associated with the previous map is the following. Corollary 2. Let the function g : [0, ?) ? [0, ?) be defined as / x g(x) := (u ? h(u))du, x ? [0, ?). (17) 0 Then, the Moreau envelope associated with Pf? in (16) is ef? (x) = min{g(|x|), g(?)} = g(min{|x|, ?}). (18) Next, we define what it means for a dataset to be (?1 , ?2 )-balanced; this is the class of datasets which we consider to be well-structured. Definition 2 ((?1 , ?2 ) balanced dataset [2]). Assume that a set X n ? Rp has a subset I (inliers), with at least n2 samples, and the following properties: 1. I = 0k l=1 Bl , where Bl = B(bl , r) is a ball in Rp with bounded radius r and center bl ; 2. ?1 |I| ? |Bl | ? ?2 |I| for every l, where |Bl | is the number of samples in Bl and ?1 ,?2 > 0; 3. ||bl ? bl? ||2 > v for every l ?= l? , i.e., the centers of the balls are at least v > 0 apart. Then, X n is a (?1 , ?2 )-balanced dataset. We now state the form that Theorem 3 in [2] takes for (RKM). Theorem 2 (Restricted robustness of (RKM)). If i) ef? is as in Corollary 2, i.e., ef? (||x||2 ) = g(min{||x||2 , ?}), ii) X n has a (?1 , ?2 )-balanced subset of samples I with k balls, and 2 iii) the centers of the balls are at least v > 4r + 2g ?1 ( ?1?+? g(r)) apart, then for ? ? 1 1 2 33 |I| v v ?1 the set of outliers X n \I has no effect on the 2, g |X n \I| (?1 g( 2 ? 2r) ? (?1 + ?2 )g(r)) grouping of inliers I. In other words, if {x, y} ? Bl and {c1 , . . . , ck } are the optimal centers when solving (RKM) for a ? as described before, then l = argmin1?j?k ef? (||x ? cj ||2 ) = argmin1?j?k ef? (||y ? cj ||2 ). For the sake of completeness, we give a proof of this theorem in the appendix. In a similar way, we can recast the results of Theorems 4 and 5 in [2] to be valid for (RKM). 4 On the consistency of robust k-means Let X n be a set with n independent and identically distributed random samples xi from a fixed but unknown probability distribution ?. Let C? be the empirical optimal set of centers, i.e., C? := argminc1 ...,ck ?Rp Rn? (c1 , . . . , ck ). (19) C ? := argminc1 ...,ck ?Rp R? (c1 , . . . , ck ), (20) The population optimal set of centers is the set where R? is the population clustering risk, defined as / " # 1 R? (c1 , . . . , ck ) := min minp ||x ? cl ? o||22 + f? (||o||2 ) ?(dx). 1?l?k o?R 2 $ %& ' (21) ?(||x?cl ||2 )=ef? (||x?cl ||2 ) Loss consistency and (simply) consistency for (RKM) require, respectively, that n?? ? n?? Rn? (C) ?? R? (C ? ) and C? ?? C ? . 6 (22) ? converges In words, as the size n of the dataset X n increases, the empirical clustering risk Rn? (C) ? ? almost surely to the minimum population risk R (C ) and (for n large enough) C? can effectively replace the optimal set C ? in quantizing the unknown probability measure ?. For the case of convex f? 
, non-asymptotic results describing the rate of convergence of Rn? to R in (22) are already known ([11], Theorem 3). Noting that the Moreau envelope of a non-convex f? belongs to a class of functions with polynomial discrimination [16] (the shatter coefficient of this class is bounded by a polynomial) we give a sketch proof of the following result. Theorem 3 (Consistency of (RKM)). Let the samples xi ? X n , i ? {1, . . . , n}, come from a fixed but unknown probability measure ?. For any k ? 1 and any unbiased proximal map, we have ? ? R? (C ? ) and lim ER? (C) n?? lim C? ? C ? (convergence in probability). n?? (23) Theorem 3 reads like an asymptotic convergence result. However, its proof (given in the appendix) uses combinatorial tools from Vapnik-Chervonenkis 4theory, revealing that the non-asymptotic rate ? to R? (C ? ) is of order O( log n/n) (see Corollary 12.1 in [4]). of convergence of ER? (C) 5 Relating (RKM) to trimmed k-means As the effectiveness of robust k-means on real world and synthetic data has already been evaluated [5, 24], the purpose of this section is to relate (RKM) to trimmed k-means (TKM) [7]. Trimmed kmeans is based on the methodology of ?impartial trimming?, which is a combinatorial problem fundamentally different from (RKM). Despite their differences, the experiments show that, both (RKM) and (TKM) perform remarkably similar in practice. The solution of (TKM) (which is also a set of k centers) is the solution of quadratic k-means on the subsample containing ?n(1 ? ?)? points with the smallest mean deviation (0 < ? < 1). The only common characteristic of (RKM) and (TKM) is that they both have the same universal breakdown point, i.e., n2 , for arbitrary datasets. Trimmed k-means takes as input a dataset X n , the number of clusters k, and a proportion of outliers a ? (0, 1) to remove.6 A popular heuristic algorithm for (TKM) is the following. After the initialization, each iteration of (TKM) consists of the following steps: i) the distance of each observation from its closest center is computed, ii) the top ?an? observations with larger distance from its closest center are removed, iii) the remaining points are used to update the centers. The previous three steps are repeated untill the centers converge.7 As for robust k-means, we solve the (RKM) problem with a coordinate optimization procedure (see Appendix A.9 for details). The synthetic data for the experiments come from a mixture of Gaussians with 10 components and without any overlap between them.8 The number of inlier samples is 500 and each inlier xi ? [?1, 1]10 for i ? {1, . . . , 500}. On top of the inliers lie 150 outliers in R10 distributed uniformly in general positions over the entire space. We consider two scenarios: in the first, the outliers lie in [?3, 3]10 (call it mild-contamination), while, in the second, the outliers lie in [?6, 6]10 (call it heavy-contamination). The parameter a in trimmed k-means (the percentage of outliers) is set to a = 0.3, while the value of the parameter ? for which (RKM) yields 150 outliers is found through a search over a grid on the set ? ? (0, ?max ) (we set ?max as the maximum distance between two points in a dataset). Both algorithms, as they are designed, require as input an initial set of k points; these points form the initial set of centers. In all experiments, both (RKM) and (TKM) take the same k vectors as initial centers, i.e., k points sampled randomly from the dataset. 
The statistics we use for the comparison are: i) the Rand index for clustering accuracy [17], ii) the cluster estimation error, i.e., the root mean square error between the estimated cluster centers and the sample mean of each cluster, iii) the true positive outlier detection rate, and, finally, iv) the false positive outlier detection rate. In Figures 2-3, we plot the results for a proximal map $P_f$ like the one in (16) with $h(x) = \delta x$ and $\delta = 0.005$; with this choice for $h$, we mimic the hard-thresholding operator. The results for each scenario (accuracy, cluster estimation error, etc.) are averages over 150 runs of the experiment. As seen, both algorithms share almost the same statistics in all cases.

Figure 2: Performance of robust and trimmed k-means (accuracy, center estimation error, true and false positive error rates, and cluster radius estimation error) on a mixture of 10 Gaussians without overlap. On top of the 500 samples from the mixture there are 150 outliers uniformly distributed in $[-1, 1]^{10}$.

Figure 3: The same setup as in Figure 2, except that the coordinates of each outlier lie in $[-3, 3]^{10}$.

Figure 4: Results on two spherical clusters with equal radius $r$, each one with 150 samples and centers at least $4r$ apart. On top of the samples lie 150 outliers uniformly distributed in $[-6, 6]^{10}$.

In Figure 4, we plot the results for the case of two spherical clusters in $\mathbb{R}^{10}$ with equal radius $r$, each one with 150 samples, and centers that are at least $4r$ apart from each other. The inlier samples are in $[-3, 3]^{10}$. The outliers are 150 (half of the dataset is contaminated) and are uniformly distributed in $[-6, 6]^{10}$. The results (accuracy, cluster estimation error, etc.) are averages over 150 runs of the experiment. This configuration is a heavy contamination scenario but, due to the structure of the dataset, as expected from Theorem 2, (RKM) performs remarkably well; the same holds for (TKM).

6 Conclusions

We provided a theoretical analysis of the robustness and consistency properties of a variation of the classical quadratic k-means called robust k-means (RKM).
As a by-product of the analysis, we derived a detailed description of the optimality conditions for the associated minimization problem. In most cases, (RKM) shares the computational simplicity of quadratic k-means, making it a "computationally cheap" candidate for robust nearest-neighbor clustering. We showed that (RKM) cannot be robust against every type of contamination on every type of dataset, no matter the form of the proximal map we use. If we restrict our attention to "well-structured" datasets, then the algorithm exhibits some desirable noise robustness. As for the consistency properties, we showed that the most general results for consistency of quadratic k-means still remain valid for this robust variant.

Acknowledgments

The author would like to thank Athanasios P. Liavas for useful comments and suggestions that improved the quality of the article.

References

[1] Anestis Antoniadis and Jianqing Fan. Regularization of wavelet approximations. Journal of the American Statistical Association, 2011.
[2] Shai Ben-David and Nika Haghtalab. Clustering in the presence of background noise. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 280-288, 2014.
[3] Sanjay Chawla and Aristides Gionis. k-means--: A unified approach to clustering and outlier detection. SIAM.
[4] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability. Springer New York, 1997.
[5] Pedro A. Forero, Vassilis Kekatos, and Georgios B. Giannakis. Robust clustering using outlier-sparsity regularization. Signal Processing, IEEE Transactions on, 60(8):4163-4177, 2012.
[6] María Teresa Gallegos and Gunter Ritter. A robust method for cluster analysis. Annals of Statistics, pages 347-380, 2005.
[7] Luis Ángel García-Escudero and Alfonso Gordaliza. Robustness properties of k-means and trimmed k-means. Journal of the American Statistical Association, 94(447):956-969, 1999.
[8] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.
[9] Frank R. Hampel, Elvezio M. Ronchetti, Peter J. Rousseeuw, and Werner A. Stahel. Robust Statistics: The Approach Based on Influence Functions, volume 114. John Wiley & Sons, 2011.
[10] Christian Hennig. trimcluster: Cluster analysis with trimming, 2012. R package version 0.1-2.
[11] Tamás Linder. Learning-theoretic methods in vector quantization. In Principles of Nonparametric Learning, pages 163-210. Springer, 2002.
[12] Stuart P. Lloyd. Least squares quantization in PCM. Information Theory, IEEE Transactions on, 28(2):129-137, 1982.
[13] Rahul Mazumder, Jerome H. Friedman, and Trevor Hastie. SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 2012.
[14] Volodymyr Melnykov, Wei-Chen Chen, and Ranjan Maitra. MixSim: An R package for simulating data to study performance of clustering algorithms. Journal of Statistical Software, 51(12):1-25, 2012.
[15] David Pollard. Strong consistency of k-means clustering. The Annals of Statistics, 9(1):135-140, 1981.
[16] David Pollard. Convergence of Stochastic Processes. Springer Science & Business Media, 1984.
[17] William M. Rand. Objective criteria for the evaluation of clustering methods.
Journal of the American Statistical Association, 66(336):846-850, 1971.
[18] G. Ritter. Robust Cluster Analysis and Variable Selection. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. CRC Press, 2014.
[19] R. Tyrrell Rockafellar and Roger J-B Wets. Variational Analysis, volume 317. Springer Science & Business Media, 2009.
[20] Yiyuan She et al. Thresholding-based iterative selection procedures for model selection and shrinkage. Electronic Journal of Statistics, 3:384-415, 2009.
[21] Marc Teboulle. A unified continuous optimization framework for center-based clustering methods. The Journal of Machine Learning Research, 8:65-102, 2007.
[22] Paul Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475-494, 2001.
[23] Sara Van De Geer. Empirical processes in M-estimation. June 13, 2003. Handout at New Directions in General Equilibrium Analysis (Cowles Workshop, Yale University).
[24] Daniela M. Witten. Penalized unsupervised learning with outliers. Statistics and Its Interface, 6(2):211, 2013.
[25] Stephen J. Wright. Coordinate descent algorithms. Mathematical Programming, 151(1):3-34, 2015.
[26] Yaoliang Yu, Xun Zheng, Micol Marchetti-Bowick, and Eric P. Xing. Minimizing nonconvex nonseparable functions. In AISTATS, 2015.
Stochastic Three-Composite Convex Minimization

Alp Yurtsever, Bằng Công Vũ, and Volkan Cevher
Laboratory for Information and Inference Systems (LIONS)
École Polytechnique Fédérale de Lausanne, Switzerland
alp.yurtsever@epfl.ch, bang.vu@epfl.ch, volkan.cevher@epfl.ch

Abstract

We propose a stochastic optimization method for the minimization of the sum of three convex functions, one of which has Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in the setting where it is computationally advantageous to process the smooth term in the decomposition with its stochastic gradient estimate and the other two functions separately with their proximal operators, such as doubly regularized empirical risk minimization problems. We prove the convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term. Our method operates in the primal space and can be considered as a stochastic extension of the three-operator splitting method. Numerical evidence supports the effectiveness of our method in real-world problems.

1 Introduction

We propose a stochastic optimization method for the three-composite minimization problem:
minimize_{x ∈ R^d} f(x) + g(x) + h(x),   (1)
where f : R^d → R and g : R^d → R are proper, lower semicontinuous convex functions that admit tractable proximal operators, and h : R^d → R is a smooth function with restricted strong convexity. We assume that we have access to unbiased, stochastic estimates of the gradient of h in the sequel, which is key to scaling up optimization and to addressing streaming settings where data arrive in time.

Template (1) covers a large number of applications in machine learning, statistics, and signal processing by appropriately choosing the individual terms. Operator splitting methods are powerful in this setting, since they reduce the complex problem (1) into smaller subproblems. These algorithms are easy to implement, and they typically exhibit state-of-the-art performance.

To our knowledge, there is no operator splitting framework that can currently tackle template (1) using a stochastic gradient of h and the proximal operators of f and g separately, which is critical to the scalability of the methods. This paper specifically bridges this gap. Our basic framework is closely related to the deterministic three-operator splitting method proposed in [11], but we avoid the computation of the gradient ∇h and instead work with its unbiased estimates. We provide rigorous convergence guarantees for our approach and provide guidance in selecting the learning rate under different scenarios.

Road map. Section 2 introduces the basic optimization background. Section 3 then presents the main algorithm and provides its convergence characterization. Section 4 places our contributions in light of the existing work. Numerical evidence that illustrates our theory appears in Section 5. We relegate the technical proofs to the supplementary material.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 Notation and background

This section recalls a few basic notions from convex analysis and probability theory, and presents the notation used in the rest of the paper. Throughout, Γ₀(R^d) denotes the set of all proper, lower semicontinuous convex functions from R^d to (−∞, +∞], and ⟨· | ·⟩ is the standard scalar product on R^d with its associated norm ‖·‖.
Subdifferential. The subdifferential of f ∈ Γ₀(R^d) at a point x ∈ R^d is defined as
∂f(x) = {u ∈ R^d | f(y) − f(x) ≥ ⟨y − x | u⟩, ∀y ∈ R^d}.
We denote the domain of ∂f as dom(∂f) = {x ∈ R^d | ∂f(x) ≠ ∅}. If ∂f(x) is a singleton, then f is a differentiable function, and ∂f(x) = {∇f(x)}.

Indicator function. Given a nonempty subset C of R^d, the indicator function of C is given by
ι_C(x) = 0 if x ∈ C, and +∞ if x ∉ C.   (2)

Proximal operator. The proximal operator of a function f ∈ Γ₀(R^d) is defined as follows:
prox_f(x) = argmin_{z ∈ R^d} { f(z) + (1/2)‖z − x‖² }.   (3)
Roughly speaking, the proximal operator is tractable when the computation of (3) is cheap. If f is the indicator function of a nonempty, closed convex subset C, its proximity operator is the projection operator onto C.

Lipschitz continuous gradient. A function f ∈ Γ₀(R^d) has Lipschitz continuous gradient with Lipschitz constant L > 0 (or simply L-Lipschitz), if
‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖, ∀x, y ∈ R^d.

Strong convexity. A function f ∈ Γ₀(R^d) is called strongly convex with some parameter μ > 0 (or simply μ-strongly convex), if
⟨p − q | x − y⟩ ≥ μ‖x − y‖², ∀x, y ∈ dom(∂f), ∀p ∈ ∂f(x), ∀q ∈ ∂f(y).

Solution set. We denote optimum points of (1) by x★, and the solution set by X★:
x★ ∈ X★ = {x ∈ R^d | 0 ∈ ∇h(x) + ∂g(x) + ∂f(x)}.
Throughout this paper, we assume that X★ is not empty.

Restricted strong convexity. A function f ∈ Γ₀(R^d) has restricted strong convexity with respect to a point x★ in a set M ⊂ dom(∂f), with parameter μ > 0, if
⟨p − q | x − x★⟩ ≥ μ‖x − x★‖², ∀x ∈ M, ∀p ∈ ∂f(x), ∀q ∈ ∂f(x★).

Let (Ω, F, P) be a probability space. An R^d-valued random variable is a measurable function x : Ω → R^d, where R^d is endowed with the Borel σ-algebra. We denote by σ(x) the σ-field generated by x. The expectation of a random variable x is denoted by E[x]. The conditional expectation of x given a σ-field A ⊂ F is denoted by E[x|A]. Given a random variable y : Ω → R^d, the conditional expectation of x given y is denoted by E[x|y]. See [17] for more details on probability theory. An R^d-valued random process is a sequence (x_n)_{n∈N} of R^d-valued random variables.

3 Stochastic three-composite minimization algorithm and its analysis

We present the stochastic three-composite minimization method (S3CM) in Algorithm 1, for solving the three-composite template (1). Our approach combines the stochastic gradient of h, denoted as r, and the proximal operators of f and g in essentially the same structure as the three-operator splitting method [11, Algorithm 2]. Our technique is a nontrivial combination of the algorithmic framework of [11] with stochastic analysis.

Algorithm 1 Stochastic three-composite minimization algorithm (S3CM)
Input: An initial point x_{f,0}, a sequence of learning rates (γ_n)_{n∈N}, and a sequence of square-integrable R^d-valued stochastic gradient estimates (r_n)_{n∈N}.
Initialization:
  x_{g,0} = prox_{γ₀ g}(x_{f,0})
  u_{g,0} = γ₀⁻¹(x_{f,0} − x_{g,0})
Main loop: for n = 0, 1, 2, ... do
  x_{g,n+1} = prox_{γ_n g}(x_{f,n} + γ_n u_{g,n})
  u_{g,n+1} = γ_n⁻¹(x_{f,n} − x_{g,n+1}) + u_{g,n}
  x_{f,n+1} = prox_{γ_{n+1} f}(x_{g,n+1} − γ_{n+1} u_{g,n+1} − γ_{n+1} r_{n+1})
end for
Output: x_{g,n} as an approximation of an optimal solution x★.
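To make the update order concrete, here is a minimal sketch of Algorithm 1, assuming the caller supplies the two proximal operators and a stochastic gradient oracle as Python callables. The function and argument names (`s3cm`, `prox_f`, `prox_g`, `grad_h`) are ours, and the learning-rate schedule is left to the caller (the schedules analyzed in Theorems 1 and 2 below are valid choices).

```python
import numpy as np

def s3cm(prox_f, prox_g, grad_h, x_f0, gammas, n_iter):
    """Sketch of S3CM (Algorithm 1). prox_f(v, gamma) and prox_g(v, gamma)
    evaluate prox_{gamma f} and prox_{gamma g}; grad_h(x) returns an unbiased
    stochastic gradient estimate at x; gammas must have length n_iter + 1."""
    x_g = prox_g(x_f0, gammas[0])
    u_g = (x_f0 - x_g) / gammas[0]
    x_f = x_f0
    for n in range(n_iter):
        x_g = prox_g(x_f + gammas[n] * u_g, gammas[n])       # x_{g,n+1}
        u_g = (x_f - x_g) / gammas[n] + u_g                  # u_{g,n+1}
        r = grad_h(x_g)                                      # r_{n+1} at x_{g,n+1}
        x_f = prox_f(x_g - gammas[n + 1] * (u_g + r), gammas[n + 1])
    return x_g  # approximation of an optimal solution x*
```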
Theorem 1 Assume that h is μ_h-strongly convex and has L-Lipschitz continuous gradient. Further assume that g is μ_g-strongly convex, where we allow μ_g = 0. Consider the following update rule for the learning rate:
γ_{n+1} = ( −γ_n² μ_h η + √( (γ_n² μ_h η)² + (1 + 2γ_n μ_g) γ_n² ) ) / (1 + 2γ_n μ_g), for some γ₀ > 0 and η ∈ ]0, 1[.
Define F_n = σ((x_{f,k})_{0≤k≤n}), and suppose that the following conditions hold for every n ∈ N:
1. E[r_{n+1} | F_n] = ∇h(x_{g,n+1}) almost surely;
2. there exist c ∈ [0, +∞[ and t ∈ R that satisfy Σ_{k=0}^{n} E[‖r_k − ∇h(x_{g,k})‖²] ≤ c n^t.
Then, the iterates of S3CM satisfy
E[‖x_{g,n} − x★‖²] = O(1/n²) + O(1/n^{2−t}).   (4)

Remark 1 The variance condition on the stochastic gradient estimates in the theorems above is satisfied when E[‖r_n − ∇h(x_{g,n})‖²] ≤ c for all n ∈ N and for some constant c ∈ [0, +∞[. See [15, 22, 26] for details.

Remark 2 When r_n = ∇h(x_n), S3CM reduces to the deterministic three-operator splitting scheme [11, Algorithm 2] and we recover the convergence rate O(1/n²) as in [11]. When g is zero, S3CM reduces to the standard stochastic proximal point algorithm [2, 13, 26].

Remark 3 The learning rate sequence (γ_n)_{n∈N} in Theorem 1 depends on the strong convexity parameter μ_h, which may not be available a priori. Our next result avoids the explicit reliance on the strong convexity parameter, while providing essentially the same convergence rate.

Theorem 2 Assume that h is μ_h-strongly convex and has L-Lipschitz continuous gradient. Consider a positive decreasing learning rate sequence γ_n = Θ(1/n^α) for some α ∈ ]0, 1], and denote κ = lim_{n→∞} 2μ_h n^α γ_n. Define F_n = σ((x_{f,k})_{0≤k≤n}), and suppose that the following conditions hold for every n ∈ N:
1. E[r_{n+1} | F_n] = ∇h(x_{g,n+1}) almost surely;
2. E[‖r_n − ∇h(x_{g,n})‖²] is uniformly bounded by some positive constant;
3. E[‖u_{g,n} − x★‖²] is uniformly bounded by some positive constant.
Then, the iterates of S3CM satisfy
E[‖x_{g,n} − x★‖²] = O(1/n^α) if 0 < α < 1; O(1/n^κ) if α = 1 and κ < 1; O((log n)/n) if α = 1 and κ = 1; O(1/n) if α = 1 and κ > 1.

Proof outline. We consider the proof of the three-operator splitting method as a baseline, and we use stochastic fixed point theory to derive the convergence of the iterates via the stochastic Fejér monotone sequence. See the supplement for the complete proof.

Remark 4 Note that u_{g,n} ∈ ∂g(x_{g,n}). Hence, we can replace condition 3 in Theorem 2 with the bounded subgradient assumption: ‖p‖ ≤ c, ∀p ∈ ∂g(x_{g,n}), for some positive constant c.

Remark 5 (Restricted strong convexity) Let M be a subset of R^d that contains (x_{g,n})_{n∈N} and x★. Suppose that h has restricted strong convexity on M with parameter μ_h. Then, Theorems 1 and 2 still hold. An example role of the restricted strong convexity assumption in algorithmic convergence can be found in [1, 21].

Remark 6 (Extension to arbitrary number of non-smooth terms.) Using the product space technique [5, Section 6.1], S3CM can be applied to composite problems with an arbitrary number of non-smooth terms:
minimize_{x ∈ R^d} Σ_{i=1}^{m} f_i(x) + h(x),
where f_i : R^d → R are proper, lower semicontinuous convex functions, and h : R^d → R is a smooth function with restricted strong convexity. We present this variant in Algorithm 2. Theorems 1 and 2 hold for this variant, replacing x_{g,n} by x_n, and u_{g,n} by u_{i,n} for i = 1, 2, ..., m.

Algorithm 2 Stochastic m(ulti)-composite minimization algorithm (SmCM)
Input: Initial points {x_{f₁,0}, x_{f₂,0}, ..., x_{f_m,0}}, a sequence of learning rates (γ_n)_{n∈N}, and a sequence of square-integrable R^d-valued stochastic gradient estimates (r_n)_{n∈N}.
Initialization:
  x₀ = m⁻¹ Σ_{i=1}^{m} x_{f_i,0}
  for i = 1, 2, ..., m do u_{i,0} = γ₀⁻¹(x_{f_i,0} − x₀) end for
Main loop: for n = 0, 1, 2, ... do
  x_{n+1} = m⁻¹ Σ_{i=1}^{m} (x_{f_i,n} + γ_n u_{i,n})
  for i = 1, 2, ..., m do
    u_{i,n+1} = γ_n⁻¹(x_{f_i,n} − x_{n+1}) + u_{i,n}
    x_{f_i,n+1} = prox_{γ_{n+1} m f_i}(x_{n+1} − γ_{n+1} u_{i,n+1} − γ_{n+1} r_{n+1})
  end for
end for
Output: x_n as an approximation of an optimal solution x★.
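The product-space trick in Remark 6 is easy to mechanize. Below is a minimal sketch of Algorithm 2 under the same calling conventions as the S3CM sketch above; again the names are ours, and note the rescaled prox of m·f_i in the last update.

```python
import numpy as np

def smcm(prox_list, grad_h, x0_list, gammas, n_iter):
    """Sketch of SmCM (Algorithm 2): m non-smooth terms f_1,...,f_m handled
    through prox_list[i](v, gamma) = prox_{gamma f_i}(v); gammas has length
    n_iter + 1."""
    m = len(prox_list)
    x_f = [x.copy() for x in x0_list]
    x = sum(x_f) / m
    u = [(xf - x) / gammas[0] for xf in x_f]
    for n in range(n_iter):
        x = sum(xf + gammas[n] * ui for xf, ui in zip(x_f, u)) / m  # x_{n+1}
        r = grad_h(x)                                # unbiased gradient estimate
        for i in range(m):
            u[i] = (x_f[i] - x) / gammas[n] + u[i]   # u_{i,n+1}
            # prox of gamma * m * f_i, per the last line of Algorithm 2
            x_f[i] = prox_list[i](x - gammas[n + 1] * (u[i] + r),
                                  gammas[n + 1] * m)
    return x
```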
Remark 7 With a proper learning rate, S3CM still converges even if h is not (restricted) strongly convex, under mild assumptions. Suppose that h has L-Lipschitz continuous gradient. Set the learning rate such that ε ≤ γ_n ≤ α ≤ β(2L⁻¹ − ε), for some ε and β in ]0, 1[. Define F_n = σ((x_{f,k})_{0≤k≤n}), and suppose that the following conditions hold for every n ∈ N:
1. E[r_{n+1} | F_n] = ∇h(x_{g,n+1}) almost surely;
2. Σ_{n∈N} E[‖r_{n+1} − ∇h(x_{g,n+1})‖² | F_n] < +∞ almost surely.
Then, (x_{g,n})_{n∈N} converges to an X★-valued random vector almost surely. See [7] for details.

Remark 8 All the results above hold in any separable Hilbert space, except that the strong convergence in Remark 7 is replaced by weak convergence. Note, however, that extending Remark 7 to the variable metric setting as in [10, 27] is an open problem.

4 Contributions in the light of prior work

Recent algorithms in operator splitting, such as generalized forward-backward splitting [24], forward-Douglas-Rachford splitting [5], and the three-operator splitting [11], apply to our problem template (1). These key results, however, are in the deterministic setting. Our basic framework can be viewed as a combination of the three-operator splitting method in [11] with stochastic analysis.

The idea of using unbiased estimates of the gradient dates back to [25]. Recent developments of this idea can be viewed as proximal-based methods for solving the generic composite convex minimization template with a single non-smooth term [2, 9, 12, 13, 15, 16, 19, 23, 26]. This generic form arises naturally in regularized or constrained composite problems [3, 13, 20], where the smooth term typically encodes the data fidelity. These methods require the evaluation of the joint prox of f and g when applied to the three-composite template (1). Unfortunately, evaluation of the joint prox is arguably more expensive compared to the individual prox operators. To make the comparison stark, consider the simple example where f and g are indicator functions of two convex sets. Even if the projections onto the individual sets are easy to compute, projection onto the intersection of these sets can be challenging.

Related literature also contains algorithms that solve some specific instances of template (1). To point out a few, the random averaging projection method [28] handles multiple constraints simultaneously but cannot deal with regularizers. On the other hand, accelerated stochastic gradient descent with proximal average [29] can handle multiple regularizers simultaneously, but the algorithm imposes a Lipschitz condition on the regularizers, and hence, it cannot deal with constraints.

To our knowledge, our method is the first operator splitting framework that can tackle optimization template (1) using the stochastic gradient estimate of h and the proximal operators of f and g separately, without any restriction on the non-smooth parts except that their subdifferentials are maximally monotone. When h is strongly convex, under mild assumptions, and with a proper learning rate, our algorithm converges with an O(1/n) rate, which is optimal for stochastic methods under the strong convexity assumption for this problem class.

5 Numerical experiments

We present numerical evidence to assess the theoretical convergence guarantees of the proposed algorithm. We provide two numerical examples from Markowitz portfolio optimization and support vector machines.
As a baseline, we use the deterministic three-operator splitting method [11]. Even though the random averaging projection method proposed in [28] does not apply to our template (1) in its full generality, it does apply to the specific applications that we present below. In our numerical tests, however, we observed that this method exhibits essentially the same convergence behavior as ours when used with the same learning rate sequence. For the clarity of the presentation, we omit this method in our results.

5.1 Portfolio optimization

Traditional Markowitz portfolio optimization aims to reduce risk by minimizing the variance for a given expected return. Mathematically, we can formulate this as a convex optimization problem [6]:
minimize_{x ∈ R^d} E[|a_i^T x − b|²] subject to x ∈ Δ, a_av^T x ≥ b,
where Δ is the standard simplex for portfolios with no short positions or a simple sum constraint, a_av = E[a_i] is the average return for each asset, which is assumed to be known (or estimated), and b encodes a minimum desired return.

This problem has a streaming nature where new data points arrive in time. Hence, we typically do not have access to the whole dataset, and the stochastic setting is more favorable. For implementation, we replace the expectation with the empirical sample average:
minimize_{x ∈ R^d} (1/p) Σ_{i=1}^{p} (a_i^T x − b)² subject to x ∈ Δ, a_av^T x ≥ b.   (5)
This problem fits into our optimization template (1) by setting
h(x) = (1/p) Σ_{i=1}^{p} (a_i^T x − b)², g(x) = ι_Δ(x), and f(x) = ι_{{x | a_av^T x ≥ b}}(x).
We compute unbiased estimates of the gradient by r_n = 2(a_{i_n}^T x − b) a_{i_n}, where the index i_n is chosen uniformly at random.

We use 5 different real portfolio datasets: Dow Jones industrial average (DJIA, with 30 stocks for 507 days), New York stock exchange (NYSE, with 36 stocks for 5651 days), Standard & Poor's 500 (SP500, with 25 stocks for 1276 days), Toronto stock exchange (TSE, with 88 stocks for 1258 days) that are also considered in [4]; and one dataset by Fama and French (FF100, 100 portfolios formed on size and book-to-market, 23,647 days) that is commonly used in the financial literature, e.g., [6, 14]. We impute the missing data in FF100 using the nearest-neighbor method with Euclidean distance.

Figure 1: Comparison of the deterministic three-operator splitting method [11, Algorithm 2] and our stochastic three-composite minimization method (S3CM) for Markowitz portfolio optimization (5). Results are averaged over 100 Monte-Carlo simulations, and the boundaries of the shaded areas are the best and worst instances.

For the deterministic algorithm, we set γ = 0.1. We evaluate the Lipschitz constant L and the strong convexity parameter μ_h to determine the step size. For the stochastic algorithm, we do not have access to the whole data, so we cannot compute these parameters. Hence, we adopt the learning rate sequence defined in Theorem 2. We simply use γ_n = γ₀/(n + 1) with γ₀ = 1 for FF100, and γ₀ = 10³ for the others.¹ We start both algorithms from the zero vector.
¹ Note that a fine-tuned learning rate with a more complex definition can improve the empirical performance, e.g., γ_n = γ₀/(n + β) for some positive constants γ₀ and β.

We split all the datasets into test (10%) and train (90%) partitions randomly. We set the desired return as the average return over all assets in the training set, b = mean(a_av). Other values of b exhibit qualitatively similar behavior. The results of this experiment are compiled in Figure 1. We compute the objective function over the datapoints in the test partition, h_test.
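Putting the pieces of this subsection together in code: the stochastic gradient oracle follows directly from the text, while the two prox operators are projections. The sketch below (names ours) uses the standard sort-based simplex projection and the closed-form halfspace projection; neither is prescribed by the paper, they are simply common choices.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the standard simplex (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def proj_halfspace(v, a, b):
    """Projection onto the halfspace {x | a^T x >= b}."""
    gap = a @ v - b
    return v if gap >= 0 else v - (gap / (a @ a)) * a

def portfolio_grad(A, b):
    """Oracle r_n = 2(a_{i_n}^T x - b) a_{i_n} with i_n uniform over the rows of A."""
    p = A.shape[0]
    def grad_h(x):
        i = np.random.randint(p)
        return 2.0 * (A[i] @ x - b) * A[i]
    return grad_h

# usage with the s3cm sketch above (assumed names):
# prox_g = lambda v, gamma: proj_simplex(v)               # prox of an indicator = projection
# prox_f = lambda v, gamma: proj_halfspace(v, a_av, b)
```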
We compare our algorithm against the deterministic three-operator splitting method [11, Algorithm 2]. Since we seek statistical solutions, we compare the algorithms at low to medium accuracy. [11] provides other variants of the deterministic algorithm, including two ergodic averaging schemes that feature an improved theoretical rate of convergence. However, these variants performed worse in practice than the original method, and are omitted.

Solid lines in Figure 1 present the average results over 100 Monte-Carlo simulations, and the boundaries of the shaded areas are the best and worst instances. We also assess empirical evidence of the O(1/n) convergence rate guaranteed in Theorem 2, by presenting the squared relative distance to the optimum solution for the FF100 dataset. Here, we approximate the ground truth by solving the problem to high accuracy with the deterministic algorithm over 10⁵ iterations.

5.2 Nonlinear support vector machines classification

This section demonstrates S3CM on a support vector machines (SVM) binary classification problem. We are given a training set A = {a₁, a₂, ..., a_d} and the corresponding class labels {b₁, b₂, ..., b_d}, where a_i ∈ R^p and b_i ∈ {−1, 1}. The goal is to build a model that correctly assigns new examples to one class or the other. As is common in practice, we solve the dual soft-margin SVM formulation:
minimize_{x ∈ R^d} (1/2) Σ_{i=1}^{d} Σ_{j=1}^{d} K(a_i, a_j) b_i b_j x_i x_j − Σ_{i=1}^{d} x_i subject to x ∈ [0, C]^d, b^T x = 0,
where C ∈ [0, +∞[ is the penalty parameter and K : R^p × R^p → R is a kernel function. In our example we use the Gaussian kernel given by K_σ(a_i, a_j) = exp(−σ‖a_i − a_j‖²) for some σ > 0.

Define the symmetric positive semidefinite matrix M ∈ R^{d×d} with entries M_ij = K_σ(a_i, a_j) b_i b_j. Then the problem takes the form
minimize_{x ∈ R^d} (1/2) x^T M x − Σ_{i=1}^{d} x_i subject to x ∈ [0, C]^d, b^T x = 0.   (6)
This problem fits into the three-composite optimization template (1) with
h(x) = (1/2) x^T M x − Σ_{i=1}^{d} x_i, g(x) = ι_{[0,C]^d}(x), and f(x) = ι_{{x | b^T x = 0}}(x).
One can solve this problem using the three-operator splitting method [11, Algorithm 1]. Note that prox_f and prox_g, which are projections onto the corresponding constraint sets, incur O(d) computational cost, whereas the cost of computing the gradient is O(d²).

To compute an unbiased gradient estimate, we choose an index i_n uniformly at random, and we form r_n = d M_{i_n} x_{i_n} − 1. Here M_{i_n} denotes the i_n-th column of the matrix M, and 1 represents the vector of ones. We can compute r_n in O(d) operations; hence, each iteration of S3CM is an order of magnitude cheaper than an iteration of the deterministic algorithm.

We use the UCI machine learning dataset "a1a", with d = 1605 datapoints and p = 123 features [8, 18]. Note that our goal here is to demonstrate the optimization performance of our algorithm on a real-world problem, rather than to compete with the prediction quality of the best engineered solvers. Hence, to keep the experiments simple, we fix the problem parameters C = 1 and σ = 2⁻², and we focus on the effects of the algorithmic parameters on the convergence behavior.

Since p < d, M is rank deficient and h is not strongly convex. Nevertheless, we use S3CM with the learning rate γ_n = γ₀/(n + 1) for various values of γ₀. We observe an O(1/n) empirical convergence rate on the squared relative error for large enough γ₀, which is guaranteed under the restricted strong convexity assumption. See Figure 2 for the results.

Figure 2: [Left] Convergence of S3CM in squared relative error with learning rate γ_n = γ₀/(n + 1). [Right] Comparison of the deterministic three-operator splitting method [11, Algorithm 1] and S3CM with γ₀ = 1 for the SVM classification problem. Results are averaged over 100 Monte-Carlo simulations. Boundaries of the shaded areas are the best and worst instances.
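The oracles for this experiment are equally compact: both projections have closed forms, and the gradient estimate follows the formula above. The helper names in the sketch below are ours.

```python
import numpy as np

def svm_oracles(M, b_labels, C):
    """Oracles for the dual SVM problem (6).
    grad_h draws a uniform index i_n and returns r_n = d * M[:, i_n] * x[i_n] - 1,
    an unbiased estimate of grad h(x) = M x - 1."""
    d = M.shape[0]
    ones = np.ones(d)

    def grad_h(x):
        i = np.random.randint(d)
        return d * M[:, i] * x[i] - ones

    def prox_g(v, gamma):          # projection onto the box [0, C]^d
        return np.clip(v, 0.0, C)

    def prox_f(v, gamma):          # projection onto the hyperplane {x | b^T x = 0}
        return v - (b_labels @ v) / (b_labels @ b_labels) * b_labels

    return grad_h, prox_g, prox_f
```

Both projections cost O(d), matching the complexity claim in the text, while each stochastic gradient costs O(d) instead of the O(d²) full gradient.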
Acknowledgments

This work was supported in part by ERC Future Proof, SNF 200021-146750, SNF CRSII2-147633, and NCCR-Marvel.

References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann. Stat., 40(5):2452-2482, 2012.
[2] Y. F. Atchadé, G. Fort, and E. Moulines. On stochastic proximal gradient algorithms. arXiv:1402.2365v2, 2014.
[3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer-Verlag, 2011.
[4] A. Borodin, R. El-Yaniv, and V. Gogan. Can we learn to beat the best stock. In Advances in Neural Information Processing Systems 16, pages 345-352, 2004.
[5] L. M. Briceño-Arias. Forward-Douglas-Rachford splitting and forward-partial inverse method for solving monotone inclusions. Optimization, 64(5):1239-1261, 2015.
[6] J. Brodie, I. Daubechies, C. de Mol, D. Giannone, and I. Loris. Sparse and stable Markowitz portfolios. Proc. Natl. Acad. Sci., 106:12267-12272, 2009.
[7] V. Cevher, B. C. Vũ, and A. Yurtsever. Stochastic forward-Douglas-Rachford splitting for monotone inclusions. EPFL-Report-215759, 2016.
[8] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1-27:27, 2011.
[9] P. L. Combettes and J.-C. Pesquet. Stochastic approximations and perturbations in forward-backward splitting for monotone operators. arXiv:1507.07095v1, 2015.
[10] P. L. Combettes and B. C. Vũ. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization, 63(9):1289-1318, 2014.
[11] D. Davis and W. Yin. A three-operator splitting scheme and its optimization applications. arXiv:1504.01032v1, 2015.
[12] O. Devolder. Stochastic first order methods in smooth convex optimization. Technical report, Center for Operations Research and Econometrics, 2011.
[13] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res., 10:2899-2934, 2009.
[14] E. F. Fama and K. R. French. Multifactor explanations of asset pricing anomalies. Journal of Finance, 51:55-84, 1996.
[15] C. Hu, W. Pan, and J. T. Kwok. Accelerated gradient methods for stochastic optimization and online learning. In Advances in Neural Information Processing Systems 22, pages 781-789, 2009.
[16] G. Lan. An optimal method for stochastic composite optimization. Math. Program., 133(1):365-397, 2012.
[17] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer-Verlag, 1991.
[18] M. Lichman. UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences, 2013.
[19] Q. Lin, X. Chen, and J. Peña. A smoothing stochastic gradient method for composite optimization. Optimization Methods and Software, 29(6):1281-1301, 2014.
[20] S. Mosci, L. Rosasco, M. Santoro, A. Verri, and S. Villa. Solving structured sparsity regularization with proximal methods. In European Conf. Machine Learning and Principles and Practice of Knowledge Discovery, pages 418-433, 2010.
[21] S. Negahban, B. Yu, M. J. Wainwright, and P. K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers.
In Advances in Neural Information Processing Systems 22, pages 1348-1356, 2009.
[22] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. on Optimization, 15(1):229-251, 2005.
[23] A. Nitanda. Stochastic proximal gradient descent with acceleration techniques. In Advances in Neural Information Processing Systems 27, pages 1574-1582, 2014.
[24] H. Raguet, J. Fadili, and G. Peyré. A generalized forward-backward splitting. SIAM Journal on Imaging Sciences, 6(3):1199-1226, 2013.
[25] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statist., 22(3):400-407, 1951.
[26] L. Rosasco, S. Villa, and B. C. Vũ. Convergence of stochastic proximal gradient algorithm. arXiv:1403.5074v3, 2014.
[27] B. C. Vũ. Almost sure convergence of the forward-backward-forward splitting algorithm. Optimization Letters, 10(4):781-803, 2016.
[28] M. Wang, Y. Chen, J. Liu, and Y. Gu. Random multi-constraint projection: Stochastic gradient methods for convex optimization with many constraints. arXiv:1511.03760v1, 2015.
[29] W. Zhong and J. Kwok. Accelerated stochastic gradient method for composite regularization. J. Mach. Learn. Res., 33:1086-1094, 2014.
Normalized Spectral Map Synchronization

Yanyao Shen, UT Austin, Austin, TX 78712, shenyanyao@utexas.edu
Qixing Huang, TTI Chicago and UT Austin, Austin, TX 78712, huangqx@cs.utexas.edu
Nathan Srebro, TTI Chicago, Chicago, IL 60637, nati@ttic.edu
Sujay Sanghavi, UT Austin, Austin, TX 78712, sanghavi@mail.utexas.edu

Abstract

Estimating maps among large collections of objects (e.g., dense correspondences across images and 3D shapes) is a fundamental problem across a wide range of domains. In this paper, we provide theoretical justifications of spectral techniques for the map synchronization problem, i.e., it takes as input a collection of objects and noisy maps estimated between pairs of objects along a connected object graph, and outputs clean maps between all pairs of objects. We show that a simple normalized spectral method (or NormSpecSync), which projects the blocks of the top eigenvectors of a data matrix to the map space, exhibits surprisingly good behavior: NormSpecSync is much more efficient than state-of-the-art convex optimization techniques, yet still admits similar exact recovery conditions. We demonstrate the usefulness of NormSpecSync on both synthetic and real datasets.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

The problem of establishing maps (e.g., point correspondences or transformations) among a collection of objects is connected with a wide range of scientific problems, including fusing partially overlapping range scans [1], multi-view structure from motion [2], re-assembling fractured objects [3], analyzing and organizing geometric data collections [4], as well as DNA sequencing and modeling [5]. A fundamental problem in this domain is the so-called map synchronization problem, which takes as input noisy maps computed between pairs of objects, and utilizes the natural constraint that composite maps along cycles are identity maps to obtain improved maps.

Despite the importance of map synchronization, algorithmic advancements on this problem remain limited. Earlier works formulate map synchronization as solving combinatorial optimizations [1, 6, 7, 8]. These formulations are restricted to small-scale problems and are susceptible to local minima. Recent works establish the connection between the cycle-consistency constraint and the low-rank property of the matrix that stores pairwise maps in blocks; they cast map synchronization as low-rank matrix inference [9, 10, 11]. These techniques exhibit improvements on both the theoretical and practical sides. In particular, they admit exact recovery conditions (i.e., conditions under which the underlying maps can be recovered from noisy input maps). Yet due to the limitations of convex optimization, none of these methods scales well to large datasets.

In contrast to convex optimizations, we demonstrate that spectral techniques work remarkably well for map synchronization. We focus on the problem of synchronizing permutations and introduce a robust and efficient algorithm that consists of two simple steps. The first step computes the top eigenvectors of a data matrix that encodes the input maps, and the second step rounds each block of the top-eigenvector matrix into a permutation matrix. We show that such a simple algorithm possesses a remarkable denoising ability. In particular, its exact recovery conditions match those of state-of-the-art convex optimization techniques.
Yet computation-wise, it is much more efficient, and this property enables us to apply the proposed algorithm to large-scale datasets (e.g., many thousands of objects).

Spectral map synchronization has been considered in [12, 13] for input observations between all pairs of objects. In contrast to these techniques, we consider incomplete pairwise observations, and provide theoretical justifications under a much more practical noise model.

2 Algorithm

In this section, we describe the proposed algorithm for permutation synchronization. We begin with the problem setup in Section 2.1. Then we introduce the algorithmic details in Section 2.2.

2.1 Problem Setup

Suppose we have n objects S₁, ..., S_n. Each object is represented by m points (e.g., feature points on images and shapes). We consider bijective maps φ_ij : S_i → S_j, 1 ≤ i, j ≤ n, between pairs of objects. Following convention, we encode each such map φ_ij as a permutation matrix X_ij ∈ P_m, where P_m is the space of permutation matrices of dimension m:
P_m := {X | X ∈ {0, 1}^{m×m}, X 1_m = 1_m, X^T 1_m = 1_m},
where 1_m = (1, ..., 1)^T ∈ R^m is the vector whose elements are all 1.

The input to permutation synchronization consists of noisy permutations X^in_{ij}, (i, j) ∈ G, along a connected object graph G. As described in [4, 9], a widely used pipeline to generate such input is to 1) establish the object graph G by connecting each object and similar objects using object descriptors (e.g., HOG [14] for images), and 2) apply off-the-shelf pairwise object matching methods to compute the input pairwise maps (e.g., SIFTFlow [15] for images and BIM [16] for 3D shapes). The output consists of improved maps X_ij, 1 ≤ i, j ≤ n, between all pairs of objects.

2.2 Algorithm

We begin by defining a data matrix X^obs ∈ R^{nm×nm} that encodes the initial pairwise maps in blocks:
X^obs_{ij} = (1/√(d_i d_j)) X^in_{ij} if (i, j) ∈ G, and 0 otherwise,   (1)
where d_i := |{S_j | (S_i, S_j) ∈ G}| is the degree of object S_i in graph G.

Remark 1. Note that the way we encode the data matrix is different from [12, 13], in the sense that we follow the common strategy for handling irregular graphs and use a normalized data matrix.

The proposed algorithm is motivated by the fact that when the input pairwise maps are correct, the correct maps between all pairs of objects can be recovered from the leading eigenvectors of X^obs:

Proposition 2.1. Suppose there exist latent maps (e.g., the ground-truth maps to one object) X_i, 1 ≤ i ≤ n, so that X^in_{ij} = X_j^T X_i, (i, j) ∈ G. Denote by W ∈ R^{nm×m} the matrix that collects the first m eigenvectors of X^obs in its columns. Then the underlying pairwise maps can be computed from the corresponding matrix blocks of the matrix W W^T:
X_j^T X_i = (Σ_{k=1}^{n} d_k / √(d_i d_j)) (W W^T)_{ij}, 1 ≤ i, j ≤ n.   (2)

The key insight of the proposed approach is that even when the input maps are noisy (i.e., the blocks of X^obs are corrupted), the leading eigenvectors of X^obs are still stable under these perturbations (we will analyze this stability property in Section 3). This motivates us to design a simple two-step permutation synchronization approach called NormSpecSync. The first step of NormSpecSync computes the leading eigenvector matrix W of X^obs; the second step rounds the induced matrix blocks (2) into permutations. In the following, we elaborate these two steps and analyze the complexity. Algorithm 1 provides the pseudo-code.

Algorithm 1 NormSpecSync
Input: X^obs based on (1), ε_max
Initialize W⁽⁰⁾: set W⁽⁰⁾ as an initial guess for the top-m orthonormal eigenvectors, k ← 0
while ‖W⁽ᵏ⁾ − W⁽ᵏ⁻¹⁾‖ > ε_max do
  W̃⁽ᵏ⁺¹⁾ = X^obs · W⁽ᵏ⁾
  W⁽ᵏ⁺¹⁾ R⁽ᵏ⁺¹⁾ = W̃⁽ᵏ⁺¹⁾ (QR factorization)
  k ← k + 1
end while
Set W = W⁽ᵏ⁾ and X̄^spec_{i1} = (W W^T)_{i1}.
Round each X̄^spec_{i1} into the corresponding X_{i1} by solving (3).
Output: X_ij = X_{j1}^T X_{i1}, 1 ≤ i, j ≤ n.
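Assembling Algorithm 1's input from the observed maps is direct from equation (1). The sketch below builds X^obs as a dense array for readability (the complexity analysis below assumes a sparse format), and it assumes the reverse direction of each edge is the transpose of the stored map, since the maps are bijective; the function name is ours.

```python
import numpy as np

def build_x_obs(edges, obs, n, m):
    """Assemble the normalized data matrix of equation (1).
    obs[(i, j)] is the observed m x m permutation along edge (i, j) of G;
    we fill block (j, i) with its transpose (an assumption of this sketch)."""
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    X = np.zeros((n * m, n * m))
    for i, j in edges:
        s = 1.0 / np.sqrt(deg[i] * deg[j])
        X[i * m:(i + 1) * m, j * m:(j + 1) * m] = s * obs[(i, j)]
        X[j * m:(j + 1) * m, i * m:(i + 1) * m] = s * obs[(i, j)].T
    return X
```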
Leading eigenvector computation. Since we only need to compute the leading m eigenvectors of X^obs, we propose to use the generalized power method. This is justified by the observation that usually there exists a gap between λ_m and λ_{m+1}. In fact, when the input pairwise maps are correct, it is easy to derive that the leading eigenvalues of X^obs are given by:
λ₁(X^obs) = ··· = λ_m(X^obs) = 1, λ_{m+1}(X^obs) = λ_{n−1}(G),
where λ_{n−1}(G) is the second largest eigenvalue of the normalized adjacency matrix of G. As we will see later, the eigen-gap λ_m(X^obs) − λ_{m+1}(X^obs) persists in the presence of corrupted pairwise maps, due to the stability of eigenvalues under perturbation.

Projection onto P_m. Denote X̄^spec_{ij} := (Σ_{k=1}^{n} d_k / √(d_i d_j)) (W W^T)_{ij}. Since the underlying ground-truth maps X_ij, 1 ≤ i, j ≤ n, obey X_ij = X_{jk}^T X_{ik}, 1 ≤ i, j ≤ n, for any fixed k, we only need to round X̄^spec_{ik} into X_{ik}. Without loss of generality, we set k = 1 in this paper.

The rounding is done by solving the following constrained optimization problem, which projects X̄^spec_{i1} onto the space of permutations via the Frobenius norm:
X_{i1} = argmin_{X ∈ P_m} ‖X − X̄^spec_{i1}‖_F² = argmin_{X ∈ P_m} (‖X‖_F² + ‖X̄^spec_{i1}‖_F² − 2⟨X, X̄^spec_{i1}⟩) = argmax_{X ∈ P_m} ⟨X, X̄^spec_{i1}⟩.   (3)
The optimization problem described in (3) is the so-called linear assignment problem, which can be solved exactly using the Hungarian algorithm, whose complexity is O(m³) (cf. [17]). Note that the optimal solution of (3) is invariant under global scaling and shifting of X̄^spec_{i1}, so we omit the scaling factor Σ_{k=1}^{n} d_k / √(d_i d_1) and any shift by a multiple of 1_m 1_m^T when generating X̄^spec_{i1} (see Algorithm 1).

Time complexity of NormSpecSync. Each step of the generalized power method consists of a matrix-vector multiplication and a QR factorization. The complexity of the matrix-vector multiplication, which leverages the sparsity in X^obs, is O(n_E · m²), where n_E is the number of edges in G. The complexity of each QR factorization is O(nm³). As we will analyze later, the generalized power method converges linearly, and setting ε_max = 1/n provides a sufficiently accurate estimation of the leading eigenvectors. So the total time complexity of the generalized power method is O(n_E m² + nm³ log(n)). The time complexity of the rounding step is O(nm³). In summary, the total complexity of NormSpecSync is O(n_E m² + nm³ log(n)). In comparison, the complexity of the SDP formulation [9], even when it is solved using fast ADMM (the alternating direction method of multipliers), is at least O(n³ m³ n_admm). So NormSpecSync exhibits significant speedups when compared to SDP formulations.
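For reference, a compact sketch of Algorithm 1 follows, assuming SciPy is available; `linear_sum_assignment` plays the role of the Hungarian algorithm in (3), the random orthonormal initialization is one valid instantiation of W⁽⁰⁾, and the function and variable names are ours. One caveat of the sketch: since the top-m eigenvalues are nearly equal, successive iterates can keep rotating within the invariant subspace, so `max_iter` guards the stopping criterion.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def norm_spec_sync(X_obs, n, m, eps_max=None, max_iter=1000):
    """Sketch of NormSpecSync. X_obs is the nm x nm matrix of equation (1).
    Returns the rounded maps X_{i1}, one m x m permutation per object."""
    eps_max = eps_max or 1.0 / n
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((n * m, m)))  # initial orthonormal guess
    for _ in range(max_iter):                             # generalized power method
        W_new, _ = np.linalg.qr(X_obs @ W)
        done = np.linalg.norm(W_new - W) <= eps_max
        W = W_new
        if done:
            break
    B1 = W[:m]                                            # first block row of W
    maps = []
    for i in range(n):
        S = W[i * m:(i + 1) * m] @ B1.T                   # block (i,1) of W W^T, up to scale
        row, col = linear_sum_assignment(-S)              # argmax <X, S> over permutations
        P = np.zeros((m, m))
        P[row, col] = 1.0
        maps.append(P)
    return maps  # X_ij can be recovered as maps[j].T @ maps[i]
```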
3 Analysis

In this section, we provide an analysis of NormSpecSync under a generalized Erdős–Rényi noise model.

3.1 Noise Model

The noise model we consider is given by two parameters m and p. Specifically, we assume the observation graph G is fixed. Then, independently for each edge (i, j) ∈ E,
X^in_{ij} = I_m with probability p, and P_ij with probability 1 − p,   (4)
where P_ij ∈ P_m is a random permutation.

Remark 2. The noise model described above assumes the underlying permutations are identity maps. In fact, one can assume a generalized noise model
X^in_{ij} = X_{j1}^T X_{i1} with probability p, and P_ij with probability 1 − p,
where X_{i1}, 1 ≤ i ≤ n, are pre-defined underlying permutations from object S_i to the first object S₁. However, since the P_ij are independent of the X_{i1}, it turns out that the model described above is equivalent to
X_{j1} X^in_{ij} X_{i1}^T = I_m with probability p, and P_ij with probability 1 − p,
where the P_ij are independent random permutations. This means it is sufficient to consider the model described in (4).

Remark 3. The fundamental difference between our model and the one proposed in [11], or the ones used in low-rank matrix recovery [18], is that the observation pattern (i.e., G) is fixed, while in other models it also follows a random model. We argue that our assumption is more practical because the observation graph is constructed by comparing object descriptors, and it is dependent on the distribution of the input objects. On the other hand, fixing G significantly complicates the analysis of NormSpecSync, which is the main contribution of this paper.

3.2 Main Theorem

Now we state the main result of the paper.

Theorem 3.1. Let d_min := min_{1≤i≤n} d_i, d_avg := Σ_i d_i / n, and denote by σ the second top eigenvalue of the normalized adjacency matrix of G. Assume d_min = Ω(√n ln³ n), d_avg = O(d_min), and σ < min{p, 1/2}. Then, under the noise model described above, NormSpecSync recovers the underlying pairwise maps with high probability if
p > C · ln³ n / (d_min / √n),   (5)
for some constant C.

Proof Roadmap. The proof of Theorem 3.1 combines two stability bounds. The first one considers the projection step:

Proposition 3.1. Consider a permutation matrix X = (x_ij) ∈ P_m and another matrix X̄ = (x̄_ij) ∈ R^{m×m}. If ‖X − X̄‖ < 1/2, then
X = argmin_{Y ∈ P_m} ‖Y − X̄‖_F².

Proof. The proof is quite straightforward. In fact,
‖X − X̄‖_∞ ≤ ‖X − X̄‖ < 1/2.
This bound also it is easy to see that dmin > O(nq/ ln n) w.h.p., and (5) becomes p > C ? ln nq stays within a polylogarithmic factor from the lower bound in [19], indicating the near-optimality of NormSpecSync. 4 Experiments In this section, we perform quantitative evaluations of NormSpecSync on both synthetic and real examples. Experimental results show that NormSpecSync is superior to state-of-the-art map synchronization methods in the literature. We organize the remainder of this section as follows. In Section 4.1, we evaluate NormSpecSync on synthetic examples. Then in section 4.2, we evaluate NormSpecSync on real examples. 4.1 Quantitative Evaluations on Synthetic Examples We generate synthetic data by following the same procedure described in Section 2. Specifically, each synthetic example is controlled by three parameters G, m, and p. Here G specifies the input graph; m describes the size of each permutation matrix; p controls the noise level of the input maps. The input maps follow a generalized Erdos-Renyi model, i.e., independently for each edge (i, j) ? G in in in the input graph, with probability p the input map Xij = Im , and otherwise Xij is a random permutation. To simplify the discussion, we fix m = 10, n = 200 and vary the observation graph G and p to evaluate NormSpecSync and existing algorithms. 5 Varying vertex degrees 0.1 0.2 0.2 p-true p-true Varying vertex degrees 0.1 0.3 0.3 0.4 0.4 0.5 0.5 0.0 0.1 0.2 0.3 0.0 0.4 0.1 0.2 0.3 Irregularty Irregularty (a) NormSpecSync (b) SpecSync 0.4 Figure 2: Comparison between NorSpecSync and SpecSync on irregular observation graphs. Dense graph versus sparse graph. We first study the performance of NormSpecSync with respect to the density of the graph. In this experiment, we control the density of G by following a standard Erd?os-R?nyi model with parameter q, namely independently, each edge is connected with probability q. For each pair of fixed p and q, we generate 10 examples. We then apply NormSpecSync and count the ratio that the underlying permutations are recovered. Figure 1(a) illustrates the success rate of NormSpecSync on a grid of samples for p and q. Blue and yellow colors indicate it succeeded and failed on all the examples, respectively, and the colors in between indicate a mixture of success and failure. We can see that NormSpecSync tolerates more noise when the graph becomes denser. This aligns with our theoretical analysis result. NormSpecSync versus SpecSync. We also compare NormSpecSync with SpecSync [12], and show the advantage of NormSpecSync on irregular observation graphs. To this end, we generate G using a different model. Specifically, we let the degree of the vertex to be uniformly distribute between ( 12 ? q)n and ( 12 + q)n. As illustrated in Figure 2, when q is small, i.e., all the vertices have similar degrees, the performance of NormSpecSync and SpecSync are similar. When q is large, i.e., G is irregular, NormSpecSync tend to tolerate more noise than SpecSync. This shows the advantage of utilizing a normalized data matrix. NormSpecSync versus DiffSync. We proceed to compare NormSpecSync with DiffSync [13], which is a permutation synchronization method based on diffusion distances. NormSpecSync and DiffSync exhibit similar computation efficiency. However, NormSpecSync can tolerate significantly more noise than DiffSync, as illustrated in Figure 1(c). NormSpecSync versus SDP. Finally, we compare NormSpecSync with SDP [9], which formulates permutation synchronization as solving a semidefinite program. 
Figure 2: Comparison between NormSpecSync and SpecSync on irregular observation graphs; panels (a) and (b) plot recovery success over a grid of degree irregularity versus p-true.

Dense graph versus sparse graph. We first study the performance of NormSpecSync with respect to the density of the graph. In this experiment, we control the density of G by following a standard Erdős–Rényi model with parameter q, namely, each edge is independently present with probability q. For each pair of fixed p and q, we generate 10 examples. We then apply NormSpecSync and count the fraction of trials in which the underlying permutations are recovered. Figure 1(a) illustrates the success rate of NormSpecSync on a grid of samples for p and q. Blue and yellow colors indicate that it succeeded and failed on all the examples, respectively, and the colors in between indicate a mixture of successes and failures. We can see that NormSpecSync tolerates more noise when the graph becomes denser. This aligns with our theoretical analysis.

NormSpecSync versus SpecSync. We also compare NormSpecSync with SpecSync [12], and show the advantage of NormSpecSync on irregular observation graphs. To this end, we generate G using a different model. Specifically, we let the degree of each vertex be uniformly distributed between $(\frac{1}{2}-q)n$ and $(\frac{1}{2}+q)n$. As illustrated in Figure 2, when q is small, i.e., all the vertices have similar degrees, the performance of NormSpecSync and SpecSync is similar. When q is large, i.e., G is irregular, NormSpecSync tends to tolerate more noise than SpecSync. This shows the advantage of utilizing a normalized data matrix.

NormSpecSync versus DiffSync. We proceed to compare NormSpecSync with DiffSync [13], a permutation synchronization method based on diffusion distances. NormSpecSync and DiffSync exhibit similar computational efficiency. However, NormSpecSync can tolerate significantly more noise than DiffSync, as illustrated in Figure 1(c).

NormSpecSync versus SDP. Finally, we compare NormSpecSync with SDP [9], which formulates permutation synchronization as a semidefinite program. As illustrated in Figure 1(b), the exact recovery abilities of NormSpecSync and SDP are similar. This aligns with our theoretical analysis, which shows the near-optimality of NormSpecSync under the noise model of consideration. Yet computationally, NormSpecSync is much more efficient than SDP: the average running time of NormSpecSync is 2.25 seconds, whereas SDP takes 203.12 seconds on average.

4.2 Quantitative Evaluations on Real Examples

In this section, we present a quantitative evaluation of NormSpecSync on real datasets.

CMU Hotel/House. We first evaluate NormSpecSync on the CMU Hotel and CMU House datasets [20]. The CMU Hotel dataset contains 110 images, where each image has 30 marked feature points. In our experiment, we estimate the initial map between each pair of images using RANSAC [21]. We consider two observation graphs: a clique observation graph $G_{full}$, where initial maps are computed between all pairs of images, and a sparse observation graph $G_{sparse}$. $G_{sparse}$ is constructed to connect only similar images: we place an edge between two images if the difference between their HOG descriptors [22] is smaller than 1/2 of the average descriptor difference among all pairs of images. Note that $G_{sparse}$ shows high variance in vertex degree. The CMU House dataset is similar to CMU Hotel, containing 100 images and exhibiting slightly bigger intra-cluster variability. We construct the observation graphs and the initial maps in a similar fashion. For quantitative evaluation, we measure the cumulative distribution of distances between the predicted target points and the ground-truth target points.

Figure 3: Comparison between NormSpecSync, SpecSync, DiffSync and SDP on CMU Hotel/House (percentage of correct correspondences versus Euclidean distance in pixels) and SCAPE (percentage of correct correspondences versus geodesic distance, in fractions of the shape diameter). In each dataset, we consider a full observation graph and a sparse observation graph that only connects potentially similar objects.

Figure 3 (left) compares NormSpecSync with the SDP formulation, SpecSync, and DiffSync. On both full and sparse observation graphs, we can see that NormSpecSync, SDP and SpecSync are superior to DiffSync. The performance of NormSpecSync and SpecSync on $G_{full}$ is similar, while on $G_{sparse}$, NormSpecSync shows a slight advantage, due to its ability to handle irregular graphs. Moreover, although the performance of NormSpecSync and SDP is similar, SDP is much slower: on $G_{sparse}$, SDP took 1002.4 seconds, while NormSpecSync only took 3.4 seconds.

SCAPE. Next we evaluate NormSpecSync on the SCAPE dataset. SCAPE consists of 71 different poses of a human subject. We uniformly sample 128 points on each model. Again we consider a full observation graph $G_{full}$ and a sparse observation graph $G_{sparse}$. $G_{sparse}$ is constructed in the same way as above, except that we use the shape context descriptor [4] for measuring the similarity between 3D models.
In addition, the initial maps are computed with blended intrinsic maps [16], the state-of-the-art technique for computing dense correspondences between organic shapes. For quantitative evaluation, we measure the cumulative distribution of geodesic distances between the predicted target points and the ground-truth target points. As illustrated in Figure 3 (right), the relative performance of NormSpecSync against the other three algorithms is similar to that on CMU Hotel and CMU House. In particular, NormSpecSync shows an advantage over SpecSync on $G_{sparse}$. In terms of computational efficiency, NormSpecSync is again far better than SDP.

5 Conclusions

In this paper, we propose an efficient algorithm named NormSpecSync for solving the permutation synchronization problem. The algorithm adopts a spectral view of the mapping problem and exhibits surprising behavior both in terms of computational complexity and exact recovery conditions. The theoretical result improves upon existing methods in several aspects, including a fixed observation graph and a practical noise model. Experimental results demonstrate the usefulness of the proposed approach. There are multiple opportunities for future research. For example, we would like to extend NormSpecSync to handle the case where input objects only partially overlap with each other; in this scenario, developing and analyzing suitable rounding procedures becomes subtle. Another direction is to extend NormSpecSync to rotation synchronization, e.g., by applying spectral decomposition and rounding in an iterative manner.

Acknowledgement. We would like to thank the anonymous reviewers for detailed comments on how to improve the paper. The authors would like to thank the support of DMS-1700234, CCF-1302435, CCF-1320175, CCF-1564000, CNS-0954059, IIS-1302662, and IIS-1546500.

A Proof Architecture of Lemma 3.1

In this section, we provide a roadmap for the proof of Lemma 3.1. The detailed proofs are deferred to the supplemental material.

Reformulating the observation matrix. The normalized adjacency matrix $\bar{A} = D^{-1/2} A D^{-1/2}$ can be decomposed as $\bar{A} = ss^T + V\Lambda V^T$, where the dominant eigenvalue is 1 and the corresponding eigenvector is s. We reformulate the observation matrix as $\frac{1}{p} M = \bar{A} \otimes I_m + \bar{N}$, and it is clear that the ground-truth signal relates to the term $(ss^T)\otimes I_m$, while the noise comes from two terms: $(V\Lambda V^T)\otimes I_m$ and $\bar{N}$. More specifically, the noise comes not only from the randomness and uncertainty of the measurements, but also from the graph structure; we use $\lambda$ to denote the spectral norm of $\Lambda$. When the graph is disconnected or nearly disconnected, $\lambda$ is close to 1 and it is impossible to recover the ground truth.

Bounding the spectral norm of $\bar{N}$. The noise term $\bar{N}$ consists of random matrices with zero mean in each block. For a complete graph, the spectral norm is bounded by $O(\frac{1}{p\sqrt{n}})$; when the graph structure is taken into account, we give an $O(\frac{1}{p\sqrt{d_{\min}}})$ bound.

Measuring the block-wise distance between U and $s \otimes I_m$. Let $M = U\Lambda U^T + U_2\Lambda_2 U_2^T$. We want to show that the distance between U and $s\otimes I_m$ is small, where the distance function dist(·) is defined as

$$\mathrm{dist}(U, V) = \min_{R:\, RR^T = I} \|U - VR\|_B, \qquad (7)$$

and the B-norm of a matrix $X = [X_1^T, \cdots, X_n^T]^T \in \mathbb{R}^{mn\times m}$ is defined as

$$\|X\|_B = \max_i \|X_i\|_F. \qquad (8)$$
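The B-norm and the rotation-aligned distance (7) are straightforward to evaluate numerically. In the sketch below (helper names are ours), we approximate the minimizing rotation R with the Frobenius-optimal Procrustes rotation; since the exact minimizer of the B-norm objective need not coincide with it, the returned value is an upper bound on dist(U, V) rather than the distance itself:

    import numpy as np

    def b_norm(X, m):
        # ||X||_B = max_i ||X_i||_F over the n stacked m x m blocks of X.
        return max(np.linalg.norm(Xi) for Xi in X.reshape(-1, m, m))

    def dist_upper_bound(U, V, m):
        # Align V to U with the orthogonal Procrustes rotation: writing
        # V^T U = W diag(s) Z^T, the Frobenius-optimal R is W Z^T.
        W, _, Zt = np.linalg.svd(V.T @ U)
        R = W @ Zt
        return b_norm(U - V @ R, m)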
More specifically, we bound the distance between U and $s\otimes I_m$ by constructing a sequence of matrices $\{A_k\}$, and we show that for some $k = O(\log n)$, the distances from $s\otimes A_k$ to both U and $s\otimes I_m$ are small. Therefore, by the triangle inequality, U and $s\otimes I_m$ are close.

Sketch proof of Lemma 3.1. Once we show that there exists a rotation matrix R such that $\mathrm{dist}(U, s\otimes I_m)$ is of order $o(\frac{1}{\sqrt{n}})$, it is straightforward to prove Lemma 3.1. Intuitively, this is because the measurements from the eigenvectors are close enough to the ground truth, hence their second moments will still be close. Formally speaking,

$$\|U_i U_j^T - (s_i\otimes I_m)(s_j\otimes I_m)^T\| \qquad (9)$$
$$= \|U_i R R^T U_j^T - (s_i\otimes I_m)(s_j\otimes I_m)^T\| \qquad (10)$$
$$= \|U_i R (R^T U_j^T - (s_j\otimes I_m)^T) + (U_i R - s_i\otimes I_m)(s_j\otimes I_m)^T\| \qquad (11)$$
$$\le \|U_i\|\cdot \mathrm{dist}(U, s\otimes I_m) + \mathrm{dist}(U, s\otimes I_m)\cdot\|s_j\otimes I_m\|. \qquad (12)$$

On the other hand, notice that

$$\left\|\frac{\sum_{i=1}^n d_i}{\sqrt{d_i d_j}}\, U_i U_j^T - I_m\right\| = \frac{\sum_{i=1}^n d_i}{\sqrt{d_i d_j}}\,\left\|U_i U_j^T - (s_i\otimes I_m)(s_j\otimes I_m)^T\right\|, \qquad (13)$$

and we only need to show that (13) is of order o(1). The details are included in the supplemental material.

References

[1] D. F. Huber, "Automatic three-dimensional modeling from reality," Tech. Rep., 2002.
[2] D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, "Discrete-continuous optimization for large-scale structure from motion," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3001–3008.
[3] Q.-X. Huang, S. Flöry, N. Gelfand, M. Hofer, and H. Pottmann, "Reassembling fractured objects by geometric matching," in ACM SIGGRAPH 2006 Papers, 2006, pp. 569–578.
[4] V. G. Kim, W. Li, N. Mitra, S. DiVerdi, and T. Funkhouser, "Exploring collections of 3D models using fuzzy correspondences," Transactions on Graphics (Proc. of SIGGRAPH 2012), vol. 31, no. 4, Aug. 2012.
[5] W. Marande and G. Burger, "Mitochondrial DNA as a genomic jigsaw puzzle," Science, vol. 318, Jul. 2007.
[6] C. Zach, M. Klopschitz, and M. Pollefeys, "Disambiguating visual relations using loop constraints," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 1426–1433.
[7] D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, "Discrete-continuous optimization for large-scale structure from motion," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 3001–3008.
[8] A. Nguyen, M. Ben-Chen, K. Welnicka, Y. Ye, and L. Guibas, "An optimization approach to improving collections of shape maps," in Eurographics Symposium on Geometry Processing (SGP), 2011, pp. 1481–1491.
[9] Q. Huang and L. Guibas, "Consistent shape maps via semidefinite programming," Computer Graphics Forum, Proc. Eurographics Symposium on Geometry Processing (SGP), vol. 32, no. 5, pp. 177–186, 2013.
[10] L. Wang and A. Singer, "Exact and stable recovery of rotations for robust synchronization," CoRR, vol. abs/1211.2441, 2012.
[11] Y. Chen, L. J. Guibas, and Q. Huang, "Near-optimal joint object matching via convex relaxation," 2014. [Online]. Available: http://arxiv.org/abs/1402.1473
[12] D. Pachauri, R. Kondor, and V. Singh, "Solving the multi-way matching problem by permutation synchronization," in Advances in Neural Information Processing Systems, 2013, pp. 1860–1868.
[13] D. Pachauri, R. Kondor, G. Sargur, and V. Singh, "Permutation diffusion maps (PDM) with application to the image association problem in computer vision," in Advances in Neural Information Processing Systems, 2014, pp. 541–549.
[14] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, 2005, pp. 886–893.
[15] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman, "SIFT flow: Dense correspondence across different scenes," in Proceedings of the 10th European Conference on Computer Vision: Part III (ECCV '08), 2008, pp. 28–42.
[16] V. G. Kim, Y. Lipman, and T. Funkhouser, "Blended intrinsic maps," in ACM Transactions on Graphics (TOG), vol. 30, no. 4. ACM, 2011, p. 79.
[17] R. Burkard, M. Dell'Amico, and S. Martello, Assignment Problems. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2009.
[18] E. J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?" J. ACM, vol. 58, no. 3, pp. 11:1–11:37, Jun. 2011. [Online]. Available: http://doi.acm.org/10.1145/1970392.1970395
[19] Y. Chen, C. Suh, and A. J. Goldsmith, "Information recovery from pairwise measurements: A Shannon-theoretic approach," CoRR, vol. abs/1504.01369, 2015.
[20] T. S. Caetano, L. Cheng, Q. V. Le, and A. J. Smola, "Learning graph matching," in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE, 2007, pp. 1–8.
[21] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.
[22] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Shape distributions," ACM Trans. Graph., vol. 21, no. 4, pp. 807–832, 2002.
Reconstructing Parameters of Spreading Models from Partial Observations

Andrey Y. Lokhov
Center for Nonlinear Studies and Theoretical Division T-4
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
lokhov@lanl.gov

Abstract

Spreading processes are often modelled as a stochastic dynamics occurring on top of a given network, with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.

1 Introduction

Knowledge of the underlying parameters of the spreading model is crucial for understanding the global properties of the dynamics and for developing effective control strategies for an optimal dissemination or mitigation of diffusion [1, 2]. However, in many realistic settings effective transmission probabilities are not known a priori and need to be recovered from a limited number of realizations of the process. Examples of such situations include the spreading of a disease [3], the propagation of information and opinions in a social network [4], correlated infrastructure failures [5], and activation cascades in biological and neural networks [6]: the precise model and parameters, as well as the propagation paths, are often unknown, and one is left at most with several observed diffusion traces. It can be argued that for many interesting systems, even the functional form of the dynamic model is uncertain. Nevertheless, the reconstruction problem still makes sense in this case: the common approach is to assume some simple and reasonable form of the dynamics, and recover the parameters of the model which explain the data in the most accurate and minimalistic way; this is crucial for understanding the basic mechanisms of the spreading process, as well as for making further predictions without overfitting. For example, if only a small number of samples is available, a few-parameter model should be used.

In practice, it is very costly or even impossible to record the state of each node at every time step of the dynamics: we might only have access to a subset of nodes, or monitor the state of the system at particular times. For instance, surveys may give some information on the health or awareness of certain individuals, but there is no way to get a detailed account for the whole population; neural avalanches are usually recorded in cortical slices, representing only a small part of the brain; it is costly to deploy measurement devices on each unit of a complex infrastructure system; finally, hidden nodes play an important role in artificial learning architectures. This is precisely the setting that we address in this article: reconstruction of the parameters of a propagation model in the presence of nodes with hidden information, and/or partial information in time.
It is not surprising that this challenging problem turns out to be notably harder than its fully observed counterpart, and that it requires new algorithms which are robust to missing observations.

Related work. The inverse problem of network and coupling reconstruction in the dynamic setting has attracted considerable attention in the past several years. However, most of the existing works focus on learning the propagation networks under the assumption that full diffusion information is available. The papers [7, 8, 9, 10] developed inference methods based on maximizing the likelihood of the observed cascades, leading to distributed and convex optimization algorithms in the case of continuous and discrete dynamics, principally for variants of the independent cascade (IC) model [11]. These algorithms have been further improved under the sparse recovery framework [12, 13], which is particularly efficient for structure learning of treelike networks. A careful rigorous analysis of these likelihood-based and alternative [14, 15] reconstruction algorithms gives an estimate of the number of observed cascades required for exact network recovery with high probability. Precise conditions for recovering the parameters at a given accuracy are still lacking. The fact that the aforementioned algorithms rely on a fully observed spreading history is an important limitation in the case of incomplete information. The case of missing time information has been addressed in two recent papers: focusing primarily on tree graphs, [16] studied the structure learning problem in which only the initial and final spreading states are observed; [17] addressed the network reconstruction problem in the case of partial time snapshots of the network, using relaxation optimization techniques and assuming that a full probabilistic trace for each node in the network is available. A standard technique for dealing with incomplete data is to maximize the likelihood marginalized over the hidden information; for example, this approach has been used in [18] for identifying the diffusion source. In what follows, we use this method for benchmarking our results.

Overview of results. In this article, we propose a different algorithm, based on the recently introduced dynamic message-passing (DMP) equations for cascading processes [19, 20], which will be referred to as DMP REC (DMP-reconstruction) throughout the text. Making use of all available information, it yields significantly more accurate reconstruction results, outperforming the likelihood method, and has a substantially lower algorithmic complexity, independent of the number of nodes with unobserved information. More generally, the DMP REC framework can easily be adapted to the reconstruction of heterogeneous transmission probabilities in a large class of cascading processes, including the IC and threshold models, the SIR and other epidemiological models, rumor spreading dynamics, etc., as well as to processes occurring on dynamically-changing networks.

2 Problem formulation

Model. For the sake of simplicity and definiteness, we assume that cascades follow the dynamics of the stochastic susceptible-infected (SI) model in discrete time, defined on a network G = (V, E) with set of nodes V and set of directed edges E [3]. Each node $i \in V$ at times t = 1, 2, . . . , T can be in either of two states: susceptible (S) or infected (I). At each time step, node i in the I state can activate each of its susceptible neighbors j with probability $\alpha_{ij}$.¹ The dynamics is non-recurrent: once a node is activated (infected), it can never change its state back to susceptible. In what follows, the network G is supposed to be known.

¹ We chose this two-state model since it has slightly more general dynamic rules than the popular IC model [11], which carries an additional restriction: a node infected at time t has a single chance to activate its susceptible neighbors, at time step t+1, while further infection attempts in subsequent rounds are not allowed. The DMP REC method presented below can easily be applied to the IC model by noticing that it corresponds to the SIR model with a recovery probability equal to one, for which the DMP equations are known [20].
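As a concrete illustration of these dynamics, the minimal sketch below (assuming NumPy; the data layout and function name are ours) simulates one discrete-time SI cascade and returns the activation times $\tau_i$, with $\tau_i = T$ for nodes still susceptible at the horizon:

    import numpy as np

    def simulate_si_cascade(out_neighbors, alpha, source, T, rng):
        # out_neighbors[i]: list of j with a directed edge (i, j) in E.
        # alpha[(i, j)]: transmission probability along (i, j).
        n = len(out_neighbors)
        tau = np.full(n, T)          # tau_i = T means "not activated before T"
        tau[source] = 0
        for t in range(1, T):
            activated_now = set()
            for i in range(n):
                if tau[i] < t:       # i is infected and retries every step
                    for j in out_neighbors[i]:
                        if tau[j] == T and rng.random() < alpha[(i, j)]:
                            activated_now.add(j)
            for j in activated_now:
                tau[j] = t
        return tau

Note that, unlike in the IC model described in the footnote, an infected node here keeps attempting to activate each still-susceptible neighbor at every subsequent step.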
Incomplete observations and the inference problem. We assume that the input is formed from M independent cascades, where a cascade $\tau^c$ is defined as the collection of activation times of the nodes in the network, $\{\tau_i^c\}_{i\in V}$. Each cascade is observed up to a final observation time T. Notice that T is an important parameter: intuitively, the larger T is, the more information is contained in the cascades, and the fewer samples are needed. We assume that T is given and fixed, being related to the availability of a finite-time observation window. If node i in cascade c is not activated at any time prior to the horizon T, we set $\tau_i^c = T$ by definition; hence, $\tau_i^c = T$ means that node i changes its state at time T or later. The full information on the cascades, $\tau = \cup_c \tau^c$, is divided into an observed part $\tau^O$ and a hidden part $\tau^H$. Thus, in general $\tau^O$ contains only a subset of activation times in $\mathcal{T} \subseteq [0, T]$ for a subset of observed nodes $O \subseteq V$. The task is to reconstruct the couplings $\{\alpha^*_{ij}\}_{(ij)\in E} \equiv G_{\alpha^*}$, where the star denotes the original transmission probabilities that were used to generate the data.

Maximization of the likelihood. Similarly to the formulations considered in [7, 8, 10], in the case of fully available information $\tau^O = \tau$ it is possible to write explicitly the likelihood of the discrete-time SI model, under the assumption that the data has been generated using the couplings $G_\alpha$:

$$P(\tau \mid G_\alpha) = \prod_{i\in V}\; \prod_{1\le c\le M} P_i(\tau_i^c \mid \tau^c, G_\alpha), \qquad (1)$$

with

$$P_i(\tau_i^c \mid \tau^c, G_\alpha) = \left[\prod_{t'=0}^{\tau_i^c - 2}\; \prod_{k\in\partial i} \big(1 - \alpha_{ki}\, 1_{\tau_k^c \le t'}\big)\right] \left[1 - \prod_{k\in\partial i} \big(1 - \alpha_{ki}\, 1_{\tau_k^c \le \tau_i^c - 1}\big)\right]^{1_{\tau_i^c < T}}, \qquad (2)$$

where $\partial i$ denotes the set of neighbors of node i in the graph G, and $1$ is the indicator function. The expression (2) has the following meaning: the probability that node i was activated at time $\tau_i$, given the activation times of its neighbors, equals the probability that the activation signal was not transmitted by any infected neighbor of i until time $\tau_i - 2$ (first term in the product), times the probability that at least one of the active neighbors actually transmitted the infection at time $\tau_i - 1$ (second term).
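In code, the logarithm of (1)-(2) decomposes into a sum over nodes and cascades. A minimal sketch (function names are ours; tau follows the convention $\tau_i = T$ for nodes not activated within the horizon, and seed nodes with $\tau_i = 0$ are treated as part of the initial condition rather than explained by the model):

    import numpy as np

    def node_log_likelihood(i, tau, in_neighbors, alpha, T):
        # log P_i(tau_i | tau, G_alpha) for a single cascade, per Eq. (2).
        if tau[i] == 0:
            return 0.0           # seed node: fixed by the initial condition
        ll = 0.0
        # Survival: no infected in-neighbor transmitted up to time tau_i - 2.
        for t in range(tau[i] - 1):            # t = 0, ..., tau_i - 2
            for k in in_neighbors[i]:
                if tau[k] <= t:
                    ll += np.log(1.0 - alpha[(k, i)])
        if tau[i] < T:
            # At least one active neighbor transmitted at time tau_i - 1;
            # if none was active, the cascade has zero likelihood (-inf).
            p_miss = 1.0
            for k in in_neighbors[i]:
                if tau[k] <= tau[i] - 1:
                    p_miss *= 1.0 - alpha[(k, i)]
            ll += np.log(1.0 - p_miss)
        return ll

    def log_likelihood(cascades, in_neighbors, alpha, T):
        n = len(in_neighbors)
        return sum(node_log_likelihood(i, tau, in_neighbors, alpha, T)
                   for tau in cascades for i in range(n))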
A simple and natural extension of this scheme, which we will refer to as the maximum likelihood estimator (MLE), is to consider the likelihood function marginalized over unknown activation times: X P (?O | G? ) = P (? | G? ). (4) {?hc },h?H An exact evaluation of (4) is a computationally hard high-dimensional integration problem with complexity proportional to T H in the presence of H nodes with hidden information. In order to correct for this fact, we propose a heuristic scheme which we denote as the heuristic two-stage (HTS) algorithm. The idea of HTS consists of completing the missing part {?hc }h?H of the cascades at each step of the optimization process with the most probable values according to the current b? , ? b H = arg max P (? | G b ? ), and solving the optimization problem estimation of the couplings G b H ; these two alternating steps are iterated (3) using the full information on the cascades ? = ?O ? ? b H requires an until the global convergence of the algorithm. An exact (brute-force) estimation of ? H exponential number of operations T , as the original MLE formulation. However, we found that in practice the computational time can be significantly reduced with the use of the Monte Carlo sampling. The corresponding approximation is based on the observation that the likelihood (1) is non-zero only for {?ic }i?V forming possible (realizable) cascades. Hence, for each c, we sample LH,T auxiliary cascades, and choose the set of {?hc }h?H maximizing (1). LH,T is typically a large sampling parameter, growing with T and H to ensure a proper convergence. This procedure leads to an algorithm with a complexity O(N M |E|2 LH,T ) at each step of the optimization, where |E| denotes the number of edges; see the journal version of the paper [21] for a more in-depth discussion. Hence, both MLE and HTS algorithms are practically intractable; the remaining part of the paper is devoted to the development of an accurate algorithm with a polynomial-time computational complexity for this hard problem. The next section introduces dynamic message-passing equations which serve as a basis for such algorithm. 3 Dynamic message-passing equations. The dynamic message-passing equations for the SI model in continuous [19] and discrete [20] settings allow to compute marginal probabilities that node i is in the state S at time t: Y ?k?i (t) (5) PSi (t) = PSi (0) k??i 3 for t > 0 and a given initial condition PSi (0). The variables ?k?i (t) represent the probability that node k did not pass the activation signal to the node i until time t. The intuition behind the key Equation (5) is that the probability of node i to be susceptible at time t is equal to the probability of being in the S state at initial time times the probability that neither of its neighbors infected it until time t. The quantities ?k?i (t) can be computed iteratively using the following expressions: ?k?i (t) = ?k?i (t ? 1) ? ?ki ?k?i (t ? 1), (6) ? ?k?i (t) = (1 ? ?ki )?k?i (t ? 1) + PSk (0) ? ? Y ?l?k (t ? 1) ? l??k\i Y ?l?k (t)? , (7) l??k\i where ?k\i denotes the set of neighbors of k excluding i. The Equation (6) translates the fact that ?k?i (t) can only decrease if the infection is actually transmitted along the directed link (ki) ? E; this happens with probability ?ki times ?k?i (t ? 1) which denotes the probability that node k is in the state I at time t, but has not transmitted the infection to node i until time t ? 1. 
Equation (7), which closes the system of dynamic equations, describes the evolution of the probability $\phi^{k\to i}(t)$: at time t-1 it decreases if the infection is transmitted (first term in the sum), and it increases if node k goes from the state S to the state I (difference of the second and third terms). Note that node i is excluded from the corresponding products over θ-variables, because this equation is conditioned on the fact that i is in the state S and therefore cannot infect k. Equations (6) and (7) are iterated in time starting from the initial conditions $\theta^{i\to j}(0) = 1$ and $\phi^{i\to j}(0) = 1 - P_S^i(0)$, which are consistent with the definitions above. The name "DMP equations" comes from the fact that the whole scheme can be interpreted as passing "messages" along the edges of the network.

Theorem 1. The DMP equations for the SI model, defined by Equations (5)-(7), yield exact marginal probabilities on tree networks. On general networks, the quantities $P_S^i(t)$ give lower bounds on the values of the marginal probabilities.

Proof sketch. The exactness of the solution on tree graphs follows immediately from the fact that the DMP equations can be derived from belief propagation equations on time trajectories [20], which provide exact marginals on trees. The fact that $P_S^i(t)$ computed according to (5) is a lower bound on the marginal probability in a general network follows from a counting argument: multiple infection paths on a loopy graph contribute to the computation of $P_S^i(t)$, effectively lowering its value through Equation (5); the proof technique is borrowed from [19], where similar dynamic equations in the continuous-time case have been considered.

Using the definition (5) of $P_S^i(t)$, it is convenient to define the marginal probability $m^i(t)$ of activation of node i at time t:

$$m^i(t) = P_S^i(0)\left[\prod_{k\in\partial i}\theta^{k\to i}(t-1) - \prod_{k\in\partial i}\theta^{k\to i}(t)\right]. \qquad (8)$$

As often happens with message-passing algorithms, although they are exact only on tree networks, the DMP equations provide accurate results even on loopy networks. An example is given in Figure 1, where the DMP-predicted marginals are compared with values obtained from extensive simulations of the dynamics on a network of retweets with N = 96 nodes [22].

Figure 1: Illustration of the accuracy of the DMP equations on a network of retweets with N = 96 nodes [22]. (a) Comparison of the DMP-predicted $P_S^i(t)$ with $P_S^i(t)$ estimated from $10^6$ runs of Monte Carlo simulations, with t = 10 and one infected node at the initial time; the couplings $\{\alpha_{ij}\}$ have been generated uniformly at random in the range [0, 1]. (b) Visualization of the network topology, created with the Gephi software.

This observation allows us to use the DMP equations as a suitable approximation tool on general networks. In the next section we describe an efficient reconstruction algorithm, DMP REC, which is based on solving the dynamics given by the DMP equations and makes use of all available information.
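A direct implementation of Equations (5)-(8) is a short dynamic program over the directed edges. The sketch below (assuming NumPy; the data-structure choices are ours) computes the messages and the marginals $P_S^i(t)$ and $m^i(t)$ up to the horizon:

    import numpy as np

    def dmp_si_marginals(edges, alpha, P_S0, T):
        # edges: list of directed pairs (k, i); alpha[(k, i)]: couplings;
        # P_S0[i]: probability that node i is susceptible at t = 0.
        n = len(P_S0)
        theta = {e: 1.0 for e in edges}                   # theta(0) = 1
        phi = {(k, i): 1.0 - P_S0[k] for (k, i) in edges} # phi(0) = 1 - P_S
        in_nb = {i: [k for (k, j) in edges if j == i] for i in range(n)}
        P_S = np.zeros((T + 1, n))
        P_S[0] = P_S0
        for t in range(1, T + 1):
            # Eq. (6): theta(t) = theta(t-1) - alpha * phi(t-1).
            theta_new = {(k, i): theta[(k, i)] - alpha[(k, i)] * phi[(k, i)]
                         for (k, i) in edges}
            # Eq. (7): phi(t) from phi(t-1) and theta at t-1 and t.
            phi_new = {}
            for (k, i) in edges:
                prod_old = np.prod([theta[(l, k)] for l in in_nb[k] if l != i])
                prod_new = np.prod([theta_new[(l, k)] for l in in_nb[k] if l != i])
                phi_new[(k, i)] = ((1.0 - alpha[(k, i)]) * phi[(k, i)]
                                   + P_S0[k] * (prod_old - prod_new))
            theta, phi = theta_new, phi_new
            # Eq. (5): marginal susceptibility of each node at time t.
            for i in range(n):
                P_S[t, i] = P_S0[i] * np.prod([theta[(k, i)] for k in in_nb[i]])
        # Eq. (8) telescopes: row k of m holds m^i(t) for t = k + 1.
        m = P_S[:-1] - P_S[1:]
        return P_S, m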
4 Proposed algorithm: DMP REC

Probability of cascades and free energy. The marginalization over hidden nodes in (4) creates a complex relation between the couplings in the whole graph, resulting in a non-explicit expression. The main idea behind the DMP REC algorithm is to approximate the likelihood of the observed cascades (4) through the marginal probability distributions (5) and (8):

$$P(\tau^O \mid G_\alpha) \approx \prod_{c=1}^{M}\prod_{i\in O}\left[m^i(\tau_i^c\mid G_\alpha)\,1_{\tau_i^c\le T} + P_S^i(\tau_i^c\mid G_\alpha)\,1_{\tau_i^c=T}\right]. \qquad (9)$$

The expression (9) is at the core of the suggested algorithm. As there is no tractable way to compute the joint probability of partial observations exactly, we approximate it using a mean-field-type approach, as a product of marginal probabilities provided by the dynamic message-passing equations. The reasoning behind this approach is that each marginal is an average over all possible realizations of the dynamics with a given initial condition; this is in contrast with the likelihood function, which considers only the particular instance realized in the given cascade. Therefore, equation (9) summarizes the effect of different propagation paths, and the maximization of this probability function yields the most likely consensus between the ensemble of couplings in the network. Precisely this key property makes reconstruction possible in the case of nodes with hidden information, via maximization of the objective (9), which can be interpreted as a cost function representing the product of individual activation probabilities taken precisely at the observed infection times. Starting from this expression, one can define the associated "free energy":

$$f_{DMP} = -\ln P(\tau^O\mid G_\alpha) = \sum_{i\in O} f^i_{DMP}, \qquad (10)$$

where $f^i_{DMP} = -\sum_c \ln\big[m^i(\tau_i^c)\,1_{\tau_i^c\le T-1} + P_S^i(T-1)\,1_{\tau_i^c=T}\big]$. In the last expression for $f^i_{DMP}$ we used the fact that $m^i(T) + P_S^i(T) = P_S^i(T-1)$. Our goal is to minimize the free energy (10) with respect to $\{\alpha_{ij}\}_{(ij)\in E}$. A similar approach was previously outlined in [23] as a way to learn homogeneous couplings within a spreading-source inference algorithm. In order to carry out this optimization task, we need to develop an efficient way of evaluating the gradient.

Computation of the gradient. The gradient of the free energy reads (note that the indicator functions point to disjoint events):

$$\frac{\partial f^i_{DMP}}{\partial\alpha_{rs}} = -\sum_c\left[\frac{\partial m^i(\tau_i^c\mid G_\alpha)/\partial\alpha_{rs}}{m^i(\tau_i^c\mid G_\alpha)}\,1_{\tau_i^c\le T-1} + \frac{\partial P_S^i(T-1\mid G_\alpha)/\partial\alpha_{rs}}{P_S^i(T-1\mid G_\alpha)}\,1_{\tau_i^c=T}\right], \qquad (11)$$

where the derivatives of the marginal probabilities can be computed explicitly by differentiating the DMP equations (5)-(8). Let us denote $\partial\theta^{k\to i}(t)/\partial\alpha_{rs} \equiv p^{k\to i}_{rs}(t)$ and $\partial\phi^{k\to i}(t)/\partial\alpha_{rs} \equiv q^{k\to i}_{rs}(t)$. Since the messages at initial time, $\{\theta^{i\to j}(0)\}$ and $\{\phi^{i\to j}(0)\}$, do not depend on the couplings, we have $p^{k\to i}_{rs}(0) = q^{k\to i}_{rs}(0) = 0$ for all k, i, r, s, and these quantities can be computed iteratively using the analogues of Equations (6) and (7):

$$p^{k\to i}_{rs}(t) = p^{k\to i}_{rs}(t-1) - \alpha_{ki}\, q^{k\to i}_{rs}(t-1) - \phi^{k\to i}(t-1)\, 1_{k=r,\, i=s}, \qquad (12)$$

$$q^{k\to i}_{rs}(t) = (1-\alpha_{ki})\, q^{k\to i}_{rs}(t-1) - \phi^{k\to i}(t-1)\, 1_{k=r,\, i=s} + P_S^k(0)\left[\sum_{l\in\partial k\setminus i} p^{l\to k}_{rs}(t-1)\prod_{n\in\partial k\setminus\{i,l\}}\theta^{n\to k}(t-1) - \sum_{l\in\partial k\setminus i} p^{l\to k}_{rs}(t)\prod_{n\in\partial k\setminus\{i,l\}}\theta^{n\to k}(t)\right]. \qquad (13)$$

Using these quantities, the derivatives of the marginals entering Equation (11) can be written as

$$\frac{\partial P_S^i(t)}{\partial\alpha_{rs}} = P_S^i(0)\sum_{k\in\partial i} p^{k\to i}_{rs}(t)\prod_{l\in\partial i\setminus k}\theta^{l\to i}(t), \qquad \frac{\partial m^i(t)}{\partial\alpha_{rs}} = \frac{\partial P_S^i(t-1)}{\partial\alpha_{rs}} - \frac{\partial P_S^i(t)}{\partial\alpha_{rs}}. \qquad (14)$$
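With the forward messages and the derivative messages in hand, DMP REC reduces to gradient descent on the free energy (10), projecting the couplings back onto [0, 1] after each step. A schematic outer loop is sketched below; dmp_forward and dmp_gradient are hypothetical helpers standing in for implementations of Equations (5)-(8) and (11)-(14), and the learning-rate and iteration-count values are illustrative, not from the paper:

    def dmp_rec(initial_conditions, observed_taus, edges, T,
                lr=0.05, n_iters=200):
        # Couplings are initialized at 0.5 on every directed edge,
        # as in the experiments of Section 5.
        alpha = {e: 0.5 for e in edges}
        for _ in range(n_iters):
            grad = {e: 0.0 for e in edges}
            # One forward pass per distinct initial condition, reused for
            # all cascades sharing that seed (the source of the O(|E|^2 T N)
            # per-update complexity discussed below).
            for P_S0, taus in zip(initial_conditions, observed_taus):
                msgs = dmp_forward(edges, alpha, P_S0, T)       # Eqs. (5)-(8)
                g = dmp_gradient(msgs, taus, edges, alpha, T)   # Eqs. (11)-(14)
                for e in edges:
                    grad[e] += g[e]
            for e in edges:                  # projected gradient step
                alpha[e] = min(1.0, max(0.0, alpha[e] - lr * grad[e]))
        return alpha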
The following observation shows that, at least on tree networks (the regime in which the DMP equations are exact), the original transmission probabilities $G_{\alpha^*}$ correspond to a point at which the gradient of the free energy vanishes.

Claim 1. On a tree network, in the limit of a large number of samples $M\to\infty$, the derivative of the free energy is equal to zero at the values of the couplings $G_{\alpha^*}$ used for generating the cascades.

Proof. Let us first look at samples originating from the same initial condition. According to Theorem 1, the DMP equations are exact on tree graphs, and hence it is easy to see that

$$\lim_{M\to\infty} f^i_{DMP} = -\sum_{t\le T-1} m^i(t\mid G_{\alpha^*})\ln m^i(t\mid G_\alpha) - P_S^i(T-1\mid G_{\alpha^*})\ln P_S^i(T-1\mid G_\alpha). \qquad (15)$$

Therefore,

$$\lim_{M\to\infty}\frac{\partial f^i_{DMP}}{\partial\alpha_{rs}}\bigg|_{G_{\alpha^*}} = -\frac{\partial}{\partial\alpha_{rs}}\bigg[\sum_{t\le T-1} m^i(t\mid G_\alpha) + P_S^i(T-1\mid G_\alpha)\bigg]\bigg|_{G_{\alpha^*}} = 0,$$

since the expression inside the brackets sums exactly to one. This result trivially extends to samples with different initial conditions. Combining this result with the definition (10) completes the proof.

The DMP REC algorithm consists of running the message-passing equations for the derivatives of the dynamic variables, (12) and (13), in parallel with the DMP equations (5)-(7), allowing the computation of the gradient of the free energy (11) through (14), which is then used in the optimization procedure. Let us analyse the computational complexity of each parameter-update step. The number of DMP runs is equal to the number of distinct initial conditions in the ensemble of observed cascades, so if all M cascades start from distinct initial conditions, the complexity of DMP REC is $O(|E|^2 T M)$ for each update of $\{\alpha_{rs}\}_{(rs)\in E}$. Hence, in the typical situation where each cascade is initiated at one particular node, the number of runs is limited by N, and the overall update-step complexity of DMP REC is $O(|E|^2 T N)$.

Missing information in time. On top of inaccessible nodes, the state of the network may be monitored at a lower frequency than the natural time scale of the dynamics. It is easy to adapt the algorithm to the case of observations at K time steps $\mathcal{T} \equiv \{t_k\}_{k\in[1,K]}$. Since the activation time $\tau_i^c$ of node i in cascade c is now known only up to the interval $[t_{k_i^c}+1,\, t_{k_i^c+1}] \equiv \Delta_{k_i^c}$, where $t_{k_i^c} < \tau_i^c \le t_{k_i^c+1}$, one should maximize $\sum_{t\in\Delta_{k_i^c}} m^i(t) = P_S^i(t_{k_i^c}) - P_S^i(t_{k_i^c+1})$ instead of $m^i(\tau_i^c)$ in this case. This leads to obvious modifications of the expressions (10) and (11), using differences of derivatives at the corresponding times instead of the one-step differences in (14). For instance, if the final time is not included in the observations, we have

$$f^i_{DMP} = -\sum_c \ln\bigg[\sum_{t\in\Delta_{k_i^c}} m^i(t\mid G_\alpha)\bigg], \qquad \frac{\partial f^i_{DMP}}{\partial\alpha_{rs}} = -\sum_c \frac{\sum_{t\in\Delta_{k_i^c}} \partial m^i(t\mid G_\alpha)/\partial\alpha_{rs}}{\sum_{t\in\Delta_{k_i^c}} m^i(t\mid G_\alpha)}.$$

5 Numerical results

We evaluate the performance of the DMP REC algorithm on synthetic and real-world networks under the assumption of partial observations. In the numerical experiments, we focus primarily on the presence of inaccessible nodes, which is computationally more difficult than the setting of missing information in time; an example involving partial time observations is shown in Section 5.1.

5.1 Tests with synthetic data

Experimental setup. In the tests described in this section, the couplings $\{\alpha_{ij}\}$ are sampled uniformly in the range [0, 1], and the final observation time is set to T = 10. Each cascade is generated by the discrete-time SI model defined in Section 2 from a randomly selected source. In the case of inaccessible nodes, the activation-time data is hidden in all samples for H randomly selected nodes.
Still, HTS has a very high computational complexity, and therefore we are bound to run comparative tests on small graphs: a connected component of an artificially-generated network with N = 20, sampled using a power-law degree distribution, and a real directed network of relationships in a New England monastery with N = 18 nodes [24]. Both algorithms are initialized with ?ij = 0.5 for all (ij) ? E. The accuracy of reconstruction is assessed using the `1 norm of the difference between reconstructed and original couplings, normalized over the number of directed edges in the graph2 . Intuitively, this measure gives an average expected error for each parameter ?ij . HTS DMPrec 0.1 0.05 0.05 0 (a) 0.1 ?*ij ???ij - ?*ij?? 0.15 102 103 104 105 106 M(?0.64) 1 0.8 0.6 0.4 0.2 0 0 2 5 H (b) 7 0.2 0.4 0.6 0.8 10 1 ?ij (c) Figure 2: Tests for DMP REC and HTS on a small power-law network: (a) for fixed number of nodes with ???ij - ?*ij?? unobserved information H = 5, (b) for fixed number of samples M = 6400. (c) Scatter plot of {?ij } obtained ? with DMP REC versus original parameters {?ij } in the case of missing information in time with M = 6400, T = 10; the state of the network is observed every other time step. 0.15 0.1 0.1 0.05 0.05 0 (a) HTS DMPrec 102 103 104 105 106 M(?0.64) 0 0 (b) 2 4 H 6 8 (c) Figure 3: Numerical results for the real-world Monastery network of [24]: (a) for fixed number of nodes with unobserved information H = 4, (b) for fixed number of samples M = 6400. (c) The topology of the network ? (thickness of edges proportional to {?ij } used for generating cascades). Results. In the Figure 2 we present results for a small power-law network with short loops, which is not a favorable situation for DMP equations derived in the treelike approximation of the graph. Figures 2 (a) and 2 (b) show the dependence of an average reconstruction error as a function of M (for fixed H/N = 0.25) and H (for fixed M = 6400), respectively. DMP REC clearly outperforms the HTS algorithm, yielding surprisingly accurate reconstruction of transmission probabilities even in the case where a half of network nodes do not report any information. Most importantly, DMP REC achieves reconstruction with a significantly lower computational time: for example, while it took more than 24 hours to compute the point corresponding to H = 4 and M = 6400 with HTS (MLE at this test point took several weeks to converge), the computation involving DMP REC converged to the presented level of accuracy in less than 10 minutes on a standard laptop. These times illustrate the hardness of the learning problem involving incomplete information. We have also used this case study network to test the estimation of transmission probabilities with the DMP REC algorithm when the state of the network is recorded only at a subset of times T ? [0, T ]. Results for the case where every other time stamp is missing are given in the Figure 2 (c): couplings ? estimated with DMP REC are compared to the original values {?ij }; despite the fact that only 50% of time stamps are available, the inferred couplings show an excellent agreement with the ground truth. 
Equivalent results for the real-world relationship network extracted from the study [24], containing both directed and undirected links, are shown in Figure 3; the ability of DMP REC to capture the mutual dependencies of different couplings through dynamic correlations is even more pronounced in this case, with an almost perfect reconstruction of the couplings for large M and a rather weak dependence on the number of nodes with removed observations. We have run tests on larger synthetic networks, which show similar reconstruction results for DMP REC, but where comparisons with the likelihood method could not be carried out. In the next section we focus on an application involving real-world data, which represents a more interesting and important case for the validation of the algorithm.

5.2 Test with real-world data

As a proxy for real statistics, we used the data provided by the Bureau of Transportation Statistics [25], from which we reconstructed a part of the U.S. air transportation network, where airports are the nodes and directed links correspond to traffic between them. The reason behind this choice is that the majority of large-scale influenza pandemics over the past several decades have been air-traffic-mediated epidemics. For illustration purposes, we selected the top N = 30 airports ranked by the total number of passenger enplanements and commonly classified as large hubs, and extracted the sub-network of flights between them. The weight of each edge is defined by the annual number of transported passengers, aggregated over multiple routes; we pruned links with relatively low traffic (below 10% of the traffic level on the busiest routes), so that the total number of remaining directed links is |E| = 210. The final weights are based on the assumption that the probability of infection transmission is proportional to the flux; the weights have been renormalized accordingly so that the busiest route receives the coupling $\alpha_{ij} = 0.5$. The resulting network is depicted in Figure 4. We have generated M = 10,000 independent cascades in this network, and have hidden the information at H = 15 nodes (50% of the airports) selected at random. We observe that even with such a significant portion of missing information, the reconstructed parameters show a good agreement with the original ones.

Figure 4: Left: Sub-network of flights between major U.S. hubs, where the thickness of an edge is proportional to the aggregated traffic between its endpoints; nodes which do not report information are indicated in red. Right: Scatter plots of the reconstructed $\{\hat\alpha_{ij}\}$ versus the original $\{\alpha^*_{ij}\}$ couplings for H = 0 (average error 0.0400) and H = 15 (average error 0.0473), with M = 10,000.

6 Conclusions and path forward

From the algorithmic point of view, the inference of spreading parameters in the presence of nodes with incomplete information considerably complicates the problem, because the reconstruction can no longer be performed independently for each neighborhood.
In this paper, it is shown how the dynamic interdependence of the parameters can be exploited in order to recover the couplings in settings involving hidden information. Let us discuss several directions for future work. DMP REC can be straightforwardly generalized to more complicated spreading models using the generic form of the DMP equations [20] and the key approximation ingredient (9), and it can be adapted to the case of temporal graphs by encoding the network dynamics via time-dependent coefficients $\alpha_{ij}(t)$, which might be more appropriate in certain real situations. It would also be useful to extend the present framework to continuous dynamics, using the continuous-time version of the DMP equations of [19]. An important direction would be to generalize the learning problem beyond the assumption of a known network, and to formulate precise conditions for the detection of hidden nodes and for perfect network recovery in this case. Finally, in the spirit of active learning, we anticipate that DMP REC could be helpful for problems involving an optimal placement of observers in situations where the collection of full measurements is costly.

Acknowledgements. The author is grateful to M. Chertkov and T. Misiakiewicz for discussions and comments, and acknowledges support from the LDRD Program at Los Alamos National Laboratory by the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.

References

[1] C. Nowzari, V. Preciado, and G. Pappas. Analysis and control of epidemics: A survey of spreading processes on complex networks. Control Systems, IEEE, 36(1):26–46, 2016.
[2] A. Y. Lokhov and D. Saad. Optimal deployment of resources for maximizing impact in spreading processes. arXiv preprint arXiv:1608.08278, 2016.
[3] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani. Epidemic processes in complex networks. Rev. Mod. Phys., 87:925–979, 2015.
[4] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang. Complex networks: Structure and dynamics. Physics Reports, 424(4):175–308, 2006.
[5] I. Dobson, B. A. Carreras, V. E. Lynch, and D. E. Newman. Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization. Chaos, 17(2):026103, 2007.
[6] R. O'Dea, J. J. Crofts, and M. Kaiser. Spreading dynamics on spatially constrained complex brain networks. J. R. Soc. Interface, 10(81):20130016, 2013.
[7] S. Myers and J. Leskovec. On the convexity of latent social network inference. In Advances in Neural Information Processing Systems, pages 1741–1749, 2010.
[8] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 561–568, New York, NY, USA, June 2011. ACM.
[9] N. Du, L. Song, M. Yuan, and A. J. Smola. Learning networks of heterogeneous influence. In Advances in Neural Information Processing Systems, pages 2780–2788, 2012.
[10] P. Netrapalli and S. Sanghavi. Learning the graph of epidemic cascades. In ACM SIGMETRICS Performance Evaluation Review, volume 40, pages 211–222. ACM, 2012.
[11] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146. ACM, 2003.
[12] H. Daneshmand, M. Gomez-Rodriguez, L. Song, and B. Schölkopf.
Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), volume 2014, page 793, 2014.
[13] J. Pouget-Abadie and T. Horel. Inferring graphs from cascades: A sparse recovery framework. In Proceedings of the 32nd International Conference on Machine Learning, pages 977–986, 2015.
[14] B. Abrahao, F. Chierichetti, R. Kleinberg, and A. Panconesi. Trace complexity of network inference. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 491–499. ACM, 2013.
[15] V. Gripon and M. Rabbat. Reconstructing a graph from path traces. In Information Theory Proceedings (ISIT), 2013 IEEE International Symposium on, pages 2488–2492. IEEE, 2013.
[16] K. Amin, H. Heidari, and M. Kearns. Learning from contagion (without timestamps). In Proceedings of the 31st International Conference on Machine Learning, pages 1845–1853, 2014.
[17] E. Sefer and C. Kingsford. Convex risk minimization to infer networks from probabilistic diffusion data at multiple scales. In Data Engineering (ICDE), 2015 IEEE 31st International Conference on, 2015.
[18] M. Farajtabar, M. Gomez-Rodriguez, N. Du, M. Zamani, H. Zha, and L. Song. Back to the past: Source identification in diffusion networks from partially observed cascades. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 232–240, 2015.
[19] B. Karrer and M. E. Newman. Message passing approach for general epidemic models. Physical Review E, 82(1):016101, 2010.
[20] A. Y. Lokhov, M. Mézard, and L. Zdeborová. Dynamic message-passing equations for models with unidirectional dynamics. Physical Review E, 91(1):012811, 2015.
[21] A. Y. Lokhov and T. Misiakiewicz. Efficient reconstruction of transmission probabilities in a spreading process from partial observations. arXiv preprint arXiv:1509.06893, 2016.
[22] R. Rossi and N. Ahmed. Network repository, 2013. http://networkrepository.com.
[23] F. Altarelli, A. Braunstein, L. Dall'Asta, A. Lage-Castellanos, and R. Zecchina. Bayesian inference of epidemics on networks via belief propagation. Physical Review Letters, 112(11):118701, 2014.
[24] S. F. Sampson. Crisis in a cloister. PhD thesis, Cornell University, Ithaca, 1969.
[25] Bureau of Transportation Statistics. http://www.rita.dot.gov/bts/.
proper:1 unknown:3 allowing:1 observation:17 snapshot:1 finite:1 situation:5 excluding:1 precise:3 ninth:1 inferred:1 introduced:2 required:1 lanl:1 extensive:1 security:1 gripon:1 barcelona:1 hour:1 nip:1 address:1 able:2 suggested:2 beyond:1 usually:1 below:2 regime:1 program:1 including:1 max:1 belief:2 analogue:1 power:3 suitable:1 event:1 natural:2 rely:1 force:1 ranked:1 indicator:2 critical:1 representing:2 scheme:3 contagion:1 created:1 carried:1 acknowledges:1 mediated:1 health:1 text:1 prior:1 understanding:2 acknowledgement:1 review:4 discovery:2 sir:2 lacking:1 fully:2 law:3 interesting:2 limitation:1 proportional:4 versus:2 ingredient:1 validation:1 awareness:1 degree:1 consistent:1 proxy:1 article:2 thresholding:1 pi:2 surprisingly:1 last:1 free:7 allow:3 neighbor:8 taking:1 sparse:2 distributed:1 slice:1 van:1 depth:1 cortical:1 world:5 forward:1 collection:2 commonly:1 author:1 social:3 flux:1 reconstructed:5 approximate:2 dealing:1 global:2 overfitting:1 active:2 summing:1 continuous:5 latent:1 decade:1 learn:2 transported:1 robust:1 dea:1 du:2 hc:3 complex:7 artificially:1 excellent:1 did:1 pk:4 main:1 spread:1 aistats:1 whole:3 allowed:1 referred:1 benchmarking:2 definiteness:1 ny:1 chierichetti:1 retweets:2 sub:2 inferring:1 explicit:1 exponential:1 stamp:2 chertkov:1 infect:1 croft:1 theorem:2 minute:1 hub:2 abadie:1 essential:1 intractable:1 effectively:1 phd:1 conditioned:1 occurring:2 horizon:1 chavez:1 depicted:1 likely:1 forming:1 expressed:1 contained:1 partially:1 corresponds:1 truth:1 chance:1 extracted:2 acm:7 goal:1 careful:1 sampson:1 psk:3 considerable:1 change:2 hard:2 included:1 typical:1 uniformly:2 kearns:1 total:2 pas:1 experimental:1 support:1 zamani:1 assessed:1 evaluate:1 correlated:1
5,669
613
Bayesian Learning via Stochastic Dynamics

Radford M. Neal
Department of Computer Science, University of Toronto, Toronto, Ontario, Canada M5S 1A4

Abstract

The attempt to find a single "optimal" weight vector in conventional network training can lead to overfitting and poor generalization. Bayesian methods avoid this, without the need for a validation set, by averaging the outputs of many networks with weights sampled from the posterior distribution given the training data. This sample can be obtained by simulating a stochastic dynamical system that has the posterior as its stationary distribution.

1 CONVENTIONAL AND BAYESIAN LEARNING

I view neural networks as probabilistic models, and learning as statistical inference. Conventional network learning finds a single "optimal" set of network parameter values, corresponding to maximum likelihood or maximum penalized likelihood inference. Bayesian inference instead integrates the predictions of the network over all possible values of the network parameters, weighting each parameter set by its posterior probability in light of the training data.

1.1 NEURAL NETWORKS AS PROBABILISTIC MODELS

Consider a network taking a vector of real-valued inputs, $x$, and producing a vector of real-valued outputs, $\hat{y}$, perhaps computed using hidden units. Such a network architecture corresponds to a function, $f$, with $\hat{y} = f(x, w)$, where $w$ is a vector of connection weights. If we assume the observed outputs, $y$, are equal to $\hat{y}$ plus Gaussian noise of standard deviation $\sigma$, the network defines the conditional probability for an observed output vector given an input vector as follows:

$$P(y \mid x, \sigma) \propto \exp\left(-|y - f(x, w)|^2 / 2\sigma^2\right) \quad (1)$$

The probability of the outputs in a training set $(x_1, y_1), \ldots, (x_n, y_n)$ given this fixed noise level is therefore

$$P(y_1, \ldots, y_n \mid x_1, \ldots, x_n, \sigma) \propto \exp\Big(-\sum_c |y_c - f(x_c, w)|^2 / 2\sigma^2\Big) \quad (2)$$

Often $\sigma$ is unknown. A Bayesian approach to handling this is to assign $\sigma$ a vague prior distribution and then integrate it away, giving the following probability for the training set (see (Buntine and Weigend, 1991) or (Neal, 1992) for details):

$$P(y_1, \ldots, y_n \mid x_1, \ldots, x_n) \propto \Big(s_0 + \sum_c |y_c - f(x_c, w)|^2\Big)^{-\frac{m_0 + n}{2}} \quad (3)$$

where $s_0$ and $m_0$ are parameters of the prior for $\sigma$.

1.2 CONVENTIONAL LEARNING

Conventional backpropagation learning tries to find the weight vector that assigns the highest probability to the training data, or equivalently, that minimizes minus the log probability of the training data. When $\sigma$ is assumed known, we can use (2) to obtain the following objective function to minimize:

$$M(w) = \sum_c |y_c - f(x_c, w)|^2 / 2\sigma^2 \quad (4)$$

When $\sigma$ is unknown, we can instead minimize the following, derived from (3):

$$M(w) = \frac{m_0 + n}{2} \log\Big(s_0 + \sum_c |y_c - f(x_c, w)|^2\Big) \quad (5)$$

Conventional learning often leads to the network overfitting the training data, modeling the noise rather than the true regularities. This can be alleviated by stopping learning when the performance of the network on a separate validation set begins to worsen, rather than improve. Another way to avoid overfitting is to include a weight decay term in the objective function, as follows:

$$M'(w) = \lambda |w|^2 + M(w) \quad (6)$$

Here, the data fit term, $M(w)$, may come from either (4) or (5). We must somehow find an appropriate value for $\lambda$, perhaps, again, using a separate validation set.
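As a concrete illustration of objectives (4)-(6), a minimal sketch in Python follows. It is not code from the paper: the helper names, the generic network evaluator `f`, and the default prior parameters are assumptions made here for illustration only.

```python
import numpy as np

def data_misfit(w, f, X, Y, sigma=None, s0=0.1, m0=0.1):
    # Data-fit term M(w): eq. (4) when sigma is known, eq. (5) when sigma
    # has been integrated out under the vague prior with parameters s0, m0.
    sq_err = sum(np.sum((y - f(x, w)) ** 2) for x, y in zip(X, Y))
    if sigma is not None:
        return sq_err / (2 * sigma ** 2)            # eq. (4)
    n = len(X)
    return 0.5 * (m0 + n) * np.log(s0 + sq_err)     # eq. (5)

def objective(w, f, X, Y, lam=0.0, **kwargs):
    # M'(w) = lambda * |w|^2 + M(w): eq. (6); lam = 0 recovers plain M(w).
    return lam * np.dot(w, w) + data_misfit(w, f, X, Y, **kwargs)
```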
1.3 BAYESIAN LEARNING AND PREDICTION

Unlike conventional training, Bayesian learning does not look for a single "optimal" set of network weights. Instead, the training data is used to find the posterior probability distribution over weight vectors. Predictions for future cases are made by averaging the outputs obtained with all possible weight vectors, with each contributing in proportion to its posterior probability.

To obtain the posterior, we must first define a prior distribution for weight vectors. We might, for example, give each weight a Gaussian prior of standard deviation $\omega$:

$$P(w) \propto \exp\left(-|w|^2 / 2\omega^2\right) \quad (7)$$

We can then obtain the posterior distribution over weight vectors given the training cases $(x_1, y_1), \ldots, (x_n, y_n)$ using Bayes' Theorem:

$$P(w \mid (x_1, y_1), \ldots, (x_n, y_n)) \propto P(w) \, P(y_1, \ldots, y_n \mid x_1, \ldots, x_n, w) \quad (8)$$

Based on the training data, the best prediction for the output vector in a test case with input vector $x_*$, assuming squared-error loss, is

$$\hat{y}_* = \int f(x_*, w) \, P(w \mid (x_1, y_1), \ldots, (x_n, y_n)) \, dw \quad (9)$$

A full predictive distribution for the outputs in the test case can also be obtained, quantifying the uncertainty in the above prediction.

2 INTEGRATION BY MONTE CARLO METHODS

Integrals such as that of (9) are difficult to evaluate. Buntine and Weigend (1991) and MacKay (1992) approach this problem by approximating the posterior distribution by a Gaussian. Instead, I evaluate such integrals using Monte Carlo methods. If we randomly select weight vectors, $w_0, \ldots, w_{N-1}$, each distributed according to the posterior, the prediction for a test case can be found by approximating the integral of (9) by the average output of networks with these weights:

$$\hat{y}_* \approx \frac{1}{N} \sum_t f(x_*, w_t) \quad (10)$$

This formula is valid even if the $w_t$ are dependent, though a larger sample may then be needed to achieve a given error bound. Such a sample can be obtained by simulating an ergodic Markov chain that has the posterior as its stationary distribution. The early part of the chain, before the stationary distribution has been reached, is discarded. Subsequent vectors are used to estimate the integral.
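To make (10) concrete, here is a minimal sketch (assumed names, not the paper's code) of the Monte Carlo prediction, given weight vectors already sampled from the posterior, for example by the dynamical methods developed below.

```python
import numpy as np

def predict(x_star, f, weight_samples):
    # Bayesian prediction (9) approximated by the sample average (10).
    # weight_samples may be dependent draws, e.g. post-burn-in Markov
    # chain states; dependence only slows convergence of the average.
    return np.mean([f(x_star, w) for w in weight_samples], axis=0)
```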
2.1 FORMULATING THE PROBLEM IN TERMS OF ENERGY

Consider the general problem of obtaining a sample of (dependent) vectors, $q_t$, with probabilities given by $P(q)$. For Bayesian network learning, $q$ will be the weight vector, or other parameters from which the weights can be obtained, and the distribution of interest will be the posterior. It will be convenient to express this probability distribution in terms of a potential energy function, $E(q)$, chosen so that

$$P(q) \propto \exp(-E(q)) \quad (11)$$

A momentum vector, $p$, of the same dimensions as $q$, is also introduced, and defined to have a kinetic energy of $\frac{1}{2}|p|^2$. The sum of the potential and kinetic energies is the Hamiltonian:

$$H(q, p) = E(q) + \tfrac{1}{2}|p|^2 \quad (12)$$

From the Hamiltonian, we define a joint probability distribution over $q$ and $p$ (phase space) as follows:

$$P(q, p) \propto \exp(-H(q, p)) \quad (13)$$

The marginal distribution for $q$ in (13) is that of (11), from which we wish to sample. We can therefore proceed by sampling from this joint distribution for $q$ and $p$, and then just ignoring the values obtained for $p$.

2.2 HAMILTONIAN DYNAMICS

Sampling from the distribution (13) can be split into two subproblems: first, to sample uniformly from a surface where $H$, and hence the probability, is constant, and second, to visit points of differing $H$ with the correct probabilities. The solutions to these subproblems can then be interleaved to give an overall solution.

The first subproblem can be solved by simulating the Hamiltonian dynamics of the system, in which $q$ and $p$ evolve through a fictitious time, $\tau$, according to the following equations:

$$\frac{dq}{d\tau} = \frac{\partial H}{\partial p} = p, \qquad \frac{dp}{d\tau} = -\frac{\partial H}{\partial q} = -\nabla E(q) \quad (14)$$

This dynamics leaves $H$ constant, and preserves the volumes of regions of phase space. It therefore visits points on a surface of constant $H$ with uniform probability. When simulating this dynamics, some discrete approximation must be used. The leapfrog method exactly maintains the preservation of phase space volume. Given a size for the time step, $\epsilon$, an iteration of the leapfrog method goes as follows:

$$p(\tau + \epsilon/2) = p(\tau) - (\epsilon/2) \nabla E(q(\tau))$$
$$q(\tau + \epsilon) = q(\tau) + \epsilon \, p(\tau + \epsilon/2) \quad (15)$$
$$p(\tau + \epsilon) = p(\tau + \epsilon/2) - (\epsilon/2) \nabla E(q(\tau + \epsilon))$$

2.3 THE STOCHASTIC DYNAMICS METHOD

To create a Markov chain that converges to the distribution of (13), we must interleave leapfrog iterations, which keep $H$ (approximately) constant, with steps that can change $H$. It is convenient for the latter to affect only $p$, since it enters into $H$ in a simple way. This general approach is due to Andersen (1980). I use stochastic steps of the following form to change $H$:

$$p' = \alpha p + (1 - \alpha^2)^{1/2} n \quad (16)$$

where $0 < \alpha < 1$, and $n$ is a random vector with components picked independently from Gaussian distributions of mean zero and standard deviation one. One can show that these steps leave the distribution of (13) invariant. Alternating these stochastic steps with dynamical leapfrog steps will therefore sample values for $q$ and $p$ with close to the desired probabilities. In so far as the discretized dynamics does not keep $H$ exactly constant, however, there will be some degree of bias, which will be eliminated only in the limit as $\epsilon$ goes to zero. It is best to use a value of $\alpha$ close to one, as this reduces the random walk aspect of the dynamics. If the random term in (16) is omitted, the procedure is equivalent to ordinary batch mode backpropagation learning with momentum.

2.4 THE HYBRID MONTE CARLO METHOD

The bias introduced into the stochastic dynamics method by using an approximation to the dynamics is eliminated in the Hybrid Monte Carlo method of Duane, Kennedy, Pendleton, and Roweth (1987). This method is a variation on the algorithm of Metropolis, et al. (1953), which generates a Markov chain by considering randomly-selected changes to the state. A change is always accepted if it lowers the energy ($H$), or leaves it unchanged. If it increases the energy, it is accepted with probability $\exp(-\Delta H)$, and is rejected otherwise, with the old state then being repeated. In the Hybrid Monte Carlo method, candidate changes are produced by picking a random value for $p$ from its distribution given by (13) and then performing some predetermined number of leapfrog steps. If the leapfrog method were exact, $H$ would be unchanged, and these changes would always be accepted. Since the method is actually only approximate, $H$ sometimes increases, and changes are sometimes rejected, exactly cancelling the bias introduced by the approximation. Of course, if the errors are very large, the acceptance probability will be very low, and it will take a long time to reach and explore the stationary distribution. To avoid this, we need to choose a step size ($\epsilon$) that is small enough.
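The following sketch puts the leapfrog update (15), the momentum perturbation (16), and the Hybrid Monte Carlo accept/reject rule together. It is an illustration under assumed interfaces (an energy function `E` and its gradient `grad_E`), not the original implementation.

```python
import numpy as np

def leapfrog(q, p, grad_E, eps, n_steps):
    # Eq. (15): volume-preserving discretization of the dynamics (14).
    p = p - 0.5 * eps * grad_E(q)              # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                        # full step for position
        p = p - eps * grad_E(q)                # full step for momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_E(q)              # final half step for momentum
    return q, p

def momentum_perturbation(p, alpha, rng):
    # Eq. (16): leaves (13) invariant; alpha near 1 reduces random walks.
    return alpha * p + np.sqrt(1 - alpha ** 2) * rng.standard_normal(p.shape)

def hmc_step(q, E, grad_E, eps, n_steps, rng):
    # One Hybrid Monte Carlo transition (Duane et al., 1987): fresh momentum,
    # a leapfrog trajectory, then a Metropolis test on the change in H.
    p = rng.standard_normal(q.shape)
    H_old = E(q) + 0.5 * np.dot(p, p)
    q_new, p_new = leapfrog(q, p, grad_E, eps, n_steps)
    H_new = E(q_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(H_old - H_new):
        return q_new                           # accepted
    return q                                   # rejected: old state repeated
```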
3 RESULTS ON A TEST PROBLEM

I use the "robot arm" problem of MacKay (1992) for testing. The task is to learn the mapping from two real-valued inputs, $x_1$ and $x_2$, to two real-valued outputs, $\hat{y}_1$ and $\hat{y}_2$, given by

$$\hat{y}_1 = 2.0 \cos(x_1) + 1.3 \cos(x_1 + x_2) \quad (17)$$
$$\hat{y}_2 = 2.0 \sin(x_1) + 1.3 \sin(x_1 + x_2) \quad (18)$$

Gaussian noise of mean zero and standard deviation 0.05 is added to $(\hat{y}_1, \hat{y}_2)$ to give the observed position, $(y_1, y_2)$. The training and test sets each consist of 200 cases, with $x_1$ picked randomly from the ranges $[-1.932, -0.453]$ and $[+0.453, +1.932]$, and $x_2$ from the range $[0.534, 3.142]$.
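A sketch of this data-generating process, with the stated input ranges and noise level, is given below; the function and variable names are assumptions made here, not the paper's.

```python
import numpy as np

def robot_arm_data(n=200, sigma=0.05, rng=np.random.default_rng(0)):
    # x1 is uniform over [-1.932, -0.453] union [0.453, 1.932]; the union
    # of two symmetric intervals is sampled here via a random sign flip.
    x1 = rng.uniform(0.453, 1.932, n) * rng.choice([-1.0, 1.0], n)
    x2 = rng.uniform(0.534, 3.142, n)
    y1 = 2.0 * np.cos(x1) + 1.3 * np.cos(x1 + x2) + sigma * rng.standard_normal(n)
    y2 = 2.0 * np.sin(x1) + 1.3 * np.sin(x1 + x2) + sigma * rng.standard_normal(n)
    return np.stack([x1, x2], axis=1), np.stack([y1, y2], axis=1)
```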
A network with 16 sigmoidal hidden units was used. The output units were linear. Like MacKay, I group weights into three categories: input to hidden, bias to hidden, and hidden/bias to output. MacKay gives separate priors to weights in each category, finding an appropriate value of $\omega$ for each. I fix $\omega$ to one, but multiply each weight by a scale factor associated with its category before using it, giving an equivalent effect. For conventional training with weight decay, I use an analogous scheme with three weight decay constants ($\lambda$ in (6)). In all cases, I assume that the true value of $\sigma$ is not known. I therefore use (3) for the training set probability, and (5) for the data fit term in conventional training. I set $s_0 = m_0 = 0.1$, which corresponds to a very vague prior for $\sigma$.

3.1 PERFORMANCE OF CONVENTIONAL LEARNING

Conventional backpropagation learning was tested on the robot arm problem to gauge how difficult it is to obtain good generalization with standard methods.

Figure 1: Conventional backpropagation learning, (a) with no weight decay, (b) with carefully-chosen weight decay constants. The solid lines give the squared error on the training data, the dotted lines the squared error on the test data.

Fig. 1(a) shows results obtained without using weight decay. Error on the test set declined initially, but then increased with further training. To achieve good results, the point where the test error reaches its minimum would have to be identified using a separate validation set. Fig. 1(b) shows results using good weight decay constants, one for each category of weights, taken from the Bayesian runs described below. In this case there is no need to stop learning early, but finding the proper weight decay constants by non-Bayesian methods would be a problem. Again, a validation set seems necessary, as well as considerable computation. Use of a validation set is wasteful, since data that could otherwise be included in the training set must be excluded. Standard techniques for avoiding this, such as "N-fold" cross-validation, are difficult to apply to neural networks.

3.2 PERFORMANCE OF BAYESIAN LEARNING

Bayesian learning was first tested using the unbiased Hybrid Monte Carlo method. The parameter vector in the simulations ($q$) consisted of the unscaled network weights together with the scale factors for the three weight categories. The actual weight vector ($w$) was obtained by multiplying each unscaled weight by the scale factor for its category.

Each Hybrid Monte Carlo run consisted of 500 Metropolis steps. For each step, a trajectory consisting of 1000 leapfrog iterations with $\epsilon = 0.00012$ was computed, and accepted or rejected based on the change in $H$ at its end-point. Each run therefore required 500,000 batch gradient evaluations, and took approximately four hours on a machine rated at about 25 MIPS.

Fig. 2(a) shows the training and test error for the early portion of one Hybrid Monte Carlo run. After initially declining, these values fluctuate about an average. Though not apparent in the figure, some quantities (notably the scale factors) require a hundred or more steps to reach their final distribution. The first 250 steps of each run were therefore discarded as not being from the stationary distribution. Fig. 2(b) shows the training and test set errors produced by networks with weight vectors taken from the last 250 steps of the same run. Also shown is the error on the test set using the average of the outputs of all these networks, that is, the estimate given by (10) for the Bayesian prediction of (9).

Figure 2: Bayesian learning using Hybrid Monte Carlo, (a) early portion of run, (b) last 250 iterations. The solid lines give the squared error on the training set, the dotted lines the squared error on the test set, for individual networks. The dashed line in (b) is the test error when using the average of the outputs of all 250 networks.

Figure 3: Predictive distribution for outputs. The two regions from which training data was drawn are outlined. Circles indicate the true, noise-free outputs for a grid of cases in the input space. The dots in the vicinity of each circle (often piled on top of it) are the outputs of every fifth network from the last 250 iterations of a Hybrid Monte Carlo run.

For the run shown, this test set error using averaged outputs is 0.00559, which is (slightly) better than any results obtained using conventional training. Note that with Bayesian training no validation set is necessary. The analogues of the weight decay constants (the weight scale factors) are found during the course of the simulation. Another advantage of the Bayesian approach is that it can provide an indication of how uncertain the predictions for test cases are. Fig. 3 demonstrates this. As one would expect, the uncertainty is greater for test cases with inputs outside the region where training data was supplied.

3.3 STOCHASTIC DYNAMICS VS. HYBRID MONTE CARLO

The uncorrected stochastic dynamics method will have some degree of systematic bias, due to inexact simulation of the dynamics.
Is the amount of bias introduced of any practical importance, however?

Figure 4: Bayesian learning using uncorrected stochastic dynamics, (a) training and test error for the last 250 iterations of a run with $\epsilon = 0.00012$, (b) potential energy ($E$) for a run with $\epsilon = 0.00030$. Note the two peaks where the dynamics became unstable.

To help answer this question, the stochastic dynamics method was run with parameters analogous to those used in the Hybrid Monte Carlo runs. The step size of $\epsilon = 0.00012$ used in those runs was chosen to be as large as possible while keeping the number of trajectories rejected low (about 10%). A smaller step size would not give competitive results, so this value was used for the stochastic dynamics runs as well. A value of 0.999 for $\alpha$ in (16) was chosen as being (loosely) equivalent to the use of trajectories 1000 iterations long in the Hybrid Monte Carlo runs.

The results shown in Fig. 4(a) are comparable to those obtained using Hybrid Monte Carlo in Fig. 2(b). Fig. 4(b) shows that with a larger step size the uncorrected stochastic dynamics method becomes unstable. Large step sizes also cause problems for the Hybrid Monte Carlo method, however, as they lead to high rejection rates. The Hybrid Monte Carlo method may be the more robust choice in some circumstances, but uncorrected stochastic dynamics can also give good results. As it is simpler, the stochastic dynamics method may be better for hardware implementation, and is a more plausible starting point for any attempt to relate Bayesian methods to biology. Numerous other variations on these methods are possible as well, some of which are discussed in (Neal, 1992).

References

Andersen, H. C. (1980) "Molecular dynamics simulations at constant pressure and/or temperature", Journal of Chemical Physics, vol. 72, pp. 2384-2393.

Buntine, W. L. and Weigend, A. S. (1991) "Bayesian back-propagation", Complex Systems, vol. 5, pp. 603-643.

Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987) "Hybrid Monte Carlo", Physics Letters B, vol. 195, pp. 216-222.

MacKay, D. J. C. (1992) "A practical Bayesian framework for backpropagation networks", Neural Computation, vol. 4, pp. 448-472.

Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953) "Equation of state calculations by fast computing machines", Journal of Chemical Physics, vol. 21, pp. 1087-1092.

Neal, R. M. (1992) "Bayesian training of backpropagation networks by the hybrid Monte Carlo method", CRG-TR-92-1, Dept. of Computer Science, University of Toronto.
Probing the Compositionality of Intuitive Functions

Eric Schulz (University College London, e.schulz@cs.ucl.ac.uk)
Joshua B. Tenenbaum (MIT, jbt@mit.edu)
Maarten Speekenbrink (University College London, m.speekenbrink@ucl.ac.uk)
David Duvenaud (University of Toronto, duvenaud@cs.toronto.edu)
Samuel J. Gershman (Harvard University, gershman@fas.harvard.edu)

Abstract

How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition.

1 Introduction

Function learning underlies many intuitive judgments, such as the perception of time, space and number. All of these tasks require the construction of mental representations that map inputs to outputs. Since the space of such mappings is infinite, inductive biases are necessary to constrain the plausible inferences. What is the nature of human inductive biases over functions? It has been suggested that Gaussian processes (GPs) provide a good characterization of these inductive biases [15]. As we describe more formally below, GPs are distributions over functions that can encode properties such as smoothness, linearity, periodicity, and other inductive biases indicated by research on human function learning [5, 3]. Lucas et al. [15] showed how Bayesian inference with GP priors can unify previous rule-based and exemplar-based theories of function learning [18].

A major unresolved question is how people deal with complex functions that are not easily captured by any simple GP. Insight into this question is provided by the observation that many complex functions encountered in the real world can be broken down into compositions of simpler functions [6, 11]. We pursue this idea theoretically and experimentally, by first defining a hypothetical compositional grammar for intuitive functions (based on [6]) and then investigating whether this grammar quantitatively predicts human function learning performance. We compare the compositional model to a flexible non-compositional model (the spectral mixture representation proposed by [21]). Both models use Bayesian inference to reason about functions, but differ in their inductive biases. We show that (a) participants prefer compositional pattern extrapolations in both forced choice and manual drawing tasks; (b) samples elicited from participants' priors over functions are more consistent with the compositional grammar; and (c) participants perceive compositional functions as more predictable than non-compositional ones. Taken together, these findings provide support for the compositional nature of intuitive functions.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 Gaussian process regression as a theory of intuitive function learning

A GP is a collection of random variables, any finite subset of which are jointly Gaussian-distributed (see [18] for an introduction).
A GP can be expressed as a distribution over functions: $f \sim \mathcal{GP}(m, k)$, where $m(x) = \mathbb{E}[f(x)]$ is a mean function modeling the expected output of the function given input $x$, and $k(x, x') = \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))]$ is a kernel function modeling the covariance between points. Intuitively, the kernel encodes an inductive bias about the expected smoothness of functions drawn from the GP. To simplify exposition, we follow standard convention in assuming a constant mean of 0.

Conditional on data $\mathcal{D} = \{X, y\}$, where $y_n \sim \mathcal{N}(f(x_n), \sigma^2)$, the posterior predictive distribution for a new input $x_*$ is Gaussian with mean and variance given by:

$$\mathbb{E}[f(x_*) \mid \mathcal{D}] = k_*^\top (K + \sigma^2 I)^{-1} y \quad (1)$$
$$\mathbb{V}[f(x_*) \mid \mathcal{D}] = k(x_*, x_*) - k_*^\top (K + \sigma^2 I)^{-1} k_* \quad (2)$$

where $K$ is the $N \times N$ matrix of covariances evaluated at each input in $X$ and $k_* = [k(x_1, x_*), \ldots, k(x_N, x_*)]$. As pointed out by Griffiths et al. [10] (see also [15]), the predictive distribution can be viewed as an exemplar (similarity-based) model of function learning [5, 16], since it can be written as a linear combination of the covariance between past and current inputs:

$$f(x_*) = \sum_{n=1}^{N} \alpha_n k(x_n, x_*) \quad (3)$$

with $\alpha = (K + \sigma^2 I)^{-1} y$. Equivalently, by Mercer's theorem any positive definite kernel can be expressed as an outer product of feature vectors:

$$k(x, x') = \sum_{d=1}^{\infty} \lambda_d \phi_d(x) \phi_d(x') \quad (4)$$

where $\{\phi_d(x)\}$ are the eigenfunctions of the kernel and $\{\lambda_d\}$ are the eigenvalues. The posterior predictive mean is a linear combination of the features, which from a psychological perspective can be thought of as encoding "rules" mapping inputs to outputs [4, 14]. Thus, a GP can be expressed as both an exemplar (similarity-based) model and a feature (rule-based) model, unifying the two dominant classes of function learning theories in cognitive science [15].
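For concreteness, a minimal sketch of the posterior predictive equations (1)-(3) in Python; the helper names are assumptions, and the kernel is any callable that broadcasts over numpy arrays (such as those sketched in Section 3.3 below):

```python
import numpy as np

def gp_posterior(X, y, x_star, kernel, sigma=0.1):
    # Posterior predictive mean (1) and variance (2) at a scalar test input.
    K = kernel(X[:, None], X[None, :])              # N x N covariance matrix
    k_star = kernel(X, x_star)                      # covariances with x_star
    A = K + sigma ** 2 * np.eye(len(X))
    alpha = np.linalg.solve(A, y)                   # the weights of eq. (3)
    mean = k_star @ alpha
    var = kernel(x_star, x_star) - k_star @ np.linalg.solve(A, k_star)
    return mean, var
```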
3 Structure learning with Gaussian processes

So far we have assumed a fixed kernel function. However, humans can adapt to a wide variety of structural forms [13, 8], suggesting that they have the flexibility to learn the kernel function from experience. The key question addressed in this paper is what space of kernels humans are optimizing over; how rich is their representational vocabulary? This vocabulary will in turn act as an inductive bias, making some functions easier to learn, and other functions harder to learn. Broadly speaking, there are two approaches to parameterizing the kernel space: a fixed functional form with continuous parameters, or a combinatorial space of functional forms. These approaches are not mutually exclusive; indeed, the success of the combinatorial approach depends on optimizing the continuous parameters for each form. Nonetheless, this distinction is useful because it allows us to separate different forms of functional complexity. A function might have internal structure such that when this structure is revealed, the apparent functional complexity is significantly reduced. For example, a function composed of many piecewise linear segments might have a long description length under a typical continuous parametrization (e.g., the radial basis kernel described below), because it violates the smoothness assumptions of the prior. However, conditional on the changepoints between segments, the function can be decomposed into independent parts, each of which is well-described by a simple continuous parametrization. If internally structured functions are "natural kinds," then the combinatorial approach may be a good model of human intuitive functions.

In the rest of this section, we describe three kernel parameterizations. The first two are continuous, differing in their expressiveness. The third one is combinatorial, allowing it to capture complex patterns by composing simpler kernels. For all kernels, we take the standard approach of choosing the parameter values that optimize the log marginal likelihood.

3.1 Radial basis kernel

The radial basis kernel is a commonly used kernel in machine learning applications, embodying the assumption that the covariance between function values decays exponentially with input distance:

$$k(x, x') = \theta^2 \exp\left(-\frac{|x - x'|^2}{2 l^2}\right) \quad (5)$$

where $\theta$ is a scaling parameter and $l$ is a length-scale parameter. This kernel assumes that the same smoothness properties apply globally for all inputs. It provides a standard baseline to compare with more expressive kernels.

3.2 Spectral mixture kernel

The second approach is based on the fact that any stationary kernel can be expressed as an integral using Bochner's theorem. Letting $\tau = |x - x'| \in \mathbb{R}^P$, then

$$k(\tau) = \int_{\mathbb{R}^P} e^{2\pi i s^\top \tau} \, \psi(ds). \quad (6)$$

If $\psi$ has a density $S(s)$, then $S$ is the spectral density of $k$; $S$ and $k$ are thus Fourier duals [18]. This means that a spectral density fully defines the kernel and that furthermore every stationary kernel can be expressed as a spectral density. Wilson & Adams [21] showed that the spectral density can be approximated by a mixture of $Q$ Gaussians, such that

$$k(\tau) = \sum_{q=1}^{Q} w_q \prod_{p=1}^{P} \exp\left(-2\pi^2 \tau_p^2 \, v_q^{(p)}\right) \cos\left(2\pi \tau_p \, \mu_q^{(p)}\right) \quad (7)$$

Here, the $q$th component has mean vector $\mu_q = \left(\mu_q^{(1)}, \ldots, \mu_q^{(P)}\right)$ and a covariance matrix $M_q = \mathrm{diag}\left(v_q^{(1)}, \ldots, v_q^{(P)}\right)$. The result is a non-parametric approach to Gaussian process regression, in which complex kernels are approximated by mixtures of simpler ones. This approach is appealing when simpler kernels fail to capture functional structure. Its main drawback is that because structure is captured implicitly via the spectral density, the building blocks are psychologically less intuitive: humans appear to have preferences for linear [12] and periodic [1] functions, which are not straightforwardly encoded in the spectral mixture (though of course the mixture can approximate these functions). Since the spectral kernel has been successfully applied to reverse engineer human kernels [22], it is a useful reference of comparison to more structured compositional approaches.

3.3 Compositional kernel

As positive semidefinite kernels are closed under addition and multiplication, we can create richly structured and interpretable kernels from well understood base components. For example, by summing kernels, we can model the data as a superposition of independent functions. Figure 1 shows an example of how different kernels (radial basis, linear, periodic) can be combined. Table 1 summarizes the kernels used in our grammar.

Figure 1: Examples of base and compositional kernels (LIN, PER, RBF, PER+LIN, RBF×PER).

Many other compositional grammars are possible. For example, we could have included a more diverse set of kernels, and other composition operators (e.g., convolution, scaling) that generate valid kernels. However, we believe that our simple grammar is a useful starting point, since the components are intuitive and likely to be psychologically plausible. For tractability, we fix the maximum number of combined kernels to 3. Additionally, we do not allow for repetition of kernels in order to restrict the complexity of the kernel space.

Table 1: Utilized base kernels in our compositional grammar, with $\tau = |x - x'|$.
  Linear:        $k(x, x') = (x - \theta_1)(x' - \theta_1)$
  Radial basis:  $k(\tau) = \theta_2^2 \exp\left(-\tau^2 / 2\theta_3^2\right)$
  Periodic:      $k(\tau) = \theta_4^2 \exp\left(-2 \sin^2(\pi \tau \theta_5) / \theta_6^2\right)$
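A sketch of the grammar's building blocks and its two composition operators follows; the parameter values and helper names are illustrative assumptions made here, not fitted values from the paper.

```python
import numpy as np

def linear(theta1):
    return lambda x, y: (x - theta1) * (y - theta1)

def rbf(theta2, theta3):
    return lambda x, y: theta2 ** 2 * np.exp(-(x - y) ** 2 / (2 * theta3 ** 2))

def periodic(theta4, theta5, theta6):
    return lambda x, y: theta4 ** 2 * np.exp(
        -2 * np.sin(np.pi * np.abs(x - y) * theta5) ** 2 / theta6 ** 2)

def add(k1, k2):
    # Sums of kernels model superpositions of independent functions.
    return lambda x, y: k1(x, y) + k2(x, y)

def mul(k1, k2):
    # Products of kernels, e.g. RBF x PER, give locally periodic structure.
    return lambda x, y: k1(x, y) * k2(x, y)

# Example: the LIN + PER composition of Figure 1, usable with gp_posterior above.
lin_plus_per = add(linear(0.0), periodic(1.0, 1.0, 1.0))
```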
4 Experiment 1: Extrapolation

The first experiment assessed whether people prefer compositional over non-compositional extrapolations. In experiment 1a, functions were sampled from a compositional GP and different extrapolations (mean predictions) were produced using each of the aforementioned kernels. Participants were then asked to choose among the 3 different extrapolations for a given function (see Figure 2). In detail, the outputs for $x_{\text{learn}} = [0, 0.1, \ldots, 7]$ were used as a training set to which all three kernels were fitted and then used to generate predictions for the test set $x_{\text{test}} = [7.1, 7.2, \ldots, 10]$. Their mean predictions were then used to generate one plot for every approach that showed the learned input as a blue line and the extrapolation as a red line. The procedure was repeated for 20 different compositional functions.

Figure 2: Screen shot of the first choice experiment. Predictions in this example (from left to right) were generated by a spectral mixture, a radial basis, and a compositional kernel.

52 participants (mean age = 36.15, SD = 9.11) were recruited via Amazon Mechanical Turk and received $0.50 for their participation. Participants were asked to select one of 3 extrapolations (displayed as red lines) they thought best completed a given blue line. Results showed that participants chose compositional predictions 69%, spectral mixture predictions 17%, and radial basis predictions 14% of the time. Overall, the compositional predictions were chosen significantly more often than the other two ($\chi^2 = 591.2$, $p < 0.01$), as shown in Figure 3a.

Figure 3: Results of extrapolation experiments: (a) choice proportions when the ground truth is compositional, (b) choice proportions when the ground truth is a spectral mixture. Error bars represent the standard error of the mean.

In experiment 1b, again 20 functions were sampled, but this time from a spectral mixture kernel, and 65 participants (mean age = 30, SD = 9.84) were asked to choose among either compositional or spectral mixture extrapolations and received $0.50 as before. Results (displayed in Figure 3b) showed that participants again chose compositional extrapolations more frequently (68% vs. 32%, $\chi^2 = 172.8$, $p < 0.01$), even if the ground truth happened to be generated by a spectral mixture kernel. Thus, people seem to prefer compositional over non-compositional extrapolations in forced choice extrapolation tasks.

5 Markov chain Monte Carlo with people

In a second set of experiments, we assessed participants' inductive biases directly using a Markov chain Monte Carlo with People (MCMCP) approach [19]. Participants accept or reject proposed extrapolations, effectively simulating a Markov chain whose stationary distribution is in this case the posterior predictive. Extrapolations from all possible kernel combinations (up to 3 combined kernels) were generated and stored a priori. These were then used to generate plots of different proposal extrapolations (as in the previous experiment). On each trial, participants chose between their most recently accepted extrapolation and a new proposal.
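As a sketch of the MCMCP logic just described: if participants' binary choices follow a Barker (Luce-choice) rule on some subjective utility, the sequence of accepted extrapolations forms a Markov chain whose stationary distribution is proportional to exp(utility). The simulation below is an assumption-laden illustration of that dynamic, not the experiment code.

```python
import numpy as np

def mcmcp_chain(proposals, utility, n_trials=30, rng=np.random.default_rng(0)):
    # Simulate one participant: on each trial, the current extrapolation
    # competes against a uniformly drawn proposal under the Barker rule.
    current = proposals[rng.integers(len(proposals))]
    accepted = [current]
    for _ in range(n_trials):
        proposal = proposals[rng.integers(len(proposals))]
        u_cur, u_new = utility(current), utility(proposal)
        p_accept = np.exp(u_new) / (np.exp(u_new) + np.exp(u_cur))
        if rng.random() < p_accept:
            current = proposal
        accepted.append(current)
    return accepted
```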
5.1 Experiment 2a: Compositional ground truth

In the first MCMCP experiment, we sampled functions from compositional kernels. Eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon's Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4.

Figure 4: Proportions of chosen predictions over the last 5 trials for each of the eight generating compositions (LIN+PER 1-4, LIN×PER, PER×RBF+LIN, PER, and LIN+PER+RBF), with the generating kernel marked in red.

In all cases participants' subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP ($\rho = 0.91$, $p < .01$). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels.

5.2 Experiment 2b: Naturalistic functions

The second MCMCP experiment assessed what structures people converged to when faced with real world data. 51 participants with an average age of 32.55 (SD = 12.14) were recruited via Amazon Mechanical Turk and received $1 for their participation. The functions were an airline passenger data set, volcano CO2 emission data, the number of gym memberships over 5 years, and the number of times people googled the band "Wham!" over the last 8 years, all shown in Figure 5a. Participants were not told any information about the data set (including input and output descriptions) beyond the input-output pairs. As periodicity in the real world is rarely ever purely periodic, we adapted the periodic component of the grammar by multiplying a periodic kernel with a radial basis kernel, thereby locally smoothing the periodic part of the function.¹ Apart from the different training sets, the procedure was identical to the last experiment.

¹ See the following page for an example: http://learning.eng.cam.ac.uk/carl/mauna.
Figure 5: Real world data (airline passengers, gym memberships, volcano CO2, and "Wham!" searches) and MCMCP results: (a) data, (b) proportions of chosen predictions over the last 5 trials. Error bars represent the standard error of the mean.

Results are shown in Figure 5b, demonstrating that participants converged to intuitively plausible patterns. In particular, for both the volcano and the airline passenger data, participants converged to compositions resembling those found in previous analyses [6]. The correlation between the mean proportion of accepted predictions and the inverse standardized marginal likelihoods of the different kernels was again significantly positive ($\rho = 0.83$, $p < .01$).

6 Experiment 3: Manual function completion

In the next experiment, we let participants draw the functions underlying observed data manually. As all of the prior experiments asked participants to judge between "pre-generated" predictions of functions, we wanted to compare this to how participants generate predictions themselves. On each round of the experiment, functions were sampled from the compositional grammar, the number of points to be presented on each trial was sampled uniformly between 100 and 200, and the noise variance was sampled uniformly between 0 and 25. Finally, the size of an unobserved region of the function was sampled to lie between 5 and 50. Participants were asked to manually draw the function best describing the observed data and to inter- and extrapolate this function in two unobserved regions. A screenshot of the experiment is shown in Figure 6.

Figure 6: Manual pattern completion experiment. The extrapolation region is delimited by vertical lines.

36 participants with a mean age of 30.5 (SD = 7.15) were recruited from Amazon Mechanical Turk and received $2 for their participation. Participants were asked to draw lines in a cloud of dots that they thought best described the given data. To facilitate this process, participants placed black dots into the cloud, which were then automatically connected by a black line based on a cubic Bezier smoothing curve. They were asked to place the first dot on the left boundary and the final dot on the right boundary of the graph. In between, participants were allowed to place as many dots as they liked (from left to right) and could remove previously placed dots. There were 50 trials in total. We assessed the average root mean squared distance between participants' predictions (the line they drew) and the mean predictions of each kernel given the data participants had seen, for both interpolation and extrapolation areas. Results are shown in Figure 7.

Figure 7: Root mean squared distances for (a) interpolation and (b) extrapolation drawings. Error bars represent the standard error of the mean.

The mean distance from participants' drawings was significantly higher for the spectral mixture kernel than for the compositional kernel in both interpolation (86.96 vs. 58.33, $t(1291.1) = -6.3$, $p < .001$) and extrapolation areas (110.45 vs. 83.91, $t(1475.7) = 6.39$, $p < 0.001$).
The radial basis kernel produced similar distances as the compositional kernel in interpolation (55.8), but predicted participants' drawings significantly worse in extrapolation areas (97.9, $t(1459.9) = 3.26$, $p < 0.01$).

7 Experiment 4: Assessing predictability

Compositional patterns might also affect the way in which participants perceive functions a priori [20]. To assess this, we asked participants to judge how well they thought they could predict 40 different functions that were similar on many measures, such as their spectral entropy and their average wavelet distance to each other, but 20 of which were sampled from a compositional and 20 from a spectral mixture kernel. Figure 8 shows a screenshot of the experiment. 50 participants with a mean age of 32 (SD = 7.82) were recruited via Amazon Mechanical Turk and received $0.50 for their participation. Participants were asked to rate the predictability of different functions. On each trial participants were shown a total of $n_j \in \{50, 60, \ldots, 100\}$ randomly sampled input-output points of a given function and asked to judge how well they thought they could predict the output for a randomly sampled input point, on a scale of 0 (not at all) to 100 (very well). Afterwards, they had to rate which of two functions was easier to predict (Figure 8), on a scale from -100 (left graph is definitely easier to predict) to 100 (right graph is definitely easier to predict).

Figure 8: Screenshot of the predictability experiment: (a) predictability judgements, (b) comparative judgements.

Figure 9: Results of the predictability experiment: (a) predictability judgements, (b) comparative judgements. Error bars represent the standard error of the mean.

As shown in Figure 9, compositional functions were perceived as more predictable than spectral functions in isolation ($t(948) = 11.422$, $p < 0.01$) and in paired comparisons ($t(499) = 13.502$, $p < 0.01$). Perceived predictability increases with the number of observed outputs ($r = 0.23$, $p < 0.01$), and the larger the number of observations, the larger the difference between compositional and spectral mixture functions ($r = 0.14$, $p < 0.01$).

8 Discussion

In this paper, we probed human intuitions about functions and found that these intuitions are best described as compositional. We operationalized compositionality using a grammar over kernels within a GP regression framework and found that people prefer extrapolations based on compositional kernels over other alternatives, such as a spectral mixture or the standard radial basis kernel. Two Markov chain Monte Carlo with people experiments revealed that participants converge to extrapolations consistent with the compositional kernels. These findings were replicated when people manually drew the functions underlying observed data. Moreover, participants perceived compositional functions as more predictable than non-compositional, but otherwise similar, ones. The work presented here is connected to several lines of previous research, most importantly that of Lucas et al. [15], which introduced GP regression as a model of human function learning, and Wilson et al. [22], which attempted to reverse-engineer the human kernel using a spectral mixture.
We see our work as complementary; we need both a theory to describe how people make sense of structure as well as a method to indicate what the final structure might look like when represented as a kernel. Our approach also ties together neatly with past attempts to model structure in other cognitive domains, such as motion perception [9] and decision making [7].

Our work can be extended in a number of ways. First, it is desirable to more thoroughly explore the space of base kernels and composition operators, since we used an elementary grammar in our analyses that is probably too simple. Second, the compositional approach could be used in traditional function learning paradigms (e.g., [5, 14]) as well as in active input selection paradigms [17]. Another interesting avenue for future research would be to explore the broader implications of compositional function representations. For example, evidence suggests that statistical regularities reduce perceived numerosity [23] and increase memory capacity [2]; these tasks can therefore provide clues about the underlying representations. If compositional functions alter number perception or memory performance to a greater extent than alternative functions, that suggests that our theory extends beyond simple function learning.

References

[1] L. Bott and E. Heit. Nonmonotonic extrapolation in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30:38-50, 2004.
[2] T. F. Brady, T. Konkle, and G. A. Alvarez. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11:4-4, 2011.
[3] B. Brehmer. Hypotheses about relations between scaled variables in the learning of probabilistic inference tasks. Organizational Behavior and Human Performance, 11(1):1-27, 1974.
[4] J. D. Carroll. Functional learning: The learning of continuous functional mappings relating stimulus and response continua. Educational Testing Service, 1963.
[5] E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel. Extrapolation: The sine qua non for abstraction in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4):968, 1997.
[6] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. Proceedings of the 30th International Conference on Machine Learning, pages 1166-1174, 2013.
[7] S. J. Gershman, J. Malmaud, J. B. Tenenbaum, and S. Gershman. Structured representations of utility in combinatorial domains. Decision, 2016.
[8] S. J. Gershman and Y. Niv. Learning latent structure: carving nature at its joints. Current Opinion in Neurobiology, 20:251-256, 2010.
[9] S. J. Gershman, J. B. Tenenbaum, and F. Jäkel. Discovering hierarchical motion structure. Vision Research, 2016.
[10] T. L. Griffiths, C. Lucas, J. Williams, and M. L. Kalish. Modeling human function learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 553-560, 2009.
[11] R. Grosse, R. R. Salakhutdinov, W. T. Freeman, and J. B. Tenenbaum. Exploiting compositionality to explore a large space of model structures. Uncertainty in Artificial Intelligence, 2012.
[12] M. L. Kalish, T. L. Griffiths, and S. Lewandowsky. Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14:288-294, 2007.
[13] C. Kemp and J. B. Tenenbaum. Structured statistical models of inductive reasoning. Psychological Review, 116:20-58, 2009.
Psychological Review, 116:20–58, 2009.
[14] K. Koh and D. E. Meyer. Function learning: Induction of continuous stimulus-response relations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17:811–836, 1991.
[15] C. G. Lucas, T. L. Griffiths, J. J. Williams, and M. L. Kalish. A rational model of function learning. Psychonomic Bulletin & Review, 22(5):1193–1215, 2015.
[16] M. A. McDaniel and J. R. Busemeyer. The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. Psychonomic Bulletin & Review, 12:24–42, 2005.
[17] P. Parpart, E. Schulz, M. Speekenbrink, and B. C. Love. Active learning as a means to distinguish among prominent decision strategies. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, pages 1829–1834, 2015.
[18] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[19] A. N. Sanborn, T. L. Griffiths, and R. M. Shiffrin. Uncovering mental representations with Markov chain Monte Carlo. Cognitive Psychology, 60(2):63–106, 2010.
[20] E. Schulz, J. B. Tenenbaum, D. N. Reshef, M. Speekenbrink, and S. J. Gershman. Assessing the perceived predictability of functions. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, pages 2116–2121. Cognitive Science Society, 2015.
[21] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. arXiv preprint arXiv:1302.4245, 2013.
[22] A. G. Wilson, C. Dann, C. Lucas, and E. P. Xing. The human kernel. In Advances in Neural Information Processing Systems, pages 2836–2844, 2015.
[23] J. Zhao and R. Q. Yu. Statistical regularities reduce perceived numerosity. Cognition, 146:217–222, 2016.
A Bayesian method for reducing bias in neural representational similarity analysis

Ming Bo Cai, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, mcai@princeton.edu
Nicolas W. Schuck, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, nschuck@princeton.edu
Jonathan W. Pillow, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, pillow@princeton.edu
Yael Niv, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, yael@princeton.edu

Abstract

In neuroscience, the similarity matrix of neural activity patterns in response to different sensory stimuli or under different cognitive states reflects the structure of the neural representational space. Existing methods derive point estimates of neural activity patterns from noisy neural imaging data, and the similarity is calculated from these point estimates. We show that this approach translates structured noise from the estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when the signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task. We propose an alternative Bayesian framework for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyperparameter in a generative model of the neural data, and directly estimate this covariance structure from imaging data while marginalizing over the unknown activity patterns. Converting the estimated covariance structure into a correlation matrix offers a much less biased estimate of neural representational similarity. Our method can also simultaneously estimate a signal-to-noise map that informs where the learned representational structure is supported more strongly, and the learned covariance matrix can be used as a structured prior to constrain Bayesian estimation of neural activity patterns. Our code is freely available in the Brain Imaging Analysis Kit (Brainiak) (https://github.com/IntelPNI/brainiak).

1 Neural pattern similarity as a way to understand neural representations

Understanding how patterns of neural activity relate to internal representations of the environment is one of the central themes of both systems neuroscience and human neural imaging [20, 5, 7, 15]. One can record neural responses (e.g., by functional magnetic resonance imaging; fMRI) while participants observe sensory stimuli and, in parallel, build different computational models to mimic the brain's encoding of these stimuli. The neural activity pattern corresponding to each feature of an encoding model can then be estimated from the imaging data. Such activity patterns can be used to decode the perceived content with respect to the encoding features from new imaging data. The degree to which stimuli can be decoded from one brain area based on different encoding models informs us of the type of information represented in that area. For example, an encoding model based on motion energy in visual stimuli captured activity fluctuations from visual cortical areas V1 to V3, and was used to successfully decode natural movie watching during an fMRI scan [14]. In contrast, encoding models based on semantic categories can more successfully decode information from higher-level visual cortex [7].
While the decoding performance of different encoding models informs us of the type of information represented in a brain region, it does not directly reveal the structure of the representational space in that region. Such structure is indexed by how distinctively different contents are represented in that region [21, 4]. Therefore, one way to directly quantify the structure of the representational space in the neural population activity is to estimate the neural activity pattern elicited by each sensory stimulus, and calculate the similarity between the patterns corresponding to each pair of stimuli. This analysis of pair-wise similarity between neural activity patterns to different stimuli was named Representational Similarity Analysis (RSA) [11]. In fact, one of the earliest demonstrations of decoding from fMRI data was based on pattern similarity [7]. RSA revealed that the representational structures of natural objects in the inferotemporal (IT) cortex are highly similar between human and monkey [12], and that a continuum in the abstract representation of biological classes exists in human ventral object visual cortex [2]. Because the similarity structure can be estimated from imaging data even without building an encoding model, RSA allows not only for model testing (by comparing the similarity matrix of neural data with the similarity matrix of the feature vectors when stimuli are represented with an encoding model) but also for exploratory study (e.g., by projecting the similarity structure to a low-dimensional space to visualize its structure [11]). Therefore, originally a tool for studying visual representations [2, 16, 10], RSA has recently attracted neuroscientists to explore the neural representational structure in many higher-level cognitive areas [23, 18].

2 Structured noise in pattern estimation translates into bias in RSA

Although RSA is gaining popularity, a few recent studies revealed that in certain circumstances the similarity structure estimated by standard RSA might include a significant bias. For example, the estimated similarity between fMRI patterns of two stimuli is much higher when the stimuli are displayed closer in time [8]. This dependence of pattern similarity on inter-stimulus interval was hypothesized to reflect "temporal drift of pattern" [1], but we believe it may also be due to temporal autocorrelation in fMRI noise. Furthermore, we applied RSA to a dataset from a structured cognitive task (Fig 1A) [19] and found that the highly structured representational similarity matrix obtained from the neural data (Fig 1B,C) is very similar to the matrix obtained when RSA is applied to pure white noise (Fig 1D). Since no task-related similarity structure should exist in white noise while the result in Fig 1D is replicable from noise, this shows that the standard RSA approach can introduce similarity structure not present in the data.

We now provide an analytical derivation to explain the source of both types of bias (patterns closer in time are more similar, and spurious similarity emerges from analyzing pure noise). It is notable that almost all applications of RSA explicitly or implicitly assume that fMRI responses are related to task-related events through a general linear model (GLM):

Y = Xβ + ε.  (1)

Here, Y ∈ R^{n_T × n_S} is the fMRI time series from an experiment with n_T time points from n_S brain voxels.
The experiment involves n_C different conditions (e.g., different sensory stimuli, task states, or mental states), each of which comprises events whose onset time and duration are either controlled by the experimenter or can be measured experimentally (e.g., reaction times). In fMRI, the measured blood oxygen-level dependent (BOLD) response is protracted, such that the response to condition c is modelled as the time course of events in the experimental condition s_c(t) convolved with a typical hemodynamic response function (HRF) h(t). Importantly, each voxel can respond to different conditions with different amplitudes β ∈ R^{n_C × n_S}, and the responses to all conditions are assumed to contribute linearly to the measured signal. Thus, denoting the matrix of HRF-convolved event time courses for each task condition with X ∈ R^{n_T × n_C}, often called the design matrix, the measured Y is assumed to be a linear sum of X weighted by the response amplitudes β, plus zero-mean noise. Each row of β is the spatial response pattern (i.e., the response across voxels) to an experimental condition. The goal of RSA is therefore to estimate the similarity between the rows of β. Because β is unknown, pattern similarity is usually calculated based on the ordinary least squares estimate β̂ = (X^T X)^{-1} X^T Y, and then using Pearson correlation between the rows of β̂ to measure similarity.

[Figure 1 (panel titles: A, Markovian state transition, with states grouped as Enter, Internal and Exit; B, Similarity in brain; C, Low-dimensional projection; D, "Similarity" from noise): Standard RSA introduces bias structure to the similarity matrix. (A) A cognitive task that includes 16 different experimental conditions. Transitions between conditions follow a Markov process. Arrows indicate possible transitions, each with p = 0.5. The task conditions can be grouped into 3 categories (color coded) according to the semantics, or mental operations, required in each condition (the exact meaning of these conditions is not relevant to this paper). (B) Standard RSA of activity patterns corresponding to each condition, estimated from a region of interest (ROI), reveals a highly structured similarity matrix. (C) Converting the similarity matrix C to a distance matrix 1 − C and projecting it to a low-dimensional space using multi-dimensional scaling [13] reveals a highly regular structure. Seeing such a result, one may infer that the representational structure in the ROI is strongly related to the semantic meanings of the task conditions. (D) However, a very similar similarity matrix can also be obtained if one applies standard RSA to pure white noise, with a similar low-dimensional projection (not shown). This indicates that standard RSA can introduce spurious structure in the resulting similarity matrix that does not exist in the data.]

Because calculating sample correlation implies the belief that there exists an underlying covariance structure of β, we examine the source of the bias by focusing on the covariance of β̂ compared to that of the true β. We assume that the β of all voxels in the ROI are indeed random vectors generated from a multivariate Gaussian distribution N(0, U) (the size of U being n_C × n_C). If one knew the true U, similarity measures such as correlation could be derived from it. Substituting the expression for Y from equation 1, we have β̂ = β + (X^T X)^{-1} X^T ε. We assume that the signal β is independent from the noise ε, and therefore also independent from its linear transformation (X^T X)^{-1} X^T ε. Thus the covariance of β̂ is the sum of the true covariance of β and the covariance of (X^T X)^{-1} X^T ε:
β̂ ∼ N(0, U + (X^T X)^{-1} X^T Σ_ε X (X^T X)^{-1}),  (2)

where Σ_ε ∈ R^{n_T × n_T} is the temporal covariance of the noise ε (for illustration purposes, in this section we assume that all voxels have the same noise covariance). The term (X^T X)^{-1} X^T Σ_ε X (X^T X)^{-1} is the source of the bias. Since the covariance of β̂ has this bias term added to the U we are interested in, their sample correlation is also biased. So are many other similarity measures based on β̂, such as Euclidean distance.

The bias term (X^T X)^{-1} X^T Σ_ε X (X^T X)^{-1} depends on both the design matrix and the properties of the noise. It is well known that autocorrelation exists in fMRI noise [24, 22]. Even if we assume that the noise is temporally independent (i.e., Σ_ε is a diagonal matrix, which may be a valid assumption if one "pre-whitens" the data before further analysis [22]), the bias structure still exists but reduces to (X^T X)^{-1} σ², where σ² is the variance of the noise. Diedrichsen et al. [6] realized that the noise in β̂ could contribute to a bias in the correlation matrix but assumed the bias is only in the diagonal of the matrix. However, the bias is a diagonal matrix only if the columns of X (hypothetical fMRI response time courses to different conditions) are orthogonal to each other and if the noise has no autocorrelation. This is rarely the case for most cognitive tasks. In the example in Figure 1A, the transitions between experimental conditions follow a Markov process such that some conditions are always temporally closer than others. Due to the long-lasting HRF, conditions of temporal proximity will have higher correlation in their corresponding columns of X. Such correlation structure in X is the major determinant of the bias structure in this case. On the other hand, if each single stimulus is modelled as a condition in X and regularization is used during regression, the correlation between the β̂ of temporally adjacent stimuli is higher primarily because of the autocorrelation property of the noise. This can be the major determinant of the bias structure in cases such as [8]. It is worth noting that the magnitude of the bias is larger relative to the true covariance structure U when the signal-to-noise ratio (SNR) is lower, or when X has less power (i.e., there are few repetitions of each condition, thus few measurements of the related neural activity), as illustrated later in Figure 2B.

The bias in RSA was not noticed until recently [1, 8], probably because RSA was initially applied to visual tasks in which stimuli are presented many times in a well-randomized order. Such designs made the bias structure close to a diagonal matrix, and researchers typically only focus on off-diagonal elements of a similarity matrix. In contrast, the neural signals in higher-level cognitive tasks are typically weaker than those in visual tasks [9]. Moreover, in many decision-making and memory studies the orders of different task conditions cannot be fully counter-balanced. Therefore, we expect the bias in RSA to be much stronger and highly structured in these cases, misleading researchers and hiding the true (but weaker) representational structure in the data.

One alternative to estimating β̂ using regression as above is to perform RSA on the raw condition-averaged fMRI data (for instance, taking the average signal ~6 sec after the onset of an event as a proxy for β̂). This is equivalent to using a design matrix that assumes a 6-sec delayed single-pulse HRF.
Although here the columns of X are orthogonal by definition, the estimate β̂ is still biased, and so is its covariance, which becomes (X^T X)^{-1} X^T X_true U X_true^T X (X^T X)^{-1} + (X^T X)^{-1} X^T Σ_ε X (X^T X)^{-1} (where X_true is the design matrix reflecting the true HRF in fMRI). See the supplementary material for an illustration of this bias.

3 Maximum likelihood estimation of similarity structure directly from data

As shown in equation 2, the bias in RSA stems from treating the noisy estimate of β as the true β and performing a secondary analysis (correlation) on this noisy estimate. The similarly-structured noise (in terms of the covariance of its generating distribution) in each voxel's β̂ translates into bias in the secondary analysis. Since the bias comes from inferring U indirectly from a point estimate of β, a good way to avoid such bias is to not base the analysis on this point estimate. With a generative model relating U to the measured fMRI data Y, we can avoid the point estimation of the unknown β by marginalizing over it in the likelihood of observing the data. In this section, we propose a method which performs maximum-likelihood estimation of the shared covariance structure U of activity patterns directly from the data.

Our generative model of fMRI data follows most of the assumptions above, but also allows the noise properties and the SNR to vary across voxels. We use an AR(1) process to model the autocorrelation of the noise in each voxel: for the i-th voxel, we denote the noise at time t (> 0) as ε_{t,i} and assume

ε_{t,i} = ρ_i ε_{t−1,i} + η_{t,i},  η_{t,i} ∼ N(0, σ_i²),  (3)

where σ_i² is the variance of the "new" noise and ρ_i is the autoregressive coefficient for the i-th voxel. We assume that the covariance of the Gaussian distribution from which the activity amplitudes β_i of the i-th voxel are generated has a scaling factor that depends on the voxel's SNR s_i:

β_i ∼ N(0, (s_i σ_i)² U).  (4)

This is to reflect the fact that not all voxels in an ROI respond to tasks (voxels covering partially or entirely white matter might have little or no response). Because the magnitude of the BOLD response to a task is determined by the product of the magnitudes of X and β, but s is a hyper-parameter only of β, we henceforth refer to s as pseudo-SNR. We further use the Cholesky decomposition to parametrize the shared covariance structure across voxels: U = LL^T, where L is a lower triangular matrix. Thus, β_i can be written as β_i = s_i σ_i L α_i, where α_i ∼ N(0, I) (this change of parameters allows for estimating a U of less than full rank by setting L as a lower-triangular matrix with a few of its rightmost columns truncated). We then have Y_i − s_i σ_i X L α_i ∼ N(0, Σ_i(σ_i, ρ_i)). Therefore, for the i-th voxel, the likelihood of observing the data Y_i given the parameters is:

p(Y_i | L, σ_i, ρ_i, s_i) = ∫ p(Y_i | L, σ_i, ρ_i, s_i, α_i) p(α_i) dα_i
= ∫ (2π)^{−n_T/2} |Σ_i^{−1}|^{1/2} exp[−(1/2)(Y_i − s_i σ_i X L α_i)^T Σ_i^{−1} (Y_i − s_i σ_i X L α_i)] · (2π)^{−n_C/2} exp[−(1/2) α_i^T α_i] dα_i
= (2π)^{−n_T/2} |Σ_i^{−1}|^{1/2} |Λ_i|^{1/2} exp[(1/2)((s_i σ_i)² Y_i^T Σ_i^{−1} X L Λ_i L^T X^T Σ_i^{−1} Y_i − Y_i^T Σ_i^{−1} Y_i)],  (5)

where Λ_i = (s_i² σ_i² L^T X^T Σ_i^{−1} X L + I)^{−1}, and Σ_i^{−1} is the inverse of the noise covariance matrix of the i-th voxel, which is a function of σ_i and ρ_i (see supplementary material). For simplicity, we assume that the noise for different voxels is independent, which is the common assumption of standard RSA (although see [21]). The likelihood of the whole dataset, including all voxels in an ROI, is then

p(Y | L, σ, ρ, s) = ∏_i p(Y_i | L, σ_i, ρ_i, s_i).  (6)
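As a concrete illustration, the per-voxel term of this likelihood can be evaluated in closed form, as in the sketch below (our variable names; Sigma_inv stands for Σ_i^{−1} and is assumed to be precomputed from σ_i and ρ_i as described in the supplementary material):

```python
import numpy as np

def voxel_log_likelihood(y, X, L, s, sigma, Sigma_inv):
    """Per-voxel marginal log-likelihood of equation (5), alpha integrated out.

    y: (n_T,) one voxel's time series; X: (n_T, n_C) design matrix;
    L: (n_C, n_C) Cholesky factor of U; s, sigma: pseudo-SNR and noise scale;
    Sigma_inv: (n_T, n_T) inverse noise covariance for this voxel.
    """
    n_t = y.shape[0]
    A = (s * sigma) * (X @ L)                           # n_T x n_C
    Lam_inv = A.T @ Sigma_inv @ A + np.eye(L.shape[1])  # Lambda_i^{-1}
    _, logdet_Lam_inv = np.linalg.slogdet(Lam_inv)
    _, logdet_Sigma_inv = np.linalg.slogdet(Sigma_inv)
    b = A.T @ Sigma_inv @ y
    quad = b @ np.linalg.solve(Lam_inv, b) - y @ Sigma_inv @ y
    return 0.5 * (-n_t * np.log(2.0 * np.pi)
                  + logdet_Sigma_inv - logdet_Lam_inv + quad)
```

Summing this quantity over voxels gives the log of equation (6), which is the objective that the optimization below maximizes.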
We can use gradient-based methods to optimize the model, that is, to search for the values of the parameters that maximize the log likelihood of the data. Note that s is determined only up to a scale, because L can be scaled down by a factor and all s_i can be scaled up by the same factor without influencing the likelihood. Therefore, we set the geometric mean of s to be 1 to circumvent this indeterminacy, and fit s and L iteratively. The spatial pattern of s thus only reflects the relative SNR of the different voxels.

Once we obtain L̂, the estimate of L, we can convert the covariance matrix Û = L̂L̂^T into a correlation matrix, which is our estimate of the neural representational similarity. Because U is a hyper-parameter of the activity patterns in our generative model and we estimate it directly from the data, this is an empirical Bayesian approach. We therefore refer to our method as "Bayesian RSA" from now on.

4 Performance of the method

4.1 Reduced bias in recovering the latent covariance structure from simulated data

To test whether the proposed method indeed reduces bias, we simulated fMRI data with a predefined covariance structure and compared the structure recovered by our method with that recovered by standard RSA. Fig 2A shows the hypothetical covariance structure from which we drew β_i for each voxel. The bias structure in Fig 1D is the average structure induced by the design matrices of all participants. To simplify the comparison, we use the design matrices of the experiment experienced by one participant. As a result, the bias structure induced by the design matrix deviates slightly from that in Fig 1D.

As mentioned, the contribution of the bias to the covariance of β̂ depends on both the level of the noise and the power in the design matrix X. The more often each experimental condition is measured during an experiment (roughly speaking, the longer the experiment), the less noisy the estimate β̂, and the less biased the standard RSA is.
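This kind of simulation is easy to reproduce. Below is a minimal sketch (ours; the event timings, HRF shape and parameter values are illustrative stand-ins, not the actual task design of Fig 1A) that generates pure AR(1) noise with no signal, runs standard RSA on it, and recovers a structured "similarity" matrix as in Fig 1D:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_c, n_s = 300, 16, 200  # time points, conditions, voxels

# Design matrix whose columns overlap in time: random event trains convolved
# with a crude HRF stand-in, mimicking a Markov task with a protracted HRF.
events = rng.binomial(1, 0.1, size=(n_t, n_c)).astype(float)
hrf = np.exp(-np.arange(12) / 4.0)
X = np.apply_along_axis(lambda c: np.convolve(c, hrf)[:n_t], 0, events)

def ar1_noise(n_t, rho, sigma):
    """eps_t = rho * eps_{t-1} + eta_t, eta_t ~ N(0, sigma^2), as in eq. (3)."""
    eta = rng.normal(0.0, sigma, size=n_t)
    eps = np.empty(n_t)
    eps[0] = eta[0] / np.sqrt(1.0 - rho ** 2)  # start at stationarity
    for t in range(1, n_t):
        eps[t] = rho * eps[t - 1] + eta[t]
    return eps

# Pure noise: no task-related signal whatsoever.
Y = np.column_stack([ar1_noise(n_t, rho=0.4, sigma=1.5) for _ in range(n_s)])

# Standard RSA: OLS point estimate of beta, then sample correlation of rows.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)  # (X'X)^{-1} X'Y, n_c x n_s
similarity = np.corrcoef(beta_hat)            # structured despite pure noise

# For white noise (Sigma_eps proportional to I), the bias term of equation (2)
# reduces to a multiple of (X'X)^{-1}:
bias = np.linalg.inv(X.T @ X)
```

The off-diagonal structure of `similarity` tracks `bias`, which is exactly the mechanism derived in Section 2.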
[Figure 2 (panel titles: A, Covariance structure of simulated β; B, Recovered covariance structure, in columns standard individual, standard average, Bayesian individual, Bayesian average; C, % of recovered structure not explained by true structure, individual and average, for SNR levels 0.16, 0.31 and 0.63): Bayesian RSA reduces bias in the recovered shared covariance structure of activity patterns. (A) The covariance structure from which we sampled neural activity amplitudes β for each voxel. fMRI data were synthesized by weighting the design matrix of the task from Fig 1A with the simulated β and adding AR(1) noise. (B) The recovered covariance structure for different pseudo-SNRs. Standard individual: covariance calculated directly from β̂, as is done in standard RSA, for one simulated participant. Standard average: average of the covariance matrices of β̂ from 20 simulated participants. Bayesian individual: covariance estimated directly from the data by our method for one simulated participant. Bayesian average: average of the covariance matrices estimated by Bayesian RSA from 20 simulated participants. (C) The ratio of the variation in the recovered covariance structure which cannot be explained by the true covariance structure in Fig 2A. Left: this ratio for covariance matrices from individual simulations (panels 1 and 3 of Fig 2B). Right: this ratio for the average covariance matrices (panels 2 and 4 of Fig 2B). Number of runs: the design matrices of 1, 2, or 4 runs of a participant in the experiment of Fig 1A were used in each simulation, to test the effect of experiment duration. Error bars: standard deviation.]

To evaluate the improvement of our method over standard RSA in different scenarios, we therefore varied two factors: the average SNR of the voxels and the duration of the experiment. 500 voxels were simulated. For each voxel, σ_i was sampled uniformly from [1.0, 3.0], ρ_i was sampled uniformly from [−0.2, 0.6] (our empirical investigation of example fMRI data shows that small negative autoregressive coefficients can occur in white matter), and s_i was sampled uniformly from f · [0.5, 2.0]. The average SNR was manipulated by choosing f from one of three levels {1, 2, 4} in different simulations. The duration of the experiment was manipulated by using the design matrices of run 1, runs 1-2, and runs 1-4 from one participant.

Fig 2B displays the covariance matrices recovered by standard RSA (first two columns) and Bayesian RSA (last two columns), with an experiment duration of approximately 10 minutes (one run; measurement resolution: TR = 2.4 sec). The rows correspond to different levels of average SNR (calculated post hoc by averaging the ratio std(Xβ_i)/σ_i across voxels). Covariance matrices recovered from one simulated participant and the average of the covariance matrices recovered from 20 simulated participants ("average") are displayed. Comparing the shapes of the matrices and the magnitudes of the values (color bars) across rows, one can see that the bias structure in standard RSA is most severe when the SNR is low. Averaging the estimated covariance matrices across simulated participants can reduce noise, but not bias. Comparing between columns, one can see that strong residual structure remains for standard RSA even after averaging, but almost disappears for Bayesian RSA. This is especially apparent at low SNR: the block structure of the true covariance matrix from Figure 2A is almost undetectable for standard RSA even after averaging (column 2, row 1 of Fig 2B), but emerges after averaging for Bayesian RSA (column 4, row 1 of Fig 2B). Fig 2C compares the proportion of the variation in the recovered covariance structure that cannot be explained by the true structure in Fig 2A, for different levels of SNR and different experiment durations, for individual simulated participants and for averaged results. This comparison confirms that the covariance recovered by Bayesian RSA deviates much less from the true covariance matrix than that recovered by standard RSA, and that the deviation observed in an individual participant can be reduced considerably by averaging over multiple participants (compare the left and right panels of Fig 2C for Bayesian RSA).

4.2 Application to real data: simultaneous estimation of neural representational similarity and of the spatial locations supporting the representation

In addition to reducing bias in the estimation of representational similarity, our method also has an advantage over standard RSA: it estimates the pseudo-SNR map s. This map reveals the locations within the ROI that support the identified representational structure. When a researcher looks into an anatomically defined ROI, it is often the case that only some of the voxels respond to the task conditions. In standard RSA, β̂ in voxels with little or no response to the task is dominated by structured noise following the bias covariance structure (X^T X)^{-1} X^T Σ_ε X (X^T X)^{-1}, but all voxels are taken into account equally in the analysis.
In contrast, s_i in our model is a hyper-parameter learned directly from the data: if a voxel does not respond to any condition of the task, s_i will be small, and the contribution of the voxel to the total log likelihood will be small. The fitting of the shared covariance structure is thus less influenced by this voxel.

From our simulated data, we found that the parameters of the noise (σ and ρ) can be recovered reliably with small variance. However, the estimate of s had large variance around the true values used in the simulation. One approach to reducing the variance of the estimate is to harness prior knowledge about the data. Voxels supporting similar representations of sensory inputs or tasks tend to cluster together spatially. Therefore, we used a Gaussian process to impose a smooth prior on log(s) [17]. Specifically, for any two voxels i and j, we assumed cov(log(s_i), log(s_j)) = b² exp(−(x_i − x_j)^T (x_i − x_j) / (2 l_space²) − (I_i − I_j)² / (2 l_inten²)), where x_i and x_j are the spatial coordinates of the two voxels and I_i and I_j are the average intensities of the fMRI signals of the two voxels (this kernel is sketched in code at the end of this subsection). Intuitively, this means that if two voxels are close together and have similar signal intensity (that is, they are of the same tissue type), then they should have similar SNR. Such a Gaussian process kernel imposes spatial smoothness but also allows the pseudo-SNR to change quickly at tissue boundaries. The variance b² of the Gaussian process and the length scales l_space and l_inten were fitted together with the other parameters by maximizing the joint log likelihood of all the parameters (here again, we restrict the geometric mean of s to be 1).

[Figure 3 (panel titles: A, Subjectively judged similarity; B, Similarity in IT by Bayesian RSA; C, Map of pseudo-SNR; the six animal categories are lemur, lunamoth, mallard, monkey, ladybug and warbler): Bayesian RSA estimates both the representational similarity structure from fMRI data and the spatial map supporting the learned representation. (A) Similarity between 6 animal categories, as judged behaviorally (reproduced from [2]). (B) Average representational similarity estimated from IT cortex across all participants of [2], using our approach. The estimated structure resembles the subjectively-reported structure. (C) Pseudo-SNR map in IT cortex for one participant. Red: high pseudo-SNR; green: low pseudo-SNR. Only small clusters of voxels show high pseudo-SNR.]

We applied our method to the dataset of Connolly et al. (2012) [2]. In their experiment, participants viewed images of animals from 6 different categories during an fMRI scan and rated the similarity between the animals outside the scanner. The fMRI time series were pre-processed in the same way as in their work [2]. Inferior temporal (IT) cortex is generally considered to be a late stage of the ventral pathway of the visual system, in which object identity is represented. Fig 3 shows the similarity judged by the participants and the average similarity matrix estimated from IT cortex, which shows a similar structure but higher correlations between the animal classes. Interestingly, the pseudo-SNR map shows that only part of the anatomically-defined ROI supports the representational structure.
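A sketch (ours, with our variable names) of the Gaussian process covariance described above, which combines spatial proximity and signal-intensity similarity so that smoothness is not enforced across tissue boundaries:

```python
import numpy as np

def log_snr_prior_cov(coords, intens, b, l_space, l_inten):
    """cov(log s_i, log s_j) = b^2 * exp(-||x_i - x_j||^2 / (2 l_space^2)
                                         - (I_i - I_j)^2 / (2 l_inten^2))."""
    d2_space = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    d2_inten = (intens[:, None] - intens[None, :]) ** 2
    return b ** 2 * np.exp(-d2_space / (2.0 * l_space ** 2)
                           - d2_inten / (2.0 * l_inten ** 2))

# coords: (n_voxels, 3) voxel coordinates; intens: (n_voxels,) mean intensity.
```

Two nearby voxels of the same tissue type receive a covariance close to b², while the intensity term lets the prior relax where the mean signal changes abruptly.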
5 Discussion

In this paper, we demonstrated that representational similarity analysis, a popular method in many recent fMRI studies, suffers from a bias. We showed analytically that this bias is contributed by both the structure of the experiment design and the covariance structure of the measurement and neural noise. The bias is induced because standard RSA analyzes noisy estimates of neural activation levels, and the structured noise in the estimates turns into bias. This bias is especially severe when the SNR is low and when the order of task conditions cannot be fully counterbalanced. To overcome this bias, we proposed a Bayesian framework for the fMRI data, incorporating the representational structure as the shared covariance structure of activity levels across voxels. Our Bayesian RSA method estimates this covariance structure directly from the data, avoiding the structured noise in point estimates of activity levels. Our method can be applied to neural recordings from other modalities as well. Using simulated data, we showed that, compared to standard RSA, the covariance structure estimated by our method deviates much less from the true covariance structure, especially for low SNR and short experiments. Furthermore, our method has the advantage of taking into account the variation in SNR across voxels. In future work, we will use the pseudo-SNR map and the covariance structure learned from the data jointly as an empirical prior to constrain the estimation of the activation levels β. We believe that such structured priors learned directly from the data can potentially provide more accurate estimates of neural activation patterns, the bread and butter of fMRI analyses.

A number of approaches have recently been proposed to deal with the bias structure in RSA, such as using the correlation or Mahalanobis distance between neural activity patterns estimated from separate fMRI scans instead of from the same fMRI scan, or modeling the bias structure as a diagonal matrix or by a Taylor expansion of an unknown function of inter-event intervals [1, 21, 6]. Such approaches have different limitations. The correlation between patterns estimated from different scans [1] is severely underestimated if the SNR is low (for example, unless there is zero noise, the correlation between the neural patterns corresponding to the same condition estimated from different fMRI scans is always smaller than 1, while the true patterns should presumably be the same across scans in order for such an analysis to be justified). A similar problem exists for the Mahalanobis distance between patterns estimated from different scans [21]: with noise in the data, it is not guaranteed that the distance between patterns of the same condition estimated from separate scans is smaller than the distance between patterns of different conditions. Such a result cannot be interpreted as a measure of "similarity" because, theoretically, neural patterns should be more similar if they belong to the same condition than if they belong to different conditions. Our approach does not suffer from such limitations, because we directly estimate a covariance structure, which can always be converted to a correlation matrix. Modeling the bias as a diagonal matrix [6] is not sufficient, as the bias can be far from diagonal, as shown in Fig 1D. A Taylor expansion of the bias covariance structure as a function of inter-event intervals can potentially account for off-diagonal elements of the bias structure, but it risks removing structure in the true covariance matrix if that structure happens to co-vary with inter-event intervals, and it becomes complicated to set up if conditions repeat multiple times [1].
One limitation of our model is the assumption that the noise is spatially independent. Henriksson et al. [8] suggested that global fluctuations of fMRI time series over large areas (which are reflected as spatial correlation) might contribute largely to their RSA patterns. This might also be the reason that the overall correlation in Fig 1B is higher than the bias obtained from standard RSA on independent Gaussian noise (Fig 1D). Our future work will explicitly incorporate such global fluctuations of the noise.

Acknowledgement

This publication was made possible through the support of grants from the John Templeton Foundation and the Intel Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. JWP was supported by grants from the McKnight Foundation, the Simons Collaboration on the Global Brain (SCGB AWD1004351) and the NSF CAREER Award (IIS-1150186). We thank Andrew C. Connolly and colleagues for sharing the data used in Section 4.2. The data used in the supplementary material were obtained from the MGH-USC Human Connectome Project (HCP) database.

References

[1] A. Alink, A. Walther, A. Krugliak, J. J. van den Bosch, and N. Kriegeskorte. Mind the drift - improving sensitivity to fMRI pattern information by accounting for temporal pattern drift. bioRxiv, page 032391, 2015.
[2] A. C. Connolly, J. S. Guntupalli, J. Gors, M. Hanke, Y. O. Halchenko, Y.-C. Wu, H. Abdi, and J. V. Haxby. The representation of biological classes in the human brain. The Journal of Neuroscience, 32(8):2608–2618, 2012.
[3] R. W. Cox. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29(3):162–173, 1996.
[4] T. Davis and R. A. Poldrack. Measuring neural representations with fMRI: practices and pitfalls. Annals of the New York Academy of Sciences, 1296(1):108–134, 2013.
[5] R. C. deCharms and A. Zador. Neural representation and the cortical code. Annual Review of Neuroscience, 23(1):613–647, 2000.
[6] J. Diedrichsen, G. R. Ridgway, K. J. Friston, and T. Wiestler. Comparing the similarity and spatial structure of neural representations: a pattern-component model. NeuroImage, 55(4):1665–1678, 2011.
[7] J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425–2430, 2001.
[8] L. Henriksson, S.-M. Khaligh-Razavi, K. Kay, and N. Kriegeskorte. Visual representations are dominated by intrinsic fluctuations correlated between areas. NeuroImage, 114:275–286, 2015.
[9] P. Jezzard, P. Matthews, and S. Smith. Functional magnetic resonance imaging: An introduction to methods, 2003.
[10] D. J. Kravitz, C. S. Peng, and C. I. Baker. Real-world scene representations in high-level visual cortex: it's the spaces more than the places. The Journal of Neuroscience, 31(20):7322–7333, 2011.
[11] N. Kriegeskorte, M. Mur, and P. A. Bandettini. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:4, 2008.
[12] N. Kriegeskorte, M. Mur, D. A. Ruff, R. Kiani, J. Bodurka, H. Esteky, K. Tanaka, and P. A. Bandettini. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6):1126–1141, 2008.
[13] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.
[14] S. Nishimoto, A. T.
Vu, T. Naselaris, Y. Benjamini, B. Yu, and J. L. Gallant. Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19):1641–1646, 2011.
[15] K. A. Norman, S. M. Polyn, G. J. Detre, and J. V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424–430, 2006.
[16] M. V. Peelen and A. Caramazza. Conceptual object representations in human anterior temporal cortex. The Journal of Neuroscience, 32(45):15728–15736, 2012.
[17] C. E. Rasmussen. Gaussian Processes for Machine Learning. 2006.
[18] M. Ritchey, E. A. Wing, K. S. LaBar, and R. Cabeza. Neural similarity between encoding and retrieval is related to memory via hippocampal interactions. Cerebral Cortex, page bhs258, 2012.
[19] N. W. Schuck, M. B. Cai, R. C. Wilson, and Y. Niv. Human orbitofrontal cortex represents a cognitive map of state space. Neuron, 91:1–11, 2016.
[20] E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193–1216, 2001.
[21] A. Walther, H. Nili, N. Ejaz, A. Alink, N. Kriegeskorte, and J. Diedrichsen. Reliability of dissimilarity measures for multi-voxel pattern analysis. NeuroImage, 2015.
[22] M. W. Woolrich, B. D. Ripley, M. Brady, and S. M. Smith. Temporal autocorrelation in univariate linear modeling of fMRI data. NeuroImage, 14(6):1370–1386, 2001.
[23] G. Xue, Q. Dong, C. Chen, Z. Lu, J. A. Mumford, and R. A. Poldrack. Greater neural pattern similarity across repetitions is associated with better memory. Science, 330(6000):97–101, 2010.
[24] E. Zarahn, G. K. Aguirre, and M. D'Esposito. Empirical analyses of BOLD fMRI statistics. NeuroImage, 5(3):179–197, 1997.
Average-case hardness of RIP certification

Tengyao Wang, Centre for Mathematical Sciences, Cambridge, CB3 0WB, United Kingdom, t.wang@statslab.cam.ac.uk
Quentin Berthet, Centre for Mathematical Sciences, Cambridge, CB3 0WB, United Kingdom, q.berthet@statslab.cam.ac.uk
Yaniv Plan, 1986 Mathematics Road, Vancouver, BC V6T 1Z2, Canada, yaniv@math.ubc.ca

Abstract

The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP given an arbitrary matrix. We consider the framework of average-case certifiers, which never wrongly declare that a matrix is RIP, while being often correct for random instances. While there are such functions which are tractable in a suboptimal parameter regime, we show that this is a computationally hard task in any better regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.

Introduction

In many areas of data science, high-dimensional signals contain rich structure. It is of great interest to leverage this structure to improve our ability to describe characteristics of the signal and to make future predictions. Sparsity is a structure of wide applicability (see, e.g., Mallat, 1999; Rauhut and Foucart, 2013; Eldar and Kutyniok, 2012), with a broad literature dedicated to its study in various scientific fields. The sparse linear model takes the form y = Xθ + ε, where y ∈ R^n is a vector of observations, X ∈ R^{n×p} is a design matrix, ε ∈ R^n is noise, and the vector θ ∈ R^p is assumed to have a small number k of non-zero entries. Estimating θ, or the mean response Xθ, are among the most widely studied problems in signal processing, as well as in statistical learning.

In high-dimensional problems, one would wish to recover θ with as few observations as possible. For an incoherent design matrix, it is known that on the order of k² observations suffice (Donoho, Elad and Temlyakov, 2006; Donoho and Elad, 2003). However, this appears to require a number of observations far exceeding the information content of θ, which has only k variables, albeit with unknown locations. This dependence on k can be greatly improved by using design matrices that are almost isometries on some low-dimensional subspaces, i.e., matrices that satisfy the restricted isometry property with parameters k and δ, or RIP(k, δ) (see Definition 1.1). It is a highly robust property, and in fact implies that many different polynomial-time methods, such as greedy methods (Blumensath and Davies, 2009; Needell and Tropp, 2009; Dai and Milenkovic, 2009) and convex optimization (Candès, 2008; Candès, Romberg and Tao, 2006b; Candès and Tao, 2005), are stable in recovering θ.

Random matrices are known to satisfy the RIP when the number n of observations is more than about k log(p)/δ². These results were developed in the field of compressed sensing (Candès, Romberg and Tao, 2006a; Donoho, 2006; Rauhut and Foucart, 2013; Eldar and Kutyniok, 2012), where the use of randomness still remains pivotal for near-optimal results.
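Recall that X satisfies RIP(k, δ) when (1 − δ)‖v‖₂² ≤ ‖Xv‖₂² ≤ (1 + δ)‖v‖₂² for every k-sparse v. Checking this over all supports is intractable, but sampling supports cheaply bounds the restricted isometry constant from below; the sketch below (ours, for illustration) can therefore refute RIP, yet never certify it, which already hints at the asymmetry studied in this paper:

```python
import numpy as np

def rip_constant_lower_bound(X, k, n_trials=1000, rng=None):
    """Monte Carlo lower bound on the restricted isometry constant delta_k.

    Samples random k-subsets S of columns and records the extreme singular
    values of X[:, S]; the true delta_k is at least the returned value.
    """
    rng = rng or np.random.default_rng()
    n, p = X.shape
    delta = 0.0
    for _ in range(n_trials):
        S = rng.choice(p, size=k, replace=False)
        svals = np.linalg.svd(X[:, S], compute_uv=False)
        delta = max(delta, svals[0] ** 2 - 1.0, 1.0 - svals[-1] ** 2)
    return delta

# A Gaussian design with entries N(0, 1/n) satisfies RIP(k, delta) with high
# probability once n is on the order of k * log(p) / delta^2.
n, p, k = 256, 1024, 10
X = np.random.default_rng(0).normal(size=(n, p)) / np.sqrt(n)
print(rip_constant_lower_bound(X, k))
```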
Properties related to the conditioning of design matrices have also been shown to play a key role in the statistical properties of computationally efficient estimators of $\beta$ (Zhang, Wainwright and Jordan, 2014). While the assumption of randomness allows great theoretical leaps, it leaves open questions for practitioners. Scientists working on data closely following this model cannot always choose their design matrix $X$, or at least choose one that is completely random. Moreover, it is in general practically impossible to check that a given matrix satisfies these desired properties, as RIP certification is NP-hard (Bandeira et al., 2012). Having access to a function, or statistic, of $X$ that could be easily computed, and which determines how well $\beta$ may be estimated, would therefore be of great help. The search for such statistics has been of great importance for over a decade now, and several have been proposed (d'Aspremont and El Ghaoui, 2011; Lee and Bresler, 2008; Juditsky and Nemirovski, 2011; d'Aspremont, Bach and El Ghaoui, 2008). Perhaps the simplest and most popular is the incoherence parameter, which measures the maximum inner product between distinct, normalized columns of $X$. However, all of these are known to necessarily fail to guarantee good recovery when $p \ge 2n$ unless $n$ is of order $k^2$ (d'Aspremont and El Ghaoui, 2011). Given a specific problem instance, the strong recovery guarantees of compressed sensing cannot be verified based on these statistics.

In this article, we study the problem of average-case certification of the restricted isometry property (RIP). A certifier takes as input a design matrix $X$, always outputs "false" when $X$ does not satisfy the property, and outputs "true" for a large proportion of matrices (see Definition 2.1). Indeed, worst-case hardness does not preclude a problem from being solvable for most instances. The link between restricted isometry and incoherence implies that polynomial time certifiers exist in a regime where $n$ is of order $k^2 \log(p)/\delta^2$. It is natural to ask whether the RIP can be certified for sample size $n \gg k \log(p)/\delta^2$, where most matrices (with respect to, say, the Gaussian measure) are RIP. If it can, this would also provide a Las Vegas algorithm to construct RIP design matrices of optimal sizes. This should be compared with the currently existing limitations for the deterministic construction of RIP matrices. Our main result is that certification in this sense is hard even in a near-optimal regime, assuming a new, weaker assumption on detecting dense subgraphs, related to the Planted Clique hypothesis.

Theorem (Informal). For any $\alpha < 1$, there is no computationally efficient, average-case certifier for the class $\mathrm{RIP}_{n,p}(k, \delta)$ uniformly over an asymptotic regime where $n \gg k^{1+\alpha}/\delta^2$.

This suggests that even in the average case, RIP certification requires almost $k^2 \log(p)/\delta^2$ observations. This contrasts highly with the fact that a random matrix satisfies RIP with high probability when $n$ exceeds about $k \log(p)/\delta^2$. Thus, there appears to be a large gap between what a practitioner may be able to certify given a specific problem instance, and what holds for a random matrix. On the other hand, if a certifier is found which fills this gap, the result would not only have huge practical implications in compressed sensing and statistical learning, but would also disprove a long-standing conjecture from computational complexity theory.
We focus solely on the restricted isometry property, but other conditions under which compressed sensing is possible are also known. Extending our results to the restricted eigenvalue condition of Bickel, Ritov and Tsybakov (2009) or other conditions (see van de Geer and Buhlmann, 2009, and references therein) is an interesting path for future research.

Our result shares many characteristics with a hypothesis by Feige (2002) on the hardness of refuting random satisfiability formulas. Indeed, our statement is also about the hardness of verifying that a property holds for a particular instance (RIP for design matrices, instead of unsatisfiability for boolean formulas). It concerns a regime where such a property should hold with high probability ($n$ of order $k^{1+\alpha}/\delta^2$; linear regime for satisfiability), cautiously allowing only one type of error, false negatives, for a problem that is hard in the worst case. In these two examples, such certifiers exist in a sub-optimal regime.

Our problem is conceptually different from results regarding the worst-case hardness of certifying this property (see, e.g. Bandeira et al., 2012; Koiran and Zouzias, 2012; Tillmann and Pfetsch, 2014). It is closer to another line of work concerned with computational lower bounds for statistical learning problems based on average-case assumptions. The planted clique assumption has been used to prove computational hardness results for statistical problems such as estimation and testing of sparse principal components (Berthet and Rigollet, 2013a,b; Wang, Berthet and Samworth, 2016), testing and localization of submatrix signals (Ma and Wu, 2013; Chen and Xu, 2014), community detection (Hajek, Wu and Xu, 2015) and sparse canonical correlation analysis (Gao, Ma and Zhou, 2014). The intractability of the noisy parity recovery problem (Blum, Kalai and Wasserman, 2003) has also been used recently as an average-case assumption to deduce computational hardness of detection of satisfiability formulas with lightly planted solutions (Berthet and Ellenberg, 2015). Additionally, several unconditional computational hardness results are shown for statistical problems under constraints of learning models (Feldman et al., 2013).

The present work has two main differences compared to previous computational lower bound results. First, in a detection setting, these lower bounds concern two specific distributions (for the null and alternative hypothesis), while ours is valid for all sub-Gaussian distributions, and there is no alternative distribution. Secondly, our result is not based on the usual assumption for the Planted Clique problem. Instead, we use a weaker assumption on a problem of detecting planted dense graphs. This does not mean that the planted graph is a random graph with edge probability $q > 1/2$ as considered in Arias-Castro and Verzelen (2013), Bhaskara et al. (2010) and Awasthi et al. (2015), but that it can be any graph with an unexpectedly high number of edges (see Section 3.1). This choice is made to strengthen our result: it would "survive" the discovery of an algorithm that would use very specific properties of cliques (or even of random dense graphs) to detect their presence. As a consequence, the analysis of our reduction is more technically complicated.

Our work is organized in the following manner: we recall in Section 1 the definition of the restricted isometry property, and some of its known properties.
In Section 2, we define the notion of certifier, and prove the existence of a computationally efficient certifier in a sub-optimal regime. Our main result is developed in Section 3, focused on the hardness of average-case certification. The proofs of the main results are in Appendix A of the supplementary material and those of auxiliary results in Appendix B of the supplementary material.

1 Restricted Isometry Property

1.1 Formulation

We use the definition of Candès and Tao (2005), who introduced this notion. Below, for a vector $u \in \mathbb{R}^p$, we define $\|u\|_0$ to be the number of its non-zero entries.

Definition (RIP). A matrix $X \in \mathbb{R}^{n \times p}$ satisfies the restricted isometry property with sparsity $k \in \{1, \ldots, p\}$ and distortion $\delta \in (0, 1)$, denoted by $X \in \mathrm{RIP}_{n,p}(k, \delta)$, if it holds that

$1 - \delta \le \|Xu\|_2^2 \le 1 + \delta$, for every $u \in S^{p-1}(k) := \{u \in \mathbb{R}^p : \|u\|_2 = 1, \|u\|_0 \le k\}$.

This can be equivalently defined by a property on submatrices of the design matrix: $X$ is in $\mathrm{RIP}_{n,p}(k, \delta)$ if and only if, for any set $S$ of $k$ columns of $X$, the submatrix $X_{\cdot S}$ formed by taking these columns is almost an isometry, i.e. if the spectrum of its Gram matrix is contained in the interval $[1 - \delta, 1 + \delta]$:

$\|X_{\cdot S}^\top X_{\cdot S} - I_k\|_{\mathrm{op}} \le \delta.$

Denote by $\|\cdot\|_{\mathrm{op},k}$ the $k$-sparse operator norm, defined for a matrix $A$ as $\|A\|_{\mathrm{op},k} = \sup_{x \in S^{p-1}(k)} \|Ax\|_2$. This yields another equivalent formulation of the RIP property: $X \in \mathrm{RIP}_{n,p}(k, \delta)$ if and only if

$\|X^\top X - I_p\|_{\mathrm{op},k} \le \delta.$

We assume in the following discussion that the distortion parameter $\delta$ is upper-bounded by 1. For $v \in \mathbb{R}^p$ and $T \subseteq \{1, \ldots, p\}$, we write $v_T$ for the $\#T$-dimensional vector obtained by restricting $v$ to coordinates indexed by $T$. Similarly, for an $n \times p$ matrix $A$ and subsets $S \subseteq \{1, \ldots, n\}$ and $T \subseteq \{1, \ldots, p\}$, we write $A_{S\cdot}$ for the submatrix obtained by restricting $A$ to rows indexed by $S$, and $A_{\cdot T}$ for the submatrix obtained by restricting $A$ to columns indexed by $T$.

1.2 Generation via random design

Matrices that satisfy the restricted isometry property have many interesting applications in high-dimensional statistics and compressed sensing. However, there is no known way to generate them deterministically in general. It is even NP-hard to check whether a given matrix $X$ belongs to $\mathrm{RIP}_{n,p}(k, \delta)$ (see, e.g. Bandeira et al., 2012). Several deterministic constructions of RIP matrices exist for sparsity level $k \lesssim \sqrt{n}$. For example, using equiangular tight frames and Gershgorin's circle theorem, one can construct RIP matrices with sparsity $k \asymp \sqrt{n}$ and distortion $\delta$ bounded away from 0 (see, e.g. Bandeira et al., 2012). The limitation $k \lesssim \sqrt{n}$ is known as the "square root bottleneck". To date, the only constructions that break the "square root bottleneck" are due to Bourgain et al. (2011) and Bandeira, Mixon and Moreira (2014), both of which give an RIP guarantee for $k$ of order $n^{1/2 + \epsilon}$ for some small $\epsilon > 0$ and fixed $\delta$ (the latter construction is conditional on a number-theoretic conjecture being true). Interestingly though, it is easy to generate large matrices satisfying the restricted isometry property through random design, and compared to the fixed design matrices mentioned in the previous paragraph, these random design constructions are much less restrictive on the sparsity level, typically allowing $k$ up to the order $n/\log(p)$ (assuming $\delta$ is bounded away from zero). They can be constructed easily from any centred sub-Gaussian distribution. We recall that a distribution $Q$ (and its associated random variable) is said to be sub-Gaussian with parameter $\sigma$ if $\int_{\mathbb{R}} e^{\lambda x}\, dQ(x) \le e^{\sigma^2 \lambda^2 / 2}$ for all $\lambda \in \mathbb{R}$.
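To make the two equivalent formulations above concrete, the following is a minimal sketch (our illustration, not code from the paper) that checks RIP$(k, \delta)$ by brute force: it enumerates every set $S$ of $k$ columns and tests whether the spectrum of the Gram matrix $X_{\cdot S}^\top X_{\cdot S}$ lies in $[1 - \delta, 1 + \delta]$. The enumeration over all $\binom{p}{k}$ column subsets is exactly what makes exact verification intractable beyond toy sizes.

```python
import itertools
import numpy as np

def is_rip(X, k, delta):
    """Brute-force check of RIP(k, delta) via the submatrix formulation:
    for every set S of k columns, the eigenvalues of X_S^T X_S must lie
    in [1 - delta, 1 + delta]. Exponential in k; RIP certification is
    NP-hard in general, so this is viable only at toy sizes."""
    n, p = X.shape
    for S in itertools.combinations(range(p), k):
        eigs = np.linalg.eigvalsh(X[:, S].T @ X[:, S])
        if eigs[0] < 1 - delta or eigs[-1] > 1 + delta:
            return False
    return True

# A random design with N(0, 1/n) entries (one instance of the normalized
# sub-Gaussian construction described above):
rng = np.random.default_rng(0)
n, p, k, delta = 60, 20, 3, 0.5
X = rng.standard_normal((n, p)) / np.sqrt(n)
print(is_rip(X, k, delta))
```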
Definition. Define $\mathcal{Q} = \mathcal{Q}_\sigma$ to be the set of sub-Gaussian distributions $Q$ over $\mathbb{R}$ with zero mean, unit variance, and sub-Gaussian parameter at most $\sigma$.

The most common choice for a $Q \in \mathcal{Q}$ is the standard normal distribution $N(0, 1)$. Note that by Taylor expansion, for any $Q \in \mathcal{Q}$, we necessarily have $\sigma^2 \ge \int_{\mathbb{R}} x^2\, dQ(x) = 1$. In the rest of the paper, we treat $\sigma$ as fixed. Define the normalized distribution $\bar{Q}$ to be the distribution of $Z/\sqrt{n}$ for $Z \sim Q$. The following well-known result states that, by concentration of measure, random matrices generated with distribution $\bar{Q}^{\otimes(n \times p)}$ satisfy restricted isometries (see, e.g. Candès and Tao (2005) and Baraniuk et al. (2008)). For completeness, we include a proof that establishes the particular constants stated here. All proofs are deferred to Appendix A or Appendix B of the supplementary material.

Proposition 1. Suppose $X$ is a random matrix with distribution $\bar{Q}^{\otimes(n \times p)}$, where $Q \in \mathcal{Q}$. It holds that

$\mathbb{P}\bigl(X \in \mathrm{RIP}_{n,p}(k, \delta)\bigr) \ge 1 - 2\exp\Bigl(k \log\Bigl(\frac{9ep}{k}\Bigr) - \frac{n\delta^2}{256\sigma^4}\Bigr).$  (1)

In order to clarify the notion of asymptotic regimes used in this paper, we introduce the following definition.

Definition. For $0 \le \alpha \le 1$, define the asymptotic regime

$\mathcal{R}_\alpha := \Bigl\{(p_n, k_n, \delta_n)_n : p, k \to \infty \text{ and } n \gg \frac{k_n^{1+\alpha} \log(p_n)}{\delta_n^2}\Bigr\}.$

We note that in this notation, Proposition 1 implies that for $(p, k, \delta) = (p_n, k_n, \delta_n) \in \mathcal{R}_0$ we have $\lim_{n \to \infty} \bar{Q}^{\otimes(n \times p)}(X \in \mathrm{RIP}_{n,p}(k, \delta)) = 1$, and this convergence is uniform over $Q \in \mathcal{Q}$.

2 Certification of Restricted Isometry

2.1 Objectives and definition

In practice, it is useful to know with certainty whether a particular realization of a random design matrix satisfies the RIP condition. It is known that the problem of deciding if a given matrix is RIP is NP-hard (Bandeira et al., 2012). However, NP-hardness is only a statement about worst-case instances. It would still be of great use to have an algorithm that can correctly decide the RIP property for an average instance of a design matrix, with some accuracy. Such an algorithm should identify a high proportion of RIP matrices generated through random design and make no false positive claims. We call such an algorithm an average-case certifier, or a certifier for short.

Definition (Certifier). Given a parameter sequence $(p, k, \delta) = (p_n, k_n, \delta_n)$, we define a certifier for $\bar{Q}^{\otimes(n \times p)}$-random matrices to be a sequence $(\psi_n)_n$ of measurable functions $\psi_n : \mathbb{R}^{n \times p} \to \{0, 1\}$ such that

$\psi_n^{-1}(1) \subseteq \mathrm{RIP}_{n,p}(k, \delta)$ and $\limsup_{n \to \infty} \bar{Q}^{\otimes(n \times p)}\bigl(\psi_n^{-1}(0)\bigr) \le 1/3.$  (2)

Note that the definition of a certifier depends on both the asymptotic parameter sequence $(p_n, k_n, \delta_n)$ and the sub-Gaussian distribution $Q$. However, when it is clear from the context, we will suppress the dependence and refer to certifiers for $\mathrm{RIP}_{n,p}(k, \delta)$ properties of $\bar{Q}^{\otimes(n \times p)}$-random matrices simply as "certifiers". The two defining properties in (2) can be understood as follows. The first condition means that if a certifier outputs 1, we know with certainty that the matrix is RIP. The second condition means that the certifier is not overly conservative; it is allowed to output 0 for at most one third (with respect to the $\bar{Q}^{\otimes(n \times p)}$ measure) of the matrices. The choice of 1/3 in the definition of a certifier is made to simplify proofs. However, all subsequent results will still hold if we replace 1/3 by any constant in $(0, 1)$. In view of Proposition 1, the second condition in (2) can be equivalently stated as

$\liminf_{n \to \infty} \bar{Q}^{\otimes(n \times p)}\bigl(\psi_n(X) = 1 \mid X \in \mathrm{RIP}_{n,p}(k, \delta)\bigr) \ge 2/3.$
With such a certifier, given an arbitrary problem fitting the sparse linear model, the matrix $X$ could be tested for the restricted isometry property, with some expectation of a positive result. This would be particularly interesting given a certifier in the parameter regime $n \ll k_n^2/\delta_n^2$, in which presently known polynomial-time certifiers cannot give positive results. Even though it is not the main focus of our paper, we also note that a certifier $\psi$ with the above properties for some distribution $Q \in \mathcal{Q}$ would form a certifier/distribution couple $(\psi, Q)$, which yields in the usual manner a Las Vegas algorithm to generate RIP matrices. The (random) algorithm keeps generating random matrices $X \sim \bar{Q}^{\otimes(n \times p)}$ until $\psi_n(X) = 1$. The number of times that the certifier is invoked has a geometric distribution with success probability $\bar{Q}^{\otimes(n \times p)}(\psi_n^{-1}(1))$. Hence, the Las Vegas algorithm runs in randomized polynomial time if and only if $\psi_n$ runs in randomized polynomial time.

2.2 Certifier properties

Although our focus is on algorithmically efficient certifiers, we establish first the properties of a certifier that is computationally intractable. This certifier serves as a benchmark for the performance of other candidates. Indeed, we exhibit in the following proposition a certifier, based on the $k$-sparse operator norm, that works uniformly well in the same asymptotic parameter regime $\mathcal{R}_0$ where $\bar{Q}^{\otimes(n \times p)}$-random matrices are RIP with asymptotic probability 1. For clarity, we stress that our criterion when judging a certifier will always be its uniform performance over asymptotic regimes $\mathcal{R}_\alpha$ for some $\alpha \in [0, 1]$.

Proposition 2. Suppose $(p, k, \delta) = (p_n, k_n, \delta_n) \in \mathcal{R}_0$. Furthermore, let $Q \in \mathcal{Q}$ and $X \sim \bar{Q}^{\otimes(n \times p)}$. Then the sequence of tests $(\psi_{\mathrm{op},k})_n$ based on sparse operator norms, defined by

$\psi_{\mathrm{op},k}(X) := \mathbb{1}\bigl\{\|X^\top X - I_p\|_{\mathrm{op},k} \le \delta\bigr\},$

is a certifier for $\bar{Q}^{\otimes(n \times p)}$-random matrices.

By a direct reduction from the clique problem, one can show that it is NP-hard to compute the $k$-sparse operator norm of a matrix. Hence the certifier $\psi_{\mathrm{op},k}$ is computationally intractable. The next proposition concerns the certifier property of a test based on the maximum incoherence between columns of the design matrix. It follows directly from a well-known result on the incoherence parameter of a random matrix (see, e.g. Rauhut and Foucart (2013, Proposition 6.2)) and allows the construction of a polynomial-time certifier that works uniformly well in the asymptotic parameter regime $\mathcal{R}_1$.

Proposition 3. Suppose $(p, k, \delta) = (p_n, k_n, \delta_n)$ satisfies $n \ge 196\sigma^4 k^2 \log(p)/\delta^2$. Let $Q \in \mathcal{Q}$ and $X \sim \bar{Q}^{\otimes(n \times p)}$. Then the test $\psi_\infty$ defined by

$\psi_\infty(X) := \mathbb{1}\Bigl\{\|X^\top X - I_p\|_\infty \le 14\sigma^2 \sqrt{\frac{\log(p)}{n}}\Bigr\}$

is a certifier for $\bar{Q}^{\otimes(n \times p)}$-random matrices.

Proposition 3 shows that, when the sample size $n$ is above $k^2 \log(p)/\delta^2$ in magnitude (in particular, this is satisfied asymptotically when $(p, k, \delta) = (p_n, k_n, \delta_n) \in \mathcal{R}_1$), there is a polynomial time certifier. In other words, in this high-signal regime, the average-case decision problem for the RIP property is much more tractable than indicated by the worst-case result. On the other hand, the certifier in Proposition 3 works in a much smaller parameter range when compared to $\psi_{\mathrm{op},k}$ in Proposition 2. Combining Propositions 2 and 3, we have the following schematic diagram (Figure 1). When the sample size is lower than specified in $\mathcal{R}_0$, the property does not hold, with high probability, and no certifier exists. A computationally intractable certifier works uniformly over $\mathcal{R}_0$. On the other end of the spectrum, when the sample size is large enough to be in $\mathcal{R}_1$, a simple certifier based on the maximum incoherence of the design matrix is known to work in polynomial time. This leaves open the question of whether (randomized) polynomial time certifiers can work uniformly well in $\mathcal{R}_0$, or $\mathcal{R}_\alpha$ for any $\alpha \in [0, 1)$. We will see in the next section that, assuming a weaker variant of the Planted Clique hypothesis from computational complexity theory, $\mathcal{R}_1$ is essentially the largest asymptotic regime where a randomized polynomial time certifier can exist.

[Figure 1: Schematic diagram for existence of certifiers in different asymptotic regimes.]
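As an illustration of the tractable end of this picture, here is a hedged sketch (ours; function names are illustrative) of the incoherence-based test of Proposition 3, together with the Las Vegas construction described above. The threshold simply transcribes the constant $14\sigma^2\sqrt{\log(p)/n}$ from the proposition as stated.

```python
import numpy as np

def incoherence_certifier(X, sigma=1.0):
    """Sketch of the polynomial-time test psi_inf from Proposition 3:
    certify (output 1) iff the largest entry of |X^T X - I_p| is at most
    14 * sigma^2 * sqrt(log(p) / n). It never wrongly declares RIP, but
    is informative only when n is of order k^2 log(p) / delta^2."""
    n, p = X.shape
    coherence = np.max(np.abs(X.T @ X - np.eye(p)))
    return int(coherence <= 14 * sigma**2 * np.sqrt(np.log(p) / n))

def las_vegas_rip_matrix(n, p, certifier, rng):
    """The Las Vegas construction described above: resample a normalized
    Gaussian design until the certifier accepts. The number of draws is
    geometric with success probability Q-bar(psi^{-1}(1))."""
    while True:
        X = rng.standard_normal((n, p)) / np.sqrt(n)
        if certifier(X):
            return X

# Example (chosen so the certifier accepts quickly):
# X = las_vegas_rip_matrix(2000, 50, incoherence_certifier,
#                          np.random.default_rng(0))
```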
3 Hardness of Certification

3.1 Planted dense subgraph assumptions

We show in this section that certification of the RIP property is an average-case hard problem in the parameter regime $\mathcal{R}_\alpha$ for any $\alpha < 1$. This is precisely the regime not covered by Proposition 3. The average-case hardness result is proved via reduction to the planted dense subgraph assumption.

For any integer $m \ge 0$, denote by $\mathcal{G}_m$ the collection of all graphs on $m$ vertices. We write $V(G)$ and $E(G)$ for the set of vertices and edges of a graph $G$. For $H \in \mathcal{G}_\tau$ where $\tau \in \{0, \ldots, m\}$, let $G(m, 1/2, H)$ be the random graph model that generates a random graph $G$ on $m$ vertices as follows. It first picks $\tau$ random vertices $K \subseteq V(G)$ and plants an isomorphic copy of $H$ on these $\tau$ vertices; then every pair of vertices not in $K \times K$ is connected by an edge independently with probability 1/2. We write $\mathbb{P}_H$ for the probability measure on $\mathcal{G}_m$ associated with $G(m, 1/2, H)$. Note that if $H$ is the empty graph, then $G(m, 1/2, \emptyset)$ describes the Erdős–Rényi random graph. With a slight abuse of notation, we write $\mathbb{P}_0$ in place of $\mathbb{P}_\emptyset$. On the other hand, for $\epsilon \in (0, 1/2]$, if $H$ belongs to the set

$\mathcal{H}_{\tau,\epsilon} := \Bigl\{H \in \mathcal{G}_\tau : \#E(H) \ge (1/2 + \epsilon)\frac{\tau(\tau - 1)}{2}\Bigr\},$

then $G(m, 1/2, H)$ generates random graphs that contain elevated local edge density. The planted dense graph problem concerns testing apart the following two hypotheses:

$H_0 : G \sim G(m, 1/2, \emptyset)$ and $H_1 : G \sim G(m, 1/2, H)$ for some $H \in \mathcal{H}_{\tau,\epsilon}$.  (3)

It is widely believed that for $\tau = O(m^{1/2 - \kappa})$, there does not exist a randomized polynomial time test to distinguish between $H_0$ and $H_1$ (see, e.g. Jerrum (1992); Feige and Krauthgamer (2003); Feldman et al. (2013)). More precisely, we have the following assumption.

Assumption (A1). Fix $\epsilon \in (0, 1/2]$ and $\kappa \in (0, 1/2)$. Let $(\tau_m)_m$ be any sequence of integers such that $\tau_m \to \infty$ and $\tau_m = O(m^{1/2 - \kappa})$. For any sequence of randomized polynomial time tests $(\phi_m : \mathcal{G}_m \to \{0, 1\})_m$, we have

$\liminf_{m} \Bigl\{\mathbb{P}_0\bigl(\phi(G) = 1\bigr) + \max_{H \in \mathcal{H}_{\tau,\epsilon}} \mathbb{P}_H\bigl(\phi(G) = 0\bigr)\Bigr\} > 1/3.$

We remark that if $\epsilon = 1/2$, then $\mathcal{H}_{\tau,\epsilon}$ contains only the $\tau$-complete graph and the testing problem becomes the well-known planted clique problem (cf. Jerrum (1992) and references in Berthet and Rigollet (2013a,b)). The difficulty of this problem has been used as a primitive for the hardness of other tasks, such as cryptographic applications in Juels and Peinado (2000), testing for $k$-wise dependence in Alon et al. (2007), and approximating Nash equilibria in Hazan and Krauthgamer (2011). In this case, Assumption (A1) is a version of the planted clique hypothesis (see, e.g. Berthet and Rigollet (2013b, Assumption $A_{PC}$)). We emphasize that Assumption (A1) is significantly milder than the planted clique hypothesis (since it allows any $\epsilon \in (0, 1/2]$), or than a hypothesis on planted random graphs. We also note that when $\tau \ge C\sqrt{m}$, spectral methods can be used to detect such graphs with high probability. Indeed, when $G$ contains a graph of $\mathcal{H}_{\tau,\epsilon}$, denoting by $A_G$ its adjacency matrix, $A_G - \mathbb{1}\mathbb{1}^\top/2$ has a leading eigenvalue greater than $\epsilon(\tau - 1)$, whereas it is of order $\sqrt{m}$ for a usual Erdős–Rényi random graph.
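For concreteness, a small sampler for the model $G(m, 1/2, H)$ follows (our sketch, for illustration only); passing the edge set of a $\tau$-clique recovers the planted clique special case $\epsilon = 1/2$.

```python
import numpy as np

def sample_planted(m, tau, H_edges, rng):
    """Sample from G(m, 1/2, H) as described above: every pair of
    vertices outside K x K is joined independently with probability 1/2,
    and a copy of H is planted on a uniformly random set K of tau
    vertices. H_edges lists the edges of H on vertices {0, ..., tau-1};
    H_edges = [] recovers the Erdos-Renyi null model."""
    A = np.triu(rng.integers(0, 2, size=(m, m)), 1)  # one coin per pair
    K = rng.choice(m, size=tau, replace=False)
    A[np.ix_(K, K)] = 0        # inside K x K, edges come only from H
    for (u, v) in H_edges:
        i, j = K[u], K[v]
        A[min(i, j), max(i, j)] = 1
    return A + A.T             # symmetric adjacency matrix

# A member of H_{tau, eps} with eps = 1/2: the complete graph on tau vertices.
rng = np.random.default_rng(0)
tau = 5
clique = [(u, v) for u in range(tau) for v in range(u + 1, tau)]
G = sample_planted(50, tau, clique, rng)
```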
The following theorem relates the hardness of the planted dense subgraph testing problem to the hardness of certifying restricted isometry of random matrices. We recall that the distribution of $X$ is that of an $n \times p$ random matrix with entries independently and identically sampled from $\bar{Q} \stackrel{d}{=} Q/\sqrt{n}$, for some $Q \in \mathcal{Q}$. We also write $\Psi_{\mathrm{rp}}$ for the class of randomized polynomial time certifiers.

Theorem 4. Assume (A1) and fix any $\alpha \in [0, 1)$. Then there exists a sequence $(p, k, \delta) = (p_n, k_n, \delta_n) \in \mathcal{R}_\alpha$ such that there is no certifier/distribution couple $(\psi, Q) \in \Psi_{\mathrm{rp}} \times \mathcal{Q}$ with respect to this sequence of parameters.

Our proof of Theorem 4 relies on the following ideas. Given a graph $G$, an instance of the planted dense subgraph problem in the assumed hard regime, we construct $n$ random vectors based on the adjacency matrix of a bipartite subgraph of $G$, between two random sets of vertices. Each coefficient of these vectors is then randomly drawn from one of two carefully chosen distributions, conditionally on the presence or absence of a particular edge. This construction ensures that if the graph is an Erdős–Rényi random graph (i.e. with no planted graph), the vectors are independent with independent coefficients, with distribution $\bar{Q}$. Otherwise, we show that with high probability, the presence of an unusually dense subgraph will make it very likely that the matrix does not satisfy the restricted isometry property, for a set of parameters in $\mathcal{R}_\alpha$. As a consequence, if there existed a certifier/distribution couple $(\psi, Q) \in \Psi_{\mathrm{rp}} \times \mathcal{Q}$ in this range of parameters, it could be used, by feeding the newly constructed matrix to the certifier, to determine with high probability the distribution of $G$, violating Assumption (A1). We remark that this result holds for any distribution in $\mathcal{Q}$, in contrast to computational lower bounds in statistical learning problems, which apply to a specific distribution. For the sake of simplicity, we have kept the coefficients of $X$ identically distributed, but our analysis is not dependent on that fact, and our result can be directly extended to the case where the coefficients are independent, with different distributions in $\mathcal{Q}$.

Theorem 4 may be viewed as providing an asymptotic lower bound on the sample size $n$ for the existence of a computationally feasible certifier. It establishes this computational lower bound by exhibiting some specific "hard" sequences of parameters inside $\mathcal{R}_\alpha$, and showing that any algorithm violating the computational lower bound could be exploited to solve the planted dense subgraph problem. All hardness results, whether in the worst case (NP-hardness, or other) or the average case (by reduction from a hard problem), are by nature statements on the impossibility of accomplishing a task in a computationally efficient manner, uniformly over a range of parameters. They are therefore always based on the construction of a "hard" sequence of parameters used in the reduction, for which a contradiction is shown. Here, the "hard" sequence is explicitly constructed in the proof to be some $(p, k, \delta) = (p_n, k_n, \delta_n)$ satisfying $p \asymp n$ and $n^{1/(3 - \alpha - 4\gamma)} \ll k \ll n^{1/(2 - \alpha) - \epsilon}$, for $\gamma \in [0, (1 - \alpha)/3)$ and any small $\epsilon > 0$. The tuning parameter $\gamma$ is to allow additional flexibility in choosing these "hard" sequences. More precisely, using an averaging trick first seen in Ma and Wu (2013), we are able to show that the existence of such "hard" sequences is not confined only to the sparsity regime $k \ll n^{1/2}$. We note that in all our "hard" sequences, $\delta_n$ must depend on $n$. An interesting extension is to see if similar computational lower bounds hold when restricted to a subset of $\mathcal{R}_\alpha$ where $\delta$ is constant.
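The following toy sketch conveys the flavor of the reduction; it is not the construction from the proof, which draws each coefficient from one of two carefully chosen conditional distributions. Here we simply map edge/non-edge indicators of a random bipartite subgraph to $\pm 1/\sqrt{n}$ entries: under the Erdős–Rényi null the result is an i.i.d. Rademacher design (a valid $\bar{Q}$ for a sub-Gaussian $Q$), while a planted dense subgraph induces a biased block that tends to inflate the $k$-sparse operator norm and hence violate restricted isometry.

```python
import numpy as np

def graph_to_design(A, n, p, rng):
    """Toy illustration of the reduction idea (NOT the paper's exact
    construction). Pick n 'row' vertices and p disjoint 'column' vertices
    at random, and map the bipartite adjacency pattern between them to
    +/- 1/sqrt(n) entries. Requires n + p <= number of vertices."""
    m = A.shape[0]
    verts = rng.permutation(m)
    rows, cols = verts[:n], verts[n:n + p]
    signs = 2.0 * A[np.ix_(rows, cols)] - 1.0  # edge -> +1, non-edge -> -1
    return signs / np.sqrt(n)

# Under H0, feeding an Erdos-Renyi adjacency matrix (e.g. from the
# sampler in Section 3.1 with H_edges = []) yields i.i.d. entries; under
# H1, the planted block biases a submatrix of the design.
```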
References

Alon, N., Andoni, A., Kaufman, T., Matulef, K., Rubinfeld, R. and Xie, N. (2007) Testing k-wise and almost k-wise independence. Proceedings of the Thirty-ninth ACM STOC, 496–505.
Arias-Castro, E. and Verzelen, N. (2013) Community detection in dense random networks. Ann. Statist., 42, 940–969.
Awasthi, P., Charikar, M., Lai, K. A. and Risteki, A. (2015) Label optimal regret bounds for online local learning. J. Mach. Learn. Res. (COLT), 40.
Bandeira, A. S., Dobriban, E., Mixon, D. G. and Sawin, W. F. (2012) Certifying the restricted isometry property is hard. IEEE Trans. Information Theory, 59, 3448–3450.
Bandeira, A. S., Mixon, D. G. and Moreira, J. (2014) A conditional construction of restricted isometries. International Mathematics Research Notices, to appear.
Baraniuk, R., Davenport, M., DeVore, R. and Wakin, M. (2008) A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28, 253–263.
Berthet, Q. and Ellenberg, J. S. (2015) Detection of planted solutions for flat satisfiability problems. Preprint.
Berthet, Q. and Rigollet, P. (2013a) Optimal detection of sparse principal components in high dimension. Ann. Statist., 41, 1780–1815.
Berthet, Q. and Rigollet, P. (2013b) Complexity theoretic lower bounds for sparse principal component detection. J. Mach. Learn. Res. (COLT), 30, 1046–1066.
Bhaskara, A., Charikar, M., Chlamtac, E., Feige, U. and Vijayaraghavan, A. (2010) Detecting high log-densities: an O(n^{1/4}) approximation for densest k-subgraph. Proceedings of the Forty-second ACM Symposium on Theory of Computing, 201–210.
Bickel, P., Ritov, Y. and Tsybakov, A. (2009) Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist., 37, 1705–1732.
Blum, A., Kalai, A. and Wasserman, H. (2003) Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM, 50, 506–519.
Blumensath, T. and Davies, M. E. (2009) Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27, 265–274.
Bourgain, J., Dilworth, S., Ford, K. and Konyagin, S. (2011) Explicit constructions of RIP matrices and related problems. Duke Math. J., 159, 145–185.
Candès, E. J. (2008) The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346, 589–592.
Candès, E. J., Romberg, J. and Tao, T. (2006a) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52, 489–509.
Candès, E. J., Romberg, J. K. and Tao, T. (2006b) Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59, 1207–1223.
Candès, E. J. and Tao, T. (2005) Decoding by linear programming. IEEE Trans. Inform. Theory, 51, 4203–4215.
Chen, Y. and Xu, J. (2014) Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. Preprint, arXiv:1402.1267.
d'Aspremont, A., Bach, F. and El Ghaoui, L. (2008) Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res., 9, 1269–1294.
d'Aspremont, A. and El Ghaoui, L. (2011) Testing the nullspace property using semidefinite programming. Mathematical Programming, 127, 123–144.
Dai, W. and Milenkovic, O. (2009) Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory, 55, 2230–2249.
Donoho, D. L. (2006) Compressed sensing. IEEE Trans. Inform. Theory, 52, 1289–1306.
Donoho, D. L. and Elad, M. (2003) Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100, 2197–2202.
Donoho, D. L., Elad, M. and Temlyakov, V. N. (2006) Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52, 6–18.
Eldar, Y. C. and Kutyniok, G. (2012) Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge.
Feige, U. (2002) Relations between average case complexity and approximation complexity. Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing, 534–543.
Feige, U. and Krauthgamer, R. (2003) The probable value of the Lovász–Schrijver relaxations for a maximum independent set. SIAM J. Comput., 32, 345–370.
Feldman, V., Grigorescu, E., Reyzin, L., Vempala, S. S. and Xiao, Y. (2013) Statistical algorithms and a lower bound for detecting planted cliques. Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, 655–664.
Gao, C., Ma, Z. and Zhou, H. H. (2014) Sparse CCA: adaptive estimation and computational barriers. Preprint, arXiv:1409.8565.
Hajek, B., Wu, Y. and Xu, J. (2015) Computational lower bounds for community detection on random graphs. Proceedings of The 28th Conference on Learning Theory, 899–928.
Hazan, E. and Krauthgamer, R. (2011) How hard is it to approximate the best Nash equilibrium? SIAM J. Comput., 40, 79–91.
Jerrum, M. (1992) Large cliques elude the Metropolis process. Random Struct. Algor., 3, 347–359.
Juditsky, A. and Nemirovski, A. (2011) On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization. Mathematical Programming, 127, 57–88.
Juels, A. and Peinado, M. (2000) Hiding cliques for cryptographic security. Des. Codes Cryptography, 20, 269–280.
Koiran, P. and Zouzias, A. (2012) Hidden cliques and the certification of the restricted isometry property. Preprint, arXiv:1211.0665.
Lee, K. and Bresler, Y. (2008) Computing performance guarantees for compressed sensing. IEEE International Conference on Acoustics, Speech and Signal Processing, 5129–5132.
Ma, Z. and Wu, Y. (2013) Computational barriers in minimax submatrix detection. arXiv preprint.
Mallat, S. (1999) A Wavelet Tour of Signal Processing. Academic Press, Cambridge, MA.
Needell, D. and Tropp, J. A. (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26, 301–321.
Rauhut, H. and Foucart, S. (2013) A Mathematical Introduction to Compressive Sensing. Birkhäuser.
Tillmann, A. N. and Pfetsch, M. E. (2014) The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inform. Theory, 60, 1248–1259.
van de Geer, S. and Buhlmann, P. (2009) On the conditions used to prove oracle results for the lasso. Electron. J. Stat., 3, 1360–1392.
Wang, T., Berthet, Q. and Samworth, R. J. (2016) Statistical and computational trade-offs in estimation of sparse principal components. Ann. Statist., 45, 1896–1930.
Zhang, Y., Wainwright, M. J. and Jordan, M. I. (2014) Lower bounds on the performance of polynomial-time algorithms for sparse linear regression. JMLR: Workshop and Conference Proceedings (COLT), 35, 921–948.
Learning in Games: Robustness of Fast Convergence

Dylan J. Foster*, Zhiyuan Li†, Thodoris Lykouris*, Karthik Sridharan*, Éva Tardos*

* Cornell University, {djfoster,teddlyk,sridharan,eva}@cs.cornell.edu. Work supported in part under NSF grants CDS&E-MSS 1521544, CCF-1563714, ONR grant N00014-08-1-0031, a Google faculty research award, and an NDSEG fellowship.
† Tsinghua University, lizhiyuan13@mails.tsinghua.edu.cn. Research performed while the author was visiting Cornell University.

Abstract

We show that learning algorithms satisfying a low approximate regret property experience fast convergence to approximate optimality in a large class of repeated games. Our property, which simply requires that each learner has small regret compared to a $(1 + \epsilon)$-multiplicative approximation to the best action in hindsight, is ubiquitous among learning algorithms; it is satisfied even by the vanilla Hedge forecaster. Our results improve upon recent work of Syrgkanis et al. [28] in a number of ways. We require only that players observe payoffs under other players' realized actions, as opposed to expected payoffs. We further show that convergence occurs with high probability, and show convergence under bandit feedback. Finally, we improve upon the speed of convergence by a factor of $n$, the number of players. Both the scope of settings and the class of algorithms for which our analysis provides fast convergence are considerably broader than in previous work.

Our framework applies to dynamic population games via a low approximate regret property for shifting experts. Here we strengthen the results of Lykouris et al. [19] in two ways: we allow players to select learning algorithms from a larger class, which includes a minor variant of the basic Hedge algorithm, and we increase the maximum churn in players for which approximate optimality is achieved.

In the bandit setting we present a new algorithm which provides a "small loss"-type bound with improved dependence on the number of actions in utility settings, and is both simple and efficient. This result may be of independent interest.

1 Introduction

Consider players repeatedly playing a game, all acting independently to minimize their cost or maximize their utility. It is natural in this setting for each player to use a learning algorithm that guarantees small regret to decide on their strategy, as the environment is constantly changing due to each player's choice of strategy. It is well known that such decentralized no-regret dynamics are guaranteed to converge to a form of equilibrium for the game. Furthermore, in a large class of games known as smooth games [23] they converge to outcomes with approximately optimal social welfare matching the worst-case efficiency loss of Nash equilibria (the price of anarchy). In smooth cost minimization games the overall cost is $\lambda/(1 - \mu)$ times the minimum cost, while in smooth mechanisms [29] such as auctions it is $\lambda/\max(1, \mu)$ times the maximum total utility (where $\lambda$ and $\mu$ are parameters of the smoothness condition). Examples of smooth games and mechanisms include routing games and many forms of auction games (see e.g. [23, 29, 24]).

The speed at which the game outcome converges to this approximately optimal welfare is governed by individual players' regret bounds. There are a large number of simple regret minimization algorithms (Hedge/Multiplicative Weights, Mirror Descent, Follow the Regularized Leader; see e.g. [12]) that
guarantee that the average regret goes down as $O(1/\sqrt{T})$ with time $T$, which is tight in adversarial settings. Taking advantage of the fact that playing a game against opponents who themselves are also using regret minimization is not a truly adversarial setting, a sequence of papers [9, 22, 28] showed that by using specific learning algorithms, the dependence on $T$ of the convergence rate can be improved to $O(1/T)$ ("fast convergence"). Concretely, Syrgkanis et al. [28] show that all algorithms satisfying the so-called RVU property (Regret by Variation in Utilities), which include Optimistic Mirror Descent [22], converge at a $O(1/T)$ rate with a fixed number of players.

One issue with the works of [9, 22, 28] is that they use expected cost as their feedback model for the players. In each round every player receives the expected cost for each of their available actions, in expectation over the current action distributions of all other players. This clearly represents more information than is realistically available to players in games: at most each player sees the cost of each of their actions given the actions taken by the other players (realized feedback). In fact, even if each player had access to the action distributions of the other players, simply computing this expectation is generally intractable when $n$, the number of players, is large.

We improve the result of [28] on the convergence to approximate optimality in smooth games in a number of different aspects. To achieve this, we relax the quality of approximation from the bound guaranteed by smoothness. Typical smoothness bounds on the price of anarchy in auctions are small constants, such as a factor of 1.58 or 2 in item auctions. Increasing the approximation factor by an arbitrarily small constant $\epsilon > 0$ enables the following results:

- We show that learning algorithms obtaining fast convergence are ubiquitous.
- We improve the speed of convergence by a factor of $n$, the number of players.
- For all our results, players only need feedback based on realized outcomes, instead of expected outcomes.
- We show that convergence occurs with high probability in most settings.
- We extend the results to show that it is enough for the players to observe realized bandit feedback, only seeing the outcome of the action they play.
- Our results apply to settings where the set of players in the game changes over time [19]. We strengthen previous results by showing that a broader class of algorithms achieves approximate efficiency under significant churn.

We achieve these results using a property we term Low Approximate Regret, which simply states that an online learning algorithm achieves good regret against a multiplicative approximation of the best action in hindsight. This property is satisfied by many known algorithms, including even the vanilla Hedge algorithm, as well as Optimistic Hedge [21, 28] (via a new analysis). The crux of our analysis technique is the simple observation that for many types of data-dependent regret bounds we can fold part of the regret bound into the comparator term, allowing us to explore the trade-off between additive and multiplicative approximation.

In Section 3, we show that Low Approximate Regret implies fast convergence to the social welfare guaranteed by the price of anarchy via the smoothness property. This convergence only requires feedback from the realized actions played by other players, not their action distribution or the expectation over their actions.
We further show that this convergence occurs with high probability in most settings.

For games with a large number of players we also improve the speed of convergence. [28] shows that players using Optimistic Hedge in a repeated game with $n$ players converge to the approximately optimal outcome guaranteed by smoothness at a rate of $O(n^2/T)$. They also offer an analysis guaranteeing convergence of $O(n/T)$, at the expense of a constant factor decrease in the quality of approximation (e.g., a factor of 4 in atomic congestion games with affine congestion). We achieve the convergence bound of $O(n/T)$ with only an arbitrarily small loss in the approximation.

Algorithms that satisfy the Low Approximate Regret property are ubiquitous and include simple, efficient algorithms such as Hedge and variants. The observation that this broad class of algorithms enjoys fast convergence in realistic settings suggests that fast convergence occurs in practice.

Comparing our work to [28] with regard to feedback, Low Approximate Regret algorithms require only realized feedback, while the analysis of the RVU property in [28] requires expected feedback. To see the contrast, consider the load balancing game introduced in [17] with two players and two bins, where each player selects a bin and observes cost given by the number of players in that bin. Initialized at the uniform distribution, any learning algorithm with expectation feedback (e.g. those in [28]) will stay at the uniform distribution forever, because the expected cost vector distributes cost equally across the two bins. This gives low regret under expected costs, but suppose we were interested in realized costs: the only "black box" way to lift [28] to this case would be to simply evaluate the regret bound above under realized costs, but here players will experience $\Omega(1/\sqrt{T})$ variation because they select bins uniformly at random, ruining the fast convergence. Our analysis sidesteps this issue because players achieve Low Approximate Regret with high probability (a simulation sketch of this example appears at the end of this introduction).

In Section 4 we consider games where players can only observe the cost of the action they played given the actions taken by the other players, and receive no feedback for actions not played (bandit feedback). [22] analyzed zero-sum games with bandit feedback, but assumed that players receive expected cost over the strategies of all other players. In contrast, the Low Approximate Regret property can be satisfied by just observing realizations, even with bandit feedback. We propose a new bandit algorithm based on log-barrier regularization with importance sampling that guarantees fast convergence of $O(d \log T/\epsilon)$, where $d$ is the number of actions. Known techniques would either result in a convergence rate of $O(d^3 \log T)$ (e.g. adaptations of SCRiBLe [21]) or would not extend to utility maximization settings (e.g. GREEN [2]). Our technique is of independent interest since it improves the dependence of approximate regret bounds on the number of experts while applying to both cost minimization and utility maximization settings.

Finally, in Section 5, we consider the dynamic population game setting of [19], where players enter and leave the game over time. [19] showed that regret bounds for shifting experts directly influence the rate at which players can turn over and still guarantee close to optimal solutions on average. We show that a number of learning algorithms have the Low Approximate Regret property in the shifting experts setting, allowing us to extend the fast convergence result to dynamic games. Such learning algorithms include a noisy version of Hedge as well as AdaNormalHedge [18], which was previously studied in the dynamic setting in [19]. Low Approximate Regret allows us to increase the turnover rate from the one in [19], while also widening and simplifying the class of learning algorithms that players can use to guarantee the close to optimal average welfare.
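The following is a minimal simulation sketch of the two-bin load balancing example (our code; all parameter values are illustrative). Each player runs Hedge on realized feedback: after both bins are sampled, a player sees, for each bin, the load it would have experienced there.

```python
import numpy as np

def hedge_update(w, c, eta):
    """Multiplicative-weights step of Hedge on a cost vector c."""
    w = w * np.exp(-eta * c)
    return w / w.sum()

def load_balancing(T=10000, eta=0.1, seed=0):
    """Two players, two bins; realized feedback. A bin costs 1 if used
    alone and 2 if shared, scaled into [0, 1]. Returns the average
    per-round cost incurred by the two players."""
    rng = np.random.default_rng(seed)
    w = [np.ones(2) / 2, np.ones(2) / 2]
    total_cost = 0.0
    for _ in range(T):
        s = [rng.choice(2, p=w[i]) for i in range(2)]
        for i in range(2):
            other = s[1 - i]
            c = np.array([(1 + (other == x)) / 2 for x in range(2)])
            w[i] = hedge_update(w[i], c, eta)
            total_cost += c[s[i]]
    return total_cost / T

print(load_balancing())
```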
2 Repeated Games and Learning Dynamics

We consider a game $G$ among a set of $n$ players. Each player $i$ has an action space $S_i$ and a cost function $\mathrm{cost}_i : S_1 \times \cdots \times S_n \to [0, 1]$ that maps an action profile $s = (s_1, \ldots, s_n)$ to the cost $\mathrm{cost}_i(s)$ that the player experiences (see Appendix D for analogous definitions for utility maximization games). We assume that the action space of each player has cardinality $d$, i.e. $|S_i| = d$. We let $w = (w_1, \ldots, w_n)$ denote a list of probability distributions over all players' actions, where $w_i \in \Delta(S_i)$ and $w_{i,x}$ is the probability of action $x \in S_i$.

The game is repeated for $T$ rounds. At each round $t$ each player $i$ picks a probability distribution $w_i^t \in \Delta(S_i)$ over actions and draws their action $s_i^t$ from this distribution. Depending on the game playing environment under consideration, players will receive different types of feedback after each round. In Sections 3 and 5 we consider feedback where at the end of the round each player $i$ observes the cost they would have received had they played any possible action $x \in S_i$, given the actions taken by the other players. More formally, let $c_{i,x}^t = \mathrm{cost}_i(x, s_{-i}^t)$, where $s_{-i}^t$ is the set of strategies of all but the $i$th player at round $t$, and let $c_i^t = (c_{i,x}^t)_{x \in S_i}$. Note that the expected cost of player $i$ at round $t$ (conditioned on the other players' actions) is simply the inner product $\langle w_i^t, c_i^t \rangle$. We refer to this form of feedback as realized feedback since it only depends on the realized actions $s_{-i}^t$ sampled by the opponents; it does not directly depend on their distributions $w_{-i}^t$. This should be contrasted with the expectation feedback used by [28, 9, 22], where player $i$ observes $\mathbb{E}_{s_{-i}^t \sim w_{-i}^t}[\mathrm{cost}_i(x, s_{-i}^t)]$ for each $x$.

Sections 4 and 5 consider extensions of our repeated game model. In Section 4 we examine partial information ("bandit") feedback, where players observe only the cost of their own realized actions. In Section 5 we consider a setting where the player set is evolving over time. Here we use the dynamic population model of [19], where at each round $t$ each player $i$ is replaced ("turns over") with some probability $p$. The new player has cost function $\mathrm{cost}_i^t(\cdot)$ and action space $S_i^t$, which may change arbitrarily subject to certain constraints. We will formalize this notion later on.

Learning Dynamics. We assume that players select their actions using learning algorithms satisfying a property we call Low Approximate Regret, which simply requires that the cumulative cost of the learner multiplicatively approximates the cost of the best action they could have chosen in hindsight. We will see in subsequent sections that this property is ubiquitous and leads to fast convergence in a robust range of settings.

Definition 1 (Low Approximate Regret). A learning algorithm for player $i$ satisfies the Low Approximate Regret property for parameter $\epsilon > 0$ and function $A(d, T)$ if for all action distributions $f \in \Delta(S_i)$,

$(1 - \epsilon) \sum_{t=1}^T \langle w_i^t, c_i^t \rangle \le \sum_{t=1}^T \langle f, c_i^t \rangle + \frac{A(d, T)}{\epsilon}.$  (1)

A learning algorithm satisfies Low Approximate Regret against shifting experts if for all sequences $f^1, \ldots, f^T \in \Delta(S_i)$, letting $K = |\{t \ge 2 : f^{t-1} \ne f^t\}|$ be the number of shifts,

$(1 - \epsilon) \sum_{t=1}^T \langle w_i^t, c_i^t \rangle \le \sum_{t=1}^T \langle f^t, c_i^t \rangle + (1 + K)\frac{A(d, T)}{\epsilon}.$  (2)

In the bandit feedback setting, we require (1) to hold in expectation over the realized strategies of player $i$ for any $f \in \Delta(S_i)$ fixed before the game begins.
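Inequality (1) is a purely empirical statement about a play transcript, so it can be checked directly. The sketch below (ours, with illustrative names) computes the gap between the two sides for a given comparator $f$; the property asks that this gap be at most $A(d, T)/\epsilon$ as $f$ ranges over $\Delta(S_i)$, and since both sides are linear in $f$ it suffices to check the $d$ point masses.

```python
import numpy as np

def lar_gap(W, C, f, eps):
    """Gap between the two sides of inequality (1):
        (1 - eps) * sum_t <w_t, c_t>  -  sum_t <f, c_t>.
    Low Approximate Regret requires this to be at most A(d, T) / eps.
    W and C are length-T lists of arrays over the d actions."""
    learner = sum(float(np.dot(w, c)) for w, c in zip(W, C))
    comparator = sum(float(np.dot(f, c)) for c in C)
    return (1 - eps) * learner - comparator

def worst_case_gap(W, C, eps):
    """The binding comparators are point masses, so the largest gap over
    all f in the simplex is attained at a single action."""
    d = len(C[0])
    return max(lar_gap(W, C, np.eye(d)[x], eps) for x in range(d))
```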
We use the version of the Low Approximate Regret property with shifting experts when considering players in dynamic population games in Section 5. In this case, the game environment is constantly changing due to churn in the population, and we need the players to have low approximate regret with shifting experts to guarantee high social welfare despite the churn.

We emphasize that all algorithms we are aware of that satisfy Low Approximate Regret can be made to do so for any fixed choice of the approximation factor $\epsilon$ via an appropriate selection of parameters. Many algorithms have an even stronger property: they satisfy (1) or (2) for all $\epsilon > 0$ simultaneously. We say that such algorithms satisfy the Strong Low Approximate Regret property. This property has favorable consequences in the context of repeated games.

The Low Approximate Regret property differs from previous properties such as RVU in that it only requires that the learner's cost be close to a multiplicative approximation of the cost of the best action in hindsight. Consequently, it is always smaller than the regret. For instance, if we consider only uniform (i.e. not data-dependent) regret bounds, the Hedge algorithm can only achieve $O(\sqrt{T \log d})$ exact regret, but can achieve Low Approximate Regret with parameters $\epsilon$ and $A(d, T) = O(\log d)$ for any $\epsilon > 0$. Low Approximate Regret is analogous to the notion of $\alpha$-regret from [15], with $\alpha = 1 + \epsilon$. In Appendix D we show that the Low Approximate Regret property and our subsequent results naturally extend to utility maximization games.

Smooth Games. It is well-known that in a large class of games, termed smooth games by Roughgarden [23], traditional learning dynamics converge to approximately optimal social welfare. In subsequent sections we analyze the convergence of Low Approximate Regret learning dynamics in such smooth games. We will see that Low Approximate Regret (for sufficiently small $A(d, T)$) coupled with smoothness of the game implies fast convergence of learning dynamics to desirable social welfare under a variety of conditions. Before proving this result we review social welfare and smooth games.

For a given action profile $s$, the social cost is $C(s) = \sum_{i=1}^n \mathrm{cost}_i(s)$. To bound the efficiency loss due to the selfish behavior of the players we define

$\mathrm{OPT} = \min_{s^o} \sum_{i=1}^n \mathrm{cost}_i(s^o).$

Definition 2 (Smooth game [23]). A cost minimization game is called $(\lambda, \mu)$-smooth if for all strategy profiles $s$ and $s^*$: $\sum_i \mathrm{cost}_i(s_i^*, s_{-i}) \le \lambda \cdot C(s^*) + \mu \cdot C(s)$.

This property is typically applied using a (close to) optimal action profile $s^* = s^o$. For this case the property implies that if $s$ is an action profile with very high cost, then some player deviating to her share of the optimal profile $s_i^*$ will improve her cost. For smooth games, the price of anarchy is at most $\lambda/(1 - \mu)$, meaning that Nash equilibria of the game, as well as no-regret learning outcomes in the limit, have social cost at most a factor of $\lambda/(1 - \mu)$ above the optimum.
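Definition 2 can be verified mechanically for small finite games. Below is a brute-force sketch (ours; the enumeration is exponential in the number of players, so this is a toy check only), applied to a two-player, two-link load balancing instance with affine delay $d(x) = x$, a special case of the $(5/3, 1/3)$-smooth atomic congestion games discussed next.

```python
import itertools

def is_smooth(cost_fns, action_sets, lam, mu):
    """Brute-force check of Definition 2 for a finite game: verify
    sum_i cost_i(s*_i, s_{-i}) <= lam * C(s*) + mu * C(s) for every pair
    of profiles (s, s*)."""
    profiles = list(itertools.product(*action_sets))
    C = lambda s: sum(f(s) for f in cost_fns)
    for s in profiles:
        for s_star in profiles:
            lhs = sum(
                cost_fns[i](s[:i] + (s_star[i],) + s[i + 1:])
                for i in range(len(cost_fns))
            )
            if lhs > lam * C(s_star) + mu * C(s) + 1e-9:
                return False
    return True

# Two players, two identical links with delay d(x) = x (x = load on link);
# player i's cost is the load on their chosen link.
def make_cost(i):
    return lambda s: s.count(s[i])

print(is_smooth([make_cost(0), make_cost(1)], [(0, 1), (0, 1)], 5/3, 1/3))
```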
Smooth Games It is well known that in a large class of games, termed smooth games by Roughgarden [23], traditional learning dynamics converge to approximately optimal social welfare. In subsequent sections we analyze the convergence of Low Approximate Regret learning dynamics in such smooth games. We will see that Low Approximate Regret (for sufficiently small A(d, T)) coupled with smoothness of the game implies fast convergence of learning dynamics to desirable social welfare under a variety of conditions. Before proving this result we review social welfare and smooth games.

For a given action profile s, the social cost is C(s) = Σ_{i=1}^n cost_i(s). To bound the efficiency loss due to the selfish behavior of the players we define

$$\mathrm{OPT} = \min_{s^o} \sum_{i=1}^{n} \mathrm{cost}_i(s^o).$$

Definition 2. (Smooth game [23]) A cost minimization game is called (λ, μ)-smooth if for all strategy profiles s and s*: Σ_i cost_i(s_i*, s_{−i}) ≤ λ · C(s*) + μ · C(s).

This property is typically applied using a (close to) optimal action profile s* = s^o. For this case the property implies that if s is an action profile with very high cost, then some player deviating to her share of the optimal profile s_i* will improve her cost. For smooth games, the price of anarchy is at most λ/(1 − μ), meaning that Nash equilibria of the game, as well as no-regret learning outcomes in the limit, have social cost at most a factor of λ/(1 − μ) above the optimum. Smooth cost minimization games include congestion games such as routing or load balancing. For example, atomic congestion games with affine cost functions are (5/3, 1/3)-smooth [8] and non-atomic games are (1, 0.25)-smooth [25], implying a price of anarchy of 2.5 and 1.33 respectively. While we focus on cost-minimization games for simplicity of exposition, an analogous definition also applies for utility maximization, including smooth mechanisms [29], which we elaborate on in Appendix D. Smooth mechanisms include most simple auctions. For example, the first price item auction is (1 − 1/e, 1)-smooth and all-pay auctions are (1/2, 1)-smooth, implying a price of anarchy of 1.58 and 2 respectively. All of our results extend to such mechanisms.

3 Learning in Games with Full Information Feedback

We now analyze the efficiency of algorithms with the Low Approximate Regret property in the full information setting. Our first proposition shows that, for smooth games with full information feedback, learners with the Low Approximate Regret property converge to efficient outcomes.

Proposition 1. In any (λ, μ)-smooth game, if all players use Low Approximate Regret algorithms satisfying Eq. (1) with parameters ε and A(d, T), then for the action profiles s^t drawn on round t from the corresponding mixed actions of the players,

$$\frac{1}{T}\sum_{t} \mathbb{E}\big[C(s^t)\big] \;\le\; \frac{\lambda}{1-\mu-\epsilon}\,\mathrm{OPT} + \frac{n}{T}\cdot\frac{1}{1-\mu-\epsilon}\cdot\frac{A(d,T)}{\epsilon}.$$

Proof. This proof is a straightforward modification of the usual price of anarchy proof for smooth games. We obtain the claimed bound by writing Σ_t E[C(s^t)] = Σ_i Σ_t E[cost_i(s^t)], using the Low Approximate Regret property with f = s_i* for each player i for the optimal solution s*, then using the smoothness property for each time t to bound Σ_i cost_i(s_i*, s_{−i}^t), and finally rearranging terms.

For ε ≪ (1 − μ), the approximation factor λ/(1 − μ − ε) is very close to the price of anarchy λ/(1 − μ). This shows that Low Approximate Regret learning dynamics quickly converge to outcomes with social welfare arbitrarily close to the welfare guaranteed for exact Nash equilibria by the price of anarchy. A simple corollary of this proposition is that, when players use learning algorithms that satisfy the Strong Low Approximate Regret property, the bound above can be taken to depend on OPT even though this value is unknown to the players.

Whenever the Low Approximate Regret property is satisfied, a high probability version of the property with similar dependence on ε and A(d, T) is also satisfied. This implies that in addition to quickly converging to efficient outcomes in expectation, Low Approximate Regret learners experience fast convergence with high probability.

Proposition 2. In any (λ, μ)-smooth game, if all players use Low Approximate Regret algorithms satisfying Eq. (1) for parameters ε and A(d, T), then for the action profile s^t drawn on round t from the players' mixed actions and γ = 2ε/(1 + ε), we have that for all δ > 0, with probability at least 1 − δ,

$$\frac{1}{T}\sum_{t} C(s^t) \;\le\; \frac{\lambda}{1-\mu-\gamma}\,\mathrm{OPT} + \frac{n}{T}\cdot\frac{1}{1-\mu-\gamma}\cdot\frac{4A(d,T) + 12\log(n\log_2(T)/\delta)}{\gamma}.$$

Examples of Simple Low Approximate Regret Algorithms Propositions 1 and 2 are informative when applied with algorithms for which A(d, T) is sufficiently small. One would hope that such algorithms are relatively simple and easy to find. We show now that the well-known Hedge algorithm, as well as basic variants such as Optimistic Hedge and Hedge with online learning rate tuning, satisfy the property with A(d, T) = O(log d), which will lead to fast convergence both in terms of n and T.
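As a concrete reference point for Example 1 below, here is a minimal sketch of the plain Hedge update with its learning rate fixed to ε; costs are assumed to lie in [0, 1], and the class interface (matching the simulation sketch above) is our own.

```python
import numpy as np

class Hedge:
    """Plain Hedge with the learning rate fixed to epsilon (cf. Example 1)."""

    def __init__(self, d, epsilon):
        self.logw = np.zeros(d)   # log-weights, for numerical stability
        self.eta = epsilon        # eta = epsilon yields A(d, T) = log d

    def distribution(self):       # current mixed action w_i^t
        w = np.exp(self.logw - self.logw.max())
        return w / w.sum()

    def update(self, c):          # c = realized cost vector c_i^t in [0,1]^d
        self.logw -= self.eta * np.asarray(c)
```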
For these algorithms, and indeed all that we consider in this paper, we can achieve the Low Approximate Regret property for any fixed ε > 0 via an appropriate parameter setting. In Appendix A.2, we provide full descriptions and proofs for these algorithms.

Example 1. Hedge satisfies the Low Approximate Regret property with A(d, T) = log(d). In particular one can achieve the property for any fixed ε > 0 by using ε as the learning rate.

Example 2. Hedge with online learning rate tuning satisfies the Strong Low Approximate Regret property with A(d, T) = O(log d).

Example 3. Optimistic Hedge satisfies the Low Approximate Regret property with A(d, T) = 8 log(d). As with vanilla Hedge, we can choose the learning rate to achieve the property with any ε.

Example 4. Any algorithm satisfying a "small loss" regret bound of the form √((Learner's cost) · A) or √((Cost of best action) · A) satisfies Strong Low Approximate Regret via the AM-GM inequality, i.e. √((Learner's cost) · A) ≤ inf_{ε>0} [ε · (Learner's cost) + A/ε]. In particular, this implies that the following algorithms have Strong Low Approximate Regret: canonical small loss and self-confident algorithms, e.g. [11, 4, 30], the algorithm of [7], Variation MW [13], AEG-Path [26], AdaNormalHedge [18], Squint [16], and Optimistic PAC-Bayes [10].

Example 4 shows that the Strong Low Approximate Regret property is in fact ubiquitous, as it is satisfied by any algorithm that provides small loss regret bounds or one of many variants on this type of bound. Moreover, all algorithms that satisfy the Low Approximate Regret property for all fixed ε can be made to satisfy the strong property using the doubling trick.

Main Result for Full Information Games: Theorem 3. In any (λ, μ)-smooth game, if all players use Low Approximate Regret algorithms satisfying (1) for parameter ε and A(d, T) = O(log d),² then

$$\frac{1}{T}\sum_{t} \mathbb{E}\big[C(s^t)\big] \;\le\; \frac{\lambda}{1-\mu-\epsilon}\,\mathrm{OPT} + \frac{n}{T}\cdot\frac{1}{1-\mu-\epsilon}\cdot\frac{O(\log d)}{\epsilon},$$

and furthermore, with γ = 2ε/(1 + ε) as in Proposition 2, for all δ > 0, with probability at least 1 − δ,

$$\frac{1}{T}\sum_{t} C(s^t) \;\le\; \frac{\lambda}{1-\mu-\gamma}\,\mathrm{OPT} + \frac{n}{T}\cdot\frac{1}{1-\mu-\gamma}\cdot\Big(\frac{O(\log d)}{\gamma} + \frac{O(\log(n\log_2(T)/\delta))}{\gamma}\Big).$$

Corollary 4. If all players use Strong Low Approximate Regret algorithms then: 1. The above results hold for all ε > 0 simultaneously. 2. Individual players have regret bounded by O(T^{1/2}), even in adversarial settings. 3. The players approach a coarse correlated equilibrium asymptotically.

Comparison with Syrgkanis et al. [28]. By relaxing the standard λ/(1 − μ) price of anarchy bound, Theorem 3 substantially broadens the class of algorithms that experience fast convergence to include even the common Hedge algorithm. The main result of [28] shows that learning algorithms that satisfy their RVU property converge to the price of anarchy bound λ/(1 − μ) at rate n² log d/T. They further achieve a worse approximation of λ(1 + μ)/(μ(1 − μ)) at the improved (in terms of n) rate of n log d/T. We converge to an approximation arbitrarily close to λ/(1 − μ) at a rate of n log d/T. Note that in atomic congestion games with affine congestion functions μ = 1/3, so their bound of λ(1 + μ)/(μ(1 − μ)) loses a factor of 4 compared to the price of anarchy.

Strong Low Approximate Regret algorithms such as Hedge with online learning rate tuning simultaneously experience both fast O(n/T) convergence in games and an O(1/√T) bound on individual regret in adversarial settings. In contrast, [28] only shows O(n/√T) individual regret and O(n³/T) convergence to price of anarchy simultaneously.
Low Approximate Regret algorithms only need realized feedback, whereas [28] require expectation feedback. Having players receive expectation feedback is unrealistic in terms of both information and computation. Indeed, even if the necessary information were available, computing expectations over discrete probability distributions is not tractable unless n is taken to be constant. Our results imply that Optimistic Hedge enjoys the best of two worlds: it enjoys fast convergence to the exact λ/(1 − μ) price of anarchy using expectation feedback, as well as fast convergence to the ε-approximate price of anarchy using realized feedback. Our new analysis of Optimistic Hedge (Appendix A.2.2) sheds light on another desirable property of this algorithm: its regret is bounded in terms of the net cost incurred by Hedge. Figure 1 summarizes the differences between our results.

              RVU property [28]      LAR property (Section 2)
Feedback      Expected costs         Realized costs
POA           exact                  ε-approx
Rate          O(n² log d / T)        O(n log d / (εT))
Time comp.    d^{O(n)} per round     O(d) per round

Figure 1: Comparison of Low Approximate Regret and RVU properties.

² We can also show that the theorem holds if players satisfy the property for different values of ε, but with a dependence on the worst-case value of ε across all players.

4 Bandit Feedback

In many realistic scenarios, the players of a game might not even know what they would have lost or gained if they had deviated from the action they played. We model this lack of information with bandit feedback, in which each player observes a single scalar, cost_i(s^t) = ⟨s_i^t, c_i^t⟩, per round.³ When the game considered is smooth, one can use the Low Approximate Regret property as in the full information setting to show that players quickly converge to efficient outcomes. Our results here hold with the same generality as in the full information setting: as long as learners satisfy the Low Approximate Regret property (1), an efficiency result analogous to Proposition 1 holds.

Proposition 5. Consider a (λ, μ)-smooth game. If all players use bandit learning algorithms with Low Approximate Regret A(d, T), then

$$\frac{1}{T}\sum_{t} \mathbb{E}\big[C(s^t)\big] \;\le\; \frac{\lambda}{1-\mu-\epsilon}\,\mathrm{OPT} + \frac{n}{T}\cdot\frac{1}{1-\mu-\epsilon}\cdot\frac{A(d,T)}{\epsilon}.$$

Bandit Algorithms with Low Approximate Regret The bandit Low Approximate Regret property requires that (1) holds in expectation against any sequence of adaptive and potentially adversarially chosen costs, but only for an obliviously chosen comparator f.⁴ This is weaker than requiring that an algorithm achieve a true expected regret bound; it is closer to pseudo-regret. The Exp3Light algorithm [27] satisfies Low Approximate Regret with A(d, T) = d² log T. The SCRiBLe algorithm introduced in [1] (via the analysis in [21]) enjoys the Low Approximate Regret property with A(d, T) = d³ log(dT). The GREEN algorithm [2] achieves the Low Approximate Regret property with A(d, T) = d log(T), but only works with costs and not gains. This prevents it from being used in utility settings such as auctions, as in Appendix D.

³ With slight abuse of notation, s_i^t denotes the identity vector associated to the strategy player i used at time t.
⁴ This is because we only need to evaluate (1) with the game's optimal solution s* to prove efficiency results.

We present a new bandit algorithm (Algorithm 3) that achieves Low Approximate Regret with A(d, T) = d log(T/d) and thus matches the performance of GREEN, but works in both cost minimization and utility maximization settings. This method is based on Online Mirror Descent with a logarithmic barrier for the positive orthant, but differs from earlier algorithms based on the logarithmic barrier (e.g. [21]) in that it uses the classical importance-weighted estimator for costs instead of sampling based on the Dikin ellipsoid. It can be implemented in O(d) time per round, using line search to find the normalization parameter ν in (3). We provide proofs and further discussion of Algorithm 3 in Appendix B.

Algorithm 3: Initialize w¹ to the uniform distribution. On each round t, perform the update

$$w^t_{s^{t-1}} = \frac{w^{t-1}_{s^{t-1}}}{1 + \eta\, c^{t-1}_{s^{t-1}} + \nu\, w^{t-1}_{s^{t-1}}} \quad\text{and}\quad w^t_j = \frac{w^{t-1}_j}{1 + \nu\, w^{t-1}_j} \;\;\text{for all } j \ne s^{t-1}, \qquad (3)$$

where ν ≤ 0 is chosen so that w^t is a valid probability distribution.

Lemma 6. Algorithm 3 with η = ε/(1 + ε) has Low Approximate Regret with A(d, T) = O(d log T).
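The sketch below implements one round of update (3) under our reading of the reconstructed equation, with the normalization parameter ν found by bisection (the line search mentioned above). The bracket width and iteration count are illustrative choices of our own. Note that only the realized cost of the played arm enters the update: the importance-weighted estimate ĉ = c/w appears implicitly, since η·ĉ_{s^{t−1}}·w_{s^{t−1}} = η·c_{s^{t−1}}.

```python
import numpy as np

def log_barrier_bandit_update(w, played, cost, eta):
    """One round of update (3): the played arm's weight is divided by
    (1 + eta*cost + nu*w[played]) and every other weight by (1 + nu*w[j]),
    with nu <= 0 found by bisection so that the new weights sum to one.
    A sketch, not the paper's implementation.
    """
    def new_weights(nu):
        denom = 1.0 + nu * w           # fresh array; w is never mutated
        denom[played] += eta * cost
        return w / denom

    lo, hi = -1.0 / w.max() + 1e-9, 0.0  # keep all denominators positive
    for _ in range(80):                  # bisection ("line search") for nu
        mid = 0.5 * (lo + hi)
        if new_weights(mid).sum() > 1.0:
            lo = mid                     # weights too large: raise nu
        else:
            hi = mid
    return new_weights(0.5 * (lo + hi))
```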
Comparison to Other Algorithms In contrast to the full information setting, where the most common algorithm, Hedge, achieves Low Approximate Regret with competitive parameters, the most common adversarial bandit algorithm, Exp3, does not seem to satisfy Low Approximate Regret. [3] provide a small loss bound for bandits which would be sufficient for Low Approximate Regret, but their algorithm requires prior knowledge of the loss of the best action (or a bound on it), which is not appropriate in our game setting. Similarly, the small loss bound in [20] is not applicable in our setting, as that work assumes an oblivious adversary and so does not apply to the games we consider.

5 Dynamic Population Games

In this section we consider the dynamic population repeated game setting introduced in [19]. Detailed discussion and proofs are deferred to Appendix C. Given a game G as described in Section 2, a dynamic population game with stage game G is a repeated game where at each round t game G is played and every player i is replaced by a new player with a turnover probability p. Concretely, when a player turns over, their strategy set and cost function are changed arbitrarily subject to the rules of the game. This models a repeated game setting where players have to adapt to an adversarially changing environment. We denote the cost function of player i at round t as cost_i^t(·). As in Section 3, we assume that the players receive full information feedback. At the end of each round they observe the entire cost vector c_i^t = cost_i^t(·, s_{−i}^t), but are not aware of the costs of other players in the game.

Learning in Dynamic Population Games and the Price of Anarchy To guarantee small overall cost using the smoothness analysis from Section 2, players need to exhibit low regret against a shifting benchmark s_i^{*t} of socially optimal strategies achieving OPT^t = min_{s^{*t}} Σ_i cost_i^t(s^{*t}). Even with a small probability p of change, the sequence of optimal solutions can have too many changes to be able to achieve low regret. In spite of this apparent difficulty, [19] prove that at least a λ/(α(1 − μ)) fraction of the optimal welfare is guaranteed if 1. players are using low adaptive regret algorithms (see [14, 18]) and 2. for the underlying optimization problem there exists a relatively stable sequence of solutions which at each step approximates the optimal solution by a factor of α. This holds as long as the turnover probability p is upper bounded by a function of α (and of certain other properties of the game, such as the stability of the close to optimal solution).

We consider dynamic population games where each player uses a learning algorithm satisfying Low Approximate Regret for shifting experts (2). This shifting version of Low Approximate Regret implies a dynamic game analog of our main efficiency result, Proposition 1.

Algorithms with Low Approximate Regret for Shifting Experts A simple variant of Hedge we term Noisy Hedge, which mixes the Hedge update at each round with a small amount of uniform noise, satisfies the Low Approximate Regret property for shifting experts with A(d, T) = O(log(dT)). Moreover, algorithms that satisfy a small loss version of the adaptive regret property [14] used in [19] satisfy the Strong Low Approximate Regret property.

Proposition 7. Noisy Hedge with learning rate η = ε satisfies the Low Approximate Regret property for shifting experts with A(d, T) = 2 log(dT).
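Here is a minimal sketch of Noisy Hedge as just described: a standard Hedge step followed by mixing in a small amount of the uniform distribution. The mixing weight 1/(dT) is an illustrative choice of ours; the exact constant behind the A(d, T) = 2 log(dT) bound of Proposition 7 is not restated here.

```python
import numpy as np

class NoisyHedge:
    """Hedge mixed with a little uniform noise each round (cf. Prop. 7).

    The noise floor keeps every action's weight bounded away from zero,
    so the learner can track a shifting comparator sequence.
    """
    def __init__(self, d, epsilon, T):
        self.w = np.full(d, 1.0 / d)
        self.eta = epsilon            # Prop. 7 sets the learning rate to eps
        self.mix = 1.0 / (d * T)      # illustrative mixing weight

    def distribution(self):
        return self.w

    def update(self, c):              # c = realized cost vector in [0,1]^d
        w = self.w * np.exp(-self.eta * np.asarray(c))
        w /= w.sum()
        d = len(w)
        self.w = (1 - self.mix * d) * w + self.mix  # uniform smoothing
```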
Extending Proposition 1 to the Dynamic Population Game Setting Let s^{*1:T} denote a stable sequence of near-optimal solutions s^{*t} with Σ_i cost_i^t(s^{*t}) ≤ α · OPT^t for all rounds t. As discussed in [19], such stable sequences can come from simple greedy algorithms (where each change in the input of one player affects the output of few other players) or via differentially private algorithms (where each change in the input of one player affects the output of all other players with small probability); in the latter case the sequence is randomized. For a deterministic sequence s_i^{*1:T} of player i's actions, we let the random variable K_i denote the number of changes in the sequence. For a randomized sequence s_i^{*1:T}, we let K_i be the sum of total variation distances between subsequent pairs s_i^{*t} and s_i^{*t+1}. The stability of a sequence of solutions is determined by E[Σ_i K_i].

Proposition 8. (PoA with Dynamic Population) If all players use Low Approximate Regret algorithms satisfying (2) in a dynamic population game, where the stage game is (λ, μ)-smooth, and K_i is as defined above, then

$$\frac{1}{T}\sum_{t} \mathbb{E}\big[C(s^t)\big] \;\le\; \frac{\alpha\lambda}{1-\mu-\epsilon}\cdot\frac{1}{T}\sum_{t}\mathbb{E}\big[\mathrm{OPT}^t\big] + \frac{n + \mathbb{E}\big[\sum_i K_i\big]}{T}\cdot\frac{1}{1-\mu-\epsilon}\cdot\frac{A(d,T)}{\epsilon}. \qquad (4)$$

Here the expectation is taken over the random turnover in the population playing the game, as well as the random choices of the players on the left hand side.

To claim a price of anarchy bound, we need to ensure that the additive term in (4) is a small fraction of the optimal cost. The challenge is that high turnover probability reduces stability, increasing E[Σ_i K_i]. By using algorithms with smaller A(d, T), we can allow for higher E[Σ_i K_i] and hence higher turnover probability. Combining Noisy Hedge with Proposition 8 strengthens the results in [19] by both weakening the behavioral assumption on the players, allowing them to use simpler learning algorithms, and allowing a higher turnover probability.

Comparison to Previous Results [19] use the more complex AdaNormalHedge algorithm of [18], which satisfies the adaptive regret property of [14] but has O(dT) space complexity. In contrast, Noisy Hedge only requires space complexity of O(d). Moreover, a broader class of algorithms satisfy the Low Approximate Regret property, which makes the efficiency guarantees more prescriptive since this property serves as a behavioral assumption. Finally, the guarantees we provide improve on the turnover probability that can be accommodated, as discussed in Appendix C.1.

Acknowledgements We thank Vasilis Syrgkanis for sharing his simulation software and the NIPS reviewers for pointing out the GREEN algorithm [2].

References

[1] Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In Proc. of the 21st Annual Conference on Learning Theory (COLT), 2008.
[2] Chamy Allenberg, Peter Auer, László Györfi, and György Ottucsák. Hannan consistency in on-line learning in case of unbounded losses under partial monitoring, pages 229–243. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.
[3] Jean-Yves Audibert and Sébastien Bubeck. Regret bounds and minimax policies under partial monitoring. The Journal of Machine Learning Research, 11:2785–2836, 2010.
[4] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48–75, 2002.
[5] Peter L. Bartlett, Varsha Dani, Thomas Hayes, Sham Kakade, Alexander Rakhlin, and Ambuj Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 335–342, 2008.
[6] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[7] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[8] Giorgos Christodoulou and Elias Koutsoupias. The price of anarchy of finite congestion games. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC), pages 67–73, 2005.
[9] Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. Games and Economic Behavior, 92:327–348, 2015.
[10] Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan. Adaptive online learning. In Advances in Neural Information Processing Systems, pages 3357–3365, 2015.
[11] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, August 1997.
[12] Elad Hazan. Introduction to Online Convex Optimization. Foundations and Trends in Optimization, 2016.
[13] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
[14] Elad Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 393–400, 2009.
[15] Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. SIAM J. Comput., 39:1088–1106, 2009.
[16] Wouter M. Koolen and Tim van Erven. Second-order quantile methods for experts and combinatorial games. In Proceedings of The 28th Conference on Learning Theory (COLT), pages 1155–1175, 2015.
[17] Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. Computer Science Review, 3(2):65–69, 2009.
[18] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: AdaNormalHedge. In Proceedings of The 28th Conference on Learning Theory (COLT), pages 1286–1304, 2015.
[19] Thodoris Lykouris, Vasilis Syrgkanis, and Éva Tardos. Learning and efficiency in games with dynamic population. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 120–129. SIAM, 2016.
[20] Gergely Neu. First-order regret bounds for combinatorial semi-bandits. In Proceedings of the 27th Annual Conference on Learning Theory (COLT), pages 1360–1375, 2015.
[21] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Conference on Learning Theory (COLT), pages 993–1019, 2013.
[22] Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems (NIPS), pages 3066–3074, 2013.
[23] Tim Roughgarden. Intrinsic robustness of the price of anarchy. Journal of the ACM, 2015.
[24] Tim Roughgarden, Vasilis Syrgkanis, and Éva Tardos. The price of anarchy in auctions. Available at https://arxiv.org/abs/1607.07684, 2016.
[25] Tim Roughgarden and Éva Tardos. How bad is selfish routing? Journal of the ACM, 49:236–259, 2002.
[26] Jacob Steinhardt and Percy Liang. Adaptivity and optimism: An improved exponentiated gradient algorithm. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 1593–1601, 2014.
[27] Gilles Stoltz. Incomplete information and internal regret in prediction of individual sequences. PhD thesis, Université Paris-Sud, 2005.
[28] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Advances in Neural Information Processing Systems (NIPS), pages 2989–2997, 2015.
[29] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In ACM Symposium on Theory of Computing (STOC), pages 211–220, 2013.
[30] Rani Yaroshinsky, Ran El-Yaniv, and Steven S. Seiden. How to better use expert advice. Machine Learning, 55(3):271–309, 2004.
Stochastic Structured Prediction under Bandit Feedback

Artem Sokolov*,◇, Julia Kreutzer*, Christopher Lo*,†, Stefan Riezler*,§
* Computational Linguistics & § IWR, Heidelberg University, Germany
{sokolov,kreutzer,riezler}@cl.uni-heidelberg.de
† Department of Mathematics, Tufts University, Boston, MA, USA
chris.aa.lo@gmail.com
◇ Amazon Development Center, Berlin, Germany

Abstract

Stochastic structured prediction under bandit feedback follows a learning protocol where on each of a sequence of iterations, the learner receives an input, predicts an output structure, and receives partial feedback in form of a task loss evaluation of the predicted structure. We present applications of this learning scenario to convex and non-convex objectives for structured prediction and analyze them as stochastic first-order methods. We present an experimental evaluation on problems of natural language processing over exponential output spaces, and compare convergence speed across different objectives under the practical criterion of optimal task performance on development data and the optimization-theoretic criterion of minimal squared gradient norm. Best results under both criteria are obtained for a non-convex objective for pairwise preference learning under bandit feedback.

1 Introduction

We present algorithms for stochastic structured prediction under bandit feedback that obey the following learning protocol: on each of a sequence of iterations, the learner receives an input, predicts an output structure, and receives partial feedback in form of a task loss evaluation of the predicted structure. In contrast to the full-information batch learning scenario, the gradient cannot be averaged over the complete input set. Furthermore, in contrast to standard stochastic learning, the correct output structure is not revealed to the learner. We present algorithms that use this feedback to "banditize" expected loss minimization approaches to structured prediction [18, 25]. The algorithms follow the structure of performing simultaneous exploration/exploitation by sampling output structures from a log-linear probability model, receiving feedback to the sampled structure, and conducting an update in the negative direction of an unbiased estimate of the gradient of the respective full information objective.

The algorithms apply to situations where learning proceeds online on a sequence of inputs for which gold standard structures are not available, but feedback to predicted structures can be elicited from users. A practical example is interactive machine translation, where instead of human-generated reference translations only translation quality judgments on predicted translations are used for learning [20]. The example of machine translation showcases the complexity of the problem: for each input sentence, we receive feedback for only a single predicted translation out of a space that is exponential in sentence length, while the goal is to learn to predict the translation with the smallest loss under a complex evaluation metric.

[19] showed that partial feedback is indeed sufficient for optimization of feature-rich linear structured prediction over large output spaces in various natural language processing (NLP) tasks. Their experiments follow the standard online-to-batch conversion practice in NLP applications where the model with optimal task performance on development data is selected for final evaluation on a test set.

◇ The work for this paper was done while the authors were at Heidelberg University.
The contribution of our paper is to analyze these algorithms as stochastic first-order (SFO) methods in the framework of [7] and to investigate the connection of optimization for task performance with optimization-theoretic concepts of convergence.

Our analysis starts with revisiting the approach to stochastic optimization of a non-convex expected loss criterion presented by [20]. The iteration complexity of stochastic optimization of a non-convex objective J(w_t) can be analyzed in the framework of [7] as O(1/ε²) in terms of the number of iterations needed to reach an accuracy of ε for the criterion E[‖∇J(w_t)‖²] ≤ ε. [19] attempt to improve convergence speed by introducing a cross-entropy objective that can be seen as a (strong) convexification of the expected loss objective. The best known iteration complexity for strongly convex stochastic optimization is O(1/ε) for the suboptimality criterion E[J(w_t)] − J(w*) ≤ ε. Lastly, we analyze the pairwise preference learning algorithm introduced by [19]. This algorithm can also be analyzed as an SFO method for non-convex optimization. To our knowledge, this is the first SFO approach to stochastic learning from pairwise comparison feedback, while related approaches fall into the area of gradient-free stochastic zeroth-order (SZO) approaches [24, 1, 7, 4]. Convergence rates for SZO methods depend on the dimensionality d of the function to be evaluated; for example, the non-convex SZO algorithm of [7] has an iteration complexity of O(d/ε²). SFO algorithms do not depend on d, which is crucial if the dimensionality of the feature space is large, as is common in structured prediction.

Furthermore, we present a comparison of empirical and theoretical convergence criteria for the NLP tasks of machine translation and noun-phrase chunking. We compare the empirical convergence criterion of optimal task performance on development data with the theoretically motivated criterion of minimal squared gradient norm. We find fastest convergence for pairwise preference learning under both criteria on both tasks. Given the standard analysis of asymptotic complexity bounds, this result is surprising. An explanation can be given by measuring the variance and Lipschitz constant of the stochastic gradient, which are smallest for pairwise preference learning and largest for cross-entropy minimization, by several orders of magnitude. This offsets the possible gains in asymptotic convergence rates for strongly convex stochastic optimization, and makes pairwise preference learning an attractive method for fast optimization in practical interactive scenarios.

2 Related Work

The methods presented in this paper are related to various other machine learning problems where predictions over large output spaces have to be learned from partial information. Reinforcement learning has the goal of maximizing the expected reward for choosing an action at a given state in a Markov Decision Process (MDP) model, where unknown rewards are received at each state, or once at the final state. The algorithms in this paper can be seen as one-state MDPs with context, where choosing an action corresponds to predicting a structured output. Most closely related are recent applications of policy gradient methods to exponential output spaces in NLP problems [22, 3, 15]. Similar to our expected loss minimization approaches, these approaches are based on non-convex models; however, convergence rates are rarely a focus in the reinforcement learning literature. One focus of our paper is to present an analysis of asymptotic convergence and convergence rates of non-convex stochastic first-order methods.

Contextual one-state MDPs are also known as contextual bandits [11, 13], which operate in a scenario of maximizing the expected reward for selecting an arm of a multi-armed slot machine. Similar to our case, the feedback is partial, and the models consist of a single state. While bandit learning is mostly formalized as online regret minimization with respect to the best fixed arm in hindsight, we characterize our approach in an asymptotic convergence framework. Furthermore, our high-dimensional models predict structures over exponential output spaces. Since we aim to train these models in interaction with real users, we focus on the ease of elicitability of the feedback and on speed of convergence. In the spectrum of stochastic versus adversarial bandits, our approach is semi-adversarial in making stochastic assumptions on inputs, but not on rewards [12].

Pairwise preference learning has been studied in the full information supervised setting [8, 10, 6] where given preference pairs are assumed. Work on stochastic pairwise learning has been formalized as derivative-free stochastic zeroth-order optimization [24, 1, 7, 4]. To our knowledge, our approach to pairwise preference learning from partial feedback is the first SFO approach to learning from pairwise preferences in form of relative task loss evaluations.
Similar to our expected loss minimization approaches, these approaches are based on non-convex models, however, convergence rates are rarely a focus in the reinforcement learning literature. One focus of our paper is to present an analysis of asymptotic convergence and convergence rates of non-convex stochastic first-order methods. Contextual one-state MDPs are also known as contextual bandits [11, 13] which operate in a scenario of maximizing the expected reward for selecting an arm of a multi-armed slot machine. Similar to our case, the feedback is partial, and the models consist of a single state. While bandit learning is mostly formalized as online regret minimization with respect to the best fixed arm in hindsight, we characterize our approach in an asymptotic convergence framework. Furthermore, our highdimensional models predict structures over exponential output spaces. Since we aim to train these models in interaction with real users, we focus on the ease of elicitability of the feedback and on speed of convergence. In the spectrum of stochastic versus adversarial bandits, our approach is semi-adversarial in making stochastic assumptions on inputs, but not on rewards [12]. Pairwise preference learning has been studied in the full information supervised setting [8, 10, 6] where given preference pairs are assumed. Work on stochastic pairwise learning has been formalized as derivative-free stochastic zeroth-order optimization [24, 1, 7, 4]. To our knowledge, our approach 2 Algorithm 1 Bandit Structured Prediction 1: Input: sequence of learning rates ?t 2: Initialize w0 3: for t = 0, . . . , T do 4: Observe xt 5: Sample y?t ? pwt (y|xt ) 6: Obtain feedback ?(? yt ) 7: wt+1 = wt ? ?t st 8: Choose a solution w ? from the list {w0 , . . . , wT } to pairwise preference learning from partial feedback is the first SFO approach to learning from pairwise preferences in form of relative task loss evaluations. 3 Expected Loss Minimization for Structured Prediction [18, 25] introduce the expected loss criterion for structured prediction as the minimization of the expectation of a given task loss function with respect to the conditional distribution over structured outputs. Let X be a structured input space, let Y(x) be the set of possible output structures for input x, and let ?y : Y ? [0, 1] quantify the loss ?y (y 0 ) suffered for predicting y 0 instead of the gold standard structure y. In the full information setting, for a given (empirical) data distribution p(x, y), the learning problem is defined as X X min Ep(x,y)pw (y0 |x) [?y (y 0 )] = min p(x, y) ?y (y 0 )pw (y 0 |x), (1) w?Rd where w?Rd x,y y 0 ?Y(x) pw (y|x) = exp(w> ?(x, y))/Zw (x) (2) d d is a Gibbs distribution with joint feature representation ? : X ? Y ? R , weight vector w ? R , and normalization constant Zw (x). Despite being a highly non-convex optimization problem, positive results have been obtained by gradient-based optimization with respect to h i ?Ep(x,y)pw (y0 |x) [?y (y 0 )] = Ep(x,y)pw (y0 |x) ?y (y 0 ) ?(x, y 0 ) ? Epw (y0 |x) [?(x, y 0 )] . (3) Unlike in the full information scenario, in structured learning under bandit feedback the gold standard output structure y with respect to which the objective function is evaluated is not revealed to the learner. Thus we can neither evaluate the task loss ? nor calculate the gradient (3) as in the full information case. 
Unlike in the full information scenario, in structured learning under bandit feedback the gold standard output structure y with respect to which the objective function is evaluated is not revealed to the learner. Thus we can neither evaluate the task loss Δ nor calculate the gradient (3) as in the full information case. A solution to this problem is to pass the evaluation of the loss function to the user, i.e., we access the loss directly through user feedback without assuming the existence of a fixed reference y. In the following, we will drop the subscript referring to the gold standard structure in the definition of Δ to indicate that the feedback is in general independent of gold standard outputs. In particular, we allow Δ to be equal to 0 for several outputs.

4 Stochastic Structured Prediction under Partial Feedback

Algorithm Structure. Algorithm 1 shows the structure of the methods analyzed in this paper. It assumes a sequence of input structures x_t, t = 0, …, T that are generated by a fixed, unknown distribution p(x) (line 4). For each randomly chosen input, an output ỹ_t is sampled from a Gibbs model to perform simultaneous exploitation (use the current best estimate) / exploration (get new information) on output structures (line 5). Then, feedback Δ(ỹ_t) to the predicted structure is obtained (line 6). An update is performed by taking a step in the negative direction of the stochastic gradient s_t, at a rate γ_t (line 7). As a post-optimization step, a solution ŵ is chosen from the list of vectors w_t ∈ {w_0, …, w_T} (line 8).

Algorithm 1 Bandit Structured Prediction
1: Input: sequence of learning rates γ_t
2: Initialize w_0
3: for t = 0, …, T do
4:   Observe x_t
5:   Sample ỹ_t ∼ p_{w_t}(y|x_t)
6:   Obtain feedback Δ(ỹ_t)
7:   w_{t+1} = w_t − γ_t s_t
8: Choose a solution ŵ from the list {w_0, …, w_T}

Given Algorithm 1, we can formalize the notion of "banditization" of objective functions by presenting different instantiations of the vector s_t, and showing them to be unbiased estimates of the gradients of corresponding full information objectives.

Expected Loss Minimization. [20] presented an algorithm that minimizes the following expected loss objective, which is non-convex for the specific instantiations in this paper:

$$\mathbb{E}_{p(x)\,p_w(y|x)}\big[\Delta(y)\big] \;=\; \sum_{x} p(x) \sum_{y\in Y(x)} \Delta(y)\, p_w(y|x). \qquad (4)$$

The vector s_t used in their algorithm can be seen as a stochastic gradient of this objective, i.e., an evaluation of the full gradient at a randomly chosen input x_t and output ỹ_t:

$$s_t \;=\; \Delta(\tilde y_t)\,\big(\phi(x_t, \tilde y_t) - \mathbb{E}_{p_{w_t}(y|x_t)}[\phi(x_t, y)]\big). \qquad (5)$$

Instantiating s_t in Algorithm 1 to the stochastic gradient in equation (5) yields an update that compares the sampled feature vector to the average feature vector, and performs a step in the opposite direction of this difference, the more so the higher the loss of the sampled structure is. In the following, we refer to the algorithm for expected loss minimization defined by the update (5) as Algorithm EL.
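The sketch below performs one step of Algorithm EL using update (5), reusing the `gibbs` helper from the previous sketch; `feedback` stands for the user's task loss evaluation, which is queried only for the single sampled structure.

```python
import numpy as np

def el_update(w, Phi, feedback, gamma, rng):
    """One step of Algorithm EL via update (5).  `feedback(y)` returns the
    task loss Delta in [0, 1] for the sampled structure only."""
    p = gibbs(w, Phi)                        # gibbs() from the sketch above
    y = rng.choice(len(p), p=p)              # explore/exploit by sampling
    s_t = feedback(y) * (Phi[y] - p @ Phi)   # unbiased estimate of (3)
    return w - gamma * s_t
```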
Pairwise Preference Learning. Decomposing complex problems into a series of pairwise comparisons has been shown to be advantageous for human decision making [23]. For the example of machine translation, this means that instead of requiring numerical assessments of translation quality from human users, only a relative preference judgement on a pair of translations needs to be elicited. This idea is formalized in [19] as an expected loss objective with respect to a conditional distribution of pairs of structured outputs. Let P(x) = {⟨y_i, y_j⟩ | y_i, y_j ∈ Y(x)} denote the set of output pairs for an input x, and let Δ(⟨y_i, y_j⟩) : P(x) → [0, 1] denote a task loss function that specifies a dispreference of y_i compared to y_j. In the experiments reported in this paper, we simulate two types of pairwise feedback. Firstly, continuous pairwise feedback is computed as

$$\Delta(\langle y_i, y_j\rangle) = \begin{cases} \Delta(y_i) - \Delta(y_j) & \text{if } \Delta(y_i) > \Delta(y_j), \\ 0 & \text{otherwise.} \end{cases} \qquad (6)$$

A binary feedback function is computed as

$$\Delta(\langle y_i, y_j\rangle) = \begin{cases} 1 & \text{if } \Delta(y_i) > \Delta(y_j), \\ 0 & \text{otherwise.} \end{cases} \qquad (7)$$

Furthermore, we assume a feature representation φ(x, ⟨y_i, y_j⟩) = φ(x, y_i) − φ(x, y_j) and a Gibbs model on pairs of output structures

$$p_w(\langle y_i, y_j\rangle \,|\, x) \;=\; \frac{e^{w^\top(\phi(x,y_i)-\phi(x,y_j))}}{\sum_{\langle y_i, y_j\rangle \in P(x)} e^{w^\top(\phi(x,y_i)-\phi(x,y_j))}} \;=\; p_w(y_i|x)\; p_{-w}(y_j|x). \qquad (8)$$

The factorization of this model into the product p_w(y_i|x) p_{−w}(y_j|x) allows efficient sampling and calculation of expectations. Instantiating objective (4) to the case of pairs of output structures defines the following objective, which is again non-convex in the use cases of this paper:

$$\mathbb{E}_{p(x)\,p_w(\langle y_i,y_j\rangle|x)}\big[\Delta(\langle y_i, y_j\rangle)\big] \;=\; \sum_x p(x) \sum_{\langle y_i,y_j\rangle \in P(x)} \Delta(\langle y_i, y_j\rangle)\, p_w(\langle y_i, y_j\rangle \,|\, x). \qquad (9)$$

Learning from partial feedback on pairwise preferences will ensure that the model finds a ranking function that assigns low probabilities to discordant pairs with respect to the observed preference pairs. Stronger assumptions on the learned ranking can be made if asymmetry and transitivity of the observed ordering of pairs is required.²

An algorithm for pairwise preference learning can be defined by instantiating Algorithm 1 to sampling output pairs ⟨ỹ_i, ỹ_j⟩_t, receiving feedback Δ(⟨ỹ_i, ỹ_j⟩_t), and performing a stochastic gradient update using

$$s_t \;=\; \Delta(\langle \tilde y_i, \tilde y_j\rangle_t)\,\big(\phi(x_t, \langle \tilde y_i, \tilde y_j\rangle_t) - \mathbb{E}_{p_{w_t}(\langle y_i,y_j\rangle|x_t)}[\phi(x_t, \langle y_i, y_j\rangle)]\big). \qquad (10)$$

The algorithms for pairwise preference ranking defined by update (10) are referred to as Algorithms PR(bin) and PR(cont), depending on the use of binary or continuous feedback.

² See [2] for an overview of bandit learning from consistent and inconsistent pairwise comparisons.

Cross-Entropy Minimization. The standard theory of stochastic optimization predicts considerable improvements in convergence speed depending on the functional form of the objective. This motivated the formalization of a convex upper bound on the expected normalized loss in [19]. If a normalized gain function ḡ(y) = g(y)/Z_g(x) is used, where Z_g(x) = Σ_{y∈Y(x)} g(y) and g = 1 − Δ, the objective can be seen as the cross-entropy of model p_w(y|x) with respect to ḡ(y):

$$\mathbb{E}_{p(x)\,\bar g(y)}\big[-\log p_w(y|x)\big] \;=\; -\sum_x p(x) \sum_{y\in Y(x)} \bar g(y)\, \log p_w(y|x). \qquad (11)$$

For a proper probability distribution ḡ(y), an application of Jensen's inequality to the convex negative logarithm function shows that objective (11) is a convex upper bound on objective (4). However, normalizing the gain function is prohibitive in a partial feedback setting since it would require eliciting user feedback for every structure in the output space. [19] thus work with an unnormalized gain function g(y), which preserves convexity. This can be seen by rewriting the objective as the sum of a linear and a convex function in w:

$$\mathbb{E}_{p(x)\,g(y)}\big[-\log p_w(y|x)\big] \;=\; -\sum_x p(x) \sum_{y\in Y(x)} g(y)\, w^\top \phi(x,y) \;+\; \sum_x p(x)\,\Big(\log \sum_{y\in Y(x)} \exp\big(w^\top \phi(x,y)\big)\Big)\,\rho(x), \qquad (12)$$

where ρ(x) = Σ_{y∈Y(x)} g(y) is a constant factor not depending on w. Instantiating Algorithm 1 to the following stochastic gradient s_t of this objective yields an algorithm for cross-entropy minimization:

$$s_t \;=\; \frac{g(\tilde y_t)}{p_{w_t}(\tilde y_t|x_t)}\,\big(-\phi(x_t, \tilde y_t) + \mathbb{E}_{p_{w_t}}[\phi(x_t, y)]\big). \qquad (13)$$

Note that the ability to sample structures from p_{w_t}(ỹ_t|x_t) comes at the price of having to normalize s_t by 1/p_{w_t}(ỹ_t|x_t). While minimization of this objective will assign high probabilities to structures with high gain, as desired, each update is affected by a probability that changes over time and is unreliable when training is started. This further increases the variance already present in stochastic optimization. We deal with this problem by clipping too-small sampling probabilities to p̄_{w_t}(ỹ_t|x_t) = max{p_{w_t}(ỹ_t|x_t), k} for a constant k [9]. The algorithm for cross-entropy minimization based on the stochastic gradient (13) is referred to as Algorithm CE in the following.
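Analogously, one clipped step of Algorithm CE with stochastic gradient (13) can be sketched as follows; `gain_feedback` returns g(ỹ) = 1 − Δ(ỹ) for the sampled structure, and `k` is the clipping constant. As before, this is a toy enumeration sketch over a small output space.

```python
import numpy as np

def ce_update(w, Phi, gain_feedback, gamma, k, rng):
    """One step of Algorithm CE via the clipped stochastic gradient (13)."""
    p = gibbs(w, Phi)                        # gibbs() from the sketch above
    y = rng.choice(len(p), p=p)
    ratio = gain_feedback(y) / max(p[y], k)  # 1/p importance term, clipped
    s_t = ratio * (-Phi[y] + p @ Phi)
    return w - gamma * s_t
```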
5 Convergence Analysis

To analyze convergence, we describe Algorithms EL, PR, and CE as stochastic first-order (SFO) methods in the framework of [7]. We assume lower bounded, differentiable objective functions J(w) with Lipschitz continuous gradient ∇J(w) satisfying, for some L ≥ 0,

$$\|\nabla J(w + w') - \nabla J(w)\| \;\le\; L\,\|w'\| \quad \forall w, w'. \qquad (14)$$

For an iterative process of the form w_{t+1} = w_t − γ_t s_t, the conditions to be met concern unbiasedness of the gradient estimate,

$$\mathbb{E}[s_t] = \nabla J(w_t), \quad \forall t \ge 0, \qquad (15)$$

and boundedness of the variance of the stochastic gradient,

$$\mathbb{E}\big[\|s_t - \nabla J(w_t)\|^2\big] \;\le\; \sigma^2, \quad \forall t \ge 0. \qquad (16)$$

Condition (15) is met for all three algorithms by taking expectations over all sources of randomness, i.e., over random inputs and output structures. Assuming ‖φ(x, y)‖ ≤ R, Δ(y) ∈ [0, 1] and g(y) ∈ [0, 1] for all x, y, and since the ratio g(ỹ_t)/p̄_{w_t}(ỹ_t|x_t) is bounded, the variance in condition (16) is bounded. Note that the analysis of [7] justifies the use of constant learning rates γ_t = γ, t = 0, …, T.

Convergence speed can be quantified in terms of the number of iterations needed to reach an accuracy of ε for a gradient-based criterion E[‖∇J(w_t)‖²] ≤ ε. For stochastic optimization of non-convex objectives, the iteration complexity with respect to this criterion is analyzed as O(1/ε²) in [7]. This complexity result applies to our Algorithms EL and PR.
This instantiates the selection criterion in line (8) in Algorithm 1 to an evaluation of the respective task loss function ?(? ywt (x)) under MAP prediction y?w (x) = arg maxy?Y(x) pw (y|x) on the development data. This corresponds to the standard practice of online-to-batch conversion where the model selected on the development data is used for final evaluation on a further unseen test set. For bandit structured prediction algorithms, final results are averaged over three runs with different random seeds. For the purpose of obtaining numerical results on convergence speed, we compute estimates of the expected squared gradient norm E[k?J(wt )k2 ], the Lipschitz constant L and the variance ? 2 in which the asymptotic bounds on iteration complexity grow linearly.4 We estimate the squared gradient norm by the squared norm of the stochastic gradient ksT k2 at a fixed time horizon T . The Lipschitz ks ?s k constant L in equation (14) is estimated by maxi,j kwii ?wjj k for 500 pairs wi and wj randomly drawn from the weights produced during training. The variance ? 2 in equation (16) is computed as the empirical variance of the stochastic gradient, taken at regular intervals after each epoch of size D, PK PK 1 1 T 2 yielding the estimate K k=1 kskD ? K k=1 skD k where K = b D c. All estimates include multiplication of the stochastic gradient with the learning rate. For comparability of results across different algorithms, we use the same T and the same constant learning rates for all algorithms.5 Statistical Machine Translation. In this experiment, an interactive machine translation scenario is simulated where a given machine translation system is adapted to user style and domain based on feedback to predicted translations. Domain adaptation from Europarl to NewsCommentary domains using the data provided at the WMT 2007 shared task is performed for French-to-English translation.6 The MT experiments are based on the synchronous context-free grammar decoder cdec [5]. The models use a standard set of dense and lexicalized sparse features, including an out-of and an in3 For constant learning rates, [21] show even faster convergence in the search phase of strongly-convex stochastic optimization. 2 4 For example, these constants appear as O( L + L? ) in the complexity bound for non-convex stochastic 2 optimization of [7]. 5 Note that the squared gradient norm upper bounds the suboptimality criterion s.t. k?J(wt )k2 ? 2?J(wt )] ? J(w? ) for strongly convex functions. Together with the use of constant learning rates this means that we measure convergence to a point near an optimum for strongly convex objectives. 6 http://www.statmt.org/wmt07/shared-task.html 6 Task SMT Chunking Algorithm CE EL PR(bin) Iterations 281k 370k 115k Score 0.271?0.001 0.267?8e?6 0.273?0.0005 ? 1e-6 1e-5 1e-4 ? 1e-6 k 5e-3 CE EL PR(cont) 5.9M 7.5M 4.7M 0.891?0.005 0.923?0.002 0.914?0.002 1e-6 1e-4 1e-4 1e-6 1e-2 Table 1: Test set evaluation for stochastic learning under bandit feedback from [19], for chunking under F1-score, and for machine translation under BLEU. Higher is better for both scores. Results for stochastic learners are averaged over three runs of each algorithm, with standard deviation shown in subscripts. The meta-parameter settings were determined on dev sets for constant learning rate ?, clipping constant k, `2 regularization constant ?. domain language model. The out-of-domain baseline model has around 200k active features. 
The pre-processing, data splits, feature sets and tuning strategies are described in detail in [19]. The difference in the task loss evaluation between out-of-domain (BLEU: 0.2651) and in-domain (BLEU: 0.2831) models gives the range of possible improvements (1.8 BLEU points) for bandit learning. Learning under bandit feedback starts at the learned weights of the out-of-domain median models. It uses parallel in-domain data (news-commentary, 40,444 sentences) to simulate bandit feedback, by evaluating the sampled translation against the reference using as loss function ? a smoothed per-sentence 1 ? BLEU (zero n-gram counts being replaced with 0.01). After each update, the hypergraph is re-decoded and all hypotheses are re-ranked. Training is distributed across 38 shards using a multitask-based feature selection algorithm [17]. Noun-phrase Chunking. The experimental setting for chunking is the same as in [19]. Following [16], conditional random fields (CRF) are applied to the noun phrase chunking task on the CoNLL2000 dataset7 . The implemented set of feature templates is a simplified version of [16] and leads to around 2M active features. Training under full information with a log-likelihood objective yields 0.935 F1. In difference to machine translation, training with bandit feedback starts from w0 = 0, not from a pre-trained model. Task Loss Evaluation. Table 1 lists the results of the task loss evaluation for machine translation and chunking as performed in [19], together with the optimal meta-parameters and the number of iterations needed to find an optimal result on the development set. Note that the pairwise feedback type (cont or bin) is treated as a meta-parameter for Algorithm PR in our simulation experiment. We found that bin is preferable for machine translation and cont for chunking in order to obtain the highest task scores. For machine translation, all bandit learning runs show significant improvements in BLEU score over the out-of-domain baseline. Early stopping by task performance on the development led to the selection of algorithm PR(bin) at a number of iterations that is by a factor of 2-4 smaller compared to Algorithms EL and CE. For the chunking experiment, the F1-score results obtained for bandit learning are close to the fullinformation baseline. The number of iterations needed to find an optimal result on the development set is smallest for Algorithm PR(cont), compared to Algorithms EL and CE. However, the best F1-score is obtained by Algorithm EL. Numerical Convergence Results. Estimates of E[k?J(wt )k2 ], L and ? 2 for three runs of each algorithm and task with different random seeds are listed in Table 2. For machine translation, at time horizon T , the estimated squared gradient norm for Algorithm PR is several orders of magnitude smaller than for Algorithms EL and CE. Furthermore, the estimated Lipschitz constant L and the estimated variance ? 2 are smallest for Algorithm PR. Since the iteration complexity increases linearly with respect to these terms, smaller constants L and ? 
Table 2: Estimates of the squared gradient norm $\|s_T\|^2$, Lipschitz constant $L$, and variance $\sigma^2$ of the stochastic gradient (including multiplication with the learning rate) for fixed time horizon $T$ and constant learning rates $\gamma = 1e{-}6$ for SMT and for chunking. The clipping and regularization parameters for CE are set as in Table 1 for machine translation, except for chunking CE, where $\lambda = 1e{-}5$. Results are averaged over three runs of each algorithm, with standard deviation shown in subscripts.

Task      Algorithm  T          ‖s_T‖²            L             σ²
SMT       CE         767,000    3.04±0.02         0.54±0.3      35±6
SMT       EL         767,000    0.02±0.03         1.63±0.67     3.13e-4±3.60e-6
SMT       PR(bin)    767,000    2.88e-4±3.40e-6   0.08±0.01     3.79e-5±9.50e-8
SMT       PR(cont)   767,000    1.03e-8±2.91e-10  0.10±5.70e-3  1.78e-7±1.45e-10
Chunking  CE         3,174,400  4.20±0.71         1.60±0.11     4.88±0.07
Chunking  EL         3,174,400  1.21e-3±1.1e-4    1.16±0.31     0.01±9.51e-5
Chunking  PR(bin)    3,174,400  7.71e-4±2.53e-4   1.33±0.24     4.44e-3±2.66e-5
Chunking  PR(cont)   3,174,400  5.99e-3±7.24e-4   1.11±0.30     0.03±4.68e-4

This theoretically motivated result is consistent with the practical convergence criterion of early stopping on development data: Algorithm PR, which yields the smallest squared gradient norm at time horizon $T$, also needs the smallest number of iterations to achieve optimal performance on the development set. In the case of machine translation, Algorithm PR even achieves the nominally best BLEU score on test data. For the chunking experiment, after $T$ iterations, the estimated squared gradient norm and either of the constants $L$ and $\sigma^2$ for Algorithm PR are several orders of magnitude smaller than for Algorithm CE, but similar to the results for Algorithm EL. The corresponding iteration counts determined by early stopping on development data show an improvement of Algorithm PR over Algorithms CE and EL, however by a smaller factor than in the machine translation experiment. Note that for comparability across algorithms, the same constant learning rates were used in all runs. However, we obtained similar relations between algorithms by using the meta-parameter settings chosen on development data as shown in Table 1. Furthermore, the above tendencies hold for both settings of the meta-parameter bin or cont of Algorithm PR.

7 Conclusion

We presented learning objectives and algorithms for stochastic structured prediction under bandit feedback. The presented methods "banditize" well-known approaches to probabilistic structured prediction such as expected loss minimization, pairwise preference ranking, and cross-entropy minimization. We presented a comparison of practical convergence criteria based on early stopping with theoretically motivated convergence criteria based on the squared gradient norm. Our experimental results showed fastest convergence speed under both criteria for pairwise preference learning. Our numerical evaluation showed smallest variance for pairwise preference learning, which possibly explains its fastest convergence despite the underlying non-convex objective. Furthermore, since this algorithm requires only easily obtainable relative preference feedback for learning, it is an attractive choice for practical interactive learning scenarios.

Acknowledgments. This research was supported in part by the German research foundation (DFG), and in part by a research cooperation grant with the Amazon Development Center Germany.

References
[1] Agarwal, A., Dekel, O., and Xiao, L.
(2010). Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT.
[2] Busa-Fekete, R. and Hüllermeier, E. (2014). A survey of preference-based online learning with bandit algorithms. In ALT.
[3] Chang, K.-W., Krishnamurthy, A., Agarwal, A., Daumé, H., and Langford, J. (2015). Learning to search better than your teacher. In ICML.
[4] Duchi, J. C., Jordan, M. I., Wainwright, M. J., and Wibisono, A. (2015). Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788–2806.
[5] Dyer, C., Lopez, A., Ganitkevitch, J., Weese, J., Ture, F., Blunsom, P., Setiawan, H., Eidelman, V., and Resnik, P. (2010). cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In ACL Demo.
[6] Freund, Y., Iyer, R., Schapire, R. E., and Singer, Y. (2003). An efficient boosting algorithm for combining preferences. JMLR, 4:933–969.
[7] Ghadimi, S. and Lan, G. (2012). Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. on Optimization, 4(23):2342–2368.
[8] Herbrich, R., Graepel, T., and Obermayer, K. (2000). Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132.
[9] Ionides, E. L. (2008). Truncated importance sampling. J. of Comp. and Graph. Stat., 17(2):295–311.
[10] Joachims, T. (2002). Optimizing search engines using clickthrough data. In KDD.
[11] Langford, J. and Zhang, T. (2007). The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS.
[12] Lazaric, A. and Munos, R. (2012). Learning with stochastic inputs and adversarial outputs. Journal of Computer and System Sciences, (78):1516–1537.
[13] Li, L., Chu, W., Langford, J., and Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In WWW.
[14] Polyak, B. T. (1987). Introduction to Optimization. Optimization Software, Inc., New York.
[15] Ranzato, M., Chopra, S., Auli, M., and Zaremba, W. (2016). Sequence level training with recurrent neural networks. In ICLR.
[16] Sha, F. and Pereira, F. (2003). Shallow parsing with conditional random fields. In NAACL.
[17] Simianer, P., Riezler, S., and Dyer, C. (2012). Joint feature selection in distributed stochastic learning for large-scale discriminative training in SMT. In ACL.
[18] Smith, N. A. (2011). Linguistic Structure Prediction. Morgan and Claypool.
[19] Sokolov, A., Kreutzer, J., Lo, C., and Riezler, S. (2016). Learning structured predictors from bandit feedback for interactive NLP. In ACL.
[20] Sokolov, A., Riezler, S., and Urvoy, T. (2015). Bandit structured prediction for learning from user feedback in statistical machine translation. In MT Summit XV.
[21] Solodov, M. V. (1998). Incremental gradient algorithms with stepsizes bounded away from zero. Computational Optimization and Applications, 11:23–35.
[22] Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In NIPS.
[23] Thurstone, L. L. (1927). A law of comparative judgement. Psychological Review, 34:278–286.
[24] Yue, Y. and Joachims, T. (2009). Interactively optimizing information retrieval systems as a dueling bandits problem. In ICML.
[25] Yuille, A. and He, X. (2012). Probabilistic models of vision and max-margin methods. Frontiers of Electrical and Electronic Engineering, 7(1):94–106.
The Multiscale Laplacian Graph Kernel

Risi Kondor
Department of Computer Science and Department of Statistics
University of Chicago, Chicago, IL 60637
risi@cs.uchicago.edu

Horace Pan
Department of Computer Science
University of Chicago, Chicago, IL 60637
hopan@uchicago.edu

Abstract

Many real world graphs, such as the graphs of molecules, exhibit structure at multiple different scales, but most existing kernels between graphs are either purely local or purely global in character. In contrast, by building a hierarchy of nested subgraphs, the Multiscale Laplacian Graph kernels (MLG kernels) that we define in this paper can account for structure at a range of different scales. At the heart of the MLG construction is another new graph kernel, called the Feature Space Laplacian Graph kernel (FLG kernel), which has the property that it can lift a base kernel defined on the vertices of two graphs to a kernel between the graphs. The MLG kernel applies such FLG kernels to subgraphs recursively. To make the MLG kernel computationally feasible, we also introduce a randomized projection procedure, similar to the Nyström method, but for RKHS operators.

1 Introduction

There is a wide range of problems in applied machine learning, from web data mining [1] to protein function prediction [2], where the input space is a space of graphs. A particularly important application domain is chemoinformatics, where the graphs capture the structure of molecules. In the pharmaceutical industry, for example, machine learning algorithms are regularly used to screen candidate drug compounds for safety and efficacy against specific diseases [3].

Because kernel methods neatly separate the issue of data representation from the statistical learning component, it is natural to formulate graph learning problems in the kernel paradigm. Starting with [4], a number of different graph kernels have appeared in the literature (for an overview, see [5]). In general, a graph kernel $k(G_1, G_2)$ must satisfy the following requirements:
1. The kernel should capture the right notion of similarity between $G_1$ and $G_2$. For example, if $G_1$ and $G_2$ are social networks, then $k$ might capture to what extent their clustering structure, degree distribution, etc. match. If, on the other hand, $G_1$ and $G_2$ are molecules, then we are probably more interested in what functional groups are present, and how they are arranged relative to each other.
2. The kernel is usually computed from the adjacency matrices $A_1$ and $A_2$ of the two graphs, but it must be invariant to the ordering of the vertices. In other words, writing the kernel explicitly in terms of $A_1$ and $A_2$, we must have $k(A_1, A_2) = k(A_1, P A_2 P^\top)$ for any permutation matrix $P$ (a small numerical check of this identity is sketched below).

Permutation invariance has proved to be the central constraint around which much of the graph kernels literature is organized, effectively stipulating that graph kernels must be built out of graph invariants. Efficiently computable graph invariants offered by the mathematics literature tend to fall in one of two categories:
1. Local invariants, which can often be reduced to simply counting some local properties, such as the number of triangles, squares, etc. that appear in $G$ as subgraphs.
2. Spectral invariants, which can be expressed as functions of the eigenvalues of the adjacency matrix or the graph Laplacian.
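The permutation-invariance requirement above can be checked empirically for any candidate kernel. The following small sketch (our own illustration; it assumes the kernel is implemented as a function of the two adjacency matrices) draws random permutation matrices and tests the identity.

import numpy as np

def check_permutation_invariance(k, A1, A2, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    n = A2.shape[0]
    for _ in range(trials):
        P = np.eye(n)[rng.permutation(n)]  # random permutation matrix
        if not np.isclose(k(A1, A2), k(A1, P @ A2 @ P.T)):
            return False
    return True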
Correspondingly, while different graph kernels are motivated in very different ways, from random walks [4] through shortest paths [6, 7] to Fourier transforms on the symmetric group [8], most graph kernels in the literature ultimately reduce to computing a function of the two graphs that is either purely local or purely spectral. Any of the kernels based on the "subgraph counting" idea (e.g., [9]) are local. On the other hand, most of the random walk based kernels are reducible to a spectral form involving the eigenvalues of either the two graphs individually, or their Kronecker product [5], and therefore are really only sensitive to the large scale structure of graphs.

In practice, it would be desirable to have a kernel that can take structure into account at multiple different scales. A kernel between molecules, for example, should not only be sensitive to the overall large-scale shape of the graphs (whether they are more like a chain, a ring, a chain that branches, etc.), but also to what smaller structures (e.g., functional groups) are present in the graphs, and how they are related to the global structure (e.g., whether a particular functional group is towards the middle or one of the ends of a chain). For the most part, such a multiscale graph kernel has been missing from the literature. Two notable exceptions are the Weisfeiler–Lehman kernel [10] and the Propagation kernel [11]. The WL kernel uses a combination of message passing and hashing to build summaries of the local neighborhoods of vertices at different scales. While shown to be effective, the Weisfeiler–Lehman kernel's hashing step is somewhat ad hoc; perturbing the edges by a small amount leads to completely different hash features. Similarly, the propagation kernel monitors how the distribution of node/edge labels spreads through the graph and then uses locality sensitive hashing to efficiently bin the label distributions into feature vectors. Most recently, structure2vec [12] attempts to represent each graph with a latent variable model and then embeds them into a feature space, using the inner product as a kernel. This approach compares favorably to the standard kernel methods in both accuracy and computational efficiency.

In this paper we present a new graph kernel, the Multiscale Laplacian Graph Kernel (MLG kernel), which, we believe, is the first kernel in the literature that can truly compare structure in graphs simultaneously at multiple different scales. We begin by introducing the Feature Space Laplacian Graph Kernel (FLG kernel) in Section 2. The FLG kernel operates at a single scale, while combining information from the nodes' vertex features with topological information through its Laplacian. An important property of the FLG kernel is that it can work with vertex labels provided as a "base kernel" on the vertices, which allows us to apply the FLG kernel recursively. The MLG kernel, defined in Section 3, uses the FLG kernel's recursive property to build a hierarchy of subgraph kernels that are sensitive to both the topological relationships between individual vertices, and between subgraphs of increasing sizes. Each kernel is defined in terms of the preceding kernel in the hierarchy. Efficient computability is a major concern in our paper, and recursively defined kernels on combinatorial data structures can be very expensive.
Therefore, in Section 4 we describe a strategy based on a combination of linearizing each level of the kernel (relative to a given dataset) and a randomized low rank projection step, which reduces every stage of the kernel computation to simple operations involving small matrices, leading to a very fast algorithm. Finally, Section 5 presents experimental comparisons of our kernel with competing methods.

2 Laplacian Graph Kernels

Let $G$ be a weighted undirected graph with vertex set $V = \{v_1, \dots, v_n\}$ and edge set $E$. Recall that the graph Laplacian of $G$ is an $n \times n$ matrix $L_G$, with
$$ [L_G]_{i,j} = \begin{cases} -w_{i,j} & \text{if } \{v_i, v_j\} \in E, \\ \sum_{k \colon \{v_i, v_k\} \in E} w_{i,k} & \text{if } i = j, \\ 0 & \text{otherwise}, \end{cases} $$
where $w_{i,j}$ is the weight of edge $\{v_i, v_j\}$. The graph Laplacian is positive semi-definite, and in terms of the adjacency matrix $A$ and the weighted degree matrix $D$ it can be expressed as $L = D - A$.

Spectral graph theory tells us that the low eigenvalue eigenvectors of $L_G$ are informative about the overall shape of $G$. One way of seeing this is to note that for any vector $z \in \mathbb{R}^n$,
$$ z^\top L_G z = \sum_{\{i,j\} \in E} w_{i,j} (z_i - z_j)^2, $$
so the low eigenvalue eigenvectors are the smoothest functions on $G$, in the sense that they vary the least between adjacent vertices. An alternative interpretation emerges if we use $G$ to construct a Gaussian graphical model (Markov Random Field or MRF) over $n$ variables $x_1, \dots, x_n$ with clique potentials $\phi(x_i, x_j) = e^{-w_{i,j}(x_i - x_j)^2/2}$ for each edge and $\phi(x_i) = e^{-\eta x_i^2/2}$ for each vertex. The joint distribution of $x = (x_1, \dots, x_n)^\top$ is then
$$ p(x) \propto \Big( \prod_{\{v_i, v_j\} \in E} e^{-w_{i,j}(x_i - x_j)^2/2} \Big) \Big( \prod_{v_i \in V} e^{-\eta x_i^2/2} \Big) = e^{-x^\top (L_G + \eta I)\, x / 2}, \qquad (1) $$
showing that the covariance matrix of $x$ is $(L_G + \eta I)^{-1}$. Note that the $\eta$ factors were added to ensure that the distribution is normalizable, and $\eta$ is typically just a small constant "regularizer": $L_G$ actually has a zero eigenvalue eigenvector (namely the constant vector $n^{-1/2}(1, 1, \dots, 1)^\top$), so without adding $\eta I$ we would not be able to invert it. In the following we will call $L_G + \eta I$ the regularized Laplacian, and denote it simply by $L$.

Both the above views suggest that if we want to define a kernel between graphs that is sensitive to their overall shape, comparing the low eigenvalue eigenvectors of their Laplacians is a good place to start. Previous work by [13] also used the graph Laplacian for constructing a similarity function on graphs. Following the MRF route, given two graphs $G_1$ and $G_2$ of $n$ vertices, we can define the kernel between them to be a kernel between the corresponding distributions $p_1 = \mathcal{N}(0, L_1^{-1})$ and $p_2 = \mathcal{N}(0, L_2^{-1})$. Specifically, we will use the Bhattacharyya kernel [14]
$$ k(p_1, p_2) = \int \sqrt{p_1(x)}\, \sqrt{p_2(x)}\, dx, \qquad (2) $$
because for Gaussian distributions it can be computed in closed form, giving
$$ k(p_1, p_2) = \frac{\big| \big( \tfrac{1}{2} L_1 + \tfrac{1}{2} L_2 \big)^{-1} \big|^{1/2}}{|L_1^{-1}|^{1/4}\, |L_2^{-1}|^{1/4}}. $$
If some of the eigenvalues of $L_1^{-1}$ or $L_2^{-1}$ are zero or very close to zero, along certain directions in space the two distributions in (2) become very flat, leading to vanishingly small kernel values (unless the "flat" directions of the two Gaussians are perfectly aligned). To remedy this problem, similarly to [15], we "soften" (or regularize) the kernel by adding some small constant $\gamma$ times the identity to $L_1^{-1}$ and $L_2^{-1}$. This leads to what we call the Laplacian Graph Kernel.
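As a concrete illustration, the regularized Laplacian used throughout can be formed in a couple of lines. This is a minimal sketch with our own naming; A is the symmetric weighted adjacency matrix.

import numpy as np

def regularized_laplacian(A, eta=0.01):
    # L = D - A plus a small ridge eta*I so that the matrix is invertible;
    # without it the constant vector is a zero-eigenvalue eigenvector.
    D = np.diag(A.sum(axis=1))
    return D - A + eta * np.eye(A.shape[0])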
Definition 1. Let $G_1$ and $G_2$ be two graphs with $n$ vertices with (regularized) Laplacians $L_1$ and $L_2$, respectively. We define the Laplacian graph kernel (LG kernel) with parameter $\gamma$ between $G_1$ and $G_2$ as
$$ k_{LG}(G_1, G_2) = \frac{\big| \big( \tfrac{1}{2} S_1^{-1} + \tfrac{1}{2} S_2^{-1} \big)^{-1} \big|^{1/2}}{|S_1|^{1/4}\, |S_2|^{1/4}}, \qquad (3) $$
where $S_1 = L_1^{-1} + \gamma I$ and $S_2 = L_2^{-1} + \gamma I$.

By virtue of (2), the LG kernel is positive semi-definite, and because the value of the overlap integral is largely determined by the extent to which the subspaces spanned by the largest eigenvalue eigenvectors of $L_1^{-1}$ and $L_2^{-1}$ are aligned, it effectively captures similarity between the overall shapes of $G_1$ and $G_2$. However, the LG kernel does suffer from three major limitations: it assumes that both graphs have the same number of vertices, it is only sensitive to the overall structure of the two graphs, and it is not invariant to permuting the vertices. Our goal for the rest of this paper is to overcome each of these limitations, while retaining the LG kernel's attractive spectral interpretation.

2.1 The feature space Laplacian graph kernel (FLG kernel)

In the probabilistic view of the LG kernel, every graph generates random vectors $x = (x_1, \dots, x_n)^\top$ according to (1), and the kernel between two graphs is determined by comparing the corresponding distributions. The invariance problem arises because the ordering of the variables $x_1, \dots, x_n$ is arbitrary: even if $G_1$ and $G_2$ are topologically the same, $k_{LG}(G_1, G_2)$ might be low if their vertices happen to be numbered differently.

One of the central ideas of this paper is to address this issue by transforming from the "vertex space variables" $x_1, \dots, x_n$ to "feature space variables" $y_1, \dots, y_m$, where $y_i = \sum_j t_{i,j}(x_j)$, and each $t_{i,j}$ function may only depend on $j$ through local and reordering invariant properties of vertex $v_j$. If we then compute an analogous kernel to the LG kernel, but now between the distributions of the $y$'s rather than the $x$'s, the resulting kernel will be permutation invariant.

In the simplest case, the $t_{i,j}$ functions are linear, i.e., $t_{i,j}(x_j) = \phi_i(v_j) \cdot x_j$, where $(\phi_1, \dots, \phi_m)$ is a collection of $m$ local (and permutation invariant) vertex features. For example, $\phi_i(v_j)$ may be the degree of vertex $v_j$, or the value of $h_\beta(v_j, v_j)$, where $h$ is the diffusion kernel on $G$ with length scale parameter $\beta$ (c.f., [16]). In the chemoinformatics setting, the $\phi_i$'s might be some way of encoding what type of atom is located at vertex $v_j$. The linear transform of a multivariate normal random variable is multivariate normal. In our case, defining $U_{i,j} = \phi_i(v_j)$ and $y = Ux$, we have $\mathbb{E}(y) = 0$ and $\mathrm{Cov}(y, y) = U\, \mathrm{Cov}(x, x)\, U^\top = U L^{-1} U^\top$, leading to the following kernel, which is the workhorse of the present paper.

Definition 2. Let $G_1$ and $G_2$ be two graphs with regularized Laplacians $L_1$ and $L_2$, respectively, $\gamma \geq 0$ a parameter, and $(\phi_1, \dots, \phi_m)$ a collection of $m$ local vertex features. Define the corresponding feature mapping matrices
$$ [U_1]_{i,j} = \phi_i(v_j), \qquad [U_2]_{i,j} = \phi_i(v_j'), $$
where $v_j$ is the $j$'th vertex of $G_1$ and $v_j'$ is the $j$'th vertex of $G_2$. The corresponding Feature space Laplacian graph kernel (FLG kernel) is defined as
$$ k_{FLG}(G_1, G_2) = \frac{\big| \big( \tfrac{1}{2} S_1^{-1} + \tfrac{1}{2} S_2^{-1} \big)^{-1} \big|^{1/2}}{|S_1|^{1/4}\, |S_2|^{1/4}}, \qquad (4) $$
where $S_1 = U_1 L_1^{-1} U_1^\top + \gamma I$ and $S_2 = U_2 L_2^{-1} U_2^\top + \gamma I$.

Since the $\phi_1, \dots, \phi_m$ vertex features, by definition, are local and invariant to vertex renumbering, the FLG kernel is permutation invariant. Moreover, the distributions now live in the space of features rather than the space defined by the vertices, so we can apply the kernel to two graphs with different numbers of vertices.
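Definitions 1 and 2 translate directly into a few lines of linear algebra. The sketch below is our own and numerically naive (for graphs of realistic size the determinants should be handled in log-space, e.g. via slogdet); it evaluates eq. (4), and taking U1 and U2 to be identity matrices recovers the LG kernel of eq. (3).

import numpy as np

def flg_kernel(L1, L2, U1, U2, gamma=0.1):
    # S_i = U_i L_i^{-1} U_i^T + gamma*I as in eq. (4); rows of U_i index features.
    S1 = U1 @ np.linalg.inv(L1) @ U1.T + gamma * np.eye(U1.shape[0])
    S2 = U2 @ np.linalg.inv(L2) @ U2.T + gamma * np.eye(U2.shape[0])
    M = np.linalg.inv(0.5 * np.linalg.inv(S1) + 0.5 * np.linalg.inv(S2))
    return (np.linalg.det(M) ** 0.5 /
            (np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25))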
The major remaining shortcoming of the FLG kernel is that it cannot take into account structure at multiple different scales.

2.2 The "kernelized" FLG kernel

The key to boosting $k_{FLG}$ to a multiscale kernel is that it itself can be "kernelized", i.e., it can be computed from just the inner products between the feature vectors of the vertices (which we call the base kernel) without having to know the actual $\phi_i(v_j)$ feature values.

Definition 3. Given a collection $\phi = (\phi_1, \dots, \phi_m)^\top$ of local vertex features, we define the corresponding base kernel $\kappa$ between two vertices $v$ and $v'$ as the dot product of their feature vectors: $\kappa(v, v') = \phi(v)^\top \phi(v')$.

Note that in this definition $v$ and $v'$ may be two vertices of the same graph, or of two different graphs. We first show that, similarly to other kernel methods [17], to compute $k_{FLG}(G_1, G_2)$ one only needs to consider the subspace of $\mathbb{R}^m$ spanned by the feature vectors of their vertices.

Proposition 1. Let $G_1$ and $G_2$ be two graphs with vertex sets $V_1 = \{v_1, \dots, v_{n_1}\}$ and $V_2 = \{v_1', \dots, v_{n_2}'\}$, and let $\{\xi_1, \dots, \xi_p\}$ be an orthonormal basis for the subspace
$$ W = \mathrm{span}\{ \phi(v_1), \dots, \phi(v_{n_1}), \phi(v_1'), \dots, \phi(v_{n_2}') \}, $$
with $\dim(W) = p$. Then (4) can be rewritten as
$$ k_{FLG}(G_1, G_2) = \frac{\big| \big( \tfrac{1}{2} \bar{S}_1^{-1} + \tfrac{1}{2} \bar{S}_2^{-1} \big)^{-1} \big|^{1/2}}{|\bar{S}_1|^{1/4}\, |\bar{S}_2|^{1/4}}, \qquad (5) $$
where $[\bar{S}_1]_{i,j} = \xi_i^\top S_1 \xi_j$ and $[\bar{S}_2]_{i,j} = \xi_i^\top S_2 \xi_j$. In other words, $\bar{S}_1$ and $\bar{S}_2$ are the projections of $S_1$ and $S_2$ to $W$.

Similarly to kernel PCA [18] or the Bhattacharyya kernel [15], the easiest way to construct the basis $\{\xi_1, \dots, \xi_p\}$ required by (5) is to compute the eigendecomposition of the joint Gram matrix of the vertices of the two graphs.

Proposition 2. Let $G_1$ and $G_2$ be as in Proposition 1, let $\bar{V} = \{\bar{v}_1, \dots, \bar{v}_{n_1+n_2}\}$ be the union of their vertex sets (where it is assumed that the first $n_1$ vertices are $\{v_1, \dots, v_{n_1}\}$ and the second $n_2$ vertices are $\{v_1', \dots, v_{n_2}'\}$), and define the joint Gram matrix $K \in \mathbb{R}^{(n_1+n_2) \times (n_1+n_2)}$ as $K_{i,j} = \kappa(\bar{v}_i, \bar{v}_j) = \phi(\bar{v}_i)^\top \phi(\bar{v}_j)$. Let $u_1, \dots, u_p$ be a maximal orthonormal set of the non-zero eigenvalue eigenvectors of $K$ with corresponding eigenvalues $\lambda_1, \dots, \lambda_p$. Then the vectors
$$ \xi_i = \frac{1}{\sqrt{\lambda_i}} \sum_{\ell=1}^{n_1+n_2} [u_i]_\ell \, \phi(\bar{v}_\ell) \qquad (6) $$
form an orthonormal basis for $W$. Moreover, defining $Q = [\lambda_1^{1/2} u_1, \dots, \lambda_p^{1/2} u_p] \in \mathbb{R}^{(n_1+n_2) \times p}$ and setting $Q_1 = Q_{1:n_1,:}$ and $Q_2 = Q_{n_1+1:n_1+n_2,:}$ (the first $n_1$ and remaining $n_2$ rows of $Q$, respectively), the matrices $\bar{S}_1$ and $\bar{S}_2$ appearing in (5) can be computed as
$$ \bar{S}_1 = Q_1^\top L_1^{-1} Q_1 + \gamma I, \qquad \bar{S}_2 = Q_2^\top L_2^{-1} Q_2 + \gamma I. \qquad (7) $$

Proofs of these two propositions are given in the Supplemental Material. As in other kernel methods, the significance of Propositions 1 and 2 is not just that they show how $k_{FLG}(G_1, G_2)$ can be efficiently computed when $\phi$ is very high dimensional, but that they make it clear that the FLG kernel may be induced from any base kernel. For completeness, we close this section with the generalized definition of the FLG kernel.

Definition 4. Let $G_1$ and $G_2$ be two graphs. Assume that each of their vertices comes from an abstract vertex space $\mathcal{V}$ and that $\kappa \colon \mathcal{V} \times \mathcal{V} \to \mathbb{R}$ is a symmetric positive semi-definite kernel on $\mathcal{V}$. The generalized FLG kernel induced from $\kappa$ is then defined as
$$ k_{FLG}^{\kappa}(G_1, G_2) = \frac{\big| \big( \tfrac{1}{2} \bar{S}_1^{-1} + \tfrac{1}{2} \bar{S}_2^{-1} \big)^{-1} \big|^{1/2}}{|\bar{S}_1|^{1/4}\, |\bar{S}_2|^{1/4}}, \qquad (8) $$
where $\bar{S}_1$ and $\bar{S}_2$ are as defined in Proposition 2.
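Proposition 2 likewise admits a direct implementation of the base-kernel-induced FLG kernel. Below is a sketch under the same numerical caveats as before; K_joint is the joint (n1+n2) x (n1+n2) Gram matrix of the base kernel, with the first n1 rows belonging to G1.

import numpy as np

def kernelized_flg(L1, L2, K_joint, n1, gamma=0.1, tol=1e-10):
    lam, U = np.linalg.eigh(K_joint)
    keep = lam > tol                      # non-zero eigenvalue eigenvectors
    Q = U[:, keep] * np.sqrt(lam[keep])   # Q = [lam_1^{1/2} u_1, ...], eq. (6)
    Q1, Q2 = Q[:n1], Q[n1:]
    p = Q.shape[1]
    S1 = Q1.T @ np.linalg.inv(L1) @ Q1 + gamma * np.eye(p)  # eq. (7)
    S2 = Q2.T @ np.linalg.inv(L2) @ Q2 + gamma * np.eye(p)
    M = np.linalg.inv(0.5 * np.linalg.inv(S1) + 0.5 * np.linalg.inv(S2))
    return (np.linalg.det(M) ** 0.5 /
            (np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25))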
3 The multiscale Laplacian graph kernel (MLG kernel)

By a multiscale graph kernel we mean a kernel that is able to capture similarity between graphs not just based on the topological relationships between their individual vertices, but also the topological relationships between subgraphs. The key property of the FLG kernel that allows us to build such a kernel is that it can be applied recursively. In broad terms, the construction goes as follows:
1. Given a graph $G$, associate each vertex with a subgraph centered around it and compute the FLG kernel between every pair of these subgraphs.
2. Reinterpret the FLG kernel between these subgraphs as a new base kernel between the center vertices of the subgraphs.
3. Consider larger subgraphs centered at each vertex, compute the FLG kernel between them induced from the new base kernel constructed in the previous step, and recurse.

To compute the actual multiscale graph kernel $\mathcal{K}$ between $G$ and another graph $G'$, we follow the same process for $G'$ and then set $\mathcal{K}(G, G')$ equal to the FLG kernel induced from their top level base kernels. The following definitions formalize this construction.

Definition 5. Let $G$ be a graph with vertex set $V$, and $\kappa$ a positive semi-definite kernel on $V$. Assume that for each $v \in V$ we have a nested sequence of $L$ neighborhoods
$$ v \in N_1(v) \subseteq N_2(v) \subseteq \dots \subseteq N_L(v) \subseteq V, \qquad (9) $$
and for each $N_\ell(v)$, let $G_\ell(v)$ be the corresponding induced subgraph of $G$. We define the Multiscale Laplacian Subgraph Kernels (MLS kernels) $K_1, \dots, K_L \colon V \times V \to \mathbb{R}$ as follows:
1. $K_1$ is the FLG kernel $k_{FLG}^{\kappa}$ induced from the base kernel $\kappa$ between the lowest level subgraphs: $K_1(v, v') = k_{FLG}^{\kappa}(G_1(v), G_1(v'))$.
2. For $\ell = 2, 3, \dots, L$, $K_\ell$ is the FLG kernel induced from $K_{\ell-1}$ between $G_\ell(v)$ and $G_\ell(v')$: $K_\ell(v, v') = k_{FLG}^{K_{\ell-1}}(G_\ell(v), G_\ell(v'))$.

Definition 5 defines the MLS kernel as a kernel between different subgraphs of the same graph $G$. However, if two graphs $G_1$ and $G_2$ share the same base kernel, the MLS kernel can also be used to compare any subgraph of $G_1$ with any subgraph of $G_2$. This is what allows us to define an $(L+1)$'th FLG kernel, which compares the two full graphs.

Definition 6. Let $\mathcal{G}$ be a collection of graphs such that all their vertices are members of an abstract vertex space $\mathcal{V}$ endowed with a symmetric positive semi-definite kernel $\kappa \colon \mathcal{V} \times \mathcal{V} \to \mathbb{R}$. Assume that the MLS kernels $K_1, \dots, K_L$ are defined as in Definition 5, both for pairs of subgraphs within the same graph and across pairs of different graphs. We define the Multiscale Laplacian Graph Kernel (MLG kernel) between any two graphs $G_1, G_2 \in \mathcal{G}$ as $\mathcal{K}(G_1, G_2) = k_{FLG}^{K_L}(G_1, G_2)$.

Definition 5 leaves open the question of how the neighborhoods $N_1(v), \dots, N_L(v)$ are to be defined. In the simplest case, we set $N_\ell(v)$ to be the ball $B_r(v)$ (i.e., the set of vertices at a distance at most $r$ from $v$), where $r = r_0 d^{\ell-1}$ for some $d > 1$ (a small sketch of this construction is given at the end of this section).

3.1 Computational complexity

Definitions 5 and 6 suggest a recursive approach to computing the MLG kernel: computing $\mathcal{K}(G_1, G_2)$ first requires computing $K_L(v, v')$ between all $\binom{n_1+n_2}{2}$ pairs of top level subgraphs across $G_1$ and $G_2$; each of these kernel evaluations requires computing $K_{L-1}(v, v')$ between up to $O(n^2)$ level $L-1$ subgraphs, and so on. Following this recursion blindly would require up to $O(n^{2L+2})$ kernel evaluations, which is clearly infeasible. The recursive strategy is wasteful because it involves evaluating the same kernel entries over and over again in different parts of the recursion tree.
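Returning to the neighborhoods left open by Definition 5, the ball-based choice can be instantiated with a breadth-first search. The sketch below is our own illustration and assumes the networkx library for shortest-path distances.

import networkx as nx

def nested_neighborhoods(G, v, L, r0=1, d=2):
    # N_l(v) = B_r(v) with radius r = r0 * d**(l-1), a nested sequence of L sets.
    return [set(nx.single_source_shortest_path_length(G, v, cutoff=r0 * d ** (l - 1)))
            for l in range(1, L + 1)]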
An alternative solution that requires only $O(Ln^2)$ kernel evaluations would be to first compute $K_1(v, v')$ for all $(v, v')$ pairs, then compute $K_2(v, v')$ for all $(v, v')$ pairs, and so on.

4 Linearized Kernels and Low Rank Approximation

Computing the MLG kernel between two graphs, as described in the previous section, may involve $O(Ln^2)$ kernel evaluations. At the top levels of the hierarchy each $G_\ell(v)$ might have $\Theta(n)$ vertices, so the cost of a single FLG kernel evaluation can be as high as $O(n^3)$. Somewhat pessimistically, this means that the overall cost of computing $k_{FLG}(G_1, G_2)$ is $O(Ln^5)$. Given a dataset of $M$ graphs, computing their Gram matrix requires repeating this for all $\{G_1, G_2\}$ pairs, giving $O(LM^2 n^5)$, which is even more problematic. The solution that we propose in this section is to compute, for each level $\ell = 1, 2, \dots, L+1$, a single joint basis for all subgraphs at the given level across all graphs $G_1, \dots, G_M$. For concreteness, we go back to the definition of the FLG kernel.

Definition 7. Let $\mathcal{G} = \{G_1, \dots, G_M\}$ be a collection of graphs, $V_1, \dots, V_M$ their vertex sets, and assume that $V_1, \dots, V_M \subseteq \mathcal{V}$ for some general vertex space $\mathcal{V}$. Further, assume that $\kappa \colon \mathcal{V} \times \mathcal{V} \to \mathbb{R}$ is a positive semi-definite kernel on $\mathcal{V}$, $\mathcal{H}_\kappa$ is its Reproducing Kernel Hilbert Space, and $\phi \colon \mathcal{V} \to \mathcal{H}_\kappa$ is the corresponding feature map satisfying $\kappa(v, v') = \langle \phi(v), \phi(v') \rangle$ for any $v, v' \in \mathcal{V}$. The joint vertex feature space of $\{G_1, \dots, G_M\}$ is then $W_{\mathcal{G}} = \mathrm{span}\big( \bigcup_{i=1}^{M} \bigcup_{v \in V_i} \{\phi(v)\} \big)$.

$W_{\mathcal{G}}$ is just the generalization of the $W$ space defined in Proposition 1 from two graphs to $M$. The following generalization of Propositions 1 and 2 is then immediate.

Proposition 3. Let $N = \sum_{i=1}^{M} |V_i|$, let $\bar{V} = (\bar{v}_1, \dots, \bar{v}_N)$ be the concatenation of the vertex sets $V_1, \dots, V_M$, and let $K$ be the corresponding joint Gram matrix $K_{i,j} = \kappa(\bar{v}_i, \bar{v}_j) = \langle \phi(\bar{v}_i), \phi(\bar{v}_j) \rangle$. Let $u_1, \dots, u_P$ be a maximal orthonormal set of non-zero eigenvalue eigenvectors of $K$ with corresponding eigenvalues $\lambda_1, \dots, \lambda_P$, where $P = \dim(W_{\mathcal{G}})$. Then the vectors
$$ \xi_i = \frac{1}{\sqrt{\lambda_i}} \sum_{\ell=1}^{N} [u_i]_\ell \, \phi(\bar{v}_\ell), \qquad i = 1, \dots, P, $$
After linearizing the base kernel ?, we attach explicit, finite dimensional vectors to each vertex of each graph. K1 Then we compute compute kFLG between all pairs of lowest level subgraphs, and linearizing this kernel as well, each vertex effectively just gets an updated feature vector. Then we repeat the process KL K2 for kFLG . . . kFLG , and finally we compute the MLG kernel K(G1 , G2 ). 4.1 Randomized low rank approximation The difficulty in the above approach of course is that at each level (3) is a Gram matrix between all vertices of all graphs, so storing it is already very costly, let along computing its eigendecomposition. Morever, P = dim(WG ) is also very large, so managing the S 1 , . . . , S M matrices (each of which is of size P?P ) becomes infeasible. The natural alternative is to replace WG by a smaller, approximate joint features space, defined as follows. ? N Definition 8. Let G, ?, H? and ? be defined as in Definition 7. Let V? = (? v1 , . . . , v? ? ) be N N vertices sampled from the joint vertex set V = (v 1 , . . . , v N ). Then the corresponding subsampled vertex feature space is ? G = span{ ?(? W v ) | v? ? V? }. ? G ). Similarly to before, we construct an orthonormal basis {?1 , . . . , ? ? } for W ?G Let P? = dim(W P ? by forming the (now much smaller) Gram matrix Ki,j = ?(? vi , v?j ), computing its eigenvalues and PN? 1 ? v` ). The resulting approximate FLG kernel is eigenvectors, and setting ?i = ? `=1 [ui ]` ?(? i kFLG (Gi , Gj ) =  1 ??1 1 ??1 ?1 1/2 2 Si + 2 Sj , | S?i |1/4 | S?j |1/4 (11) ? > L?1 Q ? i + ?I and S?j = Q ? > L?1 Q ? j + ?I are the projections of S i and S j to W ? G. where S?i = Q i j j i ? We introduce a further layer of approximation by restricting WG to be the space spanned by the first P? < P? basis vectors (ordered by descending eigenvalue), effectively doing kernel PCA on ? {?(? v )}v??V? , equivalently, a low rank approximation of K. ?s Assuming that vjg is the j?th vertex of Gg , in contrast to Proposition 2, now the j?th row of Q g ? consists of the coordinates of the projection of ?(vj ) onto WG , i.e., ? ? N N 1 X 1 X g g ? [Q ]j,i = ? [ui ]` ?(vj ), ?(? v` ) = ? [ui ]` ?(vjg , v?` ). ?i `=1 ?i `=1 The above procedure is similar to the popular Nystr?om approximation for kernel matrices [19, 20], except that in our case the ultimate goal is not to approximate the Gram matrix (3), but the 7 Table 1: Classification Results (Mean Accuracy ? Standard Deviation) Method WL WL-Edge SP Graphlet p?RW MLG MUTAG[22] 84.50(?2.16) 82.94(?2.33) 85.50(?2.50) 82.44(?1.29) 80.33(?1.35) 84.21(?2.61) PTC[23] 59.97(?1.60) 60.18(?2.19) 59.53(?1.71) 55.88(?0.31) 59.85(?0.95) 63.62(?4.69) ENZYMES[2] 53.75(?1.37) 52.00(?0.72) 42.31(?1.37) 30.95(?0.73) 28.17(?0.76) 57.92(?5.39) PROTEINS[2] 75.43(?1.95) 73.63(?2.12) 75.61(?0.45) 71.63(?0.33) 71.67(?0.78) 76.14(?1.95) NCI1[24] 84.76(?0.32) 84.65(?0.25) 73.61(?0.36) 62.40(?0.27) TIMED OUT 80.83(?1.29) NCI109[24] 85.12(?0.29) 85.32(?0.34) 73.23(?0.26) 62.35(?0.28) TIMED OUT 81.30(?0.80) S1 , . . . , SM matrices used to form the FLG kernel. In practice, we found that the eigenvalues of K usually drop off very rapidly, suggesting that W can be safely approximated by a surprisingly ? can be kept quite small dimensional subspace (P? ? 10), and correspondingly the sample size N small as well (on the order of 100). The combination of these two factors makes computing the entire stack of kernels feasible, reducing the complexity of computing the Gram matrix for a dataset ? 2 P? 3 + M LN ? 3 + M 2 P? 3 ). 
It is also important to note that this linearization step requires the graphs (not the labels) in the test set to be known during training, in order to project the features of the test graphs onto the low rank approximation of $\tilde{W}_{\mathcal{G}}$.

5 Experiments

We tested the efficacy of the MLG kernel by performing classification on benchmark bioinformatics datasets using a binary C-SVM solver [21], and compared our classification results against those from other representative graph kernels from the literature: the Weisfeiler–Lehman kernel, the Weisfeiler–Lehman Edge kernel [9], the Shortest Path kernel [6], the Graphlet kernel [9], and the p-random Walk kernel [5]. We randomly selected 20% of each dataset to be used as a test set. On the other 80% we did 10-fold cross validation to select the parameters for each kernel method to be used on the test set, and repeated this setup 10 times. For the Weisfeiler–Lehman kernels, the height parameter $h$ was chosen from {1, 2, ..., 5}; the random walk size $p$ for the p-random walk kernel was chosen from {1, 2, ..., 5}; for the Graphlet kernel the graphlet size $n$ was chosen from {3, 4, 5}. For the parameters of the MLG kernel, we chose $\eta$ from {0.01, 0.1, 1}, the radius size $n$ from {1, 2, 3}, and the number of levels $\ell$ from {1, 2, 3}, and fixed $\gamma$ to be 0.01. For the MLG kernel, we used the given discrete node labels to create a one-hot binary feature vector for each node, and used the dot product between the nodes' binary feature vectors as the base kernel. All experiments were done on a 16 core Intel E5-2670 @ 2.6GHz processor with 32 GB of memory. We are fairly competitive in accuracy on all datasets except NCI1 and NCI109, where the MLG kernel performs better than all non-Weisfeiler–Lehman kernels. The Supplementary Materials give a more detailed discussion of the experiments and datasets.

Table 1: Classification Results (Mean Accuracy ± Standard Deviation)

Method    MUTAG [22]     PTC [23]       ENZYMES [2]    PROTEINS [2]   NCI1 [24]      NCI109 [24]
WL        84.50 (±2.16)  59.97 (±1.60)  53.75 (±1.37)  75.43 (±1.95)  84.76 (±0.32)  85.12 (±0.29)
WL-Edge   82.94 (±2.33)  60.18 (±2.19)  52.00 (±0.72)  73.63 (±2.12)  84.65 (±0.25)  85.32 (±0.34)
SP        85.50 (±2.50)  59.53 (±1.71)  42.31 (±1.37)  75.61 (±0.45)  73.61 (±0.36)  73.23 (±0.26)
Graphlet  82.44 (±1.29)  55.88 (±0.31)  30.95 (±0.73)  71.63 (±0.33)  62.40 (±0.27)  62.35 (±0.28)
p-RW      80.33 (±1.35)  59.85 (±0.95)  28.17 (±0.76)  71.67 (±0.78)  TIMED OUT      TIMED OUT
MLG       84.21 (±2.61)  63.62 (±4.69)  57.92 (±5.39)  76.14 (±1.95)  80.83 (±1.29)  81.30 (±0.80)

6 Conclusions

In this paper we have proposed two new graph kernels: (1) the FLG kernel, which is a very simple single level kernel that combines information attached to the vertices with the graph Laplacian; (2) the MLG kernel, which is a multilevel, recursively defined kernel that captures topological relationships between not just individual vertices, but also subgraphs. Clearly, designing kernels that can optimally take into account the multiscale structure of actual chemical compounds is a challenging task that will require further work and domain knowledge. However, it is encouraging that even just "straight out of the box", tuning only two or three parameters, the MLG kernel is competitive with other well known kernels in the literature. Beyond just graphs, the general idea of multiscale kernels is of interest for other types of data as well (such as images) that have multiresolution structure, and the way that the MLG kernel chains together local spectral analysis at multiple scales is potentially applicable to these domains as well, which will be the subject of further research.

Acknowledgements

This work was completed in part with computing resources provided by the University of Chicago Research Computing Center and with the support of DARPA-D16AP00112 and NSF-1320344.

References
[1] Akihiro Inokuchi, Takashi Washio, and Hiroshi Motoda. Complete mining of frequent patterns from graphs: Mining graph data. Machine Learning, 50(3):321–354, 2003.
[2] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels.
In Proceedings of Intelligent Systems in Molecular Biology (ISMB), Detroit, USA, 2005.
[3] H. Kubinyi. Drug research: myths, hype and reality. Nature Reviews: Drug Discovery, 2(8):665–668, August 2003.
[4] T. Gärtner. Exponential and geometric kernels for graphs. In NIPS*02 workshop on unreal data, volume Principles of modeling nonvectorial data, 2002.
[5] S. V. N. Vishwanathan, Karsten Borgwardt, Risi Kondor, and Nicol Schraudolph. On graph kernels. Journal of Machine Learning Research (JMLR), 11, 2010.
[6] Karsten M. Borgwardt and Hans Peter Kriegel. Shortest-path kernels on graphs. In Proceedings of the 5th IEEE International Conference on Data Mining (ICDM 2005), 27-30 November 2005, Houston, Texas, USA, pages 74–81, 2005.
[7] Aasa Feragen, Niklas Kasenburg, Jens Petersen, Marleen de Bruijne, and Karsten M. Borgwardt. Scalable kernels for graphs with continuous attributes. In Advances in Neural Information Processing Systems, 2013.
[8] Risi Kondor and Karsten Borgwardt. The skew spectrum of graphs. In Proceedings of the International Conference on Machine Learning (ICML), pages 496–503. ACM, 2008.
[9] Nino Shervashidze, S. V. N. Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten M. Borgwardt. Efficient graphlet kernels for large graph comparison. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, AISTATS, pages 488–495, 2009.
[10] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research (JMLR), 12:2539–2561, November 2011.
[11] Marion Neumann, Roman Garnett, Christian Bauckhage, and Kristian Kersting. Propagation kernels: efficient graph kernels from propagated information. In Machine Learning, 2016.
[12] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
[13] Fredrik D. Johansson and Devdatt Dubhashi. Learning with similarity functions on graphs using matchings of geometric embeddings. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 467–476, 2015.
[14] Tony Jebara and Risi Kondor. Bhattacharyya and expected likelihood kernels. In Proceedings of the Annual Conference on Computational Learning Theory and Kernels Workshop (COLT/KW), 2003.
[15] Risi Kondor and Tony Jebara. A kernel between sets of vectors. In Proceedings of the International Conference on Machine Learning (ICML), 2003.
[16] Marc Alexa, Michael Kazhdan, and Leonidas Guibas. A Concise and Provably Informative Multi-Scale Signature Based on Heat Diffusion. In Proceedings of the Eurographics Symposium on Geometry Processing, volume 28, 2009.
[17] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. MIT Press, 2002.
[18] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, Matthias Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems 11, pages 536–542, 1999.
[19] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems (NIPS), 2001.
[20] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[21] Chih-Chung Chang and Chih-Jen Lin.
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 3, 2011.
[22] A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem, 34:786–97, 1991.
[23] H. Toivonen, A. Srinivasan, R. D. King, S. Kramer, and C. Helma. Statistical evaluation of the predictive toxicology challenge. Bioinformatics, pages 1183–1193, 2003.
[24] N. Wale, I. A. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, pages 347–375, 2008.
Learning Bound for Parameter Transfer Learning

Wataru Kumagai
Faculty of Engineering, Kanagawa University
kumagai@kanagawa-u.ac.jp

Abstract

We consider a transfer-learning problem by using the parameter transfer approach, where a suitable parameter of feature mapping is learned through one task and applied to another objective task. Then, we introduce the notions of the local stability and parameter transfer learnability of parametric feature mapping, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, their theoretical analysis has not been studied. In this paper, we also provide the first theoretical learning bound for self-taught learning.

1 Introduction

In traditional machine learning, it is assumed that data are identically drawn from a single distribution. However, this assumption does not always hold in real-world applications. It is therefore important to develop methods capable of incorporating samples drawn from different distributions. Transfer learning provides a general way to accommodate these situations. In transfer learning, besides the relatively few samples related to the objective task, abundant samples from other domains, not necessarily drawn from an identical distribution, are available. Transfer learning then aims at extracting useful knowledge from data in other domains and applying that knowledge to improve the performance of the objective task. According to the kind of knowledge that is transferred, approaches to transfer-learning problems can be classified into cases such as instance transfer, feature representation transfer, and parameter transfer (Pan and Yang (2010)). In this paper, we consider the parameter transfer approach, where some kind of parametric model is supposed and the transferred knowledge is encoded into parameters. Since the parameter transfer approach typically requires many samples to accurately learn a suitable parameter, unsupervised methods are often utilized for the learning process. In particular, transfer learning from unlabeled data for predictive tasks is known as self-taught learning (Raina et al. (2007)), where no joint generative model is assumed to underlie the unlabeled samples, even though those samples should be indicative of a structure that would subsequently be helpful in prediction tasks. In recent years, self-taught learning has been intensively studied, encouraged by the development of strong unsupervised methods. Furthermore, sparsity-based methods such as sparse coding or sparse neural networks have often been used in empirical studies of self-taught learning. Although many algorithms based on the parameter transfer approach have empirically demonstrated impressive performance in self-taught learning, some fundamental problems remain. First, the theoretical aspects of the parameter transfer approach have not been studied, and in particular, no learning bound was obtained. Second, although it is believed that a large amount of unlabeled data helps to improve the performance of the objective task in self-taught learning, it has not been sufficiently clarified how many samples are required.
Third, although sparsity-based methods are typically employed in self-taught learning, it is unknown how the sparsity works to guarantee the performance of self-taught learning. The aim of the research presented in this paper is to shed light on the above problems. We first consider a general model of parametric feature mapping in the parameter transfer approach. Then, we newly formulate the local stability of parametric feature mapping and the parameter transfer learnability for this mapping, and provide a theoretical learning bound for parameter transfer learning algorithms based on the notions. Next, we consider the stability of sparse coding. Then we discuss the parameter transfer learnability by dictionary learning under the sparse model. Applying the learning bound for parameter transfer learning algorithms, we provide a learning bound of the sparse coding algorithm in self-taught learning. This paper is organized as follows. In the remainder of this section, we refer to some related studies. In Section 2, we formulate the stability and the parameter transfer learnability of the parametric feature mapping. Then, we present a learning bound for parameter transfer learning. In Section 3, we show the stability of the sparse coding under perturbation of the dictionaries. Then, by imposing sparsity assumptions on samples and by considering dictionary learning, we derive the parameter transfer learnability for sparse coding. In particular, a learning bound is obtained for sparse coding in the setting of self-taught learning. In Section 4, we conclude the paper. 1.1 Related Works Approaches to transfer learning can be classified into some cases based on the kind of knowledge being transferred (Pan and Yang (2010)). In this paper, we consider the parameter transfer approach. This approach can be applied to various notable algorithms such as sparse coding, multiple kernel learning, and deep learning since the dictionary, weights on kernels, and weights on the neural network are regarded as parameters, respectively. Then, those parameters are typically trained or tuned on samples that are not necessarily drawn from a target region. In the parameter transfer setting, a number of samples in the source region are often needed to accurately estimate the parameter to be transferred. Thus, it is desirable to be able to use unlabeled samples in the source region. Self-taught learning corresponds to the case where only unlabeled samples are given in the source region while labeled samples are available in the target domain. In this sense, self-taught learning is compatible with the parameter transfer approach. Actually, in Raina et al. (2007) where self-taught learning was first introduced, the sparse coding-based method is employed and the parameter transfer approach is already used regarding the dictionary learnt from images as the parameter to be transferred. Although self-taught learning has been studied in various contexts (Dai et al. (2008); Lee et al. (2009); Wang et al. (2013); Zhu et al. (2013)), its theoretical aspects have not been sufficiently analyzed. One of the main results in this paper is to provide a first theoretical learning bound in self-taught learning with the parameter transfer approach. We note that our setting differs from the environment-based setting (Baxter (2000), Maurer (2009)), where a distribution on distributions on labeled samples, known as an environment, is assumed. 
In our formulation, the existence of the environment is not assumed and labeled data in the source region are not required.

Self-taught learning algorithms are often based on sparse coding. In the seminal paper by Raina et al. (2007), an algorithm was already proposed that learns a dictionary in the source region and transfers it to the target region, and the effectiveness of this sparse coding-based method was demonstrated. Moreover, since remarkable progress has been made in unsupervised learning based on sparse neural networks (Coates et al. (2011), Le (2013)), unlabeled samples of the source domain in self-taught learning are often preprocessed by sparsity-based methods. Recently, sparse coding-based generalization bounds were studied (Mehta and Gray (2013); Maurer et al. (2012)), and the analysis in Section 3.1 is based on Mehta and Gray (2013).

2 Learning Bound for Parameter Transfer Learning

2.1 Problem Setting of Parameter Transfer Learning

We formulate parameter transfer learning in this subsection. We first briefly introduce notation and terminology in transfer learning (Pan and Yang (2010)). Let $\mathcal{X}$ and $\mathcal{Y}$ be a sample space and a label space, respectively. We refer to the pair of $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$ and a joint distribution $P(x, y)$ on $\mathcal{Z}$ as a region. A domain comprises the pair of the sample space $\mathcal{X}$ and the marginal distribution $P(x)$ on $\mathcal{X}$, and a task comprises the pair of the label set $\mathcal{Y}$ and the conditional distribution $P(y|x)$. In addition, let $\mathcal{H} = \{h : \mathcal{X} \to \mathcal{Y}\}$ be a hypothesis space and $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\ge 0}$ a loss function. The expected risk and the empirical risk are then defined by
$$R(h) := \mathbb{E}_{(x,y)\sim P}[\ell(y, h(x))] \quad \text{and} \quad \widehat{R}_n(h) := \frac{1}{n}\sum_{j=1}^{n} \ell(y_j, h(x_j)),$$
respectively. In the setting of transfer learning, besides samples from a region of interest known as the target region, it is assumed that samples from another region known as the source region are also available. We distinguish between the target and source regions by adding a subscript $\mathcal{T}$ or $\mathcal{S}$ to each notation introduced above (e.g. $P_{\mathcal{T}}$, $R_{\mathcal{S}}$). The homogeneous setting (i.e., $\mathcal{X}_{\mathcal{S}} = \mathcal{X}_{\mathcal{T}}$) is not assumed in general, and thus the heterogeneous setting (i.e., $\mathcal{X}_{\mathcal{S}} \neq \mathcal{X}_{\mathcal{T}}$) can be treated. We note that self-taught learning, which is treated in Section 3, corresponds to the case where the label space $\mathcal{Y}_{\mathcal{S}}$ in the source region is a singleton.

We consider the parameter transfer approach, where the knowledge to be transferred is encoded into a parameter. The parameter transfer approach aims to learn a hypothesis with low expected risk for the target task by obtaining some knowledge about an effective parameter in the source region and transferring it to the target region. In this paper, we suppose that there are parametric models on both the source and target regions and that their parameter spaces are partly shared. Our strategy is then to learn an effective parameter in the source region and transfer a part of that parameter to the target region. We describe the formulation in the following.

In the target region, we assume that $\mathcal{Y}_{\mathcal{T}} \subset \mathbb{R}$ and that there is a parametric feature mapping $\psi_\theta : \mathcal{X}_{\mathcal{T}} \to \mathbb{R}^m$ on the target domain such that each hypothesis $h_{\mathcal{T},\theta,w} : \mathcal{X}_{\mathcal{T}} \to \mathcal{Y}_{\mathcal{T}}$ is represented by
$$h_{\mathcal{T},\theta,w}(x) := \langle w, \psi_\theta(x) \rangle \qquad (1)$$
with parameters $\theta \in \Theta$ and $w \in \mathcal{W}_{\mathcal{T}}$, where $\Theta$ is a subset of a normed space with norm $\|\cdot\|$ and $\mathcal{W}_{\mathcal{T}}$ is a subset of $\mathbb{R}^m$. The hypothesis set in the target region is thus parameterized as $\mathcal{H}_{\mathcal{T}} = \{h_{\mathcal{T},\theta,w} \mid \theta \in \Theta,\ w \in \mathcal{W}_{\mathcal{T}}\}$.
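For concreteness, the hypothesis class (1) and the regularized empirical risk minimized by the transfer algorithm described next can be sketched as follows. This is a minimal Python illustration under our own choices (a generic feature map passed as a callable, and $r(w) = \frac{1}{2}\|w\|_2^2$, which is 1-strongly convex with respect to $\|\cdot\|_2$); none of the names come from the paper.

```python
import numpy as np

def hypothesis(w, feature_map, theta, x):
    """Target-region hypothesis h_{T,theta,w}(x) = <w, psi_theta(x)> from (1)."""
    return feature_map(theta, x) @ w

def regularized_empirical_risk(w, feature_map, theta, X, y, lam, loss):
    """(1/n) * sum_j loss(y_j, h(x_j)) + lam * r(w), with r(w) = 0.5*||w||_2^2."""
    preds = np.array([hypothesis(w, feature_map, theta, x) for x in X])
    data_term = np.mean([loss(yj, pj) for yj, pj in zip(y, preds)])
    return data_term + lam * 0.5 * (w @ w)
```

The two-stage algorithm below first fixes $\theta$ from the source samples and then minimizes this objective over $w$ on the $n$ target samples.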
b T (hT ,?,w ) by RT (?, w) and R b T (?, w), In the following, we simply denote RT (hT ,?,w ) and R respectively. In the source region, we suppose that there exists some kind of parametric model such as a sample distribution PS,?,w or a hypothesis hS,?,w with parameters ? ? ? and w ? WS , and a part ? of the parameter space is shared with the target region. Then, let ? ?S ? ? and wS? ? WS be parameters that are supposed to be effective in the source region (e.g., the true parameter of the sample distribution, the parameter of the optimal hypothesis with respect to the expected risk RS ); however, explicit assumptions are not imposed on the parameters. Then, the parameter transfer algorithm treated in this paper is described as follows. Let N - and n-samples be available in the source and target regions, respectively. First, a parameter transfer algorithm outputs the estimator bN ? ? of ? ? by using N -samples. Next, for the parameter ? S wT? := argmin RT (? ?S , w) w?WT in the target region, the algorithm outputs its estimator b N,n w bN , w) + ?r(w) bT ,n (? := argmin R w?WT by using n-samples, where r(w) is a 1-strongly convex function with respect to ? ? ?2 and ? > 0. If the source region relates to the target region in some sense, the effective parameter ? ?S in the source region is expected to also be useful for the target task. In the next subsection, we regard RT (? ?S , wT? ) as the baseline of predictive performance and derive a learning bound. 2.2 Learning Bound Based on Stability and Learnability We newly introduce the local stability and the parameter transfer learnability as below. These notions are essential to derive a learning bound in Theorem 1. Definition 1 (Local Stability). A parametric feature mapping ?? is said to be locally stable if there exist ?? : X ? R>0 for each ? ? ? and L? > 0 such that for ? ? ? ? ?? ? ? ? ? ? ?? (x) ? ??? (x) ? ??? (x)?2 ? L? ?? ? ? ? ?. We term ?? (x) the permissible radius of perturbation for ? at x. For samples Xn = {x1 , . . . xn }, we denote as ?? (Xn ) := minj?[n] ?? (xj ), where [n] := {1, . . . , n} for a positive integer n. Next, we formulate the parameter transfer learnability based on the local stability. 3 Definition 2 (Parameter Transfer Learnability). Suppose that N -samples in the source domain and n-samples Xn in the target domain are available. Let a parametric feature mapping {?? }??? be locally stable. For ?? ? [0, 1), {?? }??? is said to be parameter transfer learnable with probability 1 ? ?? if there exists an algorithm that depends only on N -samples in the source domain such that, bN of the algorithm satisfies the output ? [ ] bN ? ? ? ? ? ??? (Xn ) ? 1 ? ?. ? Pr ?? S S In the following, we assume that parametric feature mapping is bounded as ??? (x)?2 ? R? for arbitrary x ? X and ? ? ? and linear predictors are also bounded as ?w?2 ? RW for any w ? W. In addition, we suppose that a loss function ?(?, ?) is L? -Lipschitz and convex with respect to the second variable. We denote as Rr := supw?W |r(w)|. Then, the following learning bound is obtained, where the strong convexity of the regularization term ?r(w) is essential. Theorem 1 (Learning Bound). Suppose that the parametric feature mapping ?? is locally stable bN learned in the source region satisfies the parameter transfer learnability with and an estimator ? ? ? When ? = L? R? 8(32+log(2/?)) , the following inequality holds with probability probability 1 ? ?. Rr n ? 1 ? (? + 2?): ( ) bN , w b N,n ? RT (? ?S , wT? ) RT ? ( ) 1 ? ? b ? ? L? R? RW 2 log(2/?) 
+ 2 2Rr (32 + log(2/?)) ? + L? L? R? ? N ? ?S n ? ( ) 14 ? 1 Rr b ? n 4 ? +L? L? RW R? (2) N ? ? S . 2(32 + log(2/?)) bN ? ? ? ? can be evaluated in terms of the number N of samples, Theorem If the estimation error ?? S 1 clarifies which term is dominant, and in particular, the number of samples required in the source domain such that this number is sufficiently large compared to the samples in the target domain. 2.3 Proof of Learning Bound We prove Theorem 1 in this subsection. In this proof, we omit the subscript T for simplicity. In addition, we denote ? ?S simply by ? ? . We set as 1? ?(yj , ?w, ??? (xj )?) + ?r(w). n j=1 n b n? w := argmin w?W Then, we have ( ) bN , w b N,n ? RT (? ? , w? ) RT ? [ ] b N,n , ??bN (x)?) ? E(x,y)?P [?(y, ?w b N,n , ??? (x)?)] = E(x,y)?P ?(y, ?w b N,n , ??? (x)?)] ? E(x,y)?P [?(y, ?w b n? , ??? (x)?)] +E(x,y)?P [?(y, ?w b n? , ??? (x)?)] ? E(x,y)?P [?(y, ?w? , ??? (x)?)] . +E(x,y)?P [?(y, ?w (3) In the following, we bound three parts of (3). First, we have the following inequality with probability ? 1 ? (?/2 + ?): [ ] b N,n , ??bN (x)?) ? E(x,y)?P [?(y, ?w b N,n , ??? (x)?)] E(x,y)?P ?(y, ?w ] [ ? L? RW E(x,y)?P ??bN (x) ? ??? (x) ? n 1 ? 2 log(2/?) ? L? RW ??bN (xj ) ? ??? (xj ) + L? RW R? n j=1 n ? 2 log(2/?) b ? ? L? L? RW ? , N ? ? + L? RW R? n 4 where we used Hoeffding?s inequality as the third inequality, and the local stability and parameter transfer learnability in the last inequality. Second, we have the following inequality with probability ? 1 ? ?: ? ? ? b N,n , ??? (x)?)] ? E(x,y)?P [?(y, ?w b n? , ??? (x)?)] E(x,y)?P [?(y, ?w b N,n , ??? (x)? ? ?w b n? , ??? (x)?|] L? E(x,y)?P [|?w b N,n ? w b n? ?2 L? R? ?w ? 2L? L? RW b L? R? ? N ? ? ? , ? (4) where the last inequality is derived by the strong convexity of the regularizer ?r(w) in the Appendix. Third, the following holds by Theorem 1 of Sridharan et al. (2009) with probability 1 ? ?/2: b n? , ??? (x)?)] ? E(x,y)?P [?(y, ?w? , ??? (x)?)] E(x,y)?P [?(y, ?w b n? )] b n? , ??? (x)?) + ?r(w = E(x,y)?P [?(y, ?w b n? )) ?E(x,y)?P [?(y, ?w? , ??? (x)?) + ?r(w? )] + ?(r(w? ) ? r(w ( ) 2 8L2? R? (32 + log(2/?)) ? + ?Rr . ?n ? Thus, when ? = L? R? 3 8(32+log(2/?)) , Rr n ? we have (2) with probability 1 ? (? + 2?). Stability and Learnability in Sparse Coding In this section, we consider the sparse coding in self-taught learning, where the source region essentially consists of the sample space XS without the label space YS . We assume that the sample spaces in both regions are Rd . Then, the sparse coding method treated here consists of a two-stage procedure, where a dictionary is learnt on the source region, and then a sparse coding with the learnt dictionary is used for a predictive task in the target region. First, we show that sparse coding satisfies the local stability in Section 3.1 and next explain that appropriate dictionary learning algorithms satisfy the parameter transfer learnability in Section 3.4. As a consequence of Theorem 1, we obtain the learning bound of self-taught learning algorithms based on sparse coding. We note that the results in this section are useful independent of transfer learning. We here summarize the notations used in this section. Let ? ? ?p be the p-norm on Rd . We define as supp(a) := {i ? [m]|ai ?= 0} for a ? Rm . We denote the number of elements of a set S by |S|. When a vector a satisfies ?a?0 = |supp(a)| ? k, a is said to be k-sparse. We denote the ball with radius R centered at 0 by BRd (R) := {x ? Rd |?x?2 ? R}. We set as D := {D = [d1 , . . . , dm ] ? 
$B_{\mathbb{R}^d}(1)^m \mid \|d_j\|_2 = 1\ (j = 1, \dots, m)\}$ and call each $D \in \mathcal{D}$ a dictionary of size $m$.

Definition 3 (Induced matrix norm). For an arbitrary matrix $E = [e_1, \dots, e_m] \in \mathbb{R}^{d \times m}$, the induced matrix norm is defined by $\|E\|_{1,2} := \max_{i \in [m]} \|e_i\|_2$. (In general, the $(p,q)$-induced norm for $p, q \ge 1$ is defined by $\|E\|_{p,q} := \sup_{v \in \mathbb{R}^m, \|v\|_p = 1} \|Ev\|_q$; the norm $\|\cdot\|_{1,2}$ in this general definition coincides with that of Definition 3 by Lemma 17 of Vainsencher et al. (2011).)

We adopt $\|\cdot\|_{1,2}$ to measure the difference of dictionaries since it is typically used in the framework of dictionary learning. We note that $\|D - \tilde{D}\|_{1,2} \le 2$ holds for arbitrary dictionaries $D, \tilde{D} \in \mathcal{D}$.

3.1 Local Stability of Sparse Representation

We show the local stability of the sparse representation under a sparse model. The sparse representation with dictionary parameter $D$ of a sample $x \in \mathbb{R}^d$ is
$$\varphi_D(x) := \operatorname*{argmin}_{z \in \mathbb{R}^m} \frac{1}{2}\|x - Dz\|_2^2 + \lambda \|z\|_1,$$
where $\lambda > 0$ is a regularization parameter. This corresponds to the case where $\theta = D$ and $\psi_\theta = \varphi_D$ in the setting of Section 2.1. We prepare some notions related to the stability of the sparse representation. The following margin and incoherence were introduced by Mehta and Gray (2013).

Definition 4 (k-margin). Given a dictionary $D = [d_1, \dots, d_m] \in \mathcal{D}$ and a point $x \in \mathbb{R}^d$, the k-margin of $D$ on $x$ is
$$\mathcal{M}_k(D, x) := \max_{I \subset [m], |I| = m-k}\ \min_{j \in I} \left\{ \lambda - |\langle d_j,\ x - D\varphi_D(x) \rangle| \right\}.$$

Definition 5 ($\mu$-incoherence). A dictionary matrix $D = [d_1, \dots, d_m] \in \mathcal{D}$ is termed $\mu$-incoherent if $|\langle d_i, d_j \rangle| \le \mu/\sqrt{d}$ for all $i \neq j$.

Then, the following theorem is obtained.

Theorem 2 (Sparse Coding Stability). Let $D \in \mathcal{D}$ be $\mu$-incoherent and $\tilde{D} \in \mathcal{D}$. When
$$\|D - \tilde{D}\|_{1,2} \le \varepsilon_{k,D}(x) := \frac{\mathcal{M}_{k,D}(x)^2\, \lambda}{64 \max\{1, \|x\|\}^4}, \qquad (5)$$
the following stability bound holds:
$$\|\varphi_D(x) - \varphi_{\tilde{D}}(x)\|_2 \le \frac{4\|x\|^2 \sqrt{k}}{(1 - \mu k/\sqrt{d})\,\lambda}\, \|D - \tilde{D}\|_{1,2}.$$

From Theorem 2, $\varepsilon_{k,D}(x)$ becomes the permissible radius of perturbation in Definition 1. Here, we note the relation with the sparse coding stability (Theorem 4) of Mehta and Gray (2013), who measured the difference of dictionaries by $\|\cdot\|_{2,2}$ instead of $\|\cdot\|_{1,2}$, and whose permissible radius of perturbation is $\mathcal{M}_{k,D}(x)^2 \lambda$ up to a constant factor. Applying the simple inequality $\|E\|_{2,2} \le \sqrt{m}\,\|E\|_{1,2}$ for $E \in \mathbb{R}^{d \times m}$, one can obtain a variant of their sparse coding stability with the norm $\|\cdot\|_{1,2}$; however, the dictionary size $m$ then affects both the permissible radius of perturbation and the stability bound. In contrast, no factor of $m$ appears in Theorem 2, and thus the result is effective even for large $m$. In addition, whereas $\|x\| \le 1$ is assumed in Mehta and Gray (2013), Theorem 2 does not assume $\|x\| \le 1$ and clarifies the dependence on the norm $\|x\|$.

In existing studies related to sparse coding, the sparse representation $\varphi_D(x)$ is sometimes modified as $\varphi_D(x) \otimes x$ (Mairal et al. (2009)) or $\varphi_D(x) \otimes (x - D\varphi_D(x))$ (Raina et al. (2007)), where $\otimes$ is the tensor product. By the stability of the sparse representation (Theorem 2), it can be shown that such modified representations also have local stability.

3.2 Sparse Modeling and Margin Bound

In this subsection, we assume a sparse structure for samples $x \in \mathbb{R}^d$ and derive a lower bound for the k-margin used in (5). The result obtained in this section plays an essential role in showing the parameter transfer learnability in Section 3.4.

Assumption 1 (Model). There exists a dictionary matrix $D^*$ such that every sample $x$ is independently generated by a representation $a$ and noise $\xi$ as $x = D^* a + \xi$.
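For illustration, both the lasso problem defining $\varphi_D$ and the generative model of Assumption 1 can be put in code. The sketch below makes our own choices (scikit-learn's coordinate-descent lasso as the solver; Gaussian noise as the subgaussian noise of Assumption 4); note that sklearn's Lasso minimizes $\frac{1}{2d}\|x - Dz\|_2^2 + \alpha\|z\|_1$, so $\alpha = \lambda/d$ matches the objective above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, x, lam):
    """phi_D(x) = argmin_z 0.5*||x - D z||_2^2 + lam*||z||_1 (Section 3.1)."""
    d = x.shape[0]
    solver = Lasso(alpha=lam / d, fit_intercept=False, max_iter=10_000)
    solver.fit(D, x)  # design matrix = dictionary, response = the sample
    return solver.coef_

def sample_from_model(D_star, k, C, sigma, rng):
    """Draw x = D* a + xi with a k-sparse and |a_i| >= C on its support
    (Assumptions 1 and 3), and Gaussian noise of scale sigma/sqrt(d)."""
    d, m = D_star.shape
    a = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    a[support] = (C + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
    xi = rng.normal(scale=sigma / np.sqrt(d), size=d)
    return D_star @ a + xi, a
```

With a dictionary estimate $\hat{D}$ close to $D^*$ in $\|\cdot\|_{1,2}$, Theorem 2 then bounds $\|\varphi_{\hat{D}}(x) - \varphi_{D^*}(x)\|_2$, which is the quantity used in Section 3.4.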
Moreover, we impose the following three assumptions on the above model. Assumption 2 (Dictionary). The dictionary matrix D? = [d1 , . . . , dm ] ? D is ?-incoherent. Assumption 3 (Representation). The representation a is a random variable that is k-sparse (i.e., ?a?0 ? k) and the non-zero entries are lower bounded by C > 0 (i.e., ai ?= 0 satisfy |ai | ? C). Assumption ? 4 (Noise). The noise ? is independent across coordinates and sub-Gaussian with parameter ?/ d on each component. We note that the assumptions do not require the representation a or noise ? to be identically distributed while those components are independent. This is essential because samples in the source and target domains cannot be assumed to be identically distributed in transfer learning. 6 Theorem 3 (Margin Bound). Let 0 < t < 1. We set as ( ) ( ) 2? (1 ? t)2 d?2 2?m d?2 ? exp ? ?t,? := + ? exp ? 2 8? 2 8? (1 ? t) d? d? ( ? ) ) ( 2 4?k C d(1 ? ?k/ d) 8?(d ? k) d?2 + ? + ? exp ? . (6) ? exp ? 8? 2 32? 2 d? C d(1 ? ?k/ d) {( ) }2 6 We suppose that d ? 1 + (1?t) ?k and ? = d?? for arbitrary 1/4 ? ? ? 1/2. Under Assumptions 1-4, the following inequality holds with probability 1 ? ?t,? at least: Mk,D? (x) ? t?. (7) We refer to the regularization parameter ?. An appropriate reflection of the sparsity of samples requires the regularization parameter ? to be set suitably. According to Theorem 4 of Zhao and Yu (2006)2) , when samples follow the sparse model as in Assumptions 1-4 and ? ? = d?? for 1/4 ? ? ? 1/2, the representation ?D (x) reconstructs the true sparse representation a of sample x with a small error. In particular, when ? = 1/4 (i.e., ? ? = d?1/4 ) in Theorem 3, the failure probability ?t,? ? = ? ? d e on the margin is guaranteed to become sub-exponentially small with respect to dimension d and is negligible for the high-dimensional case. On the other hand, the typical choice ? = 1/2 (i.e., ?? = d?1/2 ) does not provide a useful result because ?t,? is not small at all. 3.3 Proof of Margin Bound We give a sketch of proof of Theorem 3. We denote the first term, the second term and the sum of the third and fourth terms of (6) by ?1 , ?2 and ?3 , respectively From Assumptions 1 and 3, a sample is represented as x = D? a + ? and ?a?0 ? k. Without loss of generality, we assume that the first m ? k components of a are 0 and the last k components are not 0. Since Mk,D? (x) ? min 1?j?m?k ? ? ?dj , x ? D? ?D (x)? = min 1?j?m?k ? ? ?dj , ?? ? ?D?? dj , a ? ?D (x)?, it is enough to show that the following holds an arbitrary 1 ? j ? m ? k to prove Theorem 3: Pr[?dj , ?? + ?D?? dj , a ? ?D (x)? > (1 ? t)?] ? ?t,? . Then, (8) follows from the following inequalities: ] [ 1?t ? ? ?1 , Pr ?dj , ?? > 2 [ ] 1?t Pr ?D?? dj , a ? ?D (x)? > ? ? ?2 + ?3 . 2 (8) (9) (10) The inequality (9) holds since ?dj ? = 1 by the definition and Assumption 4. Thus, all we have to do is to show (10). We have ?D?? dj , a ? ?D (x)? = ?[?d1 , dj ?, . . . , ?dm , dj ?]? , a ? ?D (x)? = ?(1supp(a??D (x)) ? [?d1 , dj ?, . . . , ?dm , dj ?])? , a ? ?D (x)? ? ?1supp(a??D (x)) ? [?d1 , dj ?, . . . , ?dm , dj ?]?2 ?a ? ?D (x)?2 ,(11) where u ? v is the Hadamard product (i.e. component-wise product) between u and v, and 1A for a set A ? [m] is a vector whose i-th component is 1 if i ? A and 0 otherwise. Applying Theorem 4 of Zhao and Yu (2006) and using the condition for ?, the following holds with probability 1 ? ?3 : supp(a) = supp(?D (x)). 2) (12) Theorem 4 of Zhao and Yu (2006) is stated for Gaussian noise. 
However, it can be easily generalized to sub-Gaussian noise as in Assumption 4. Our setting corresponds to the case in which c1 = 1/2, c2 = 1, c3 = c3 (log ? + log log d)/ log d for some ? > 1 (i.e., ed ? = d? ) and c4 = c in Theorem 4 of Zhao and Yu (2006). Note that our regularization parameter ? corresponds to ?d /d in (Zhao and Yu (2006)). 7 Moreover, under (12), the following holds with probability 1 ? ?2 by modifying Corollary 1 of Negahban et al. (2009) and using the condition for ?: ? 6 k? ?a ? ?D (x)?2 ? . (13) ?k 1? ? d Thus, if both of (12) and (13) hold, the right hand side of (11) is bounded as follows: ?1supp(a??D (x)) ? [?d1 , dj ?, . . . , ?dm , dj ?]?2 ?a ? ?D (x)?2 ? ? ? 6 k? 1?t 6?k ? |supp(a ? ?D (x))| ? ? ? ?, =? ?k 2 ? d1? d d ? ?k where we used Assumption 2 in the first inequality, (12) and Assumption 3 in the equality and the condition for d in the last inequality. From the above discussion, the left hand side of (10) is bounded by the sum of the probability ?3 that (12) does not hold and the probability ?2 that (12) holds but (13) does not hold. 3.4 Transfer Learnability for Dictionary Learning b N of a suitable When the true dictionary D? exists as in Assumption 1, we show that the output D dictionary learning algorithm from N -unlabeled samples satisfies the parameter transfer learnability for the sparse coding ?D . Then, Theorem 1 guarantees the learning bound in self-taught learning since the discussion in this section does not assume the label space in the source region. This bN = D b N and ? ? ? = ? ? ?1,2 in Section 2.1. situation corresponds to the case where ? ?S = D? , ? We show that an appropriate dictionary learning algorithm satisfies the parameter transfer learnability for the sparse coding ?D by focusing on the permissible radius of perturbation in (5) under some assumptions. When Assumptions 1-4 hold and ? = d?? for 1/4 ? ? ? 1/2, the margin bound (7) for x ? X holds with probability 1 ? ?t,? , and thus, we have t2 ?3 = ?(d?3? ). 64 max{1, ?x?}4 b N such that Thus, if a dictionary learning algorithm outputs the estimator D ?k,D? (x) ? b N ? D? ?1,2 ? O(d?3? ) ?D (14) ? b N of D satisfies the parameter transfer learnability for the with probability 1 ? ?N , the estimator D sparse coding ?D with probability ?? = ?N + n?t,? . Then, by the local stability of the sparse representation and the parameter transfer learnability of such a dictionary learning, Theorem 1 guarantees that sparse coding in self-taught learning satisfies the learning bound in (2). We note that Theorem 1 can apply to any dictionary learning ? algorithm as long as (14) is satisfied. For example, Arora et al. (2015) show that, when k = O( d/ log d), m = O(d), Assumptions 1-4 b N which and some additional conditions are assumed, their dictionary learning algorithm outputs D satisfies b N ? D? ?1,2 = O(d?M ) ?D ? with probability 1 ? d?M for arbitrarily large M, M ? as long as N is sufficiently large. 4 Conclusion We derived a learning bound (Theorem 1) for a parameter transfer learning problem based on the local stability and parameter transfer learnability, which are newly introduced in this paper. Then, applying it to a sparse coding-based algorithm under a sparse model (Assumptions 1-4), we obtained the first theoretical guarantee of a learning bound in self-taught learning. 
Although we only consider sparse coding, the framework of parameter transfer learning includes other promising algorithms such as multiple kernel learning and deep neural networks, and thus, our results are expected to be effective to analyze the theoretical performance of these algorithms. Finally, we note that our learning bound can be applied to different settings from self-taught learning because Theorem 1 includes the case in which labeled samples are available in the source region. 8 References [1] S. Arora, R. Ge, T. Ma, and A. Moitra (2015) ?Simple, efficient, and neural algorithms for sparse coding,? arXiv preprint arXiv:1503.00778. [2] J. Baxter (2000) ?A model of inductive bias learning,? J. Artif. Intell. Res.(JAIR), Vol. 12, p. 3. [3] A. Coates, A. Y. Ng, and H. Lee (2011) ?An analysis of single-layer networks in unsupervised feature learning,? in International conference on artificial intelligence and statistics, pp. 215? 223. [4] W. Dai, Q. Yang, G.-R. Xue, and Y. Yu (2008) ?Self-taught clustering,? in Proceedings of the 25th international conference on Machine learning, pp. 200?207, ACM. [5] Q. V. Le (2013) ?Building high-level features using large scale unsupervised learning,? in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8595?8598, IEEE. [6] H. Lee, R. Raina, A. Teichman, and A. Y. Ng (2009) ?Exponential Family Sparse Coding with Application to Self-taught Learning,? in IJCAI, Vol. 9, pp. 1113?1119, Citeseer. [7] J. Mairal, J. Ponce, G. Sapiro, A. Zisserman, and F. R. Bach (2009) ?Supervised dictionary learning,? in Advances in neural information processing systems, pp. 1033?1040. [8] A. Maurer (2009) ?Transfer bounds for linear feature learning,? Machine learning, Vol. 75, pp. 327?350. [9] A. Maurer, M. Pontil, and B. Romera-Paredes (2012) ?Sparse coding for multitask and transfer learning,? arXiv preprint arXiv:1209.0738. [10] N. Mehta and A. G. Gray (2013) ?Sparsity-based generalization bounds for predictive sparse coding,? in Proceedings of the 30th International Conference on Machine Learning (ICML13), pp. 36?44. [11] S. Negahban, B. Yu, M. J. Wainwright, and P. K. Ravikumar (2009) ?A unified framework for high-dimensional analysis of M -estimators with decomposable regularizers,? in Advances in Neural Information Processing Systems, pp. 1348?1356. [12] S. J. Pan and Q. Yang (2010) ?A survey on transfer learning,? Knowledge and Data Engineering, IEEE Transactions on, Vol. 22, pp. 1345?1359. [13] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng (2007) ?Self-taught learning: transfer learning from unlabeled data,? in Proceedings of the 24th international conference on Machine learning, pp. 759?766, ACM. [14] K. Sridharan, S. Shalev-Shwartz, and N. Srebro (2009) ?Fast rates for regularized objectives,? in Advances in Neural Information Processing Systems, pp. 1545?1552. [15] D. Vainsencher, S. Mannor, and A. M. Bruckstein (2011) ?The sample complexity of dictionary learning,? The Journal of Machine Learning Research, Vol. 12, pp. 3259?3281. [16] H. Wang, F. Nie, and H. Huang (2013) ?Robust and discriminative self-taught learning,? in Proceedings of The 30th International Conference on Machine Learning, pp. 298?306. [17] P. Zhao and B. Yu (2006) ?On model selection consistency of Lasso,? The Journal of Machine Learning Research, Vol. 7, pp. 2541?2563. [18] X. Zhu, Z. Huang, Y. Yang, H. T. Shen, C. Xu, and J. Luo (2013) ?Self-taught dimensionality reduction on the high-dimensional small-sized data,? 
Pattern Recognition, Vol. 46, pp. 215–229.
Combinatorial semi-bandit with known covariance

Rémy Degenne
LMPA, Université Paris Diderot
CMLA, ENS Paris-Saclay
degenne@cmla.ens-cachan.fr

Vianney Perchet
CMLA, ENS Paris-Saclay
CRITEO Research, Paris
perchet@normalesup.org

Abstract

The combinatorial stochastic semi-bandit problem is an extension of the classical multi-armed bandit problem in which an algorithm pulls more than one arm at each stage and the rewards of all pulled arms are revealed. One difference with the single-arm variant is that the dependency structure of the arms is crucial. Previous works on this setting either used a worst-case approach or imposed independence of the arms. We introduce a way to quantify the dependency structure of the problem and design an algorithm that adapts to it. The algorithm is based on linear regression and the analysis develops techniques from the linear bandit literature. By comparing its performance to a new lower bound, we prove that it is optimal, up to a poly-logarithmic factor in the number of pulled arms.

1 Introduction and setting

The multi-armed bandit problem (MAB) is a sequential learning task in which an algorithm takes at each stage a decision (or, "pulls an arm"). It then gets a reward from this choice, with the goal of maximizing the cumulative reward [Robbins, 1985]. We consider here its stochastic combinatorial extension, in which the algorithm chooses at each stage a subset of arms [Audibert et al., 2013, Cesa-Bianchi and Lugosi, 2012, Chen et al., 2013, Gai et al., 2012]. These arms could form, for example, the path from an origin to a destination in a network. In the combinatorial setting, contrary to the classical MAB, the inter-dependencies between the arms can play a role (we consider that the distribution of rewards is invariant with time). We investigate here how the covariance structure of the arms affects the difficulty of the learning task, and whether it is possible to design a unique algorithm capable of performing optimally in all cases, from the simple scenario with independent rewards to the more challenging scenario of general correlated rewards.

Formally, at each stage $t \in \mathbb{N}$, $t \ge 1$, an algorithm pulls $m \ge 1$ arms among $d \ge m$. Such a set of $m$ arms is called an "action" and will be denoted by $A_t \in \{0,1\}^d$, a vector with exactly $m$ non-zero entries. The possible actions are restricted to an arbitrary fixed subset $\mathcal{A} \subseteq \{0,1\}^d$. After choosing action $A_t$, the algorithm receives the reward $A_t^\top X_t$, where $X_t \in \mathbb{R}^d$ is the vector encapsulating the reward of the $d$ arms at stage $t$. The successive reward vectors $(X_t)_{t \ge 1}$ are i.i.d. with unknown mean $\mu \in \mathbb{R}^d$. We consider a semi-bandit feedback system: after choosing the action $A_t$, the algorithm observes the reward of each of the arms in that action, but not the other rewards. Other possible feedbacks previously studied include bandit (only $A_t^\top X_t$ is revealed) and full information ($X_t$ is revealed). The goal of the algorithm is to maximize the cumulative reward up to stage $T \ge 1$ or, equivalently, to minimize the expected regret, which is the difference between the reward that would have been gained by choosing the best action in hindsight $A^*$ and what was actually gained:
$$\mathbb{E} R_T = \mathbb{E} \sum_{t=1}^{T} \left(A^{*\top}\mu - A_t^\top \mu\right).$$
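A minimal simulation of this interaction protocol, assuming Gaussian noise as one admissible subgaussian distribution (all names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_bandit_round(A_t, mu, C_half):
    """One stage: draw X_t = mu + C^{1/2} eps_t and reveal only the pulled arms."""
    X_t = mu + C_half @ rng.standard_normal(mu.shape[0])
    reward = A_t @ X_t
    observed = np.where(A_t == 1, X_t, np.nan)  # semi-bandit feedback
    return reward, observed

def expected_regret(chosen_actions, mu, A_star):
    """E R_T = sum_t (A*^T mu - A_t^T mu), the sum of the gaps of the choices."""
    return sum(A_star @ mu - A @ mu for A in chosen_actions)
```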
This setting was already studied Cesa-Bianchi and Lugosi [2012], most recently in Combes et al. [2015], Kveton et al. [2015], where two different algorithms are used to tackle on one hand the case where the arms have independent rewards and on the other hand the general bounded case. The regret guaranties of the two algorithms are different and reflect that the independent case is easier. Another algorithm for the independent arms case based on Thompson Sampling was introduced in Komiyama et al. [2015]. One of the main objectives of this paper is to design a unique algorithm that can adapt to the covariance structure of the problem when prior information is available. The following notations will be used throughout the paper: given a matrix M (resp. vector v), its (i, j)th (resp. ith ) coefficient is denoted by M (ij) (resp. v (i) ). For a matrix M , the diagonal matrix with same diagonal as M is denoted by ?M . We denote by ?t the noise in the reward, i.e. ?t := Xt ? ?. We consider a subgaussian setting, in which we suppose that there is a positive semi-definite matrix C such that for all t ? 1, ?u ? Rd , E[eu > ?t 1 > ] ? e2u Cu . This is equivalent to the usual setting for bandits where we suppose ? that the individual arms are (i) subgaussian. Indeed if we have such a matrix C then each ?t is C (ii) -subgaussian. And under a subgaussian arms assumption, such a matrix always exists. This setting encompasses the case of bounded rewards. We call C a subgaussian covariance matrix of the noise (see appendix A of the supplementary material). A good knowledge of C can simplify the problem greatly, as we will show. In the case of 1-subgaussian independent rewards, in which C can be chosen diagonal, a known lower bound d on the regret appearing in Combes et al. [2015] is ? log T , while Kveton et al. [2015] proves a dm log T lower bound in general. Our goal here is to investigate the spectrum of intermediate cases ? between these two settings, from the uninformed general case to the independent case in which one has much information on the relations between the arm rewards. We characterize the difficulty of the problem as a function of the subgaussian covariance matrix C. We suppose that we know a positive semi-definite matrix ? such that for all vectors v with positive coordinates, v > Cv ? v > ?v, property that we denote by C + ?. ? reflects the prior information available about the possible degree of independence of the arms. We will study algorithms that enjoy regret bounds as functions of ?. The matrix ? ? can be chosen such that all its coefficients are non-negative and verify for all i, j, ?(ij) ? ?(ii) ?(jj) . From now on, we suppose that it is the case. In the following, we will use t such that ?t = C 1/2 t and write for the reward: Xt = ? + C 1/2 t . 2 Lower bound We first prove a lower bound on the regret of any algorithm, demonstrating the link between the subgaussian covariance matrix and the difficulty of the problem. It depends on the maximal off-diagonal (ij) correlation coefficient of the covariance matrix. This coefficient is ? = max{(i,j)?[d],i6=j} ? C . C (ii) C (jj) The bound is valid for consistent algorithms [Lai and Robbins, 1985], for which the regret on any problem verifies ERt = o(ta ) as t ? +? for all a > 0. Theorem 1. Suppose to simplify that d is a multiple of m. Then, for any ? > 0, for any consistent algorithm, there is a problem with gaps ?, ?-subgaussian arms and correlation coefficients smaller than ? ? [0, 1] on which the regret is such that lim inf t?+? 
ERt 2? 2 (d ? m) ? (1 + ?(m ? 1)) log t ? This bound is a consequence of the classical result of Lai and Robbins [1985] for multi-armed bandits, applied to the problem of choosing one among d/m paths, each of which has m different successive edges (Figure 1). The rewards in the same path are correlated but the paths are independent. A complete proof can be found in appendix B.1 of the supplementary material. 2 Figure 1: Left: parallel paths problem. Right: regret of OLS-UCB as a function of m and ? in the parallel paths problem with 5 paths (average over 1000 runs). 3 OLS-UCB Algorithm and analysis Faced with the combinatorial semi-bandit at stage t ? 1, the observations from t ? 1 stages form as many linear equations and the goal of an algorithm is to choose the best action. To find the action with the highest mean, we estimate the mean of all arms. This can be viewed as a regression problem. The design of our algorithm stems from this observation and is inspired by linear regression in the fixed design setting, similarly to what was done in the stochastic linear bandit literature [Rusmevichientong and Tsitsiklis, 2010, Filippi et al., 2010]. There are many estimators for linear regression and we focus on the one that is simple enough and adaptive: Ordinary Least Squares (OLS). 3.1 Fixed design linear regression and OLS-UCB algorithm For an action A ? A, let IA be the diagonal matrix with a 1 at line i if A(i) = 1 and 0 otherwise. For a matrix M , we also denote by MA the matrix IA M IA . At stage t, if all actions A1 , . . . , At were independent of the rewards, we would have observed a set of linear equations IA1 X1 = IA1 ? + IA1 ?1 .. . IAt?1 Xt?1 = IAt?1 ? + IAt?1 ?t?1 and we could use the OLS estimator to estimate ?, which is unbiased and has a known subgaussian constant controlling its variance. This is however not true in our online setting since the successive actions are not independent. At stage t, we define (i) nt = t?1 X (ij) I{i ? As }, nt s=1 = t?1 X I{i ? As }I{j ? As } and Dt = s=1 t?1 X IAs , s=1 (i) where nt is the number of times arm i has been pulled before stage t and Dt is a diagonal matrix of these numbers. The OLS estimator is, for an arm i ? [d], (i) ? ?t = 1 X (i) nt s<t:i?As Xs(i) = ?(i) + (Dt?1 t?1 X IAs C 1/2 s )(i) . s=1 Then for all A ? A, A> (? ?t ? ?) in the fixed design setting has a subgaussian matrix equal to Pt?1 Dt?1 ( s=1 CAs )Dt?1 . We get confidence intervals for the estimates and can use an upper confidence bound strategy [Lai and Robbins, 1985, Auer et al., 2002]. In the online learning setting the actions are not independent but we will show that using this estimator still leads to estimates that are well concentrated around ?, with confidence intervals given by the same subgaussian matrix. The algorithm OLS-UCB (Algorithm 1) results from an application of an upper confidence bound strategy with this estimator. We now turn to an analysis of the regret of OLS-UCB. At any stage t ? 1 of the algorithm, let ?(ij) ?t = max{(i,j)?At ,i6=j} ? (ii) be the maximal off-diagonal correlation coefficient of ?At and ? ?(jj) let ? = max{t?[T ]} ?t be the maximum up to stage T . 3 Algorithm 1 OLS-UCB. Require: Positive semi-definite matrix ?, real parameter ? > 0. 1: Choose actions such that each arm is pulled at least one time. 2: loop: at stage t, 3: At = arg maxA A> ? ?t +qEt (A) p Pt?1 with Et (A) = 2f (t) A> Dt?1 (??? Dt + s=1 ?As )Dt?1 A. 4: Choose action At , observe IAt Xt . 5: Update ? ?t , Dt . 6: end loop Theorem 2. 
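The counts $n_t^{(i)}$, $n_t^{(ij)}$ and the estimate $\hat\mu_t$ can be maintained incrementally, as in the sketch below (our naming, not the paper's); Algorithm 1 that follows then combines $\hat\mu_t$ with the exploration term $E_t(A)$.

```python
import numpy as np

class OLSEstimator:
    """Incremental statistics for OLS-UCB: pair counts n_t^{(ij)} (whose
    diagonal gives n_t^{(i)}, i.e. the matrix D_t) and per-arm reward sums."""
    def __init__(self, d):
        self.pair_counts = np.zeros((d, d))
        self.reward_sums = np.zeros(d)

    def update(self, A_t, X_t):
        idx = A_t.astype(bool)
        self.pair_counts[np.ix_(idx, idx)] += 1.0
        self.reward_sums[idx] += X_t[idx]

    @property
    def counts(self):
        return np.diag(self.pair_counts)  # n_t^{(i)}

    def mu_hat(self):
        return self.reward_sums / np.maximum(self.counts, 1.0)
```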
The OLS-UCB algorithm with parameter ? > 0 and f (t) = log t + (m + 2) log log t + m e 2 log(1 + ? ) enjoys for all times T ? 1 the regret bound !  2 X ?(ii) log m E[RT ] ?16f (T ) 5(? + 1 ? ?) + 45?m ?i,min 1.6 i?[d] 8dm2 maxi {C (ii) }?max + + 4?max , ?2min where dxe stands for the smallest positive integer bigger than or equal to x. In particular, d0e = 1. This bound shows the transition between a general case with a 2 dm log T ? regime and an independent d log m log T ? T case with a upper bound (we recall that the lower bound is of the order of d log ? ). The weight of each case is given by the maximum correlation parameter ?. The parameter ? seems to be an artefact of the analysis and can in practice be taken very small or even equal to 0. Figure 1 illustrates the regret of OLS-UCB on the parallel paths problem used to derive the lower bound. It shows a linear dependency in ? and supports the hypothesis that the true upper bound matches the lower bound with a dependency in m and ? of the form (1 + ?(m ? 1)). Corollary 1. The OLS-UCB algorithm with matrix ? and parameter ? > 0 has a regret bounded as v ! u 2  u log m t (ii) + 45?m ) . E[RT ] ? O( dT log T max{? } 5(? + 1 ? ?) 1.6 i?[d] Proof. We write that the regret up to stage T is bounded by ?T for actions with gap smaller than some ? and bounded using theorem 2 for other actions (with ?min ? ?). Maximizing over ? then gives the result. 3.2 Comparison with other algorithms Previous works supposed that the rewards of the individual arms are in [0, 1], which gives them a property. Hence we suppose (?i ? [d], C (ii) = 1/2) for our comparison. 1/2-subgaussian In the independent case, our algorithm is the same as ESCB-2 from Combes et al. [2015], up to the ? d m log T parameter ?. That paper shows that ESCB-2 enjoys an O( ) upper bound but our analysis ? tighten it to O( d log 2 m log T ? ). log T In the general (worst) case, Kveton et al. [2015] prove an O( dm ? ) upper bound (which is tight) using CombUCB1, a UCB based algorithm introduced in Chen et al. [2013] which at stage t uses q ? P (i) the exploration term 1.5 log t i?A 1/ nt . Our exploration term always verifies Et (A) ? q p P (i) f (t) i?A 1/ nt with f (t) ? log t (see section 3.6). Their exploration term is a worst-case confidence interval for the means. Their broader confidence intervals however have the desirable property that one can find the action that realizes the maximum index by solving a linear optimization problem, making their algorithm computationally efficient, quality that both ESCB and OLS-UCB are lacking. 4 None of the two former algorithms benefits from guaranties in the other regime. The regret of ESCB in the general possibly correlated case is unknown and the regret bound for CombUCB1 is not improved in the independent case. In contrast, OLS-UCB is adaptive in the sense that its performance gets better when more information is available on the independence of the arms. 3.3 Regret Decomposition (i) ?t Let Hi,t = {|? ?t ? ?(i) | ? 2m } and Ht = ?di=1 Hi,t . Ht is the event that at least one coordinate of ? ?t is far from the true mean. Let Gt = {A?> ? ? A?> ? ?t + Et (A? )} be the event that the estimate of the optimal action is below its true mean by a big margin. We decompose the regret according to these events: RT ? T X ?t I{Gt , Ht } + t=1 T X ?t I{Gt } + t=1 T X ?t I{Ht } t=1 Events Gt and Ht are rare and lead to a finite regret (see below). We first simplify the regret due to Gt ? 
Ht and show that it is bounded by the "variance" term of the algorithm. Lemma 1. With the algorithm choosing at stage t the action At = arg maxA (A> ? ?t + Et (A)), we have ?t I{Gt , Ht } ? 2Et (At )I{?t ? Et (At )}. Proof in appendix B.2 of the supplementary material. Then the regret is cut into three terms, RT ? 2 T X Et (At )I{?t ? 2Et (At )} + T X t=1 t=1 ?t I{Gt } + T X ?t I{Ht } . t=1 The three terms will be bounded as follows: ? The Ht term leads to a finite regret from a simple application of Hoeffding?s inequality. ? The Gt term leads to a finite regret for a good choice of f (t). This is where we need to show that the exploration term of the algorithm gives a high probability upper confidence bound of the reward. ? The Et (At ) term, or variance term, is the main source of the regret and is bounded using ideas similar to the ones used in existing works on semi-bandits. 3.4 Expected regret from Ht PT Lemma 2. The expected regret due to the event Ht is E[ t=1 ?t I{Ht }] ? 8dm2 maxi {C (ii) }?max ?2min . The proof uses Hoeffding?s inequality on the arm mean estimates and can be found in appendix B.2 of the supplementary material. 3.5 Expected regret from Gt We want to bound the probability that the estimated reward for the optimal action is far from its mean. We show that it is sufficient to control a self-normalized sum and do it using arguments from Pe?a et al. [2008], or Abbasi-Yadkori et al. [2011] who applied them to linear bandits. The analysis also involves a peeling argument, as was done in one dimension by Garivier [2013] to bound a similar quantity. e Lemma 3. Let ?t > 0. With f?(?t ) = log(1/?t ) + m log log t + m 2 log(1 + ? ) and an algorithm q q P t?1 given by the exploration term Et (A) = 2f?(?t ) A> Dt?1 (??? Dt + s=1 ?As )Dt?1 A , then the event Gt = {A?> ? ? A?> ? ?t + Et (A? )} verifies P{Gt } ? ?t . 1 ? With ?1 = 1 and ?t = t log 2 t for t ? 2, such that f (?t ) = f (t), the regret due to Gt is finite in expectation, bounded by 4?max . 5 Proof. We use a peeling argument: let ? > 0 and for a = (a1 , . . . , am ) ? Nm , let Da ? [T ] be a (i) subset of indices defined by (t ? Da ? ?i ? A? , (1 + ?)ai ? nt < (1 + ?)ai +1 ). For any Bt ? R,  X  ?> P A?> (? ? ? ?t ) ? Bt ? P A (? ? ? ?t ) ? Bt |t ? Da . a The number of possible sets Da for t is bounded by (log t/ log(1 + ?))m , since each number of pulls  (i) nt for i ? A? is bounded by t. We now search a bound of the form P A?> (? ? ? ?t ) ? Bt |t ? Da . Suppose t ? Da and let D be a positive definite diagonal matrix (that depends on a). Pt?1 Pt?1 2 Let St = s=1 IAs ?A? C 1/2 s , Vt = s=1 CAs ?A? and IVt +D () = 12 kSt k(Vt +D)?1 . Lemma 4. Let ?t > 0 and let f?(?t ) be a function of ?t . With a choice of D such that IA? D  ?IA? ?C Dt for all t in Da ,   q n o P A?> (??? ?t )? 2f?(?t )A?> Dt?1 (??C Dt +Vt )Dt?1 A? t?Da ? P IVt +D ()?f?(?t )|t?Da . Proof in appendix B.2 of the supplementary material. The self-normalized sum IVt () is an interesting quantity for the following reason: exp( 12 IVt ()) = Qt?1 maxu?Rd s=1 exp(u> IAs ?A? C 1/2 s ? u> CAs ?A? u). For a given u, the exponential is smaller that 1 in expectation, from the subgaussian hypothesis. The maximum of the expectation is then smaller than 1. To control IVt (), we are however interested in the expectation of this maximum and cannot interchange max and E. 
The method of mixtures circumvents this difficulty: it provides an approximation of the maximum by integrating the exponential against a multivariate normal centered at the point Vt?1 St , where the maximum is attained. The integrals over u and  can then be swapped by Fubini?s theorem to get an approximation of the expectation of the maximum using an integral of the expectations. Doing so leads to the following lemma, extracted from the proof of Theorem 1 of Abbasi-Yadkori et al. [2011]. Lemma 5.q Let D be a positive definite matrix that does not depend on t and det D Mt (D) = det(V exp(IVt +D ()). Then E[Mt (D)] ? 1. n t +D) o We rewrite P IVt +D () ? f?(?t ) to introduce Mt (D), ( n o P IVt +D () ? f?(?t )|t?Da = P Mt (D) ? p 1 det(Id + D?1/2 Vt D?1/2 ) exp(f?(?t )) t?Da ) . The peeling lets us bound Vt . Let Da be the diagonal matrix with entry (i, i) equal to (1 + ?)ai for i ? A? and 0 elsewhere. Lemma 6. With D = ??C Da + I[d]\A? , det(Id + D?1/2 Vt D?1/2 ) ? (1 + 1+? m ? ) . The union bound on the sets Da and Markov?s inequality give   q q P A?> (? ? ? ?t ) ? 2f?(?t ) ?A?> ?C Dt?1 A? + A?> Dt?1 Vt Dt?1 A?  X  1 + ? ?m/2 ) exp(f?(?t ))|t ? Da ? P Mt (D) ? (1 + ? Da  m log t 1 + ? m/2 ? (1 + ) exp(?f?(?t )) log(1 + ?) ? For ? = e ? 1 and f?(?t ) as in lemma 3, this is bounded by ?t . The result with ? instead of C is a consequence of C + ?. With ?1 = 1 and ?t = 1/(t log2 t) for t ? 2, the regret due to Gt is E[ T X ?t I{Gt }] ? ?max (1 + t=1 T X t=2 6 1 ) ? 4?max . t log2 t 3.6 Bounding the variance term The goal of this section is to bound Et (At ) under ? the event {?t ? Et (At )}. Let ?t ? [0, 1] such that for q all i, j ? At with i 6= j, ?(ij) ? ?t ?(ii) ?(jj) . From the Cauchy-Schwartz inequality, (ij) nt (i) (j) ? nt nt . Using these two inequalities, t?1 X ?1 A> t Dt ( ?As )Dt?1 At X n(ij) ?(ij) t = s=1 i,j?At (i) (j) nt nt ? (1 ? ?t ) X ?(ii) i?At (i) nt s X ?(ii) i?At (i) nt + ?t ( )2 . We recognize here the forms of the indexes used in Combes et al. [2015] for independent arms (left term) and Kveton et al. [2015] for general arms (right term). Using ?t ? Et (At ) we get s X ?(ii) X ?(ii) ?2t ? (? + 1 ? ?t ) + ?t ( )2 . (1) (i) (i) 8f (t) nt nt i?At i?At The strategy from here is to find events that must happen when (1) holds and to show that these events cannot happen very often. For positive integers j and t and for e ? {1, 2}, we define the set of arms (ii) (i) j in At that were pulled less than a given threshold: St,e = {i ? At , nt ? ?j,e 8f (t)? ge (m,?t ) }, ?2t j At . (St,e )j?0 0 with ge (m, ?t ) to be stated later and (?i,e )i?1 a decreasing sequence. Let also St,e = is decreasing for the inclusion of sets and we impose limj?+? ?j,e = 0, such that there is an index j? j? with St,e = ?. We introduce another positive sequence (?j,e )j?0 and consider the events that j at least m?j,e arms in At are in the set St,e and that the same is false for k < j, i.e. for t ? 1, j j k At,e = {|St,e | ? m?j,e ; ?k < j, |St,e | < m?k,e }. To avoid having some of these events being 0 impossible we choose (?j,e )j?0 decreasing. We also impose ?0,e = 1, such that |St,e | = m?0,e . j Let then At,e = ?+? j=1 At,e and At = At,1 ? At,2 . We will show that At must happen for (1) to be true. First, remark that under a condition on (?j,e )j?0 , At is a finite union of events, 0 Lemma 7. For e ? {1, 2}, if there exists j0,e such that ?j0,e ,e ? 1/m, then At,e = ?jj=1 Ajt,e . We now show that At is impossible by proving a contradiction in (1). Lemma 8. 
The strategy from here is to find events that must happen when (1) holds and to show that these events cannot happen very often. For positive integers $j$ and $t$ and for $e \in \{1,2\}$, we define the set of arms in $A_t$ that were pulled less than a given threshold: $S_{t,e}^j = \{i \in A_t : n_t^{(i)} \le \alpha_{j,e}\,8f(t)\Gamma^{(ii)} g_e(m,\gamma_t)/\Delta_t^2\}$, with $g_e(m,\gamma_t)$ to be stated later and $(\alpha_{j,e})_{j\ge 1}$ a decreasing sequence. Let also $S_{t,e}^0 = A_t$. $(S_{t,e}^j)_{j\ge 0}$ is decreasing for the inclusion of sets and we impose $\lim_{j\to+\infty}\alpha_{j,e} = 0$, such that there is an index $j_\emptyset$ with $S_{t,e}^{j_\emptyset} = \emptyset$. We introduce another positive sequence $(\beta_{j,e})_{j\ge 0}$ and consider the events that at least $m\beta_{j,e}$ arms in $A_t$ are in the set $S_{t,e}^j$ and that the same is false for $k < j$, i.e. for $t \ge 1$,
$$\mathcal{A}_{t,e}^j = \big\{|S_{t,e}^j| \ge m\beta_{j,e};\ \forall k < j,\ |S_{t,e}^k| < m\beta_{k,e}\big\}.$$
To avoid having some of these events being impossible we choose $(\beta_{j,e})_{j\ge 0}$ decreasing. We also impose $\beta_{0,e} = 1$, such that $|S_{t,e}^0| = m\beta_{0,e}$. Let then $\mathcal{A}_{t,e} = \cup_{j=1}^{+\infty}\mathcal{A}_{t,e}^j$ and $\mathcal{A}_t = \mathcal{A}_{t,1}\cap\mathcal{A}_{t,2}$. We will show that $\mathcal{A}_t$ must happen for (1) to be true. First, remark that under a condition on $(\beta_{j,e})_{j\ge 0}$, $\mathcal{A}_{t,e}$ is a finite union of events:

Lemma 7. For $e \in \{1,2\}$, if there exists $j_{0,e}$ such that $\beta_{j_{0,e},e} \le 1/m$, then $\mathcal{A}_{t,e} = \cup_{j=1}^{j_{0,e}}\mathcal{A}_{t,e}^j$.

We now show that $\mathcal{A}_t$ is impossible by proving a contradiction in (1).

Lemma 8. Under the event $\mathcal{A}_{t,1}$, if there exists $j_0$ such that $\beta_{j_0,1} \le 1/m$, then
$$\sum_{i\in A_t}\frac{\Gamma^{(ii)}}{n_t^{(i)}} < \frac{m\Delta_t^2}{8f(t)g_1(m,\gamma_t)}\Big(\sum_{j=1}^{j_0}\frac{\beta_{j-1,1}-\beta_{j,1}}{\alpha_{j,1}} + \frac{\beta_{j_0,1}}{\alpha_{j_0,1}}\Big).$$
Under the event $\mathcal{A}_{t,2}$, if $\lim_{j\to+\infty}\beta_{j,2}/\sqrt{\alpha_{j,2}} = 0$ and $\sum_{j=1}^{+\infty}\frac{\beta_{j-1,2}-\beta_{j,2}}{\sqrt{\alpha_{j,2}}}$ exists, then
$$\sum_{i\in A_t}\sqrt{\frac{\Gamma^{(ii)}}{n_t^{(i)}}} \le \frac{m\Delta_t}{\sqrt{8f(t)g_2(m,\gamma_t)}}\sum_{j=1}^{+\infty}\frac{\beta_{j-1,2}-\beta_{j,2}}{\sqrt{\alpha_{j,2}}}.$$
A proof can be found in appendix B.2 of the supplementary material. To ensure that the conditions of these lemmas are fulfilled, we impose that $(\beta_{i,1})_{i\ge 0}$ and $(\beta_{i,2})_{i\ge 0}$ have limit 0 and that $\lim_{j\to+\infty}\beta_{j,2}/\sqrt{\alpha_{j,2}} = 0$. Let $j_{0,1}$ be the smallest integer such that $\beta_{j_{0,1},1} \le 1/m$. Let $l_1 = \frac{\beta_{j_{0,1},1}}{\alpha_{j_{0,1},1}} + \sum_{j=1}^{j_{0,1}}\frac{\beta_{j-1,1}-\beta_{j,1}}{\alpha_{j,1}}$ and $l_2 = \sum_{j=1}^{+\infty}\frac{\beta_{j-1,2}-\beta_{j,2}}{\sqrt{\alpha_{j,2}}}$. Using the two last lemmas with (1), we get that if $\mathcal{A}_t$ is true,
$$\frac{\Delta_t^2}{8f(t)} < \frac{\Delta_t^2}{8f(t)}\Big((\lambda+1-\gamma_t)\frac{ml_1}{g_1(m,\gamma_t)} + \gamma_t\frac{m^2 l_2^2}{g_2(m,\gamma_t)}\Big).$$
Taking $g_1(m,\gamma_t) = 2(\lambda+1-\gamma_t)ml_1$ and $g_2(m,\gamma_t) = 2\gamma_t m^2 l_2^2$, we get a contradiction. Hence with these choices $\mathcal{A}_t$ must happen. The regret bound will be obtained by a union bound on the events that form $\mathcal{A}_t$. First suppose that all gaps are equal to the same $\Delta$.

Lemma 9. Let $\bar{\gamma} = \max_{t\ge 1}\gamma_t$. For $j \in \mathbb{N}^*$, the event $\mathcal{A}_{t,e}^j$ happens at most $\frac{d}{m\beta_{j,e}}\,\alpha_{j,e}\,\frac{8f(T)\max_i\{\Gamma^{(ii)}\}g_e(m,\bar{\gamma})}{\Delta^2}$ times.

Proof. Each time that $\mathcal{A}_{t,e}^j$ happens, the counter of plays $n_t^{(i)}$ of at least $m\beta_{j,e}$ arms is incremented. After $\alpha_{j,e}\frac{8f(T)\max_i\{\Gamma^{(ii)}\}g_e(m,\bar{\gamma})}{\Delta^2}$ increments, an arm cannot verify the condition on $n_t^{(i)}$ any more. There are $d$ arms, so the event can happen at most $d\lceil(m\beta_{j,e})^{-1}\rceil\,\alpha_{j,e}\frac{8f(T)\max_i\{\Gamma^{(ii)}\}g_e(m,\bar{\gamma})}{\Delta^2}$ times.

If all gaps are equal to $\Delta$, a union bound for $\mathcal{A}_t$ gives
$$\mathbb{E}\Big[\sum_{t=1}^{T}\Delta\,\mathbb{I}\{\bar{H}_t\cap\bar{G}_t\}\Big] \le 16\,d\max_{i\in[d]}\{\Gamma^{(ii)}\}\frac{f(T)}{\Delta}\Big((\lambda+1-\bar{\gamma})l_1\sum_{j=1}^{j_{0,1}}\frac{\alpha_{j,1}}{\beta_{j,1}} + \bar{\gamma}ml_2^2\sum_{j=1}^{+\infty}\frac{\alpha_{j,2}}{\beta_{j,2}}\Big).$$
The general case requires more involved manipulations but the result is similar and no new important idea is used. The following lemma is proved in appendix B.2 of the supplementary material:

Lemma 10. Let $\gamma^{(i)} = \max_{\{t:\,i\in A_t\}}\gamma_t$. The regret from the event $\bar{H}_t\cap\bar{G}_t$ is such that
$$\mathbb{E}\Big[\sum_{t=1}^{T}\Delta_t\,\mathbb{I}\{\bar{H}_t\cap\bar{G}_t\}\Big] \le 16f(T)\sum_{i\in[d]}\frac{\Gamma^{(ii)}}{\Delta_{i,\min}}\Big((\lambda+1-\gamma^{(i)})l_1\sum_{j=1}^{j_0}\frac{\alpha_{j,1}}{\beta_{j,1}} + \gamma^{(i)}ml_2^2\sum_{j=1}^{+\infty}\frac{\alpha_{j,2}}{\beta_{j,2}}\Big).$$
Finally we can find sequences $(\alpha_{j,1})_{j\ge 1}$, $(\alpha_{j,2})_{j\ge 1}$, $(\beta_{j,1})_{j\ge 0}$ and $(\beta_{j,2})_{j\ge 0}$ such that
$$\mathbb{E}\Big[\sum_{t=1}^{T}\Delta_t\,\mathbb{I}\{\bar{H}_t\cap\bar{G}_t\}\Big] \le 16f(T)\sum_{i\in[d]}\frac{\Gamma^{(ii)}}{\Delta_{i,\min}}\bigg(5(\lambda+1-\gamma^{(i)}) + 45\gamma^{(i)}m\Big(\frac{\log m}{1.6}\Big)^2\bigg).$$
See appendix C of the supplementary material. In Combes et al. [2015], $\alpha_{i,1}$ and $\beta_{i,1}$ were such that the $\log^2 m$ term was replaced by $\sqrt{m}$. Our choice is also applicable to their ESCB algorithm. Our use of geometric sequences is only optimal among sequences such that $\alpha_{i,1} = \beta_{i,1}$ for all $i \ge 1$. It is unknown to us if one can do better. With this control of the variance term, we finally proved Theorem 2.

4 Conclusion

We defined a continuum of settings from the general to the independent arms cases which is suitable for the analysis of semi-bandit algorithms. We exhibited a lower bound scaling with a parameter that quantifies the particular setting in this continuum and proposed an algorithm inspired by linear regression with an upper bound that matches the lower bound up to a $\log^2 m$ term. Finally we showed how to use tools from the linear bandits literature to analyse algorithms for the combinatorial bandit case that are based on linear regression. It would be interesting to estimate the subgaussian covariance matrix online to attain good regret bounds without prior knowledge.
Also, our algorithm is not computationally efficient since it requires the computation of an argmax over the actions at each stage. It may be possible to compute this argmax less often and still keep the regret guarantee, as was done in Abbasi-Yadkori et al. [2011] and Combes et al. [2015]. On a broader scope, the inspiration from linear regression could lead to algorithms using different estimators, adapted to the structure of the problem. For example, the weighted least-squares estimator is also unbiased and has smaller variance than OLS. Or one could take advantage of a sparse covariance matrix by using sparse estimators, as was done in the linear bandit case in Carpentier and Munos [2012].

Acknowledgements

The authors would like to acknowledge funding from the ANR under grant number ANR-13-JS010004 as well as the Fondation Mathématiques Jacques Hadamard and EDF through the Program Gaspard Monge for Optimization and the Irsdi project Tecolere.

References

Yasin Abbasi-Yadkori, David Pal, and Csaba Szepesvari. Improved Algorithms for Linear Stochastic Bandits. Neural Information Processing Systems, pages 1–19, 2011.
Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2013.
Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
Alexandra Carpentier and Rémi Munos. Bandit Theory meets Compressed Sensing for high dimensional Stochastic Linear Bandit. Advances in Neural Information Processing Systems (NIPS), pages 251–259, 2012.
Nicolo Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and applications. Proceedings of the 30th International Conference on Machine Learning (ICML), pages 151–159, 2013.
Richard Combes, M. Sadegh Talebi, Alexandre Proutiere, and Marc Lelarge. Combinatorial Bandits Revisited. Neural Information Processing Systems, pages 1–9, 2015.
Sarah Filippi, Olivier Cappé, Aurélien Garivier, and Csaba Szepesvári. Parametric Bandits: The Generalized Linear Case. Neural Information Processing Systems, pages 1–9, 2010.
Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20(5):1466–1478, 2012.
Aurélien Garivier. Informational confidence bounds for self-normalized averages and applications. 2013 IEEE Information Theory Workshop, ITW 2013, 2013.
Junpei Komiyama, Junya Honda, and Hiroshi Nakagawa. Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays. Proceedings of the 32nd International Conference on Machine Learning, 2015.
Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
Victor H. Peña, Tze Leung Lai, and Qi-Man Shao. Self-normalized processes: Limit theory and Statistical Applications. Springer Science & Business Media, 2008.
Herbert Robbins. Some aspects of the sequential design of experiments.
In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985.
Paat Rusmevichientong and John N. Tsitsiklis. Linearly Parameterized Bandits. Mathematics of Operations Research, (1985):1–40, 2010.
Automatic Neuron Detection in Calcium Imaging Data Using Convolutional Networks

Noah J. Apthorpe1* Alexander J. Riordan2* Rob E. Aguilar1 Jan Homann2 Yi Gu2 David W. Tank2 H. Sebastian Seung1,2
1 Computer Science Department 2 Princeton Neuroscience Institute, Princeton University
{apthorpe, ariordan, dwtank, sseung}@princeton.edu
* These authors contributed equally to this work

Abstract

Calcium imaging is an important technique for monitoring the activity of thousands of neurons simultaneously. As calcium imaging datasets grow in size, automated detection of individual neurons is becoming important. Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. Accuracy is superior to the popular PCA/ICA method based on precision and recall relative to ground truth annotation by a human expert. These results suggest that convolutional networks are an efficient and flexible tool for the analysis of large-scale calcium imaging data.

1 Introduction

Two-photon calcium imaging is a powerful technique for monitoring the activity of thousands of individual neurons simultaneously in awake, behaving animals [1, 2]. Action potentials cause transient changes in the intracellular concentration of calcium ions. Such changes are detected by observing the fluorescence of calcium indicator molecules, typically using two-photon microscopy in the mammalian brain [3]. Repeatedly scanning a single image plane yields a time series of 2D images. This is effectively a video in which neurons blink whenever they are active [4, 5].

In the traditional workflow for extracting neural activities from the video, a human expert manually annotates regions of interest (ROIs) corresponding to individual neurons [5, 1, 2]. Within each ROI, pixel values are summed for each frame of the video, which yields the calcium signal of the corresponding neuron versus time. A subsequent step may deconvolve the temporal filtering of the intracellular calcium dynamics for an estimate of neural activity with better time resolution. The traditional workflow has the deficiency that manual annotation becomes laborious and time-consuming for very large datasets. Furthermore, manual annotation does not de-mix the signals from spatially overlapping neurons.

Unsupervised basis learning methods (PCA/ICA [6], CNMF [7], dictionary learning [8], and sparse space-time deconvolution [9]) express the video as a time-varying superposition of basis images. The basis images play a similar role as ROIs in the traditional workflow, and their time-varying coefficients are intended to correspond to neural activities. While basis learning methods are useful for finding active neurons, they do not detect low-activity cells, making these methods inappropriate for studies involving neurons that may be temporarily inactive depending on context or learning [10]. Such subtle difficulties may explain the lasting popularity of manual annotation.

At first glance, the videos produced by calcium imaging seem simple (neurons blinking on and off). Yet automating image analysis has not been trivial. One difficulty is that images are corrupted by noise and artifacts due to brain motion. Another difficulty is variability in the appearance of cell bodies, which vary in shape, size, and resting-level fluorescence.
Additionally, different neuroscience studies may require differing ROI selection criteria. Some may require only cell bodies [5, 11], while others involve dendrites [6]. Some may require only active cells, while others necessitate both active and inactive cells [10]. Some neuroscientists may wish to reject slightly out-of-focus neurons. For all of these reasons, a neuroscientist may spend hours or days tuning the parameters of nominally automated methods, or may never succeed in finding a set of parameters that produces satisfactory results.

As a way of dealing with these difficulties, we focus here on a supervised learning approach to automated ROI detection. An automated ROI detector could be used to replace manual ROI detection by a human expert in the traditional workflow, or could be used to make the basis learning algorithms more reliable by providing good initial conditions for basis images. However, the usability of an automated algorithm strongly depends on it attaining high accuracy. A supervised learning method can adapt to different ROI selection criteria and generalize them to new datasets. Supervised learning has become the dominant approach for attaining high accuracy in many computer vision problems [12]. We assemble ground truth datasets consisting of calcium imaging videos along with ROIs drawn by human experts and employ a precision-recall formalism for quantifying accuracy. We train a sliding window convolutional network (ConvNet) to take a calcium video as input and output a 2D image that matches the human-drawn ROIs as well as possible. The ConvNet achieves near-human accuracy and exceeds that of PCA/ICA [6].

The prior work most similar to ours used supervised learning based on boosting with hand-designed features [13]. Other previous attempts to automate ROI detection did not employ supervised machine learning. For example, hand-designed filtering operations [14] and normalized cuts [15] were applied to image pixel correlations.

The major cost of supervised learning is the human effort required to create the training set. As a rough guide, our results suggest that on the order of 10 hours of effort or 1000 annotated cells are sufficient to yield a ConvNet with usable accuracy. This initial time investment, however, is more than repaid by the speed of a ConvNet at classifying new data. Furthermore, the marginal effort required to create a training set is essentially zero for those neuroscientists who already have annotated data. Neuroscientists can also agree to use the same trained ConvNets for uniformity of ROI selection across labs.

From the deep learning perspective, an interesting aspect of our work is that a ConvNet that processes a spatiotemporal (2+1)D image is trained using only spatial (2D) annotations. Full spatiotemporal annotations (spatial locations and times of activation) would have been more laborious to collect. The use of purely spatial annotations is possible because the neurons in our videos are stationary (apart from motion artifacts). This makes our task simpler than other applications of ConvNets to video processing [16].

2 Neuron detection benchmark

We use a precision-recall framework to quantify accuracy of neuron detection. Predicted ROIs are classified as false positives (FP), false negatives (FN), and true positives (TP) relative to ground truth ROIs. Precision and recall are defined by
$$\text{precision} = \frac{TP}{TP + FP} \qquad \text{recall} = \frac{TP}{TP + FN} \qquad (1)$$
Both measures would be equal to 1 if predictions were perfectly accurate, i.e. higher numbers are better.
If a single measure of accuracy is required, we use the harmonic mean of precision and recall, $1/F_1 = (1/\text{precision} + 1/\text{recall})/2$. The F1 score favors neither precision nor recall, but in practice a neuroscientist may care more about one measure than the other. For example, some neuroscientists may be satisfied if the algorithm fails to detect many neurons (low recall) so long as it produces few false positives (high precision). Other neuroscientists may want the algorithm to find as many neurons as possible (high recall) even if there are many false positives (low precision).

For computing precision and recall, it is helpful to define the overlap between two ROIs R1 and R2 as the Jaccard similarity coefficient |R1 ∩ R2|/|R1 ∪ R2|, where |R| denotes the number of pixels in R. For each predicted ROI, we find the ground truth ROI with maximal overlap. The ground truth ROIs with overlap greater than 0.5 are assigned to the predicted ROIs with which they overlap the most. These assignments are true positives. Leftover ROIs are the false positives and false negatives.

We prefer the precision-recall framework over the receiver operating characteristic (ROC), which was previously used as a quantitative measure of neuron detection accuracy [13]. This is because precision and recall do not depend on true negatives, which are less well-defined. (The ROC depends on true negatives through the false positive rate.)
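To make the matching rule concrete, here is a minimal sketch (not the authors' released code) of the benchmark just described: ROIs are boolean masks, overlaps are Jaccard coefficients, predicted ROIs with overlap above 0.5 are greedily matched to ground truth, and precision, recall, and F1 follow from the resulting counts.

```python
import numpy as np

def jaccard(r1, r2):
    """Overlap of two boolean masks as |R1 n R2| / |R1 u R2|."""
    inter = np.logical_and(r1, r2).sum()
    union = np.logical_or(r1, r2).sum()
    return inter / union if union > 0 else 0.0

def score_rois(predicted, ground_truth, thresh=0.5):
    """Precision, recall, and F1 for lists of boolean ROI masks."""
    matched_gt = set()
    tp = 0
    for pred in predicted:
        overlaps = [jaccard(pred, gt) for gt in ground_truth]
        best = int(np.argmax(overlaps)) if overlaps else -1
        if best >= 0 and overlaps[best] > thresh and best not in matched_gt:
            matched_gt.add(best)
            tp += 1
    fp = len(predicted) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```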
Ground truth generation by human annotation. The quantitative measures of accuracy proposed above depend on the existence of ground truth. For the vast majority of calcium imaging datasets, no objectively defined ground truth exists, and we must rely on subjective evaluation by human experts. For a dataset with low noise in which the desired ROIs are cell bodies, human experts are typically confident about most of their ROIs, though some are borderline cases that may be ambiguous. Therefore our measures of accuracy should be able to distinguish between algorithms that differ widely in their performance but may not be adequate to distinguish between algorithms that are very similar.

Two-photon calcium imaging data were gathered from both the primary visual cortex (V1) and medial entorhinal cortex (MEC) from awake-behaving mice (Supplementary Methods). All experiments were performed according to the Guide for the Care and Use of Laboratory Animals, and procedures were approved by Princeton University's Animal Care and Use Committee. Each time series of calcium images was corrected for motion artifacts (Supplementary Methods), average-pooled over time with stride 167, and then max-pooled over time with stride 6. This downsampling in time was arbitrarily chosen to reduce noise and make the dataset into a more manageable size. Human experts then annotated ROIs using the ImageJ Cell Magic Wand Tool [17], which automatically generates a region of interest (ROI) based on a single mouse click. The human experts found 4006 neurons in the V1 dataset with an average of 148 neurons per image series and 538 neurons in the MEC dataset with an average of 54 neurons per image series. Human experts used the following criteria to select neurons:
1. the soma was in the focal plane of the image, apparent as a light doughnut-like ring (the soma cytosol) surrounding a dark area (the nucleus), or
2. the area showed significantly changing brightness distinguishable from background and had the same general size and shape expected from a neuron in the given brain region.

After motion correction, downsampling, and human labeling, the V1 dataset consisted of 27 16-bit grayscale multi-page TIFF image series ranging from 28 to 142 frames per series with 512 × 512 pixels per frame. The MEC dataset consisted of 10 image series ranging from 5 to 28 frames in the same format. Human annotation time was estimated at one hour per image series for the V1 dataset and 40 minutes per image series for the MEC dataset. Each human-labeled ROI was represented as a 512 × 512 pixel binary mask.

3 Convolutional network

Preprocessing of images and ground truth ROIs. Microscopy image series from the V1 and MEC datasets were preprocessed prior to network training (Figure 1). Image contrast was enhanced by clipping all pixel values above the 99th percentile and below the 3rd percentile. Pixel values were then normalized to [0, 1]. We divided the V1 series into 60% training, 20% validation, and 20% test sets and the MEC series into 50% training, 20% validation, and 30% test sets. Neighboring ground truth ROIs often touched or even overlapped with each other. For the purpose of ConvNet training, we shrank the ground truth ROIs by replacing each one with a 4-pixel radius disk located at the centroid of the ROI. The shrinkage was intended to encourage the ConvNets to separate neighboring neurons.

[Figure 1 shows example images in four columns ("Initial Image", "Contrast Enhancement", "Human Labeled ROIs", "ROI Centroids") for the V1 dataset (row 1) and MEC dataset (row 2); scale bars 20 µm.] Figure 1: Preprocessing steps for calcium images and human-labeled ROIs. Col 1) Calcium imaging stacks were motion-corrected and downsampled in time. Col 2) Image contrast was enhanced by clipping pixel intensities below the 3rd and above the 99th percentile then linearly rescaling pixel intensities between these new bounds. Col 3) Human-labeled ROIs were converted into binary masks. Col 4) Networks were trained to detect 4-pixel radius circular centroids of human-labeled ROIs. Primary visual cortex (V1, Row 1) and medial entorhinal cortex (MEC, Row 2) datasets were preprocessed identically.
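A sketch of the two preprocessing steps just described: clipping to the 3rd-99th percentile range with rescaling to [0, 1], and replacing each labeled ROI by a 4-pixel-radius disk at its centroid. This is illustrative only, assumes NumPy arrays, and the function names are hypothetical.

```python
import numpy as np

def enhance_contrast(stack, lo_pct=3, hi_pct=99):
    """Clip pixel values to the [3rd, 99th] percentile and rescale to [0, 1]."""
    lo, hi = np.percentile(stack, [lo_pct, hi_pct])
    return (np.clip(stack, lo, hi) - lo) / (hi - lo)

def shrink_rois_to_disks(roi_masks, radius=4):
    """Replace each binary ROI mask by a disk of the given radius centered
    at the ROI centroid, combined into a single training target image."""
    h, w = roi_masks[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    target = np.zeros((h, w), dtype=bool)
    for mask in roi_masks:
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        target |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return target
```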
Convolutional network architecture and training. The architecture of the (2+1)D ConvNet is depicted in Figure 2. The input is an image stack containing T time slices. There are four convolutional layers, a max pooling over all time slices, and then two pixelwise fully connected layers. This yields two 2D grayscale images as output, which together represent the softmax probability of each pixel being inside an ROI centroid. The convolutional layers were chosen to contain only 2D kernels, because the temporal downsampling used in the preprocessing (Section 2) caused most neural activity to last for only a single time frame. Each output pixel depended on a 37 × 37 × T pixel field of view in the input, where T is the number of frames in the input image stack, governed by the length of the imaging experiment and the imaging sampling rate. T was equalized to 50 for all image stacks in the V1 dataset and 5 for all image stacks in the MEC dataset using averaging and bicubic interpolation. In the future, we will consider less temporal downsampling and the use of 3D kernels in the convolutional layers. The ConvNet was applied in a 37 × 37 × T window, sliding in two dimensions over the input image stack to produce an output pixel for every location of the window fully contained within the image bounds. For comparison, we also trained a 2D ConvNet that took as input the time-averaged image stack and did no temporal computation (Figure 2).

[Figure 2 diagrams: A) the (2+1)D network, four 10x10x1 conv layers (10 units each) on the 3D image input, a 1x1xT max filter, a 20-unit pixelwise FC layer, and a 1x1 conv producing the 2D output image; B) the 2D network, 3x3 conv layers (24, 48, 48, 72, 96, 96, 120 units) interleaved with 2x2 max filters, a 50-unit FC layer, and a final 1x1 conv.] Figure 2: A) Schematic of the (2+1)D network architecture. The (2+1)D network transforms 3D calcium imaging stacks (stacks of 2D calcium images changing over time) into 2D images of predicted neuron locations. All convolutional filters are 2D except for the 1x1xT max filter layer, where T is the number of frames in the image stack. B) The 2D network architecture. The 2D network takes as input calcium imaging stacks that are mean projected over time down to two dimensions.

We used ZNN, an open-source sliding window ConvNet package with multi-core CPU parallelism and FFT-based convolution [18]. ZNN automatically augmented training sets by random rotations (multiples of 90 degrees) and reflections of image patches to facilitate ConvNet learning of invariances. The training sets were also rebalanced by the fraction of pixels in human-labeled ROIs to the total number of pixels. See Supplementary Methods for further details. The (2+1)D network was trained with softmax loss and output patches of size 120 × 120. The learning rate parameter was annealed by hand from 0.01 to 0.002, and the momentum parameter was annealed by hand from 0.9 to 0.5. The network was trained for 16800 stochastic gradient descent (SGD) updates for the V1 dataset, which took approximately 1.2 seconds/update (≈5.5 hrs) on an Amazon EC2 c4.8xlarge instance (Supplementary Figure 1). The network was trained for 200000 SGD updates for the MEC dataset, which took approximately 0.1 seconds/update (≈5.5 hrs). The 2D network training omitted annealing of the learning rate and momentum parameters. The 2D network was trained for 14000 SGD updates for the V1 dataset, which took approximately 0.9 seconds/update (≈3.75 hrs) on an Amazon EC2 c4.8xlarge instance (Supplementary Figure 1). We performed early stopping on the network after 10200 SGD updates based on the validation loss.

Network output postprocessing. Network outputs were converted into individual ROIs by:
1. Thresholding out pixels with low probability values,
2. Removing small connected components,
3. Weighting resulting pixels with a normalized distance transform,
4. Performing marker-based watershed labeling with local max markers,
5. Merging small watershed regions, and
6. Automatically applying the ImageJ Cell Magic Wand tool to the original images at the centroids of the watershed regions.
Thresholding and minimum size values were optimized using the validation sets (Supplementary Methods); a sketch of steps 1-4 appears at the end of this section.

Source code. A ready-to-use pipeline, including pre- and postprocessing, ConvNet training, and precision-recall scoring, will be publicly available for community use (https://github.com/NoahApthorpe/ConvnetCellDetection).
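To make the (2+1)D architecture concrete, here is a PyTorch-style sketch. The paper trained with ZNN, not PyTorch, so this is a re-expression under stated assumptions (valid convolutions, ReLU nonlinearities, and pixelwise FC layers implemented as 1×1 convolutions); the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn

class TwoPlusOneD(nn.Module):
    """Sketch of the (2+1)D net: four 10x10 2D convs applied to every frame,
    a max filter over all T time slices, then pixelwise FC layers."""
    def __init__(self, conv_units=10, fc_units=20):
        super().__init__()
        c = lambda cin, cout: nn.Conv3d(cin, cout, kernel_size=(1, 10, 10))
        self.convs = nn.Sequential(
            c(1, conv_units), nn.ReLU(),
            c(conv_units, conv_units), nn.ReLU(),
            c(conv_units, conv_units), nn.ReLU(),
            c(conv_units, conv_units), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv2d(conv_units, fc_units, kernel_size=1), nn.ReLU(),
            nn.Conv2d(fc_units, 2, kernel_size=1),  # 2-way per-pixel softmax logits
        )

    def forward(self, x):            # x: (batch, 1, T, H, W)
        h = self.convs(x)            # four valid 10x10 convs -> 37x37 field of view
        h = h.max(dim=2).values      # 1x1xT max filter over all time slices
        return self.head(h)          # (batch, 2, H-36, W-36) logits
```

And a sketch of postprocessing steps 1-4 (small-region merging and the ImageJ Cell Magic Wand step are omitted). The thresholds, sizes, and marker-finding details here are assumptions, not the authors' tuned values; the SciPy/scikit-image calls are standard, but their composition is only illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def probability_map_to_rois(prob, thresh=0.5, min_size=20):
    mask = prob > thresh                                  # 1. threshold
    labels, n = ndimage.label(mask)
    for lab in range(1, n + 1):                           # 2. drop small components
        if (labels == lab).sum() < min_size:
            mask[labels == lab] = False
    dist = ndimage.distance_transform_edt(mask)
    dist /= dist.max() + 1e-9                             # 3. normalized distance transform
    peaks = peak_local_max(dist, labels=ndimage.label(mask)[0],
                           footprint=np.ones((5, 5)))     # 4. local-max markers...
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)           # ...and watershed labeling
```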
4 Results

ConvNets successfully detect cells in calcium images. A sample image from the V1 test set and ConvNet output is shown in Figure 4. Postprocessing of the ConvNet output yielded predicted ROIs, many of which are the same as the human ROIs (Figure 4C). As described in Section 2, we quantified agreement between ConvNet and human using the precision-recall formalism. Both (2+1)D and 2D networks attained the same F1 score (0.71). Full precision-recall curves are given in Supplementary Figure 1.

Inspection of the ConvNet-human disagreements suggested that some were not actually ConvNet errors. To investigate this hypothesis, the original human expert reevaluated all disagreements with the (2+1)D network. After reevaluation, 131 false positives became true positives, and 30 false negatives became true negatives (Figure 4D). Some of these reversals appeared to involve unambiguous human errors in the original annotation, while others were ambiguous cases (Figure 4E-G). After reevaluation, the F1 score of the (2+1)D network increased to 0.82. The F1 score of the human expert's reevaluation relative to his original annotation was 0.89. These results indicate that the ConvNet is nearing human performance.

(2+1)D versus 2D network. The (2+1)D and 2D networks achieved similar precision, recall, and F1 scores on the V1 dataset; however, the (2+1)D network produced raw output with less noise than the 2D network (Figure 3). Qualitative inspection also indicates that the (2+1)D network finds transiently active and transiently in-focus neurons missed by the 2D network (Figure 3). Although such neurons occurred infrequently in the V1 dataset and did not noticeably affect network scores, these results suggest that datasets with larger populations of transiently active or variably focused cells will particularly benefit from (2+1)D network architectures.

ConvNet segmentation outperforms PCA/ICA. The (2+1)D network was also able to successfully locate neurons in the MEC dataset (Figure 5). For comparison, we also implemented and applied PCA/ICA as described by Ref. [6]. The (2+1)D network achieved an F1 score of 0.51, while PCA/ICA achieved 0.27. Precision and recall numbers are given in Figure 5.

[Figure 3 panels: A) image sequences of a transiently active neuron and a neuron falling in and out of focus, each with (2+1)D network, 2D network, and overlay outputs (scale bars 20 µm); B) raw outputs of the two networks; C) histogram of pixel intensity versus number of pixels in image (×10^4).] Figure 3: A) The (2+1)D network detected neurons that the 2D network failed to locate. The sequence of greyscale images shows a patch of V1 neurons over time. Both transiently active neurons and neurons that wane in and out of the focal plane are visible. The color image shows the output of both networks. The (2+1)D network detects these transiently visible neurons, whereas the 2D network is unable to find these cells using only the mean-flattened image. B) The raw outputs of the (2+1)D and 2D networks. C) Representative histogram of output pixel intensities. The (2+1)D network output has more values clustered around 0 and 1 compared to the 2D network. This suggests that (2+1)D network output has a higher signal to noise ratio than 2D network output.

ConvNet accuracy was lower on the MEC dataset than the V1 dataset, probably because the former has more noise and larger motion artifacts. The amount of training data for the MEC dataset was also much smaller. PCA/ICA accuracy was numerically worse, but this result should be interpreted cautiously. PCA/ICA is intended to identify active neurons, while the ground truth included both active and inactive neurons. Furthermore, the ground truth depends on the human expert's selection criteria, which are not accessible to PCA/ICA. Training and post-processing optimization for ConvNet segmentation took ≈6 hours with a forward pass taking ≈1.2 seconds per image series. Parameter optimization for PCA/ICA performed by a human expert took ≈2.5 hours with a forward pass taking ≈40 minutes.
This amounted to ≈6 hours total computation time for the ConvNet and ≈9 hours for the PCA/ICA algorithm. This suggests that ConvNet segmentation is faster than PCA/ICA for all but the smallest datasets.

5 Discussion

The lack of quantitative difference between (2+1)D and 2D ConvNet accuracy (same F1 score on the V1 dataset) may be due to limitations of our study, such as imperfect ground truth and temporal downsampling in preprocessing. It may also be because the vast majority of neurons in the V1 dataset are clearly visible in the time-averaged image. We do have qualitative evidence that the (2+1)D architecture may turn out to be superior for other datasets, because its output looks cleaner, and it is able to detect transiently active or transiently in-focus cells (Figure 3). The (2+1)D ConvNet outperformed PCA/ICA in the precision-recall metrics. We are presently working to compare against recently released basis learning methods [7]. ConvNets readily locate inactive neurons and process new images rapidly once trained. ConvNets adapt to the selection criteria of the neuroscientist if they are implicitly contained in the training set. They do not depend on hand-designed features and so require little expertise in computer vision. ConvNet speed could enable novel applications involving online ROI detection, such as computer-guided single-cell optogenetics [11] or real-time neural feedback experiments.

[Figure 4 panels: A-D) example V1 image with human labels, (2+1)D network output, their overlay, and labels added/removed by human relabeling (scale bars 20 µm); E-G) example reevaluated ROIs; H) F1-score boxplots for the 2D and (2+1)D temporal networks against original and relabeled annotations, plus human original-to-relabeled agreement.] Figure 4: The (2+1)D network successfully detected neurons in the V1 test set with near-human accuracy. A) Slice from preprocessed calcium imaging stack input to network. B) Network softmax probability output. Brighter regions are considered by the network to have higher probability of being a neuron. C) ROIs found by the (2+1)D network after post-processing, overlaid with human labels. Network output is shown by green outlines, whereas human labels are red. Regions of agreement are indicated by yellow overlays. D) ROI labels added by human reevaluation are shown in blue. ROI labels removed by reevaluation are shown in magenta. Post hoc assessment of network output revealed a sizable portion of ROIs that were initially missed by human labeling. E) Examples of formerly negative ROIs that were reevaluated as positive. F) Initial positive labels that were reevaluated to be false. G) Examples of ROIs that remained negative even after reevaluation. H) F1 scores for (2+1)D and 2D networks before and after ROI reevaluation. Human labels before and after reevaluation were also compared to assess human labeling variability. Boxplots depict the variability of F1 scores around the median score across test images.

[Figure 5 panels: A-D) example MEC image with human labels, (2+1)D network output, PCA/ICA output, and overlays (scale bars 20 µm); E) bar comparison of the (2+1)D temporal network versus PCA/ICA.] Figure 5: The (2+1)D network successfully detected neurons in the MEC test set with higher precision and recall than PCA/ICA. A) Slice from preprocessed calcium imaging stack that was input to network. B) Network output, normalized by softmax. C) ROIs found by the (2+1)D network after postprocessing, overlaid with ROIs previously labeled by a human. Network output is shown by red outlines, whereas human labels are green.
Regions of agreement are indicated by yellow overlays. D) The ROIs found by PCA/ICA are overlaid in blue. E) Quantitative comparison of F1 score, precision, and recall for (2+1)D network and PCA/ICA on human-labeled MEC data.

Acknowledgments

We thank Kisuk Lee, Jingpeng Wu, Nicholas Turner, and Jeffrey Gauthier for technical assistance. We also thank Sue Ann Koay, Niranjani Prasad, Cyril Zhang, and Hussein Nagree for discussions. This work was supported by IARPA D16PC00005 (HSS), the Mathers Foundation (HSS), NIH R01 MH083686 (DWT), NIH U01 NS090541 (DWT, HSS), NIH U01 NS090562 (HSS), Simons Foundation SCGB (DWT), and U.S. Army Research Office W911NF-12-1-0594 (HSS).

References

[1] Daniel Huber, DA Gutnisky, S Peron, DH O'Connor, JS Wiegert, Lin Tian, TG Oertner, LL Looger, and K Svoboda. Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature, 484(7395):473–478, 2012.
[2] Daniel A Dombeck, Anton N Khabbaz, Forrest Collman, Thomas L Adelman, and David W Tank. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron, 56(1):43–57, 2007.
[3] Winfried Denk, James H Strickler, Watt W Webb, et al. Two-photon laser scanning fluorescence microscopy. Science, 248(4951):73–76, 1990.
[4] Tsai-Wen Chen, Trevor J Wardill, Yi Sun, Stefan R Pulver, Sabine L Renninger, Amy Baohan, Eric R Schreiter, Rex A Kerr, Michael B Orger, Vivek Jayaraman, et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature, 499(7458):295–300, 2013.
[5] Christopher D Harvey, Philip Coen, and David W Tank. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature, 484(7392):62–68, 2012.
[6] Eran A Mukamel, Axel Nimmerjahn, and Mark J Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63(6):747–760, 2009.
[7] Eftychios A Pnevmatikakis, Daniel Soudry, Yuanjun Gao, Timothy A Machado, Josh Merel, David Pfau, Thomas Reardon, Yu Mu, Clay Lacefield, Weijian Yang, et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285–299, 2016.
[8] Marius Pachitariu, Adam M Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, and Maneesh Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In Advances in Neural Information Processing Systems, pages 1745–1753, 2013.
[9] Ferran Diego Andilla and Fred A Hamprecht. Sparse space-time deconvolution for calcium image analysis. In Advances in Neural Information Processing Systems, pages 64–72, 2014.
[10] David S Greenberg, Arthur R Houweling, and Jason ND Kerr. Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nature Neuroscience, 11(7):749–751, 2008.
[11] John Peter Rickgauer, Karl Deisseroth, and David W Tank. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nature Neuroscience, 17(12):1816–1824, 2014.
[12] Yann LeCun, Koray Kavukcuoglu, Clément Farabet, et al. Convolutional networks and applications in vision. In ISCAS, pages 253–256, 2010.
[13] Ilya Valmianski, Andy Y Shih, Jonathan D Driscoll, David W Matthews, Yoav Freund, and David Kleinfeld. Automatic identification of fluorescently labeled brain cells for rapid functional imaging. Journal of Neurophysiology, 104(3):1803–1811, 2010.
[14] Spencer L Smith and Michael Häusser. Parallel processing of visual space by neighboring neurons in mouse visual cortex. Nature Neuroscience, 13(9):1144–1149, 2010.
[15] Patrick Kaifosh, Jeffrey D Zaremba, Nathan B Danielson, and Attila Losonczy. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in Neuroinformatics, 8, 2014.
[16] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
[17] Theo Walker. Cell magic wand tool. http://www.maxplanckflorida.org/fitzpatricklab/software/cellMagicWand/index.html. 2014.
[18] Aleksandar Zlateski, Kisuk Lee, and H Sebastian Seung. ZNN: a fast and scalable algorithm for training 3d convolutional networks on multi-core and many-core shared memory machines. arXiv:1510.06706, 2015.
Supervised Word Mover's Distance

Gao Huang*, Chuan Guo* Cornell University {gh349,cg563}@cornell.edu
Yu Sun, Kilian Q. Weinberger Cornell University {ys646,kqw4}@cornell.edu
Matt J. Kusner† Alan Turing Institute, University of Warwick mkusner@turing.ac.uk
Fei Sha University of California, Los Angeles feisha@cs.ucla.edu
* Authors contributing equally. † This work was done while the author was a student at Washington University in St. Louis.

Abstract

Recently, a new document metric called the word mover's distance (WMD) has been proposed with unprecedented results on kNN-based document classification. The WMD elevates high-quality word embeddings to a document metric by formulating the distance between two documents as an optimal transport problem between the embedded words. However, the document distances are entirely unsupervised and lack a mechanism to incorporate supervision when available. In this paper we propose an efficient technique to learn a supervised metric, which we call the Supervised-WMD (S-WMD) metric. The supervised training minimizes the stochastic leave-one-out nearest neighbor classification error on a per-document level by updating an affine transformation of the underlying word embedding space and a word-importance weight vector. As the gradient of the original WMD distance would result in an inefficient nested optimization problem, we provide an arbitrarily close approximation that results in a practical and efficient update rule. We evaluate S-WMD on eight real-world text classification tasks on which it consistently outperforms almost all of our 26 competitive baselines.

1 Introduction

Document distances are a key component of many text retrieval tasks such as web-search ranking [24], book recommendation [16], and news categorization [25]. Because of the variety of potential applications, there has been a wealth of work towards developing accurate document distances [2, 4, 11, 27]. In large part, prior work focused on extracting meaningful document representations, starting with the classical bag of words (BOW) and term frequency-inverse document frequency (TF-IDF) representations [30]. These sparse, high-dimensional representations are frequently nearly orthogonal [17], and a pair of similar documents may therefore have nearly the same distance as a pair that are very different. It is possible to design more meaningful representations through eigendecomposing the BOW space with Latent Semantic Indexing (LSI) [11], or learning a probabilistic clustering of BOW vectors with Latent Dirichlet Allocation (LDA) [2]. Other work generalizes LDA [27] or uses denoising autoencoders [4] to learn a suitable document representation.

Recently, Kusner et al. [19] proposed the Word Mover's Distance (WMD), a new distance for text documents that leverages word embeddings [22]. Given these high-quality embeddings, the WMD defines the distances between two documents as the optimal transport cost of moving all words from one document to another within the word embedding space. This approach was shown to lead to state-of-the-art error rates in k-nearest neighbor (kNN) document classification.
Lately, there has been a vast amount of work on metric learning [10, 15, 36, 37], most of which focuses on learning a generalized linear Euclidean metric. These methods often scale quadratically with the input dimensionality, and can only be applied to high-dimensional text documents after dimensionality reduction techniques such as PCA [36]. In this paper we propose an algorithm for learning a metric to improve the Word Mover?s Distance. WMD stands out from prior work in that it computes distances between documents without ever learning a new document representation. Instead, it leverages low-dimensional word representations, for example word2vec, to compute distances. This allows us to transform the word embedding instead of the documents, and remain in a low-dimensional space throughout. At the same time we propose to learn word-specific ?importance? weights, to emphasize the usefulness of certain words for distinguishing the document class. At first glance, incorporating supervision into the WMD appears computationally prohibitive, as each individual WMD computation scales cubically with respect to the (sparse) dimensionality of the documents. However, we devise an efficient technique that exploits a relaxed version of the underlying optimal transport problem, called the Sinkhorn distance [6]. This, combined with a probabilistic filtering of the training set, reduces the computation time significantly. Our metric learning algorithm, Supervised Word Mover?s Distance (S-WMD), directly minimizes a stochastic version of the leave-one-out classification error under the WMD metric. Different from classic metric learning, we learn a linear transformation of the word representations while also learning re-weighted word frequencies. These transformations are learned to make the WMD distances match the semantic meaning of similarity encoded in the labels. We show across 8 datasets and 26 baseline methods the superiority of our method. 2 Background Here we describe the word embedding technique we use (word2vec) and the recently introduced Word Mover?s Distance. We then detail the setting of linear metric learning and the solution proposed by Neighborhood Components Analysis (NCA) [15], which inspires our method. word2vec may be the most popular technique for learning a word embedding over billions of words and was introduced by Mikolov et al. [22]. Each word in the training corpus is associated with an initial word vector, which is then optimized so that if two words w1 and w2 frequently occur together, they have high conditional probability p(w2 |w1 ). This probability is the hierarchical softmax of the word vectors vw1 and vw2 [22], an easily-computed quantity which allows a simplified neural language model (the word2vec model) to be trained efficiently on desktop computers. Training an embedding over billions of words allows word2vec to capture surprisingly accurate word relationships [23]. Word embeddings can learn hundreds of millions of parameters and are typically by design unsupervised, allowing them to be trained on large unlabeled text corpora ahead of time. Throughout this paper we use word2vec, although many word embeddings could be used [5, 21? ]. Word Mover?s Distance. Leveraging the compelling word vector relationships of word embeddings, Kusner et al. [19] introduced the Word Mover?s Distance (WMD) as a distance between text documents. At a high level, the WMD is the minimum distance required to transport the words from one document to another. 
We assume that we are given a word embedding matrix $X \in \mathbb{R}^{d \times n}$ for a vocabulary of $n$ words. Let $x_i \in \mathbb{R}^d$ be the representation of the $i$th word, as defined by this embedding. Additionally, let $d_a, d_b$ be the $n$-dimensional normalized bag-of-words (BOW) vectors for two documents, where $d_{a,i}$ is the number of times word $i$ occurs in $d_a$ (normalized over all words in $d_a$). The WMD introduces an auxiliary "transport" matrix $T \in \mathbb{R}^{n \times n}$, such that $T_{ij}$ describes how much of $d_{a,i}$ should be transported to $d_{b,j}$. Formally, the WMD learns $T$ to minimize

$$D(d_a, d_b) = \min_{T \ge 0} \sum_{i,j=1}^{n} T_{ij} \, \|x_i - x_j\|_2^p \quad \text{subject to} \quad \sum_{j=1}^{n} T_{ij} = d_{a,i}, \;\; \sum_{i=1}^{n} T_{ij} = d_{b,j} \;\; \forall i,j, \qquad (1)$$

where $p$ is usually set to 1 or 2. In this way, documents that share many words (or even related ones) should have smaller distances than documents with very dissimilar words. It was noted in Kusner et al. [19] that the WMD is a special case of the Earth Mover's Distance (EMD) [29], also known more generally as the Wasserstein distance [20]. The authors also introduce the word centroid distance (WCD), which uses a fast approximation first described by Rubner et al. [29]: $\|Xd - Xd'\|_2$. It can be shown that the WCD always lower-bounds the WMD. Intuitively, the WCD represents each document by its weighted average word vector, where the weights are the normalized BOW counts. The time complexity of solving the WMD optimization problem is $O(q^3 \log q)$ [26], where $q$ is the maximum number of unique words in either $d$ or $d'$. The WCD scales as $O(dq)$.

Regularized Transport Problem. To alleviate the cubic time complexity of the Wasserstein distance computation, Cuturi [6] formulated a smoothed version of the underlying transport problem by adding an entropy regularizer to the transport objective. This makes the objective function strictly convex, and efficient algorithms can be adopted to solve it. In particular, given a transport matrix $T$, let $h(T) = -\sum_{i,j=1}^{n} T_{ij} \log(T_{ij})$ be the entropy of $T$. For any $\lambda > 0$, the regularized (primal) transport problem is defined as

$$\min_{T \ge 0} \sum_{i,j=1}^{n} T_{ij} \, \|x_i - x_j\|_2^p - \frac{1}{\lambda} h(T) \quad \text{subject to} \quad \sum_{j=1}^{n} T_{ij} = d_{a,i}, \;\; \sum_{i=1}^{n} T_{ij} = d_{b,j} \;\; \forall i,j. \qquad (2)$$

The larger $\lambda$ is, the closer this relaxation is to the original Wasserstein distance. Cuturi [6] proposes an efficient algorithm to solve for the optimal transport $T^{\lambda}$ using a clever matrix-scaling algorithm. Specifically, we may define the matrix $K_{ij} = \exp(-\lambda \|x_i - x_j\|_2^2)$ and solve for the scaling vectors $u, v$ to a fixed point by computing $u = d_a \, ./ \, (Kv)$ and $v = d_b \, ./ \, (K^\top u)$ in an alternating fashion. These yield the relaxed transport $T^{\lambda} = \mathrm{diag}(u) \, K \, \mathrm{diag}(v)$. This algorithm can be shown to have empirical time complexity $O(q^2)$ [6], which is significantly faster than solving the WMD problem exactly.
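The matrix-scaling iteration just described is only a few lines of NumPy. The sketch below is a minimal, unoptimized rendering of it (the function name is ours), using the squared Euclidean ground cost that the supervised method of Section 3 also adopts; a practical implementation would restrict the cost matrix to the words actually present in the two documents:

    import numpy as np

    def sinkhorn_transport(d_a, d_b, X, lam=10.0, n_iter=100):
        # Entropy-relaxed transport between two nBOW histograms (Cuturi's iteration).
        # X is the d x n embedding matrix; column i embeds word i.
        sq_cost = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # n x n ground costs
        K = np.exp(-lam * sq_cost)
        u = np.ones_like(d_a)
        v = np.ones_like(d_b)
        for _ in range(n_iter):            # alternate u = d_a./(Kv), v = d_b./(K'u)
            u = d_a / (K @ v + 1e-300)     # tiny constant guards against 0/0
            v = d_b / (K.T @ u + 1e-300)
        T = np.diag(u) @ K @ np.diag(v)    # relaxed transport T^lambda
        return T, u, v

With a large lam the plan T approaches the exact WMD solution, at the cost of slower convergence of the fixed point, consistent with the trade-off noted above.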
Linear Metric Learning. Assume that we have access to a training set $\{x_1, \ldots, x_n\} \subset \mathbb{R}^d$, arranged as columns in a matrix $X \in \mathbb{R}^{d \times n}$, and corresponding labels $\{y_1, \ldots, y_n\} \in \mathcal{Y}^n$, where $\mathcal{Y}$ contains some finite number of classes $C = |\mathcal{Y}|$. Linear metric learning learns a matrix $A \in \mathbb{R}^{r \times d}$, where $r \le d$, and defines the generalized Euclidean distance between two documents $x_i$ and $x_j$ as $d_A(x_i, x_j) = \|A(x_i - x_j)\|_2$. Popular linear metric learning algorithms are NCA [15], LMNN [36], and ITML [10], amongst others [37]. These methods learn a matrix $A$ to minimize a loss function that is often an approximation of the leave-one-out (LOO) classification error of the kNN classifier. Neighborhood Components Analysis (NCA) was introduced by Goldberger et al. [15] to learn a generalized Euclidean metric. Here, the authors approximate the non-continuous leave-one-out kNN error by defining a stochastic neighborhood process. An input $x_i$ is assigned input $x_j$ as its nearest neighbor with probability

$$p_{ij} = \frac{\exp(-d_A^2(x_i, x_j))}{\sum_{k \ne i} \exp(-d_A^2(x_i, x_k))}, \qquad (3)$$

where we define $p_{ii} = 0$. Under this stochastic neighborhood assignment, an input $x_i$ with label $y_i$ is classified correctly if its nearest neighbor is any $x_j \ne x_i$ from the same class ($y_j = y_i$). The probability of this event can be stated as $p_i = \sum_{j: y_j = y_i} p_{ij}$. NCA learns $A$ by maximizing the expected LOO accuracy $\sum_i p_i$, or equivalently by minimizing $-\sum_i \log(p_i)$, the KL-divergence from a perfect classification distribution ($p_i = 1$ for all $x_i$).

3 Learning a Word Embedding Metric

In this section we propose a method for learning a supervised document distance, by way of learning a generalized Euclidean metric within the word embedding space and a word importance vector. We will refer to the learned document distance as the Supervised Word Mover's Distance (S-WMD). To learn such a metric we assume we have a training dataset consisting of $m$ documents $\{d_1, \ldots, d_m\} \subset \Delta^n$, where $\Delta^n$ is the $(n-1)$-dimensional simplex (thus each document is represented as a normalized histogram over the words in the vocabulary, of size $n$). For each document we are given a label out of $C$ possible classes, i.e. $\{y_1, \ldots, y_m\} \in \{1, \ldots, C\}^m$. Additionally, we are given a word embedding matrix $X \in \mathbb{R}^{d \times n}$ (e.g., the word2vec embedding) which defines a $d$-dimensional word vector for each of the words in the vocabulary.

Supervised WMD. As described in the previous section, it is possible to define a distance between any two documents $d_a$ and $d_b$ as the minimum cumulative word distance of moving $d_a$ to $d_b$ in word embedding space, as is done in the WMD. Given a labeled training set we would like to improve the distance so that documents that share the same labels are close, and those with different labels are far apart. We capture this notion of similarity in two ways: First, we transform the word embedding, which captures a latent representation of words. We adapt this representation with a linear transformation $x_i \mapsto A x_i$, where $x_i$ represents the embedding of the $i$th word. Second, as different classification tasks and data sets may value words differently, we also introduce a histogram importance vector $w$ that re-weights the word histogram values to reflect the importance of words for distinguishing the classes:

$$\tilde{d}_a = (w \circ d_a) / (w^\top d_a), \qquad (4)$$

where $\circ$ denotes the element-wise Hadamard product. After applying the vector $w$ and the linear mapping $A$, the WMD distance between documents $d_a$ and $d_b$ becomes

$$D_{A,w}(d_a, d_b) \triangleq \min_{T \ge 0} \sum_{i,j=1}^{n} T_{ij} \, \|A(x_i - x_j)\|_2^2 \quad \text{s.t.} \quad \sum_{j=1}^{n} T_{ij} = \tilde{d}_{a,i} \;\text{ and }\; \sum_{i=1}^{n} T_{ij} = \tilde{d}_{b,j} \;\; \forall i,j. \qquad (5)$$

Loss Function. Our goal is to learn the matrix $A$ and vector $w$ to make the distance $D_{A,w}$ reflect the semantic definition of similarity encoded in the labeled data. Similar to prior work on metric learning [10, 15, 36] we achieve this by minimizing the kNN-LOO error with the distance $D_{A,w}$ in the document space. As the LOO error is non-differentiable, we use the stochastic neighborhood relaxation proposed by Hinton & Roweis [18], which is also used for NCA. Similar to prior work we use the squared Euclidean word distance in Eq. (5). We use the KL-divergence loss proposed in NCA alongside the definition of neighborhood probability in (3), which yields

$$\ell(A, w) = -\sum_{a=1}^{m} \log\left( \frac{\sum_{b: y_b = y_a} \exp(-D_{A,w}(d_a, d_b))}{\sum_{c \ne a} \exp(-D_{A,w}(d_a, d_c))} \right). \qquad (6)$$
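Given any matrix of pairwise document distances, evaluating Eq. (6) is a handful of array operations. A small NumPy sketch (names ours; a practical version would work in log space to avoid underflow for very large distances):

    import numpy as np

    def swmd_loss(D, y):
        # D[a, b] = D_{A,w}(d_a, d_b); y holds the m class labels.
        E = np.exp(-D)
        np.fill_diagonal(E, 0.0)                           # a document is never its own neighbor
        P = E / (E.sum(axis=1, keepdims=True) + 1e-300)    # neighbor probabilities p_ab
        p = (P * (y[:, None] == y[None, :])).sum(axis=1)   # p_a: mass on same-class neighbors
        return -np.log(p + 1e-300).sum(), P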
Gradient. We can compute the gradient of the loss $\ell(A,w)$ with respect to $A$ and $w$ as follows:

$$\frac{\partial}{\partial (A,w)} \ell(A, w) = \sum_{a=1}^{m} \sum_{b \ne a} \frac{p_{ab}}{p_a} (\delta_{ab} - p_a) \, \frac{\partial}{\partial (A,w)} D_{A,w}(d_a, d_b), \qquad (7)$$

where $\delta_{ab} = 1$ if and only if $y_a = y_b$, and $\delta_{ab} = 0$ otherwise.

3.1 Fast computation of $\partial D_{A,w}(d_a, d_b) / \partial (A,w)$

Notice that the remaining gradient term above, $\partial D_{A,w}(d_a, d_b) / \partial (A,w)$, contains the nested linear program defined in (5). In fact, computing this gradient just for a single pair of documents requires time $O(q^3 \log q)$, where $q$ is the largest number of unique words in either document [8]. This quickly becomes prohibitively slow as the document size and the number of documents increase. Further, the gradient is not always guaranteed to exist [1, 7] (instead we must resort to subgradient descent). Motivated by the recent works on fast Wasserstein distance computation [6, 8, 12], we propose to relax the modified linear program in Eq. (5) using the entropy as in Eq. (2). As described in Section 2, this allows us to approximately solve Eq. (5) in $O(q^2)$ time via $T^{\lambda} = \mathrm{diag}(u) \, K \, \mathrm{diag}(v)$. We will use this approximate solution in the following gradients.

Gradient w.r.t. $A$. It can be shown that

$$\frac{\partial}{\partial A} D_{A,w}(d_a, d_b) = 2A \sum_{i,j=1}^{n} T^{ab}_{ij} \, (x_i - x_j)(x_i - x_j)^\top, \qquad (8)$$

where $T^{ab}$ is the optimizer of (5), so long as it is unique (otherwise it is a subgradient) [1]. We replace $T^{ab}$ by $T^{\lambda}$, which is always unique as the relaxed transport is strongly convex [9].

Gradient w.r.t. $w$. To obtain the gradient with respect to $w$, we need the optimal solution of the dual transport problem:

$$D_{A,w}(d_a, d_b) \triangleq \max_{(\alpha, \beta)} \; \alpha^\top \tilde{d}_a + \beta^\top \tilde{d}_b \quad \text{s.t.} \quad \alpha_i + \beta_j \le \|A(x_i - x_j)\|_2^2 \;\; \forall i,j. \qquad (9)$$

Given that both $\tilde{d}_a$ and $\tilde{d}_b$ are functions of $w$, we have

$$\frac{\partial}{\partial w} D_{A,w}(d_a, d_b) = \frac{\partial \tilde{d}_a}{\partial w} \frac{\partial D_{A,w}}{\partial \tilde{d}_a} + \frac{\partial \tilde{d}_b}{\partial w} \frac{\partial D_{A,w}}{\partial \tilde{d}_b} = \frac{d_a \circ \alpha^\star - (\alpha^{\star\top} \tilde{d}_a) \, d_a}{w^\top d_a} + \frac{d_b \circ \beta^\star - (\beta^{\star\top} \tilde{d}_b) \, d_b}{w^\top d_b}. \qquad (10)$$

Instead of solving the dual directly, we obtain the relaxed optimal dual variables $\tilde{\alpha}^\star, \tilde{\beta}^\star$ via the vectors $u, v$ that were used to derive our relaxed transport $T^{\lambda}$. Specifically, we can solve for the dual variables as $\tilde{\alpha}^\star = \frac{\log(u)}{\lambda} - \frac{\log(u)^\top \mathbf{1}}{\lambda p} \mathbf{1}$ and $\tilde{\beta}^\star = \frac{\log(v)}{\lambda} - \frac{\log(v)^\top \mathbf{1}}{\lambda p} \mathbf{1}$, where $\mathbf{1}$ is the $p$-dimensional all-ones vector. In general, we can observe from Eq. (2) that the above approximation becomes more accurate as $\lambda$ grows. However, setting $\lambda$ too large can make the algorithm converge more slowly. In our experiments, we use $\lambda = 10$, which leads to a nice trade-off between speed and approximation accuracy.
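Combining Eqs. (4), (8), and (10) with the Sinkhorn scalings, the per-pair gradient computation might look as follows. This is a sketch under our own naming; the double loop implementing Eq. (8) is written for clarity and would be vectorized in practice, and all quantities are restricted to each document's support, where the dual recovery via log(u) and log(v) is well defined:

    import numpy as np

    def swmd_pair_gradients(d_a, d_b, w, A, X, lam=10.0, n_iter=100):
        ia, ib = np.flatnonzero(d_a), np.flatnonzero(d_b)  # supports of the two documents
        da_t = (w[ia] * d_a[ia]) / (w[ia] @ d_a[ia])       # Eq. (4): re-weighted histograms
        db_t = (w[ib] * d_b[ib]) / (w[ib] @ d_b[ib])

        Xa, Xb = A @ X[:, ia], A @ X[:, ib]                # transformed embeddings
        cost = ((Xa[:, :, None] - Xb[:, None, :]) ** 2).sum(axis=0)
        K = np.exp(-lam * cost)
        u, v = np.ones(len(ia)), np.ones(len(ib))
        for _ in range(n_iter):
            u = da_t / (K @ v)
            v = db_t / (K.T @ u)
        T = np.diag(u) @ K @ np.diag(v)                    # relaxed transport plan

        gA = np.zeros((X.shape[0], X.shape[0]))            # Eq. (8): sum of outer products
        for p_, i in enumerate(ia):
            for q_, j in enumerate(ib):
                diff = X[:, i] - X[:, j]
                gA += T[p_, q_] * np.outer(diff, diff)
        gA = 2 * A @ gA                                    # an r x d gradient matrix

        alpha = np.log(u) / lam - np.log(u).mean() / lam   # relaxed duals from the scalings
        beta = np.log(v) / lam - np.log(v).mean() / lam

        gw = np.zeros_like(w)                              # Eq. (10), via the chain rule
        gw[ia] = (d_a[ia] * alpha - (alpha @ da_t) * d_a[ia]) / (w[ia] @ d_a[ia])
        gw[ib] += (d_b[ib] * beta - (beta @ db_t) * d_b[ib]) / (w[ib] @ d_b[ib])
        return gA, gw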
3.2 Optimization

Alongside the fast gradient computation introduced above, we can further speed up training with a clever initialization and batch gradient descent.

Algorithm 1 S-WMD
1: Input: word embedding X,
2:        dataset {(d_1, y_1), ..., (d_m, y_m)}
3: c_a = X d_a, for all a in {1, ..., m}
4: A = NCA((c_1, y_1), ..., (c_m, y_m))
5: w = 1
6: while not converged do
7:     randomly select B ⊆ {1, ..., m}
8:     compute gradients g_A, g_w using Eq. (11)
9:     A ← A − η_A g_A
10:    w ← w − η_w g_w
11: end while

Initialization. The loss function in Eq. (6) is non-convex and is thus highly dependent on the initial setting of A and w. A good initialization also drastically reduces the number of gradient steps required. For w, we initialize all its entries to 1, i.e., all words are assigned the same weight at the beginning. For A, we propose to learn an initial projection within the word centroid distance (WCD), defined as $D_0(d_a, d_b) = \|X d_a - X d_b\|_2$, described in Section 2. The WCD should be a reasonable approximation to the WMD. Kusner et al. [19] point out that the WCD is a lower bound on the WMD, which holds true after the transformation with A. We obtain our initialization by applying NCA in word embedding space using the WCD distance between documents. This is to say that we construct the WCD dataset $\{c_1, \ldots, c_m\} \subset \mathbb{R}^d$, representing each text document as its word centroid, and apply NCA in the usual way as described in Section 2. We call this learned distance the Supervised Word Centroid Distance (S-WCD).

Batch Gradient Descent. Once the initial matrix A is obtained, we minimize the loss $\ell(A,w)$ in (6) with batch gradient descent. At each iteration, instead of optimizing over the full training set, we randomly pick a batch of documents $\mathcal{B}$ from the training set and compute the gradient for these documents. We can further speed up training by observing that the vast majority of NCA probabilities $p_{ab}$ are near zero, because most documents are far away from any given document. Thus, for a document $d_a$ we can use the WCD to get a cheap neighbor ordering and only compute the NCA probabilities for the closest set of documents $N_a$, based on the WCD. When we compute the gradient for each of the selected documents, we only use the document's M nearest neighbor documents (defined by WCD distance) to compute the NCA neighborhood probabilities. In particular, the gradient is computed as

$$g_{A,w} = \sum_{a \in \mathcal{B}} \sum_{b \in N_a} (p_{ab}/p_a)(\delta_{ab} - p_a) \, \frac{\partial}{\partial (A,w)} D_{A,w}(d_a, d_b), \qquad (11)$$

where again $N_a$ is the set of nearest neighbors of document $a$. With the gradient, we update A and w with learning rates $\eta_A$ and $\eta_w$, respectively. Algorithm 1 summarizes S-WMD in pseudo-code.

Complexity. The empirical time complexity of solving the dual transport problem scales quadratically with $p$ [26]. Therefore, the complexity of our algorithm is $O(TBN[p^2 + d^2(p + r)])$, where $T$ denotes the number of batch gradient descent iterations, $B = |\mathcal{B}|$ the batch size, $N = |N_a|$ the size of the nearest neighbor set, and $p$ the maximum number of unique words in a document. This is because computing $T^{\lambda}_{ij}$, $\tilde{\alpha}$ and $\tilde{\beta}$ using the alternating fixed-point algorithm in Section 3.1 requires $O(p^2)$ time, while constructing the gradients from Eqs. (8) and (10) takes $O(d^2(p + r))$ time. The approximated gradient in Eq. (11) requires this computation to be repeated $BN$ times. In our experiments, we set $B = 32$ and $N = 200$, and computing the gradient at each iteration can be done in seconds.

4 Results

Table 1: The document datasets (and their descriptions) used for visualization and evaluation.

 name      C   n      ne    BOW dim.  avg words  description
 BBCSPORT  5   517    220   13243     117        BBC sports articles labeled by sport
 TWITTER   3   2176   932   6344      9.9        tweets categorized by sentiment [31]
 RECIPE    15  3059   1311  5708      48.5       recipe procedures labeled by origin
 OHSUMED   10  3999   5153  31789     59.2       medical abstracts (class subsampled)
 CLASSIC   4   4965   2128  24277     38.6       academic papers labeled by publisher
 REUTERS   8   5485   2189  22425     37.1       news dataset (train/test split [3])
 AMAZON    4   5600   2400  42063     45.0       reviews labeled by product
 20NEWS    20  11293  7528  29671     72         canonical news article dataset [3]

Figure 1: t-SNE plots of WMD and S-WMD on all datasets (one panel per dataset: bbcsport, twitter, recipe, ohsumed, classic, reuters, amazon, 20news).

We evaluate S-WMD on 8 different document corpora and compare the kNN error with unsupervised WCD, WMD, and 6 document representations. In addition, all 6 document representation baselines
are used with and without 3 leading supervised metric learning algorithms, resulting in an overall total of 26 competitive baselines. Our code is implemented in Matlab and is freely available at https://github.com/gaohuang/S-WMD.

Datasets and Baselines. We evaluate all approaches on 8 document datasets in the settings of news categorization, sentiment analysis, and product identification, among others. Table 1 describes the classification tasks as well as the size and number of classes C of each of the datasets. We evaluate against the following document representation/distance methods:

1. bag-of-words (BOW): a count of the number of word occurrences in a document; the length of the vector is the number of unique words in the corpus;
2. term frequency-inverse document frequency (TF-IDF): the BOW vector normalized by the document frequency of each word across the corpus;
3. Okapi BM25 [28]: a TF-IDF-like ranking function, first used in search engines;
4. Latent Semantic Indexing (LSI) [11]: projects the BOW vectors onto an orthogonal basis via singular value decomposition;
5. Latent Dirichlet Allocation (LDA) [2]: a generative probabilistic method that models documents as mixtures of word "topics". We train LDA transductively (i.e., on the combined collection of training and testing words) and use the topic probabilities as the document representation;
6. Marginalized Stacked Denoising Autoencoders (mSDA) [4]: a fast method for training stacked denoising autoencoders, which have state-of-the-art error rates on sentiment analysis tasks [14]. For datasets larger than RECIPE we use either a high-dimensional variant of mSDA or take the 20% of features that occur most often, whichever has better performance;
7. Word Centroid Distance (WCD), described in Section 2;
8. Word Mover's Distance (WMD), described in Section 2.

For completeness, we also show results for the Supervised Word Centroid Distance (S-WCD) and the initialization of S-WMD (S-WMD init.), described in Section 3. For methods that propose a document representation (as opposed to a distance), we use the Euclidean distance between these vector representations for visualization and kNN classification. For the supervised metric learning results we first reduce the dimensionality of each representation to 200 dimensions (if necessary) with PCA and then run either NCA, ITML, or LMNN on the projected data. We tune all free hyperparameters in all compared methods with Bayesian optimization (BO), using the implementation of Gardner et al. [13] (http://tinyurl.com/bayesopt).

kNN classification. We show the kNN test error of all document representation and distance methods in Table 2. For datasets that do not have a predefined train/test split (BBCSPORT, TWITTER, RECIPE, CLASSIC, and AMAZON) we average results over five 70/30 train/test splits and report standard errors. For each dataset we highlight the best results (and those whose standard error overlaps the mean of the best result). On the right we also show the average rank across datasets, relative to unsupervised BOW.

Table 2: The kNN test error for all datasets and distances.

                  BBCSPORT     TWITTER      RECIPE       OHSUMED  CLASSIC      REUTERS  AMAZON       20NEWS  AVG. RANK
 Unsupervised
 BOW              20.6 ± 1.2   43.6 ± 0.4   59.3 ± 1.0   61.1     36.0 ± 0.5   13.9     28.5 ± 0.5   57.8    26.1
 TF-IDF           21.5 ± 2.8   33.2 ± 0.9   53.4 ± 1.0   62.7     35.0 ± 1.8   29.1     41.5 ± 1.2   54.4    25.0
 Okapi BM25 [28]  16.9 ± 1.5   42.7 ± 7.8   53.4 ± 1.9   66.2     40.6 ± 2.7   32.8     58.8 ± 2.6   55.9    26.1
 LSI [11]         4.3 ± 0.6    31.7 ± 0.7   45.4 ± 0.5   44.2     6.7 ± 0.4    6.3      9.3 ± 0.4    28.9    12.0
 LDA [2]          6.4 ± 0.7    33.8 ± 0.3   51.3 ± 0.6   51.0     5.0 ± 0.3    6.9      11.8 ± 0.6   31.5    16.6
 mSDA [4]         8.4 ± 0.8    32.3 ± 0.7   48.0 ± 1.4   49.3     6.9 ± 0.4    8.1      17.1 ± 0.4   39.5    18.0
 ITML [10]
 BOW              7.4 ± 1.4    32.0 ± 0.4   63.1 ± 0.9   70.1     7.5 ± 0.5    7.3      20.5 ± 2.1   60.6    23.0
 TF-IDF           1.8 ± 0.2    31.1 ± 0.3   51.0 ± 1.4   55.1     9.9 ± 1.0    6.6      11.1 ± 1.9   45.3    14.8
 Okapi BM25 [28]  3.7 ± 0.5    31.9 ± 0.3   53.8 ± 1.8   77.0     18.3 ± 4.5   20.7     11.4 ± 2.9   81.5    21.5
 LSI [11]         5.0 ± 0.7    32.3 ± 0.4   55.7 ± 0.8   54.7     5.5 ± 0.7    6.9      10.6 ± 2.2   39.6    17.6
 LDA [2]          6.5 ± 0.7    33.9 ± 0.9   59.3 ± 0.8   59.6     6.6 ± 0.5    9.2      15.7 ± 2.0   87.8    22.5
 mSDA [4]         25.5 ± 9.4   43.7 ± 7.4   54.5 ± 1.3   61.8     14.9 ± 2.2   5.9      37.4 ± 4.0   47.7    23.9
 LMNN [36]
 BOW              2.4 ± 0.4    31.8 ± 0.3   48.4 ± 0.4   49.1     4.7 ± 0.3    3.9      10.7 ± 0.3   40.7    11.5
 TF-IDF           4.0 ± 0.6    30.8 ± 0.3   43.7 ± 0.3   40.0     4.9 ± 0.3    5.8      6.8 ± 0.3    28.1    7.8
 Okapi BM25 [28]  1.9 ± 0.7    30.5 ± 0.4   41.7 ± 0.7   59.4     19.0 ± 9.3   9.2      6.9 ± 0.2    57.4    14.4
 LSI [11]         2.4 ± 0.5    31.6 ± 0.2   44.8 ± 0.4   40.8     3.0 ± 0.1    3.2      6.6 ± 0.2    25.1    5.1
 LDA [2]          4.5 ± 0.4    31.9 ± 0.6   51.4 ± 0.4   49.9     4.9 ± 0.4    5.6      12.1 ± 0.6   32.0    14.6
 mSDA [4]         22.7 ± 10.0  50.3 ± 8.6   46.3 ± 1.2   41.6     11.1 ± 1.9   5.3      24.0 ± 3.6   27.1    17.3
 NCA [15]
 BOW              9.6 ± 0.6    31.1 ± 0.5   55.2 ± 0.6   57.4     4.0 ± 0.1    6.2      16.8 ± 0.3   46.4    17.5
 TF-IDF           0.6 ± 0.3    30.6 ± 0.5   41.4 ± 0.4   35.8     5.5 ± 0.2    3.8      6.5 ± 0.2    29.3    5.4
 Okapi BM25 [28]  4.5 ± 0.5    31.8 ± 0.4   45.8 ± 0.5   56.6     20.6 ± 4.8   10.5     8.5 ± 0.4    55.9    17.9
 LSI [11]         2.4 ± 0.7    31.1 ± 0.8   41.6 ± 0.5   37.5     3.1 ± 0.2    3.3      7.7 ± 0.4    30.7    6.3
 LDA [2]          7.1 ± 0.9    32.7 ± 0.3   50.9 ± 0.4   50.7     5.0 ± 0.2    7.9      11.6 ± 0.8   30.9    16.5
 mSDA [4]         21.8 ± 7.4   37.9 ± 2.8   48.0 ± 1.6   40.4     11.2 ± 1.8   5.2      23.6 ± 3.1   26.8    16.1
 Distances in the Word Mover's family
 WCD [19]         11.3 ± 1.1   30.7 ± 0.9   49.4 ± 0.3   48.9     6.6 ± 0.2    4.7      9.2 ± 0.2    36.2    13.5
 WMD [19]         4.6 ± 0.7    28.7 ± 0.6   42.6 ± 0.3   44.5     2.8 ± 0.1    3.5      7.4 ± 0.3    26.8    6.1
 S-WCD            4.6 ± 0.5    30.4 ± 0.5   51.3 ± 0.2   43.3     5.8 ± 0.2    3.9      7.6 ± 0.3    33.6    11.4
 S-WMD init.      2.8 ± 0.3    28.2 ± 0.4   39.8 ± 0.4   38.0     3.3 ± 0.3    3.5      5.8 ± 0.2    28.4    4.3
 S-WMD            2.1 ± 0.5    27.5 ± 0.5   39.2 ± 0.3   34.3     3.2 ± 0.2    3.2      5.8 ± 0.1    26.8    2.4

Despite the very large number of competitive baselines, S-WMD achieves the lowest kNN test error on 5/8 datasets, with the exceptions of BBCSPORT, CLASSIC, and 20NEWS. On these datasets it achieves the 4th lowest error on BBCSPORT and CLASSIC, and is tied at 2nd on 20NEWS. On average across all datasets it outperforms all other 26 methods. Another observation is that S-WMD right after initialization (S-WMD init.) performs quite well. However, as training S-WMD is efficient (shown in Table 3), it is often well worth the training time. For unsupervised baselines, on datasets BBCSPORT and OHSUMED, where the previous state-of-the-art WMD was beaten by LSI, S-WMD reduces the error of LSI relatively by 51% and 22%, respectively. In general, supervision seems to help all methods on average. One reason why NCA with a TF-IDF document representation may be performing better than S-WMD could be the long document lengths in BBCSPORT and OHSUMED.
Having denser BOW vectors may improve the inverse document frequency weights, which in turn may be a good initialization for NCA to further fine-tune. On datasets with smaller documents such as TWITTER, CLASSIC, and REUTERS, S-WMD outperforms NCA with TF-IDF relatively by 10%, 42%, and 15%, respectively. On CLASSIC, WMD outperforms S-WMD, possibly because of a poor initialization and because S-WMD uses the squared Euclidean distance between word vectors, which may be suboptimal for this dataset. This, however, does not occur for any other dataset.

Visualization. Figure 1 shows a 2D embedding of the test split of each dataset by WMD and S-WMD using t-Stochastic Neighbor Embedding (t-SNE) [33]. The quality of a distance can be visualized by how clustered points in the same class are. Using this metric, S-WMD noticeably improves upon WMD on almost all of the 8 datasets. Figure 2 visualizes the top 100 words with the largest weights learned by S-WMD on the 20NEWS dataset. The size of each word is proportional to its learned weight. We can observe that these upweighted words are indeed most representative of the true classes of this dataset. More detailed results and analysis can be found in the supplementary material.

Figure 2: The top 100 words upweighted by S-WMD on 20NEWS, with word size proportional to learned weight; among them, for example, hockey, NHL, playoff, bike, windows, DOS, SCSI, graphics, encryption, NASA, orbit, atheism, Israeli, Armenian, baseball, firearms, and sale.

Training time. Table 3 shows the training times for S-WMD. Note that the time to learn the initial metric A is not included in the time shown in the second column. Relative to the initialization, S-WMD is surprisingly fast. This is due to the fast gradient approximation and the batch gradient descent introduced in Sections 3.1 and 3.2. We note that these times are comparable to, or even faster than, the time it takes to train a linear metric on the baseline methods after PCA.

Table 3: Full training times.

 DATASET   S-WCD / S-WMD init.  S-WMD
 BBCSPORT  1m 25s               4m 56s
 TWITTER   28m 59s              7m 53s
 RECIPE    23m 21s              23m 58s
 OHSUMED   46m 18s              29m 12s
 CLASSIC   1h 18m               36m 22s
 REUTERS   2h 7m                34m 56s
 AMAZON    2h 15m               20m 10s
 20NEWS    14m 42s              1h 55m

5 Related Work

Metric learning is a vast field that includes both supervised and unsupervised techniques (see Yang & Jin [37] for a large survey). Alongside NCA [15], described in Section 2, there are a number of popular methods for generalized Euclidean metric learning. Large Margin Nearest Neighbors (LMNN) [36] learns a metric that encourages inputs with similar labels to be close in a local region, while encouraging inputs with different labels to be farther apart by a large margin. Information-Theoretic Metric Learning (ITML) [10] learns a metric by minimizing a KL-divergence subject to generalized Euclidean distance constraints. Cuturi & Avis [7] were the first to consider learning the ground distance in the Earth Mover's Distance (EMD). In a similar work, Wang & Guibas [34] learn a ground distance that is not a metric, with good performance in certain vision tasks. Most similar to our work, Wang et al. [35] learn a metric within a generalized Euclidean EMD ground distance using the framework of ITML for image classification. They do not, however, consider re-weighting the histograms, which allows our method extra flexibility.
Until recently, there has been relatively little work towards learning supervised word embeddings, as state-of-the-art results rely on making use of large unlabeled text corpora. Tang et al. [32] propose a neural language model that uses label information from emoticons to learn sentiment-specific word embeddings.

6 Conclusion

We proposed a powerful method to learn a supervised word mover's distance, and demonstrated that it may well be the best performing distance metric for documents to date. Similar to WMD, our S-WMD benefits from the large unsupervised corpus which was used to learn the word2vec embedding [22, 23]. The word embedding gives rise to a very good document distance, which is particularly forgiving when two documents use syntactically different but conceptually similar words. Two words may be similar in one sense but dissimilar in another, depending on the articles in which they are contained. It is these differences that S-WMD manages to capture through supervised training. By learning a linear metric and histogram re-weighting through the optimal transport of the word mover's distance, we are able to produce state-of-the-art classification results efficiently.

Acknowledgments

The authors are supported in part by the III-1618134, III-1526012, and IIS-1149882 grants from the National Science Foundation and by the Bill and Melinda Gates Foundation. We also thank Dor Kedem for many insightful discussions.

References

[1] Bertsimas, D. and Tsitsiklis, J. N. Introduction to Linear Optimization. Athena Scientific, 1997.
[2] Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent dirichlet allocation. JMLR, 2003.
[3] Cardoso-Cachopo, A. Improving Methods for Single-label Text Categorization. PhD Thesis, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, 2007.
[4] Chen, M., Xu, Z., Weinberger, K. Q., and Sha, F. Marginalized denoising autoencoders for domain adaptation. In ICML, 2012.
[5] Collobert, R. and Weston, J. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, pp. 160–167. ACM, 2008.
[6] Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300, 2013.
[7] Cuturi, M. and Avis, D. Ground metric learning. JMLR, 2014.
[8] Cuturi, M. and Doucet, A. Fast computation of Wasserstein barycenters. In Jebara, Tony and Xing, Eric P. (eds.), ICML, pp. 685–693. JMLR Workshop and Conference Proceedings, 2014.
[9] Cuturi, M. and Peyre, G. A smoothed dual approach for variational Wasserstein problems. SIAM Journal on Imaging Sciences, 9(1):320–343, 2016.
[10] Davis, J. V., Kulis, B., Jain, P., Sra, S., and Dhillon, I. S. Information-theoretic metric learning. In ICML, pp. 209–216, 2007.
[11] Deerwester, S. C., Dumais, S. T., Landauer, T. K., Furnas, G. W., and Harshman, R. A. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407, 1990.
[12] Frogner, C., Zhang, C., Mobahi, H., Araya, M., and Poggio, T. A. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems, pp. 2044–2052, 2015.
[13] Gardner, J., Kusner, M. J., Xu, E., Weinberger, K. Q., and Cunningham, J. Bayesian optimization with inequality constraints. In ICML, pp. 937–945, 2014.
[14] Glorot, X., Bordes, A., and Bengio, Y. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pp. 513–520, 2011.
[15] Goldberger, J., Hinton, G. E., Roweis, S. T., and Salakhutdinov, R.
Neighbourhood components analysis. In NIPS, pp. 513–520, 2005.
[16] Gopalan, P. K., Charlin, L., and Blei, D. Content-based recommendations with Poisson factorization. In NIPS, pp. 3176–3184, 2014.
[17] Greene, D. and Cunningham, P. Practical solutions to the problem of diagonal dominance in kernel document clustering. In ICML, pp. 377–384. ACM, 2006.
[18] Hinton, G. E. and Roweis, S. T. Stochastic neighbor embedding. In NIPS, pp. 833–840. MIT Press, 2002.
[19] Kusner, M. J., Sun, Y., Kolkin, N. I., and Weinberger, K. Q. From word embeddings to document distances. In ICML, 2015.
[20] Levina, E. and Bickel, P. The earth mover's distance is the Mallows distance: Some insights from statistics. In ICCV, volume 2, pp. 251–256. IEEE, 2001.
[21] Levy, O. and Goldberg, Y. Neural word embedding as implicit matrix factorization. In NIPS, 2014.
[22] Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. In Workshop at ICLR, 2013.
[23] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In NIPS, pp. 3111–3119, 2013.
[24] Mohan, A., Chen, Z., and Weinberger, K. Q. Web-search ranking with initialized gradient boosted regression trees. JMLR, 14:77–89, 2011.
[25] Ontrup, J. and Ritter, H. Hyperbolic self-organizing maps for semantic navigation. In NIPS, 2001.
[26] Pele, O. and Werman, M. Fast and robust earth mover's distances. In ICCV, pp. 460–467. IEEE, 2009.
[27] Perina, A., Jojic, N., Bicego, M., and Truski, A. Documents as multiple overlapping windows into grids of counts. In NIPS, pp. 10–18, 2013.
[28] Robertson, S. E., Walker, S., Jones, S., Hancock-Beaulieu, M. M., Gatford, M., et al. Okapi at TREC-3. NIST Special Publication SP, pp. 109–109, 1995.
[29] Rubner, Y., Tomasi, C., and Guibas, L. J. A metric for distributions with applications to image databases. In ICCV, pp. 59–66. IEEE, 1998.
[30] Salton, G. and Buckley, C. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
[31] Sanders, N. J. Sanders-Twitter sentiment corpus, 2011.
[32] Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T., and Qin, B. Learning sentiment-specific word embedding for twitter sentiment classification. In ACL, pp. 1555–1565, 2014.
[33] Van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. JMLR, 9:2579–2605, 2008.
[34] Wang, F. and Guibas, L. J. Supervised earth mover's distance learning and its computer vision applications. In ECCV, 2012.
[35] Wang, X.-L., Liu, Y., and Zha, H. Learning robust cross-bin similarities for the bag-of-features model. Technical report, Peking University, China, 2009.
[36] Weinberger, K. Q. and Saul, L. K. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207–244, 2009.
[37] Yang, L. and Jin, R. Distance metric learning: A comprehensive survey, 2006.
Explanation-Based Neural Network Learning for Robot Control

Tom M. Mitchell
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
E-mail: mitchell@cs.cmu.edu

Sebastian B. Thrun
University of Bonn, Institut für Informatik III, Römerstr. 164, D-5300 Bonn, Germany
thrun@uran.informatik.uni-bonn.de

Abstract

How can artificial neural nets generalize better from fewer examples? In order to generalize successfully, neural network learning methods typically require large training data sets. We introduce a neural network learning method that generalizes rationally from many fewer data points, relying instead on prior knowledge encoded in previously learned neural networks. For example, in robot control learning tasks reported here, previously learned networks that model the effects of robot actions are used to guide subsequent learning of robot control functions. For each observed training example of the target function (e.g. the robot control policy), the learner explains the observed example in terms of its prior knowledge, then analyzes this explanation to infer additional information about the shape, or slope, of the target function. This shape knowledge is used to bias generalization when learning the target function. Results are presented applying this approach to a simulated robot task based on reinforcement learning.

1 Introduction

Neural network learning methods generalize from observed training data to new cases based on an inductive bias that is similar to smoothly interpolating between observed training points. Theoretical results [Valiant, 1984], [Baum and Haussler, 1989] on learnability, as well as practical experience, show that such purely inductive methods require significantly larger training data sets to learn functions of increasing complexity. This paper introduces explanation-based neural network learning (EBNN), a method that generalizes successfully from fewer training examples, relying instead on prior knowledge encoded in previously learned neural networks. EBNN is a neural network analogue to symbolic explanation-based learning methods (EBL) [DeJong and Mooney, 1986], [Mitchell et al., 1986]. Symbolic EBL methods generalize based upon pre-specified domain knowledge represented by collections of symbolic rules. For example, in the task of learning general rules for robot control, EBL can use prior knowledge about the effects of robot actions to analytically generalize from specific training examples of successful control actions. This is achieved by (a) observing a sequence of states and actions leading to some goal, (b) explaining (i.e., post-facto predicting) the outcome of this sequence using the domain theory, then (c) analyzing this explanation in order to determine which features of the initial state are relevant to achieving the goal of the sequence, and which are not. In previous approaches to EBL, the initial domain knowledge has been represented symbolically, typically by propositional rules or horn clauses, and has typically been assumed to be complete and correct.

2 EBNN: Integrating inductive and analytical learning

EBNN extends explanation-based learning to cover situations in which prior knowledge (also called the domain theory) is approximate and is itself learned from scratch. In EBNN, this domain theory is represented by real-valued neural networks. By using neural network representations,
it becomes possible to learn the domain theory using training algorithms such as the Backpropagation algorithm [Rumelhart et al., 1986]. In the robot domains addressed in this paper, such domain theory networks correspond to action models, i.e., networks that model the effect of actions on the state of the world, $M: s \times a \to s'$ (here $a$ denotes an action, $s$ a state, and $s'$ the successor state). This domain theory is used by EBNN to bias the learning of the robot control function. Because the action models may be only approximately correct, we require that EBNN be robust with respect to severe errors in the domain theory. The remainder of this section describes the EBNN learning algorithm.

Assume that the robot agent's action space is discrete, and that its domain knowledge is represented by a collection of pre-trained action models $M_i: S \to S$, one for each discrete action $i$. The learning task of the robot is to learn a policy for action selection that maximizes the reward, denoted by $R$, which defines the task. More specifically, the agent has to learn an evaluation function $Q(s, a)$, which measures the cumulative future expected reward when action $a$ is executed at state $s$. Once learned, the function $Q(s, a)$ allows the agent to select actions that maximize the reward $R$ (greedy policy). Hence learning control reduces to learning the evaluation function $Q$ (this approach to learning a policy is adopted from recent research on reinforcement learning [Barto et al., 1991]).

How can the agent use its previously learned action models to focus its learning of $Q$? To illustrate, consider the episode shown in Figure 1.

Figure 1: Episode: Starting with the initial state $s_1$, the action sequence $a_1, a_2, a_3$ was observed to produce the final reward $R$ (goal state). The domain knowledge represented by neural network action models is used to post-facto predict and analyze each step of the observed episode.

The EBNN learning algorithm for learning the target function $Q$ consists of two components, an inductive learning component and an analytical learning component.

2.1 The inductive component of EBNN

The observed episode is used by the agent to construct training examples for the evaluation function $Q$:

$$Q(s_1, a_1) := R, \qquad Q(s_2, a_2) := R, \qquad Q(s_3, a_3) := R.$$

$Q$ could for example be realized by a monolithic neural network, or by a collection of networks trained with the Backpropagation training procedure. As observed training episodes are accumulated, $Q$ will become increasingly accurate. Such pure inductive learning typically requires large amounts of training data (which will be costly in the case of robot learning).

2.2 The analytical component of EBNN

In EBNN, the agent exploits its domain knowledge to extract additional shape knowledge about the target function $Q$, to speed convergence and reduce the number of training examples required. This shape knowledge, represented by the estimated slope of the target function $Q$, is then used to guide the generalization process. More specifically, EBNN combines the above inductive learning component with an analytical learning component that performs the following three steps for each observed training episode:

1. Explain: Post-facto predict the observed episode (states and final reward), using the action models $M_i$ (cf. Fig. 1). Note that there may be a deviation between predicted and observed states, since the domain knowledge is only approximately correct.
2. Analyze: Analyze the explanation to estimate the slope of the target function for each observed state-action pair $(s_k, a_k)$ ($k = 1 \ldots 3$), i.e., extract the derivative of the final reward $R$ with respect to the features of the states $s_k$, according to the action models $M_i$. For instance, consider the explanation of the episode shown in Fig. 1. The domain theory networks $M_i$ represent differentiable functions. Therefore it is possible to extract the derivative of the final reward $R$ with respect to the preceding state $s_3$, denoted by $\nabla_{s_3} R$. Using the chain rule of differentiation, the derivatives of the final reward $R$ with respect to all states $s_k$ can be extracted. These derivatives $\nabla_{s_k} R$ describe the dependence of the final reward upon features of the previous states. They provide the target slopes, denoted by $\nabla_{s_k} Q$, for the target function $Q$:

$$\nabla_{s_3} Q(s_3, a_3) = \nabla_{s_3} R = \frac{\partial M_{a_3}(s_3)}{\partial s_3}, \quad \nabla_{s_2} Q(s_2, a_2) = \nabla_{s_2} R = \frac{\partial M_{a_3}(s_3)}{\partial s_3} \frac{\partial M_{a_2}(s_2)}{\partial s_2}, \quad \nabla_{s_1} Q(s_1, a_1) = \nabla_{s_1} R = \frac{\partial M_{a_3}(s_3)}{\partial s_3} \frac{\partial M_{a_2}(s_2)}{\partial s_2} \frac{\partial M_{a_1}(s_1)}{\partial s_1}.$$

3. Learn: Update the learned target function to better fit both the target values and target slopes.

Fig. 2 illustrates the training information extracted by both the inductive (values) and the analytical (slopes) components of EBNN.

Figure 2: Fitting slopes: Let $f$ be a target function for which three examples $(x_1, f(x_1))$, $(x_2, f(x_2))$, and $(x_3, f(x_3))$ are known. Based on these points the learner might generate the hypothesis $g$. If the slopes are also known, the learner can do much better: $h$.

Assume that the "true" Q-function is shown in Fig. 2a, and that three training instances at $x_1$, $x_2$ and $x_3$ are given. When only values are used for learning, i.e., as in standard inductive learning, the learner might conclude the hypothesis $g$ depicted in Fig. 2b. If the slopes are known as well, the learner can better estimate the target function (Fig. 2c). From this example it is clear that the analysis in EBNN may reduce the need for training data, provided that the estimated slopes extracted from the explanations are sufficiently accurate.

In EBNN, the function $Q$ is learned by a real-valued function approximator that fits both the target values and target slopes. If this approximator is a neural network, an extended version of the Backpropagation algorithm can be employed to fit these slope constraints as well, as originally shown by Simard et al. [1992]. Their algorithm "Tangent Prop" extends the Backpropagation error function by a second term measuring the mean square error of the slopes. Gradient descent in slope space is then combined with Backpropagation to minimize both error functions. In the experiments reported here, however, we used an instance-based function approximation technique described in Sect. 3.
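The chain-rule extraction of slope targets in step 2 is mechanical once the Jacobians of the learned action models are available. The following NumPy sketch illustrates the idea; the function names are ours, and finite differences stand in for the exact derivatives of trained networks:

    import numpy as np

    def jacobian(f, s, eps=1e-5):
        # Finite-difference Jacobian of a model s -> f(s); a stand-in for the
        # exact derivative of a trained action model network.
        f0 = np.atleast_1d(f(s))
        J = np.zeros((f0.size, s.size))
        for i in range(s.size):
            sp = s.copy()
            sp[i] += eps
            J[:, i] = (np.atleast_1d(f(sp)) - f0) / eps
        return J

    def slope_targets(state_jacobians, reward_jacobian):
        # state_jacobians[k] ~ dM_{a_k}(s_k)/ds_k = ds_{k+1}/ds_k (square matrices);
        # reward_jacobian  ~ dR/ds_T at the final state (a 1 x state_dim row).
        # Returns the target slopes [dR/ds_1, ..., dR/ds_T] by the chain rule.
        slopes = [reward_jacobian]
        grad = reward_jacobian
        for J in reversed(state_jacobians):
            grad = grad @ J          # dR/ds_k = dR/ds_{k+1} * ds_{k+1}/ds_k
            slopes.append(grad)
        return slopes[::-1]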
2.3 Accommodating imperfect domain theories

Notice that the slopes extracted from explanations will be only approximately correct, since they are derived from the approximate action models $M_i$. If this domain knowledge is weak, the slopes can be arbitrarily poor, which may mislead generalization. EBNN reduces this undesired effect by estimating the accuracy of the extracted slopes and weighting the analytical component of learning by these estimated slope accuracies. Generally speaking, the accuracy of slopes is estimated by the prediction accuracy of the explanation (this heuristic has been named LOB*). More specifically, each time the domain theory is used to post-facto predict a state $s_{k+1}$, its prediction $s_{k+1}^{\text{predicted}}$ may deviate from the observed state $s_{k+1}^{\text{observed}}$. Hence the 1-step prediction accuracy at state $s_k$, denoted by $c_1(k)$, is defined as 1 minus the normalized prediction error:

$$c_1(k) := 1 - \frac{\| s_{k+1}^{\text{predicted}} - s_{k+1}^{\text{observed}} \|}{\text{max prediction error}}.$$

For a given episode we define the $n$-step accuracy $c_n(k)$ as the product of the 1-step accuracies over the next $n$ steps. The $n$-step accuracy, which measures the accuracy of the derived slopes $n$ steps away from the end of the episode, possesses three desirable properties: (a) it is 1 if the learned domain theory is perfectly correct, (b) it decreases monotonically as the length of the chain of inferences increases, and (c) it is bounded below by 0. The $n$-step accuracy is used to determine the ratio with which the analytical and inductive components are weighted when learning the target concept. If an observation is $n$ steps away from the end of the episode, the analytically derived training information (slopes) is weighted by the $n$-step accuracy times the weight of the inductive component (values). Although the experimental results reported in Section 3 are promising, the generality of this approach is an open question, due to the heuristic nature of the LOB* assumption.

2.4 EBNN and Reinforcement Learning

To make EBNN applicable to robot learning, we extend it here to a more sophisticated scheme for learning the evaluation function $Q$, namely Watkins' Q-Learning [Watkins, 1989] combined with Sutton's temporal difference methods [Sutton, 1988]. The reason for doing so is the problem of suboptimal action choices in robot learning: Robots must explore their environment, i.e., they must select non-optimal actions. Such non-optimal actions can have a negative impact on the final reward of an episode, which results both in underestimating target values and in misleading slope estimates. Watkins' Q-Learning [Watkins, 1989] permits non-optimal actions during the course of learning $Q$. In his algorithm, targets for $Q$ are constructed recursively, based on the maximum possible Q-value at the next state (to simplify the notation, we assume that reward is only received at the end of the episode, and is also modeled by the action models; the extension to more general cases is straightforward):

$$\hat{Q}(s_k, a_k) = \begin{cases} R & \text{if } k \text{ is the final step (with final reward } R) \\ \gamma \max_a Q(s_{k+1}, a) & \text{otherwise.} \end{cases}$$

Here $\gamma$ ($0 \le \gamma \le 1$) is a discount factor that discounts reward over time, which is commonly used for minimizing the number of actions. Sutton's TD($\lambda$) [Sutton, 1988] can be used to combine both Watkins' Q-Learning and the non-recursive Q-estimation scheme underlying the previous section. Here the parameter $\lambda$ ($0 \le \lambda \le 1$) determines the ratio between recursive and non-recursive components:

$$\hat{Q}(s_k, a_k) = \begin{cases} R & \text{if } k \text{ is the final step} \\ (1-\lambda) \, \gamma \max_a Q(s_{k+1}, a) + \lambda \, \gamma \, \hat{Q}(s_{k+1}, a_{k+1}) & \text{otherwise.} \end{cases} \qquad (1)$$

Eq. (1) describes the extended inductive component of the EBNN learning algorithm. The extension of the analytical component in EBNN is straightforward: slopes are extracted via the derivative of Eq. (1), which is computed via the derivatives of both the models $M_i$ and of $Q$. That is, $\nabla_{s_k} \hat{Q}(s_k, a_k) = \nabla_{s_k} R$ if $k$ is the final step, and otherwise it is obtained by differentiating the right-hand side of Eq. (1) through the action model $M_{a_k}$ using the chain rule.
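A compact way to see how Eq. (1) and the LOB* weighting interact is to compute both for a recorded episode. The sketch below uses our own function names and the λ, γ values from the experiments in Section 3; the exact bookkeeping of the accuracy weights in the original system may differ in detail:

    import numpy as np

    def td_lambda_targets(R, q_max_next, lam=0.7, gamma=0.8):
        # Eq. (1): mixed Q-learning / TD(lambda) value targets for one episode.
        # q_max_next[k] = max_a Q(s_{k+1}, a) under the current Q estimate, k = 0..T-2.
        T = len(q_max_next) + 1
        targets = np.zeros(T)
        targets[-1] = R                                 # final step: observed reward
        for k in range(T - 2, -1, -1):                  # the recursion runs backwards
            targets[k] = (1 - lam) * gamma * q_max_next[k] + lam * gamma * targets[k + 1]
        return targets

    def slope_accuracies(pred_states, obs_states, max_error):
        # LOB*: 1-step accuracies c_1, then cumulative products toward the episode's
        # end, used to down-weight analytic slopes under a weak domain theory.
        c1 = 1.0 - np.linalg.norm(pred_states - obs_states, axis=1) / max_error
        c1 = np.clip(c1, 0.0, 1.0)
        return np.cumprod(c1[::-1])[::-1]               # n-step accuracy at each step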
3 Experimental results

EBNN has been evaluated in a simulated robot navigation domain. The world and the action space are depicted in Fig. 3a and 3b. The learning task is to find a Q-function for which the greedy policy navigates the agent to its goal location (circle) from arbitrary starting locations, while avoiding collisions with the walls or the obstacle (square). States are described by the local view of the agent, in terms of distances and angles to the center of the goal and to the center of the obstacle. Note that the world is deterministic in these experiments, and that there is no sensor noise.

Figure 3: (a) The simulated robot world. (b) Actions. (c) The squared generalization error of the domain theory networks decreases monotonically as the amount of training data increases; these nine alternative domain theories were used in the experiments.

We applied Watkins' Q-Learning and TD($\lambda$) as described in the previous section with $\lambda = 0.7$ and a discount factor $\gamma = 0.8$. Each of the five actions was modeled by a separate neural network (12 hidden units), and each had a separate Q evaluation function. The latter functions were represented by an instance-based local approximation technique. In a nutshell, this technique memorizes all training instances and their slopes explicitly, and fits a local quadratic model over the $k = 3$ nearest neighbors to the query point, fitting both target values and target slopes. We found empirically that this technique outperformed Tangent Prop in the domain at hand (in a second experiment, not reported here, we applied EBNN with a neural network representation for $Q$ and Tangent Prop successfully in a real robot domain). We also applied an experience replay technique proposed by Lin [Lin, 1991] in order to optimally exploit the information given by the observed training episodes.
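The paper does not spell out the exact form of this instance-based approximator, so the sketch below is one plausible realization (ours, not the authors'): a diagonal quadratic model fit to the k = 3 nearest stored points, constrained by both their values and their slopes via least squares:

    import numpy as np

    def local_fit_predict(x_query, X, y, G, k=3):
        # Fit q(x) = c + b.x + sum_j a_j x_j^2 to the k nearest stored points,
        # using their values y and slopes G (dq/dx), then evaluate at x_query.
        d = X.shape[1]
        idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
        rows, rhs = [], []
        for i in idx:
            xi = X[i]
            rows.append(np.concatenate(([1.0], xi, xi * xi)))  # value constraint
            rhs.append(y[i])
            for j in range(d):                                 # slope constraints
                r = np.zeros(1 + 2 * d)
                r[1 + j] = 1.0                                 # d/dx_j of b.x
                r[1 + d + j] = 2.0 * xi[j]                     # d/dx_j of a_j x_j^2
                rows.append(r)
                rhs.append(G[i, j])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        c, b, a = theta[0], theta[1:1 + d], theta[1 + d:]
        return c + b @ x_query + (a * x_query) @ x_query

The diagonal quadratic keeps the number of unknowns at 2d + 1, so three neighbors with values and slopes already give an overdetermined least-squares system in low dimensions.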
EBNN differs from other approaches to knowledge-based neural network learning, such as Shavlik/Towell's KBANNs [Shavlik and Towell, 1989], in that the domain knowledge and the target function are strictly separated, and that both are learned from scratch. A major difference from other model-based approaches to robot learning, such as Sutton's DYNA architecture [Sutton, 1990] or Jordan/Rumelhart's distal teacher method [Jordan and Rumelhart, 1990], is the ability of EBNN to operate across the spectrum of strong to weak domain theories (using LOB*). EBNN has been found to degrade gracefully as the accuracy of the domain theory decreases. We have demonstrated the ability of EBNN to transfer knowledge among robot learning tasks. However, there are several open questions which will drive future research, the most significant of which are: a. Can EBNN be extended to real-valued, parameterized action spaces? So far we assume discrete actions. b. Can EBNN be extended to handle first-order predicate logic, which is common in symbolic approaches to EBL? c. How will EBNN perform in highly stochastic domains? d. Can knowledge other than slopes (such as higher order derivatives) be extracted via explanations? e. Is it feasible to automatically partition/modularize the domain theory as well as the target function, as is the case with symbolic EBL methods? More research on these issues is warranted.

Acknowledgments

We thank Ryusuke Masuoka, Long-Ji Lin, the CMU Robot Learning Group, Jude Shavlik, and Mike Jordan for invaluable discussions and suggestions. This research was sponsored in part by the Avionics Lab, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, ARPA Order No. 7597, and by a grant from Siemens Corporation.

References

[Barto et al., 1991] Andy G. Barto, Steven J. Bradtke, and Satinder P. Singh. Real-time learning and control using asynchronous dynamic programming. Technical Report COINS 91-57, Department of Computer Science, University of Massachusetts, MA, August 1991.
[Baum and Haussler, 1989] Eric Baum and David Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.
[DeJong and Mooney, 1986] Gerald DeJong and Raymond Mooney. Explanation-based learning: An alternative view. Machine Learning, 1(2):145-176, 1986.
[Jordan and Rumelhart, 1990] Michael I. Jordan and David E. Rumelhart. Forward models: Supervised learning with a distal teacher. Submitted to Cognitive Science, 1990.
[Lin, 1991] Long-Ji Lin. Programming robots using reinforcement learning and teaching. In Proceedings of AAAI-91, Menlo Park, CA, July 1991. AAAI Press / The MIT Press.
[Mitchell et al., 1986] Tom M. Mitchell, Rich Keller, and Smadar Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1(1):47-80, 1986.
[Pratt, 1993] Lori Y. Pratt. Discriminability-based transfer between neural networks. Same volume.
[Rumelhart et al., 1986] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, Vol. I + II. MIT Press, 1986.
[Shavlik and Towell, 1989] Jude W. Shavlik and G. G. Towell.
An approach to combining explanation-based and neural learning algorithms. Connection Science, 1(3):231-253, 1989.
[Simard et al., 1992] Patrice Simard, Bernard Victorri, Yann LeCun, and John Denker. Tangent prop - a formalism for specifying selected invariances in an adaptive network. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 895-903, San Mateo, CA, 1992. Morgan Kaufmann.
[Sutton, 1988] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3, 1988.
[Sutton, 1990] Richard S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, June 1990, pages 216-224, 1990.
[Valiant, 1984] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27:1134-1142, 1984.
[Watkins, 1989] Chris J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, England, 1989.
Graph Clustering: Block-models and model free results

Marina Meilă, Department of Statistics, University of Washington, Seattle, WA 98195-4322, USA, mmp@stat.washington.edu
Yali Wan, Department of Statistics, University of Washington, Seattle, WA 98195-4322, USA, yaliwan@washington.edu

Abstract

Clustering graphs under the Stochastic Block Model (SBM) and extensions are well studied. Guarantees of correctness exist under the assumption that the data is sampled from a model. In this paper, we propose a framework in which we obtain "correctness" guarantees without assuming the data comes from a model. The guarantees we obtain depend instead on the statistics of the data that can be checked. We also show that this framework ties in with the existing model-based framework, and that we can exploit results in model-based recovery, as well as strengthen the results existing in that area of research.

1 Introduction: a framework for clustering with guarantees without model assumptions

In the last few years, model-based clustering in networks has witnessed spectacular progress. At the center of interest are the so-called block-models: the Stochastic Block Model (SBM), the Degree-Corrected SBM (DC-SBM) and the Preference Frame Model (PFM). The understanding of these models has been advanced, especially in understanding the conditions when recovery of the true clustering is possible with small or no error. The algorithms for recovery with guarantees have also been improved. However, the impact of the above results is limited by the assumption that the observed data comes from the model. This paper proposes a framework to provide theoretical guarantees for the results of model based clustering algorithms, without making any assumption about the data generating process. To describe the idea, we need some notation. Assume that a graph G on n nodes is observed. A model-based algorithm clusters G, and outputs clustering C and parameters M(G, C). The framework is as follows: if M(G, C) fits the data G well, then we shall prove that any other clustering C′ of G that also fits G well will be a small perturbation of C. If this holds, then C with model parameters M(G, C) can be said to capture the data structure in a meaningful way. We exemplify our approach by obtaining model-free guarantees for the SBM and PFM models. Moreover, we show that model-free and model-based results are intimately connected.

2 Background: graphs, clusterings and block models

Graphs, degrees, Laplacian, and clustering. Let G be a graph on n nodes, described by its adjacency matrix $\hat{A}$. Define $\hat{d}_i = \sum_{j=1}^{n} \hat{A}_{ij}$, the degree of node i, and $\hat{D} = \mathrm{diag}\{\hat{d}_i\}$ the diagonal matrix of the node degrees. The (normalized) Laplacian of G is defined as¹ $\hat{L} = \hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$. In extension, we define the degree matrix D and the Laplacian L associated to any matrix $A \in \mathbb{R}^{n \times n}$, with $A_{ij} = A_{ji} \ge 0$, in a similar way. Let C be a partitioning (clustering) of the nodes of G into K clusters. We use the shorthand notation $i \in k$ for "node i belongs to cluster k". We will represent C by its $n \times K$ indicator matrix Z, defined by

$$Z_{ik} = 1 \text{ if } i \in k, \ 0 \text{ otherwise, for } i = 1, \ldots, n, \ k = 1, \ldots, K. \quad (1)$$

Note that $Z^T Z = \mathrm{diag}\{n_k\}$ with $n_k$ counting the number of nodes in cluster k, and $Z^T \hat{A} Z = [n_{kl}]_{k,l=1}^{K}$ with $n_{kl}$ counting the edges in G between clusters k and l.

¹Rigorously speaking, the normalized graph Laplacian is $I - \hat{L}$ [10].
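For concreteness, here is a minimal numpy sketch (ours, not from the paper) of the degree matrix and normalized Laplacian just defined; it assumes a dense, symmetric adjacency matrix with no isolated nodes.

```python
import numpy as np

def normalized_laplacian(A_hat):
    """L_hat = D^{-1/2} A_hat D^{-1/2} for a symmetric adjacency matrix."""
    d_hat = A_hat.sum(axis=1)                  # node degrees d_i
    d_inv_sqrt = 1.0 / np.sqrt(d_hat)
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)
```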
Moreover, for two indicator matrices Z, Z′ for clusterings C, C′, $(Z^T Z')_{kk'}$ counts the number of points in the intersection of cluster k of C with cluster k′ of C′, and $(Z^T \hat{D} Z')_{kk'} = \sum_{i \in k \cap k'} \hat{d}_i$ computes the volume of the same intersection.

"Block models" for random graphs (SBM, DC-SBM, PFM). This family of models contains Stochastic Block Models (SBM) [1, 18], Degree-Corrected SBM (DC-SBM) [17] and Preference Frame Models (PFM) [20]. Under each of these model families, a graph G with adjacency matrix $\hat{A}$ over n nodes is generated by sampling its edges independently following the law $\hat{A}_{ij} \sim \mathrm{Bernoulli}(A_{ij})$, for all $i > j$. The symmetric matrix $A = [A_{ij}]$ describing the graph is the edge probability matrix. The three model families differ in the constraints they put on an acceptable A. Let C* be a clustering. The entries of A are defined w.r.t. C* as follows (and we say that A is compatible with C*).

SBM: $A_{ij} = B_{kl}$ whenever $i \in k$, $j \in l$, with $B = [B_{kl}] \in \mathbb{R}^{K \times K}$ symmetric and nonnegative.
DC-SBM: $A_{ij} = w_i w_j B_{kl}$ whenever $i \in k$, $j \in l$, with B as above and $w_1, \ldots, w_n$ non-negative weights associated with the graph nodes.
PFM: A satisfies $D = \mathrm{diag}(A\mathbf{1})$, $D^{-1} A Z = Z R$, where $\mathbf{1}$ denotes the vector of all ones, Z is the indicator matrix of C*, and R is a stochastic matrix ($R\mathbf{1} = \mathbf{1}$, $R_{kl} \ge 0$); the details are in [20].

While perhaps not immediately obvious, the SBM is a subclass of the DC-SBM, and the latter a subclass of the PFM. Another common feature of block-models, which will be significant throughout this work, is that for all three, Spectral Clustering algorithms [15] have been proved to work well estimating C*.

3 Main theorem: blueprint and results for PFM, SBM

Let $\mathcal{M}$ be a model class, such as SBM, DC-SBM, PFM, and denote by $M(G, C) \in \mathcal{M}$ a model that is compatible with C and is fitted in some way to graph G (we do not assume in general that this fit is optimal).

Theorem 1 (Generic Theorem) We say that clustering C fits G well w.r.t. $\mathcal{M}$ iff M(G, C) is "close to" G. If C fits G well w.r.t. $\mathcal{M}$, then (subject to other technical conditions) any other clustering C′ which also fits G well is close to C, i.e. dist(C, C′) is small.

In what follows, we will instantiate this Generic Theorem, and the concepts therein; in particular the following will be formally defined. (1) Model construction, i.e. an algorithm to fit a model in $\mathcal{M}$ to (G, C). This is necessary since we want our results to be computable in practice. (2) A goodness of fit measure between M(C, G) and the data G. (3) A distance between clusterings. We adopt the widely used Misclassification Error (or Hamming) distance defined below. The Misclassification Error (ME) distance between two clusterings C, C′ over the same set of n points is

$$\mathrm{dist}(C, C') = 1 - \frac{1}{n} \max_{\pi \in S_K} \sum_{k} \sum_{i \in k \cap \pi(k)} 1, \quad (2)$$

where π ranges over all permutations of K elements $S_K$, and π(k) indexes a cluster in C′.
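The ME distance in Eq. (2) can be computed without enumerating all K! permutations by solving an assignment problem; a small sketch (our illustration, not the authors' code) using scipy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def me_distance(labels_c, labels_cp, K):
    """Misclassification Error distance (Eq. 2) between two clusterings,
    given as integer label vectors in {0, ..., K-1}."""
    n = len(labels_c)
    # confusion[k, k'] = |cluster k of C  intersect  cluster k' of C'|
    confusion = np.zeros((K, K))
    for i, j in zip(labels_c, labels_cp):
        confusion[i, j] += 1
    rows, cols = linear_sum_assignment(-confusion)  # maximize agreement
    return 1.0 - confusion[rows, cols].sum() / n
```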
3.1 Main result for PFM Constructing a model Given a graph G and a clustering C of its nodes, we wish to construct a ? ? L|| is small. PFM compatible with C, so that its Laplacian L satis?es that ||L ? be Let the spectral decomposition of L ?? ? ? ? Y? T ? 0 T ? low Y?low ? Y? T + Y?low ? ? ? ? = Y? ? L = [Y Ylow ] (4) T ? low Y?low 0 ? ?1, ? ? ? , ? ? K ), ? ? K+1 , ? ? ? , ? ? n ). To ? = diag(? ? low = diag(? where Y? ? Rn?K , Y?low ? Rn?(n?K) , ? ? ? ? ensure that the matrices Y , Ylow are uniquely de?ned we assume throughout the paper that L?s K-th eigengap, i.e, |?K | ? |?K+1 |, is non-zero. ? 2 | ? . . . ? |? ? K | > |? ? K+1 | ? . . . |? ? n |. ? 1 = 1 ? |? ? satisfy ? Assumption 1 The eigenvalues of L Denote the subspace spanned by the columns of M , for any M matrix, by R(M ), and || || the Euclidean or spectral norm. PFM Estimation Algorithm ? D, ? L, ? Y? , ?, ? clustering C with indicator matrix Z. Input Graph G with A, Output (A, L) = P F M (G, C) 1. Construct an orthogonal matrix derived from Z. ? the column normalization of Z. ? 1/2 ZC ?1/2 , with C = Z TDZ YZ = D ? Note Ckk = i?k d?i is the volume of cluster k. (5) 2. Project YZ on Y? and perform Singular Value Decomposition. F = YZT Y? = U ?V T (6) 3. Change basis in R(YZ ) to align with Y? . Y = YZ U V T . Complete Y to an orthonormal basis [Y B] of Rn . (7) 4. Construct Laplacian L and edge probability matrix A. T ? T + (BB T )L(BB ? L = Y ?Y ), ? 1/2 LD ? 1/2 . A = D (8) ? D, ? L, ? Y? , ? ? and Z be de?ned as above, and (A, L) = P F M (G, C). Then, Proposition 2 Let G, A, ? and L, or A de?ne a PFM with degrees d?1:n . 1. D ? 1:K . 2. The columns of Y are eigenvectors of L with eigenvalues ? ? 1 = 1. ? 1/2 1 is an eigenvector of both L and L ? with eigenvalue ? 3. D The proof is relegated to the Supplement, as are all the omitted proofs. P F M (G, C) is an estimator for the PFM parameters given the clustering. It is evidently not the Maximum Likelihood estimator, but we can show that it is consistent in the following sense. 3 Proposition 3 (Informal) Assume that G is sampled from a PFM with parameters D? , L? and compatible with C ? , and let L = P F M (G, C ? ). Then, under standard recovery conditions for PFM (e.g [20]) ||L? ? L|| = o(1) w.r.t. n. ? ? L|| ? ?. Assumption 2 (Goodness of ?t for PFM) ||L P F M (G, C) instantiates M(G, C), and Assumption 2 instantiates the goodness of ?t measure. It remains to prove an instance of Generic Theorem 1 for these choices. ? 1:n as de?ned, and L ? L, ? ? ? satTheorem 4 (Main Result (PFM)) Let G be a graph with d?1:n , D, isfy Assumption 1. Let C, C ? be two clusterings with K clusters, and L, L? be their corresponding 2 and Laplacians, de?ned as in (8), and satisfy Assumption 2 respectively. Set ? = (|?? (K?1)? ? |?|? |)2 K K+1 ?0 = mink Ckk / maxk Ckk with C de?ned as in (5), where k indexes the clusters of C. Then, whenever ? ? ?0 , maxk Ckk ?, distd?(C, C ? ) ? ? k Ckk (9) with distd? being the weighted ME distance (3). In the remainder of this section we outline the proof steps, while the partial results of Proposition 5, 6, 7 are proved in the Supplement. First, we apply the perturbation bound called the Sinus Theorem of Davis and Kahan, in the form presented in Chapter V of [19]. ? 1:n , Y be de?ned as usual. If Assumptions 1 and 2 hold, then Proposition 5 Let Y? , ? || diag(sin ?1:K (Y? , Y ))|| ? ? = ?? ? ? K+1 | | ?K | ? | ? (10) where ?1:K are the canonical (or principal) angles between R(Y? ) and R(Y ) (see e.g [8]). The next step concerns the closeness of Y, Y? 
The next step concerns the closeness of Y, $\hat{Y}$ in Frobenius norm. Since Proposition 5 bounds the sines of the canonical angles, we exploit the fact that the cosines of the same angles are the singular values of $F = Y^T \hat{Y}$ of (6).

Proposition 6 Let $M = YY^T$, $\hat{M} = \hat{Y}\hat{Y}^T$ and F, $\varepsilon'$ as above. Assumptions 1 and 2 imply that
1. $\|F\|_F^2 = \mathrm{trace}(M\hat{M}) \ge K - (K-1)\varepsilon'^2$.
2. $\|M - \hat{M}\|_F^2 \le 2(K-1)\varepsilon'^2$.

Now we show that all clusterings which satisfy Proposition 6 must be close to each other in the weighted ME distance. For this, we first need an intermediate result. Assume we have two clusterings C, C′, with K clusters, for which we construct $Y_Z$, Y, L, M, respectively $Y_Z'$, Y′, L′, M′ as above. Then, the subspaces spanned by Y and Y′ will be close.

Proposition 7 Let $\hat{L}$ satisfy Assumption 1 and let C, C′ represent two clusterings for which L, L′ satisfy Assumption 2. Then, $\|Y_Z^T Y_Z'\|_F^2 \ge K - 4(K-1)\varepsilon'^2 = K - \delta$.

The main result now follows from Proposition 7 and Theorem 9 of [13], as shown in the Supplement. This proof approach is different from the existing perturbation bounds for clustering, which all use counting arguments. The result of [13] is a local equivalence, which bounds the error we need in terms of δ defined above ("local" meaning the result only holds for small δ).

3.2 Main Theorem for SBM

In this section, we offer an instantiation of Generic Theorem 1 for the case of the SBM. As before, we start with a model estimator, which in this case is the Maximum Likelihood estimator.

SBM Estimation Algorithm
Input: Graph with $\hat{A}$; clustering C with indicator matrix Z.
Output: A = SBM(G, C)
1. Construct an orthogonal matrix derived from Z: $Y_Z = Z C^{-1/2}$ with $C = Z^T Z$.
2. Estimate the edge probabilities: $B = C^{-1} Z^T \hat{A} Z C^{-1}$.
3. Construct A from B by $A = Z B Z^T$.

(A code sketch of these steps appears at the end of Section 4.1, below.)

Proposition 8 Let $\tilde{B} = C^{1/2} B C^{1/2}$ and denote the eigenvalues of $\tilde{B}$, ordered by decreasing magnitude, by $\lambda_{1:K}$. Let the spectral decomposition of $\tilde{B}$ be $\tilde{B} = U \Lambda U^T$, with U an orthogonal matrix and $\Lambda = \mathrm{diag}(\lambda_{1:K})$. Then
1. A is a SBM.
2. $\lambda_{1:K}$ are the K principal eigenvalues of A. The remaining eigenvalues of A are zero.
3. $A = Y \Lambda Y^T$ where $Y = Y_Z U$.

Assumption 3 (Eigengap) B is non-singular (or, equivalently, $|\lambda_K| > 0$).

Assumption 4 (Goodness of fit for SBM) $\|\hat{A} - A\| \le \varepsilon$.

With the model (SBM), estimator, and goodness of fit defined, we are ready for the main result.

Theorem 9 (Main Result (SBM)) Let G be a graph with incidence matrix $\hat{A}$, and let $\hat{\sigma}_K^A$ be the K-th singular value of $\hat{A}$. Let C, C′ be two clusterings with K clusters, satisfying Assumptions 3 and 4. Set $\delta = 4K\varepsilon^2 / |\hat{\sigma}_K^A|^2$ and $\varepsilon_0 = \min_k n_k / \max_k n_k$, where k indexes the clusters of C. Then, whenever $\delta \le \varepsilon_0$, $\mathrm{dist}(C, C') \le \delta \max_k n_k / n$, where dist represents the ME distance (2).

Note that the eigengap of $\hat{A}$, $\hat{\sigma}_K^A$, is not bounded above, and neither is ε. Since the SBM is less flexible than the PFM, we expect that for the same data G, Theorem 9 will be more restrictive than Theorem 4.

4 The results in perspective

4.1 Cluster validation

Theorems like 4, 9 can provide model free guarantees for clustering. We exemplify this procedure in the experimental Section 6, using standard spectral clustering as described in e.g. [18, 17, 15]. What is essential is that all the quantities such as δ and ε are computable from the data. Moreover, if Y is available, then the bound in Theorem 4 can be improved.

Proposition 10 Theorem 4 holds when δ is replaced by $\delta_Y = K - \langle \hat{M}, M \rangle_F + (K-1)(\varepsilon')^2 + \sqrt{2(K-1)}\,\varepsilon'\,\|M - \hat{M}\|_F$, with $\varepsilon' = \varepsilon/(|\hat{\lambda}_K| - |\hat{\lambda}_{K+1}|)$ and M, $\hat{M}$ defined in Proposition 6.
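A minimal numpy sketch of the SBM Estimation Algorithm of Section 3.2 (again our illustration, not the authors' code):

```python
import numpy as np

def sbm_estimate(A_hat, Z):
    """Steps 1-3 of the SBM Estimation Algorithm: returns (B, A)."""
    C = Z.T @ Z                          # C_kk = n_k, the cluster sizes
    C_inv = np.diag(1.0 / np.diag(C))
    B = C_inv @ Z.T @ A_hat @ Z @ C_inv  # estimated edge probabilities
    A = Z @ B @ Z.T                      # block-constant probability matrix
    return B, A
```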
4.2 Using existing model-based recovery theorems to prove model-free guarantees

We exemplify this by using (the proof of) Theorem 3 of [20] to prove the following.

Theorem 11 (Alternative result based on [20] for PFM) Under the same conditions as in Theorem 4, $\mathrm{dist}_{\hat{d}}(C, C') \le \varepsilon_{WM}$, with $\varepsilon_{WM} = \frac{128\,K\varepsilon^2}{(|\hat{\lambda}_K| - |\hat{\lambda}_{K+1}|)^2}$.

It follows, too, that with the techniques in this paper, the error bound in [20] can be improved by a factor of 128. Similarly, if we use the results of [18] we obtain an alternative model-free guarantee for the SBM.

Assumption 5 (Alternative goodness of fit for SBM) $\|\hat{L}^2 - L^2\|_F \le \varepsilon$, where $\hat{L}$, L are the Laplacians of $\hat{A}$ and A = SBM(G, C) respectively.

Theorem 12 (Alternative result based on [18] for SBM) Under the same conditions as in Theorem 9, except for replacing Assumption 4 with 5, $\mathrm{dist}(C, C') \le \varepsilon_{RCY}$ with $\varepsilon_{RCY} = \frac{16\,\varepsilon^2}{|\tilde{\lambda}_K|^4}\,\frac{\max_k n_k}{n}$.

A problem with this result is that Assumption 5 is much stronger than 4 (being in Frobenius norm). The more recent results of [17] (with unspecified constants), in conjunction with our original Assumptions 3, 4, and the assumption that all clusters have equal sizes, give a bound of $O(K\varepsilon^2/\hat{\lambda}_K^2)$ for the SBM; hence our model-free Theorem 9 matches this more restrictive model-based theorem.

4.3 Sanity checks and Extensions

It can be easily verified that if indeed G is sampled from a SBM, or PFM, then for large enough n, and large enough model eigengap, Assumptions 1 and 2 (or 3 and 4) will hold. Some immediate extensions and variations of Theorems 4, 9 are possible. For example, one could replace the spectral norm by the Frobenius norm in Assumptions 2 and 4, which would simplify some of the proofs. However, using the Frobenius norm would be a much stronger assumption [18]. Theorem 4 holds not just for simple graphs, but in the more general case when $\hat{A}$ is a weighted graph (i.e. a similarity matrix). The theorems can be extended to cover the case when C′ is a clustering that is α-worse than C, i.e. when $\|L' - \hat{L}\| \le \|L - \hat{L}\|(1 + \alpha)$.

4.4 Clusterability and resilience

Our Theorems also imply the stability of a clustering to perturbations of the graph G. Indeed, let $\hat{L}'$ be the Laplacian of G′, a perturbation of G. If $\|\hat{L}' - \hat{L}\| \le \varepsilon$, then $\|\hat{L}' - L\| \le 2\varepsilon$, and (1) G′ is well fitted by a PFM whenever G is, and (2) C is stable w.r.t. G′; hence C is what some authors [9] call resilient. A graph G is clusterable when G can be fitted well by some clustering C*. Much work [4, 7] has been devoted to showing that clusterability implies that finding a C close to C* is computationally efficient. Such results can be obtained in our framework, by exploiting existing recovery theorems such as [18, 17, 20], which give recovery guarantees for Spectral Clustering, under the assumption of sampling from the model. For this, we can simply replace the model assumption with the assumption that there is a C* for which L (or A) satisfies Assumptions 1 and 2 (or 3 and 4).

5 Related work

To our knowledge, there is no work of the type of Theorem 1 in the literature on SBM, DC-SBM, PFM. The closest work is by [6], which guarantees approximate recovery assuming G is close to a DC-SBM. Spectral clustering is also used for loss-based clustering in (weighted) graphs, and some stability results exist in this context. Even though they measure clustering quality by different criteria, so that the ε values are not comparable, we review them here.
The recent paper of [16], Theorem 1.2, states that if the K-way Cheeger constant of G satisfies $\rho(K) \ge (1 - \hat{\lambda}_{K+1})/(cK^3)$, then the clustering error² $\mathrm{dist}_{\hat{d}}(C, C^{opt}) \le C/c = \varepsilon_{PSZ}$. In the current proof, the constant $C = 2 \times 10^5$; moreover, ρ(K) cannot be computed tractably. In [14], the bound $\varepsilon_{MSX}$ depends on $\alpha_{MSX}$, the Normalized Cut scaled by the eigengap. Since both bounds refer to the result of spectral clustering, we can compare the relationship between $\varepsilon_{MSX}$ and $\alpha_{MSX}$; for [14], this is $\varepsilon_{MSX} = 2\alpha_{MSX}[1 - \alpha_{MSX}/(K-1)]$, which is about K − 1 times larger than δ when $\delta = \alpha_{MSX}$. In [5], dist(C, C′) is defined in terms of $\|Y_Z - Y_Z'\|_F^2$, and the loss is (closely related to) $\|\hat{A} - \mathrm{SBM}(G, C)\|_F^2$. The bound does not take into account the eigengap, that is, the stability of the subspace $\hat{Y}$ itself. Bootstrap for validating a clustering C was studied in [11] (see also references therein for earlier work). In [3] the idea is to introduce a statistic, and large deviation bounds for it, conditioned on sampling from a SBM (with covariates) and on a given C.

²The result is stronger, bounding the perturbation of each cluster individually by $\varepsilon_{PSZ}$, but it also includes a factor larger than 1, bounding the error of the K-means algorithm.

6 Experimental evaluation

Experiment Setup. Given G, we obtain a clustering C₀ by spectral clustering [15]. Then we calculate clustering C by perturbing C₀ with gradually increasing noise. For each C, we construct the PFM(C, G) and SBM(C, G) models, and further compute δ, ε and ε₀. If δ ≤ ε₀, C is guaranteed to be stable by the theorems (a code sketch of this check appears below). In the remainder of this section, we describe the data generating process for the simulated datasets and the results we obtained.

PFM Datasets. We generate from a PFM model with K = 5, n = 10000, $\lambda_{1:K} = (1, 0.875, 0.75, 0.625, 0.5)$, eigengap = 0.48, $n_{1:K} = (2000, 2000, 2000, 2000, 2000)$. The stochastic matrix R and its stationary distribution π are shown below. We sample an adjacency matrix $\hat{A}$ from A (shown below).

[The 5x5 stochastic matrix R, its stationary distribution π = (.25, .12, .17, .18, .28), and displays of the matrices A and $\hat{A}$ appear here in the original.]

Perturbed PFM Datasets. A is obtained from the previous model by perturbing its principal subspace (details in Supplement). Then we sample $\hat{A}$ from A.

Lancichinetti-Fortunato-Radicchi (LFR) simulated matrix [12]. The LFR benchmark graphs are widely used for community detection algorithms, due to heterogeneity in the distribution of node degree and community size. A LFR matrix is simulated with n = 10000, K = 4, $n_k = (2467, 2416, 2427, 2690)$ and μ = 0.2, where μ is the mixing parameter indicating the fraction of edges shared between a node and the other nodes from outside its community.

Political Blogs Dataset. A directed network $\hat{A}$ of hyperlinks between weblogs on US politics, compiled from online directories by Adamic and Glance [2], where each blog is assigned a political leaning, liberal or conservative, based on its blog content. The network contains 1490 blogs. After erasing the disconnected nodes, n = 983. We study $\hat{A}' = (\hat{A}^T \hat{A})^3$, which is a smoothed undirected graph. For $\hat{A}^T \hat{A}$ we find no guarantees.

The first two data sets are expected to fit the PFM well, but not the SBM, while the LFR data is expected to be a good fit for a SBM. Since all bounds can be computed on weighted graphs as well, we have run the experiments also on the edge probability matrices A used to generate the PFM and perturbed PFM graphs. The results of these experiments are summarized in Figure 1.
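The stability check described in the Experiment Setup can be sketched as follows, reusing the pfm_estimate sketch from Section 3.1; the quantities follow our reconstruction of Theorem 4, and the helper names are ours.

```python
import numpy as np

def pfm_stability_check(A_hat, Z, K):
    """Compute (delta, eps0) of Theorem 4; the guarantee holds iff delta <= eps0."""
    d_hat = A_hat.sum(axis=1)
    L_hat = A_hat / np.outer(np.sqrt(d_hat), np.sqrt(d_hat))
    lam = np.sort(np.abs(np.linalg.eigvalsh(L_hat)))[::-1]  # |lambda| descending
    gap = lam[K - 1] - lam[K]                                # |l_K| - |l_{K+1}|
    _, L = pfm_estimate(A_hat, Z, K)                         # from the earlier sketch
    eps = np.linalg.norm(L_hat - L, ord=2)                   # spectral norm
    delta = 4 * (K - 1) * eps**2 / gap**2
    vols = np.diag(Z.T @ (d_hat[:, None] * Z))               # cluster volumes C_kk
    eps0 = vols.min() / vols.max()
    return delta, eps0
```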
For all of the experiments, the clustering C is guaranteed to be stable by Theorem 4 until the unweighted error grows to a breaking point, at which the assumptions of the theorem fail. In particular, C₀ is always stable in the PFM framework. Comparing δ from Theorem 9 to that from Theorem 4, we find that Theorem 9 (guarantees for SBM) is much harder to satisfy. All δ values from Theorem 9 are above 1, and not shown.³ In particular, for the SBM model class, C cannot be proved stable even for the LFR data. Note that part of the reason why, with the PFM model, very little difference from the clustering C₀ can be tolerated for a clustering to be stable is that the large eigengap makes PFM(G, C) differ from PFM(G, C₀) even for very small perturbations. By comparing the bounds for $\hat{A}$ with the bounds for the "weighted graphs" A, we can evaluate that the sampling noise on ε is approximately equal to that of the clustering perturbation. Of course, the sampling noise varies with n, decreasing for larger graphs. Moreover, from the Political Blogs data, we see that "smoothing" a graph, e.g. by taking powers of its adjacency matrix, has a stability inducing effect.

Figure 1: Quantities δ, ε, ε₀ from Theorem 4 plotted vs dist(C, C₀) for various datasets. $\hat{A}$ denotes a simple graph, while A denotes a weighted graph (i.e. a non-negative matrix). For the Political Blogs, "Truth" means C₀ is the true clustering of [2], "spectral" means C₀ is obtained from spectral clustering. For SBM, δ is always greater than ε₀.

7 Discussion

This paper makes several contributions. At a high level, it poses the problem of model free validation in the area of community detection in networks. The stability paradigm is not entirely new, but using it explicitly with model-based clustering (instead of cost-based) is. So is "turning around" the model-based recovery theorems to be used in a model-free framework. All quantities in our theorems are computable from the data and the clustering C, i.e. they do not contain undetermined constants, and do not depend on parameters that are not available. As with distribution-free results in general, making fewer assumptions allows for less confidence in the conclusions, and the results are not always informative. Sometimes this should be so, e.g. when the data does not fit the model well. But it is also possible that the fit is good, but not good enough to satisfy the conditions of the theorems as they are currently formulated. This happens with the SBM bounds, and we believe tighter bounds are possible for this model. It would be particularly interesting to study the non-spectral, sharp thresholds of [1] from the point of view of model-free recovery. A complementary problem is to obtain negative guarantees (i.e. that C is not unique up to perturbations). At the technical level, we obtain several different and model-specific stability results, that bound the perturbation of a clustering by the perturbation of a model. They can be used both in model-free and in existing or future model-based recovery guarantees, as we have shown in Section 3 and in the experiments. The proof techniques that lead to these results are actually simpler, more direct, and more elementary than the ones found in previous papers.

³We also computed $\varepsilon_{RCY}$ but the bounds were not informative.

References

[1] Emmanuel Abbe and Colin Sandon. Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms. arXiv preprint arXiv:1503.00609, 2015.
[2] Lada A. Adamic and Natalie Glance.
The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36-43. ACM, 2005.
[3] Edoardo M. Airoldi, David S. Choi, and Patrick J. Wolfe. Confidence sets for network structure. Technical Report arXiv:1105.6245, 2011.
[4] Pranjal Awasthi. Clustering under stability assumptions. In Encyclopedia of Algorithms, pages 331-335. 2016.
[5] Francis Bach and Michael I. Jordan. Learning spectral clustering with applications to speech separation. Journal of Machine Learning Research, 7:1963-2001, 2006.
[6] Maria-Florina Balcan, Christian Borgs, Mark Braverman, Jennifer Chayes, and Shang-Hua Teng. Finding endogenously formed communities. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 767-783. SIAM, 2013.
[7] Shai Ben-David. Computational feasibility of clustering under clusterability assumptions. CoRR, abs/1501.00437, 2015.
[8] Rajendra Bhatia. Matrix Analysis, volume 169. Springer Science & Business Media, 2013.
[9] Yonatan Bilu and Nathan Linial. Are stable instances easy? CoRR, abs/0906.3162, 2009.
[10] Fan R. K. Chung. Spectral Graph Theory, volume 92. American Mathematical Soc., 1997.
[11] Brian Karrer, Elizaveta Levina, and M. E. J. Newman. Robustness of community structure in networks. Phys. Rev. E, 77:046119, Apr 2008.
[12] Andrea Lancichinetti, Santo Fortunato, and Filippo Radicchi. Benchmark graphs for testing community detection algorithms. Physical Review E, 78(4):046110, 2008.
[13] Marina Meilă. Local equivalence of distances between clusterings - a geometric perspective. Machine Learning, 86(3):369-389, 2012.
[14] Marina Meilă, Susan Shortreed, and Liang Xu. Regularized spectral learning. In Robert Cowell and Zoubin Ghahramani, editors, Proceedings of the Artificial Intelligence and Statistics Workshop (AISTATS 05), 2005.
[15] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[16] Richard Peng, He Sun, and Luca Zanetti. Partitioning well-clustered graphs with k-means and heat kernel. In Proceedings of the Annual Conference on Learning Theory (COLT), pages 1423-1455, 2015.
[17] Tai Qin and Karl Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In Advances in Neural Information Processing Systems, pages 3120-3128, 2013.
[18] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, pages 1878-1915, 2011.
[19] Gilbert W. Stewart and Ji-guang Sun. Matrix Perturbation Theory, volume 175. Academic Press, New York, 1990.
[20] Yali Wan and Marina Meilă. A class of network models recoverable by spectral clustering. In Daniel Lee and Masashi Sugiyama, editors, Advances in Neural Information Processing Systems (NIPS), 2015.
An Architecture for Deep, Hierarchical Generative Models

Philip Bachman, Maluuba Research, phil.bachman@maluuba.com

Abstract

We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10+ layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.

1 Introduction

Training deep, directed generative models with many layers of latent variables poses a challenging problem. Each layer of latent variables introduces variance into gradient estimation which, given current training methods, tends to impede the flow of subtle information about sophisticated structure in the target distribution. Yet, for a generative model to learn effectively, this information needs to propagate from the terminal end of a stochastic computation graph, back to latent variables whose effect on the generated data may be obscured by many intervening sampling steps. One approach to solving this problem is to use recurrent, sequential stochastic generative processes with strong interactions between their inference and generation mechanisms, as introduced in the DRAW model of Gregor et al. [5] and explored further in [1, 19, 22]. Another effective technique is to use lateral connections for merging bottom-up and top-down information in encoder/decoder type models. This approach is exemplified by the Ladder Network of Rasmus et al. [17], and has been developed further for, e.g. generative modelling and image processing in [8, 23].

Models like DRAW owe much of their success to two key properties: they decompose the process of generating data into many small steps of iterative refinement, and their structure includes direct deterministic paths between all latent variables and the final output. In parallel, models with lateral connections permit different components of a model to operate at well-separated levels of abstraction, thus generating a hierarchy of representations. This property is not explicitly shared by DRAW-like models, which typically reuse the same set of latent variables throughout the generative process. This makes it difficult for any of the latent variables, or steps in the generative process, to individually capture abstract properties of the data. We distinguish between the depth used by DRAW and the depth made possible by lateral connections by describing them respectively as sequential depth and hierarchical depth. These two types of depth are complimentary, rather than competing.

Our contributions focus on increasing hierarchical depth without forfeiting trainability. We combine the benefits of DRAW-like models and Ladder Networks by developing a class of models which we
In Section 2, we present the general architecture of a MatNet. In the MatNet architecture we: ? Combine the ability of, e.g. LapGANs [3] and Diffusion Nets [21] to learn hierarchicallydeep generative models with the power of jointly-trained inference/generation1 . ? Use lateral connections, shortcut connections, and residual connections [7] to provide direct paths through the inference network to the latent variables, and from the latent variables to the generated output ? this makes hierarchically-deep models easily trainable in practice. Section 2 also presents several extensions to the core architecture including: mixture-based prior distributions, a method for regularizing inference to prevent overfitting in practical settings, and a method for modelling the reconstruction distribution p(x|z) with a lightweight, local autoregressive model. In Section 3, we present experiments showing that MatNets offer state-of-the-art performance on standard benchmarks for modelling simple images and compelling qualitative performance on challenging imputation problems for natural images. Finally, in Section 4 we provide further discussion of related work and promising directions for future work. 2 The Matryoshka Network Architecture Matryoshka Networks combine three components: a top-down network (abbr. TD network), a bottomup network (abbr. BU network), and a set of merge modules which merge information from the BU and TD networks. In the context of stochastic variational inference [10], all three components contribute to the approximate posterior distributions used during inference/training, but only the TD network participates in generation. We first describe the MatNet model formally, and then provide a procedural description of its three components. The full architecture is summarized in Fig. 1. Latent Variables Top-down Network latent mean latent logvar merge state Merge Modules merge module Bottom-up Network TD state BU state merge state (a) (b) Figure 1: (a) The overall structure of a Matryoshka Network, and how information flows through the network during training. First, we perform a feedforward pass through the bottom-up network to generate a sequence of BU states. Next, we sample the initial latent variables conditioned on the final BU state. We then begin a stochastic feedforward pass through the top-down network. Whenever this feedforward pass requires sampling some latent variables, we get the sampling distribution by passing the corresponding TD and BU states through a merge module. This module draws conditional samples of the latent variables via reparametrization [10]. These latent samples are then combined with the current TD state, and the feedforward pass continues. Intuitively, this approach allows the TD network to invert the bottom-up network by tracking back along its intermediate states, and eventually recover its original input. (b) Detailed view of a merge module from the network in (a). This module stacks the relevant BU, TD, and merge states on top of each other, and then passes them through a convolutional residual module, as described in Eqn. 10. The output has three parts ? the first provides means for the latent variables, the second provides their log-variances, and the third conveys updated state information to subsequent merge modules. 1 A significant downside of LapGANs and Diffusion Nets is that they define their inference mechanisms a priori. This is computationally convenient, but prevents the model from learning abstract representations. 
2.1 Formal Description

The distribution p(x) generated by a MatNet is encoded in its top-down network. To model p(x), the TD network decomposes the joint distribution p(x, z) over an observation x and a sequence of latent variables $z \equiv \{z_0, \ldots, z_d\}$ into a sequence of simpler conditional distributions:

$$p(x) = \sum_{(z_d, \ldots, z_0)} p(x|z_d, \ldots, z_0)\, p(z_d|z_{d-1}, \ldots, z_0) \cdots p(z_i|z_{i-1}, \ldots, z_0) \cdots p(z_0), \quad (1)$$

which we marginalize with respect to the latent variables to get p(x). The TD network is designed so that each conditional $p(z_i|z_{i-1}, \ldots, z_0)$ can be truncated to $p(z_i|h^t_i)$ using an internal TD state $h^t_i$. See Eqns. 7/8 in Sec. 2.2 for procedural details.

The distribution q(z|x) used for inference in an unconditional MatNet involves the BU network, TD network, and merge modules. This distribution can be written:

$$q(z_d, \ldots, z_0|x) = q(z_0|x)\, q(z_1|z_0, x) \cdots q(z_i|z_{i-1}, \ldots, z_0, x) \cdots q(z_d|z_{d-1}, \ldots, z_0, x), \quad (2)$$

where each conditional $q(z_i|z_{i-1}, \ldots, z_0, x)$ can be truncated to $q(z_i|h^m_{i+1})$ using an internal merge state $h^m_{i+1}$ produced by the i-th merge module. See Eqns. 10/11 in Sec. 2.2 for procedural details.

MatNets can also be applied to conditional generation problems like inpainting or pixel-wise segmentation. For, e.g. inpainting with known pixels $x^k$ and missing pixels $x^u$, the predictive distribution of a conditional MatNet is given by:

$$p(x^u|x^k) = \sum_{(z_d, \ldots, z_0)} p(x^u|z_d, \ldots, z_0, x^k)\, p(z_d|z_{d-1}, \ldots, z_0, x^k) \cdots p(z_1|z_0, x^k)\, p(z_0|x^k). \quad (3)$$

Each conditional $p(z_i|z_{i-1}, \ldots, z_0, x^k)$ can be truncated to $p(z_i|h^{m:g}_{i+1})$, where $h^{m:g}_{i+1}$ indicates state in a merge module belonging to the generator network. Crucially, conditional MatNets include BU networks and merge modules that participate in generation, in addition to the BU networks and merge modules used by both conditional and unconditional MatNets during inference/training. The distribution used for inference in a conditional MatNet is given by:

$$q(z_d, \ldots, z_0|x^k, x^u) = q(z_d|z_{d-1}, \ldots, z_0, x^k, x^u) \cdots q(z_1|z_0, x^k, x^u)\, q(z_0|x^k, x^u), \quad (4)$$

where each conditional $q(z_i|z_{i-1}, \ldots, z_0, x^k, x^u)$ can be truncated to $q(z_i|h^{m:i}_{i+1})$, where $h^{m:i}_{i+1}$ indicates state in a merge module belonging to the inference network. Note that, in a conditional MatNet, the distributions p(·|·) are not allowed to condition on $x^u$, while the distributions q(·|·) can.

MatNets are well-suited to training with Stochastic Gradient Variational Bayes [10]. In SGVB, one maximizes a lower-bound on the data log-likelihood based on the variational free-energy:

$$\log p(x) \ge \mathbb{E}_{z \sim q(z|x)} [\log p(x|z)] - \mathrm{KL}(q(z|x)\,\|\,p(z)), \quad (5)$$

for which p and q must satisfy a few simple assumptions, and $\mathrm{KL}(q(z|x)\,\|\,p(z))$ indicates the KL divergence between the inference distribution q(z|x) and the model prior p(z). This bound is tight when the inference distribution matches the true posterior p(z|x) in the model joint distribution p(x, z) = p(x|z)p(z), in our case given by Eqns. 1/3. For brevity, we only explicitly write the free-energy bound for a conditional MatNet, which is:

$$\log p(x^u|x^k) \ge \mathbb{E}_{q(z_d, \ldots, z_0|x^k, x^u)} \left[ \log p(x^u|z_d, \ldots, z_0, x^k) \right] - \mathrm{KL}(q(z_d, \ldots, z_0|x^k, x^u)\,\|\,p(z_d, \ldots, z_0|x^k)). \quad (6)$$

With SGVB we can optimize the bound in Eqn. 6 using the "reparametrization trick" to allow easy backpropagation through the expectation over $z \sim q(z|x^k, x^u)$. See [10, 18] for more details about this technique. The bound for unconditional MatNets is nearly identical; it just removes $x^k$.
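As a minimal illustration of the bound in Eqn. 5 for a single diagonal-Gaussian latent layer (our sketch, not the full MatNet objective), using the reparametrization trick:

```python
import numpy as np

rng = np.random.default_rng(0)

def free_energy_bound(x, encode, decode_logprob):
    """One-sample estimate of E_q[log p(x|z)] - KL(q(z|x) || N(0, I))."""
    mu, log_var = encode(x)            # q(z|x) = N(mu, diag(exp(log_var)))
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # reparametrize
    # closed-form KL between a diagonal Gaussian and the standard normal
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return decode_logprob(x, z) - kl
```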
2.2 Procedural Description Structurally, top-down networks in MatNets comprise sequences of modules in which each module t fit receives two inputs: a deterministic top-down state hti from the preceding module fi?1 , and some 3 latent variables zi . Module fit produces an updated state hti+1 = fit (hti , zi ; ?t ), where ?t indicates the TD network?s parameters. By defining the TD modules appropriately, we can reproduce the architectures for LapGANs, Diffusion Nets, and Probabilistic Ladder Networks [23]. Motivated by the success of LapGANs and ResNets [7], we use TD modules in which the latent variables are concatenated with the top-down state, then transformed, after which the transformed values are added back to the top-down state prior to further processing. If the adding occurs immediately before, e.g. a ReLU, then the latent variables can effectively gate the top-down state by knocking particular elements below zero. This allows each stochastic module in the top-down network to apply small refinements to the output of preceding modules. MatNets thus perform iterative stochastic refinement through hierarchical depth, rather than through sequential depth as in DRAW2 . More precisely, the top-down modules in our convolutional MatNets compute: hti+1 = lrelu(hti + conv(lrelu(conv([hti ; zi ], vit )), wit )), (7) where [x; x0 ] indicates tensor concatenation along the ?feature? axis, lrelu(?) indicates the leaky ReLU function, conv(h, w) indicates shape-preserving convolution of the input h with the kernel w, and wit /vit indicate the trainable parameters for module i in the TD network. We elide bias terms for brevity. When working with fully-connected models we use stochastic GRU-style state updates rather than the stochastic residual updates in Eq. 7. Exhaustive descriptions of the modules can be found in our code at: https://github.com/Philip-Bachman/MatNets-NIPS. These TD modules represent each conditional p(zi |zi?1 , ..., z0 ) in Eq. 1 using p(zi |hti ). TD module fit places a distribution over zi using parameters [? ?i ; log ? ?i2 ] computed as follows: [? ?i ; log ? ?i2 ] = conv(lrelu(conv(hti , vit )), wit ), (8) where we use ?? to distinguish between Gaussian parameters from the generator network and those from the inference network (see Eqn. 11). The distributions p(?) all depend on the parameters ?t . Bottom-up networks in MatNets comprise sequences of modules in which each module receives input only from the preceding BU module. Our BU networks are all deterministic and feedforward, but sensibly augmenting them with auxiliary latent variables [16, 15] and/or recurrence is a promising topic for future work. Each non-terminal module fib in the BU network computes an updated state: hbi = fib (hbi+1 ; ?b ). The final module, f0b , provides means and log-variances for sampling z0 via reparametrization [10]. To align BU modules with their counterparts in the TD network, we number them in reverse order of evaluation. We structured the modules in our BU networks to take advantage of residual connections. Specifically, each BU module fib computes: hbi = lrelu(hbi+1 + conv(lrelu(conv(hbi+1 , vib )), wib )), (9) with operations defined as for Eq. 7. These updates can be replaced by GRUs, LSTMs, etc. The updates described in Eqns. 7 and 9 both assume that module inputs and outputs are the same shape. We thus construct MatNets using groups of ?meta modules?, within which module input/output shapes are constant. 
To keep our network design (relatively) simple, we use one meta module for each spatial scale in our networks (e.g., scales of 14x14, 7x7, and fully-connected for MNIST). We connect meta modules using layers which may upsample, downsample, and change feature dimension via strided convolution. We use standard convolution layers, possibly with up- or downsampling, to feed data into and out of the bottom-up and top-down networks.

During inference, merge modules compare the current top-down state with the state of the corresponding bottom-up module, conditioned on the current merge state, and choose a perturbation of the top-down information to push it towards recovering the bottom-up network's input (i.e., minimize reconstruction error). The $i$th merge module outputs $[\mu_i; \log \sigma_i^2; h_{i+1}^m] = f_i^m(h_i^b, h_i^t, h_i^m; \theta^m)$, where $\mu_i$ and $\log \sigma_i^2$ are the mean and log-variance for sampling $z_i$ via reparametrization, and $h_{i+1}^m$ gives the updated merge state. As in the TD and BU networks, we use a residual update:

$$h_{i+1}^m = \mathrm{lrelu}\left(h_i^m + \mathrm{conv}(\mathrm{lrelu}(\mathrm{conv}([h_i^b; h_i^t; h_i^m], u_i^m)), v_i^m)\right), \quad (10)$$
$$[\mu_i; \log \sigma_i^2] = \mathrm{conv}(h_{i+1}^m, w_i^m), \quad (11)$$

in which the convolution kernels $u_i^m$, $v_i^m$, and $w_i^m$ constitute the trainable parameters of this module. Each merge module thus computes an updated merge state and then reparametrizes a diagonal Gaussian using a linear function of the updated merge state. In our experiments, all modules in all networks had their own trainable parameters. We experimented with parameter sharing and GRU-style state in our convolutional models. The stochastic convolutional GRU is particularly interesting when applied depth-wise (rather than time-wise as in [19]), as it implements a stochastic Neural GPU [9] trainable by variational inference and capable of multi-modal dynamics. We saw no performance gains with these changes, but they merit further investigation.

In unconditional MatNets, the top-most latent variables $z_0$ follow a zero-mean, unit-variance Gaussian prior, except in our experiments with mixture-based priors. In conditional MatNets, $z_0$ follows a distribution conditioned on the known values $x^k$. Conditional MatNets use parallel sets of BU and merge modules for the conditional generator and the inference network. BU modules in the conditional generator observe a partial input $x^k$, while BU modules in the inference network observe both $x^k$ and the unknown values $x^u$ (which the model is trained to predict). The generative BU and merge modules in a conditional MatNet interact with the TD modules analogously to the BU and merge modules used for inference. Our models used independent Bernoullis, diagonal Gaussians, or "integrated" Logistics (see [11]) for the final output distribution $p(x \mid z_d, \ldots, z_0)$ / $p(x^u \mid z_d, \ldots, z_0, x^k)$.

2.3 Model Extensions

We also develop several extensions for the MatNet architecture. The first is to replace the zero-mean, unit-variance Gaussian prior over $z_0$ with a Gaussian Mixture Model, which we train simultaneously with the rest of the model. When using a mixture prior, we use an analytical approximation to the required KL divergence. For a Gaussian distribution $q$, and a Gaussian mixture $p$ with components $\{p_1, \ldots, p_k\}$ with uniform mixture weights, we use the KL approximation:

$$\mathrm{KL}(q \,\|\, p) \approx -\log \sum_{i=1}^{k} \frac{1}{k}\, e^{-\mathrm{KL}(q \,\|\, p_i)}. \quad (12)$$

Our tests with mixture-based priors are only concerned with qualitative behaviour, so we do not worry about the approximation error in Eqn. 12.
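A small sketch of this approximation for diagonal Gaussians, using a log-sum-exp for numerical stability; the names and the restriction to diagonal components are our own choices.

```python
# Sketch of the mixture-prior KL approximation in Eq. 12.
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)))."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def kl_to_mixture(mu_q, var_q, mix_mus, mix_vars):
    """Approximate KL(q || p) for a uniform mixture p = (1/k) sum_i p_i,
    as -log((1/k) * sum_i exp(-KL(q || p_i))), computed stably."""
    kls = np.array([kl_diag_gauss(mu_q, var_q, m, v)
                    for m, v in zip(mix_mus, mix_vars)])
    m = np.min(kls)
    return m - np.log(np.mean(np.exp(-(kls - m))))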
The second extension is a technique for regularizing the inference model to prevent overfitting beyond that which is present in the generator. This regularization is applied by optimizing:

$$\max_{q} \; \mathbb{E}_{x \sim p(x)}\left[\mathbb{E}_{z \sim q(z \mid x)}\left[\log p(x \mid z)\right] - \mathrm{KL}(q(z \mid x) \,\|\, p(z))\right]. \quad (13)$$

This maximizes the free-energy bound for samples drawn from our model, but without changing their true log-likelihood. By maximizing Eqn. 13, we implicitly reduce $\mathrm{KL}(q(z \mid x) \,\|\, p(z \mid x))$, which is the gap between the free-energy bound and the true log-likelihood. A similar regularizer can be constructed for minimizing $\mathrm{KL}(p(z \mid x) \,\|\, q(z \mid x))$. We use (13) to reduce overfitting, and slightly boost test performance, in our experiments with MNIST and Omniglot.

The third extension off-loads responsibility for modelling sharp local dynamics in images, e.g., precise edge placements and small variations in textures, from the latent variables onto a local, deterministic autoregressive model. We use a simplified version of the masked convolutions in the PixelCNN of [25], modified to condition on the output of the final TD module in a MatNet. This modification is easy: we just concatenate the final TD module's output and the true image, and feed this into a PixelCNN with, e.g., five layers. A trick we use to improve gradient flow back to the MatNet is to feed the MatNet's output directly into each internal layer of the PixelCNN. In the masked convolution layers, connections to the MatNet output are unrestricted, since they are already separated from the ground truth by an appropriately-monitored noisy channel. Larger, more powerful mechanisms for combining local autoregressions and conditioning information are explored in [26].

3 Experiments

Figure 2: MatNet performance on quantitative benchmarks. All tables except the lower-right table describe standard unconditional generative NLL results. The lower-right table presents results from the structured prediction task in [22], in which 1-3 quadrants of an MNIST digit are visible, and NLL is measured on predictions for the unobserved quadrants.

Figure 3: Class-like structure learned by a MatNet trained on 28x28 Omniglot, without label information. The model used a GMM prior over $z_0$ with 50 mixture components. Each group of three columns corresponds to a mixture component. The top row shows validation set examples whose posterior over the mixture components placed them into each component. Subsequent rows show samples drawn by freely resampling latent variables from the model prior, conditioned on the top $k$ layers of latent variables, i.e., $\{z_0, \ldots, z_{k-1}\}$ being drawn from the approximate posterior for the example at the top of the column. From the second row down, we show $k = \{1, 2, 4, 6, 8, 10\}$.

We measured quantitative performance of MatNets on three datasets: MNIST, Omniglot [13], and CIFAR 10 [12]. We used the 28x28 version of Omniglot described in [2], which can be found at: https://github.com/yburda/iwae. All quantitative experiments measured performance in terms of negative log-likelihood, with the CIFAR 10 scores rescaled to bits-per-pixel and corrected for discrete/continuous observations as described in [24]. We used the IWAE bound from [2] to evaluate our models, with 2500 samples in the bound.
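The IWAE evaluation amounts to a log-mean-exp over importance weights; a sketch, where `log_p_joint`, `log_q`, and `sample_q` are hypothetical stand-ins for the trained model.

```python
# Sketch of the K-sample IWAE bound [2]: log p(x) is lower-bounded by the
# log-mean of importance weights w_j = p(x, z_j) / q(z_j | x).
import numpy as np

def iwae_bound(log_p_joint, log_q, sample_q, x, k=2500):
    """`sample_q(x)` draws z ~ q(z|x); `log_p_joint(x, z)` and `log_q(x, z)`
    return log p(x, z) and log q(z|x).  All three are assumed given."""
    log_w = np.array([log_p_joint(x, z) - log_q(x, z)
                      for z in (sample_q(x) for _ in range(k))])
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-mean-exp
```

With k = 1 this reduces to the single-sample free-energy bound; larger k tightens the estimate, which is why evaluation uses 2500 samples.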
We performed additional experiments measuring the qualitative performance of MatNets using Omniglot, CelebA faces [14], LSUN 2015 towers, and LSUN 2015 churches. The latter three datasets are 64x64 color images with significant detail and non-trivial structure. Complete hyperparameters for model architecture and optimization can be found in the code at https://github.com/Philip-Bachman/MatNets-NIPS.

We performed three quantitative tests using MNIST. The first tests measured generative performance on dynamically-binarized images using a fully-connected model (for comparison with [2, 23]) and on the fixed binarization from [20] using a convolutional model (for comparison with [25, 19]). MatNets improved on existing results in both settings. See the tables in Fig. 2. Our third tests with MNIST measured performance of conditional MatNets for structured prediction. For this, we recreated the tests described in [22]. MatNet performance on these tests was also strong, though the prior results were from a fully-connected model, which skews the comparison.

We also measured quantitative performance using the 32x32 color images of CIFAR 10. We trained two models on this data: one with a Gaussian reconstruction distribution and dequantization as described in [24], and the other which added a local autoregression and used the "integrated Logistic" likelihood described in [11]. The Gaussian model fell just short of the best previously reported result for a variational method (from [6]), and well short of the Pixel RNN presented in [25]. Performance on this task seems very dependent on a model's ability to predict pixel intensities precisely along edges. The ability to efficiently capture global structure has a relatively weak benefit. Mistaking a cat for a dog costs little when amortized over thousands of pixels, while misplacing a single edge can spike the reconstruction cost dramatically. We demonstrate the strength of this effect in Fig. 4, where we plot how the bits paid to encode observations are distributed among the modules in the network over the course of training for MNIST, Omniglot, and CIFAR 10. The plots show a stark difference between these distributions when modelling simple line drawings vs. when modelling more natural images. For CIFAR 10, almost all of the encoding cost was spent in the 32x32 layers of the network closest to the generated output. This was our motivation for adding a lightweight autoregression to $p(x \mid z)$, which significantly reduced the gap between our model and the PixelRNN. Fig. 5 shows some samples from our model, which exhibit occasional glimpses of global and local structure.

Figure 4 (panels (a)-(c)): Per-module divergences $\mathrm{KL}(q(z_i \mid h_{i+1}^m) \,\|\, p(z_i \mid h_i^t))$ over the course of training for models trained on MNIST, Omniglot, and CIFAR 10. The stacked area plots are grouped by "meta module" in the TD network. The MNIST and Omniglot models both had a single FC module and meta modules at spatial dimension 7x7 and 14x14. The meta modules at 7x7 and 14x14 both comprised 5 TD modules. The CIFAR 10 model (without autoregression) had one FC module, and meta modules at spatial dimension 8x8, 16x16, and 32x32. These meta modules comprised 2, 4, and 4 modules respectively. Light lines separate modules, and dark lines separate meta modules. The encoding cost on CIFAR 10 is clearly dominated by the low-level details encoded by the latent variables in the full-resolution TD modules closest to the output.
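The accounting behind these plots is simple; a sketch that groups per-module KL terms by meta module (the numbers in the example are hypothetical, only to show the call).

```python
# Sketch of the Fig. 4 accounting: sum per-module KL terms within each meta
# module to see where the encoding cost (in nats) is spent.
import numpy as np

def kl_by_meta_module(per_module_kl, meta_sizes):
    """`per_module_kl`: one KL(q(z_i|.) || p(z_i|.)) value per stochastic
    module, averaged over a batch; `meta_sizes`: modules per meta module,
    e.g. [1, 5, 5] for the FC, 7x7 and 14x14 groups on MNIST."""
    per_module_kl = np.asarray(per_module_kl)
    bounds = np.cumsum([0] + list(meta_sizes))
    return [per_module_kl[a:b].sum() for a, b in zip(bounds[:-1], bounds[1:])]

print(kl_by_meta_module([12.0, 3.1, 2.9, 2.2, 1.8, 1.5,
                         1.0, 0.9, 0.8, 0.7, 0.6], [1, 5, 5]))
```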
Our final quantitative test used the Omniglot handwritten character dataset, rescaled to 28x28 as in [2]. These tests used the same convolutional architecture as on MNIST. Our model outperformed previous results, as shown in Fig. 2.

Using Omniglot we also experimented with placing a mixture-based prior distribution over the top-most latent variables $z_0$. The purpose of these tests was to determine whether the model could uncover latent class structure in the data without seeing any label information. We visualize results of these tests in Fig. 3. Additional description is provided in the figure caption. We placed a slight penalty on the entropy of the posterior distributions for each input to the model, to encourage a stronger separation of the mixture components. The inputs assigned to each mixture component (based on their posteriors) exhibit clear stylistic coherence.

In addition to qualitative tests exploring our model's ability to uncover latent factors of variation in Omniglot data, we tested the performance of our models at imputing missing regions of higher resolution images. These tests used images of celebrity faces, churches, and towers. These images include far more detail and variation than those in MNIST/Omniglot/CIFAR 10. We used two-stage models for these tests, in which each stage was a conditional MatNet. The first stage formed an initial guess for the missing image content, and the second stage then refined that guess. Both stages used the same architectures for their inference and generator networks. We sampled imputation problems by placing three 20x20 occluders uniformly at random in the image. Each stage had single TD modules at scales 32x32, 16x16, 8x8, and fully-connected. We trained models for roughly 200k updates, and show imputation performance on images from a test set that was held out during training. Results are shown in Fig. 5.

Figure 5 (panels: (a) CIFAR 10 samples, (b) CelebA faces, (c) LSUN churches, (d) LSUN towers): Imputation results on challenging, real-world images. These images show predictions for missing data generated by a two-stage conditional MatNet, trained as described in Section 3. Each occluded region was 20x20 pixels. Locations for the occlusions were selected uniformly at random within the images. One interesting behaviour which emerged in these tests was that our model successfully learned to properly reconstruct the watermark for "shutterstock", which was a source of many of the LSUN images; see the second input/output pair in the third row of (b).
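A sketch of the occlusion sampling used in these imputation experiments; we assume occluders are kept fully inside the image, which the text does not specify.

```python
# Sketch: three 20x20 occluders placed uniformly at random in a 64x64 image.
# Mask convention: 1 = observed pixel (x^k), 0 = missing pixel (x^u).
import numpy as np

def sample_occlusion_mask(rng, im_size=64, occ_size=20, n_occ=3):
    mask = np.ones((im_size, im_size), dtype=np.float32)
    for _ in range(n_occ):
        top = rng.integers(0, im_size - occ_size + 1)
        left = rng.integers(0, im_size - occ_size + 1)
        mask[top:top + occ_size, left:left + occ_size] = 0.0
    return mask

rng = np.random.default_rng(0)
mask = sample_occlusion_mask(rng)
# x_known = x * mask; the conditional MatNet predicts x * (1 - mask).
```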
4 Related Work and Discussion

Previous successful attempts to train hierarchically-deep models largely fall into a class of methods based on deconstructing, and then reconstructing, data. Such approaches are akin to solving mazes by starting at the end and working backwards, or to learning how an object works by repeatedly disassembling and reassembling it. Examples include LapGANs [3], which deconstruct an image by repeatedly downsampling it, and Diffusion Nets [21], which deconstruct arbitrary data by subjecting it to a long sequence of small random perturbations. The power of these approaches stems from the way in which gradually deconstructing the data leaves behind a trail of crumbs which can be followed back to a well-formed observation. In the generative models of [3, 21], the deconstruction processes were defined a priori, which avoided the need for trained inference. This makes training significantly easier, but subverts one of the main motivations for working with latent variables and sample-based approximate inference, i.e., the ability to capture salient factors of variation in the inferred relations between latent variables and observed data. This deficiency is beginning to be addressed by, e.g., the Probabilistic Ladder Networks of [23], which are a special case of our architecture in which the deterministic paths from latent variables to observations are removed and the conditioning mechanism in inference is more restricted.

Reasoning about data through the posteriors induced by an appropriate generative model motivates some intriguing work at the intersection of machine learning and cognitive science. This work shows that, in the context of an appropriate generative model, powerful inference mechanisms are capable of exposing the underlying factors of variation in fairly sophisticated data. See, e.g., Lake et al. [13]. Techniques for training coupled generation and inference have now reached a level that makes it possible to investigate these ideas while learning models end-to-end [4]. In future work we plan to apply our models to more "interesting" generative modelling problems, including more challenging image data and problems in language/sequence modelling. The strong performance of our models on benchmark problems suggests their potential for solving difficult structured prediction problems. Combining the hierarchical depth of MatNets with the sequential depth of DRAW is also worthwhile.

References

[1] P. Bachman and D. Precup. Data generation as sequential decision making. In Advances in Neural Information Processing Systems (NIPS), 2015.
[2] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted auto-encoders. arXiv:1509.00519 [cs.LG], 2015.
[3] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative models using a Laplacian pyramid of adversarial networks. arXiv:1506.05751 [cs.CV], 2015.
[4] S. M. A. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv:1603.08575 [cs.CV], 2016.
[5] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning (ICML), 2015.
[6] K. Gregor, F. Besse, D. J. Rezende, I. Danihelka, and D. Wierstra. Towards conceptual compression. arXiv:1604.08772 [stat.ML], 2016.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385 [cs.CV], 2015.
[8] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-fine feature aggregation. In Computer Vision and Pattern Recognition (CVPR), 2016.
[9] L. Kaiser and I. Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[11] D. P. Kingma, T. Salimans, and M. Welling. Improving variational inference with inverse autoregressive flow. arXiv:1606.04934 [cs.LG], 2016.
[12] A. Krizhevsky and G. E. Hinton. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[13] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
[14] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In International Conference on Computer Vision (ICCV), 2015.
[15] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. In International Conference on Machine Learning (ICML), 2016.
[16] R. Ranganath, D. Tran, and D. M. Blei. Hierarchical variational models. In International Conference on Machine Learning (ICML), 2016.
[17] A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[18] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML), 2014.
[19] D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. In International Conference on Machine Learning (ICML), 2016.
[20] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In International Conference on Machine Learning (ICML), 2008.
[21] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), 2015.
[22] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), 2015.
[23] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. How to train deep variational autoencoders and probabilistic ladder networks. In International Conference on Machine Learning (ICML), 2016.
[24] L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems (NIPS), 2015.
[25] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.
[26] A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv:1606.05328 [cs.CV], 2016.
Data Poisoning Attacks on Factorization-Based Collaborative Filtering

Bo Li* (Vanderbilt University, bo.li.2@vanderbilt.edu), Yining Wang* (Carnegie Mellon University, ynwang.yining@gmail.com), Aarti Singh (Carnegie Mellon University, aarti@cs.cmu.edu), Yevgeniy Vorobeychik (Vanderbilt University, yevgeniy.vorobeychik@vanderbilt.edu)

* Both authors contributed equally.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Abstract

Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems become increasingly popular in industry, their outputs could affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the alternating minimization formulation and the nuclear norm minimization method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.

1 Introduction

Recommendation systems have emerged as a crucial feature of many electronic commerce systems. In machine learning such problems are usually referred to as collaborative filtering or matrix completion, where the known users' preferences are abstracted into an incomplete user-by-item matrix, and the goal is to complete the matrix and subsequently make new item recommendations for each user. Existing approaches in the literature include nearest-neighbor methods, where a user's (item's) preference is determined by other users (items) with similar profiles [1], and factorization-based methods, where the incomplete preference matrix is assumed to be approximately low-rank [2, 3].

As recommendation systems play an ever increasing role in current information and e-commerce systems, they are susceptible to the risk of being maliciously attacked. One particular form of attack is called data poisoning, in which a malicious party creates dummy (malicious) users in a recommendation system with carefully chosen item preferences (i.e., data) such that the effectiveness or credibility of the system is maximally degraded. For example, an attacker might attempt to make recommendations that are as different as possible from those that would otherwise be made by the recommendation system. In another, more subtle, example, the attacker is associated with the producer of a specific movie or product, who may wish to increase or decrease the popularity of a certain item. In both cases, the credibility of a recommendation system is harmed by the malicious activities, which could lead to significant economic loss. Due to the open nature of recommendation
systems and their reliance on user-specified judgments for building profiles, various forms of attacks are possible and have been discussed, such as the random attack and the random product push/nuke attack [4, 5]. However, these attacks are not formally analyzed and cannot be optimized according to specific collaborative filtering algorithms. As it is not difficult for attackers to determine the defender's filtering algorithm or even its parameter settings (e.g., through insider attacks), this can lead one to significantly under-estimate the attacker's ability and result in substantial loss.

We present a systematic approach to computing near-optimal data poisoning attacks for factorization-based collaborative filtering/recommendation models. We assume a highly motivated attacker with knowledge of both the learning algorithms and the parameters of the learner, following Kerckhoffs' principle, to ensure reliable vulnerability analysis in the worst case. We focus on the two most popular algorithms: alternating minimization [6] and nuclear norm minimization [3]. Our main contributions are as follows:

- Comprehensive characterization of attacker utilities: We characterize several attacker utilities, which include availability attacks, where prediction error is increased, and integrity attacks, where item-specific objectives are considered. Optimal attack strategies for all utilities can be computed under a unified optimization framework.
- Novel gradient computations: Building upon existing gradient-based data poisoning frameworks [7, 8, 9], we develop novel methods for gradient computation based on first-order KKT conditions for two widely used algorithms: alternating minimization [6] and nuclear norm minimization [2]. The resulting derivations are highly non-trivial; in addition, to our knowledge this work is the first to give systematic data poisoning attacks for problems involving non-smooth nuclear norm type objectives.
- Mimicking normal user behaviors: For data poisoning attacks, most prior work focuses on maximizing the attacker's utility. A less investigated problem is how to synthesize malicious data points that are hard for a defender to detect. In this paper we provide a novel technique based on stochastic gradient Langevin dynamics optimization [10] to produce malicious users that mimic normal user behaviors in order to avoid detection, while achieving attack objectives.

Related Work: There has been extensive prior research concerning the security of machine learning algorithms [11, 12, 13, 14, 15]. Biggio et al. pioneered the research of optimizing malicious data-driven attacks for kernel-based learning algorithms such as SVM [16]. The key optimization technique is to approximately compute implicit gradients of the solution of an optimization problem based on first-order KKT conditions. Similar techniques were later generalized to optimize data poisoning attacks for several other important learning algorithms, such as Lasso regression [7], topic modeling [8], and autoregressive models [17]. The reader may refer to [9] for a general algorithmic framework covering the abovementioned methods. In terms of collaborative filtering/matrix completion, there is another line of established research that focuses on robust matrix completion, in which a small portion of elements or rows in the underlying low-rank matrix is assumed to be arbitrarily perturbed [18, 19, 20, 21]. Specifically, the stability of alternating minimization solutions was analyzed with respect to malicious data manipulations in [22]. However, [22] assumes a globally optimal solution of alternating minimization can be obtained, which is rarely true in practice.
2 Preliminaries

We first set up the collaborative filtering/matrix completion problem and give an overview of existing low-rank factorization based approaches. Let $M \in \mathbb{R}^{m \times n}$ be a data matrix consisting of $m$ rows and $n$ columns. $M_{ij}$ for $i \in [m]$ and $j \in [n]$ would then correspond to the rating the $i$th user gives for the $j$th item. We use $\Omega = \{(i, j) : M_{ij} \text{ is observed}\}$ to denote all observable entries in $M$ and assume that $|\Omega| \ll mn$. We also use $\Omega_i \subseteq [n]$ and $\Omega'_j \subseteq [m]$ for columns (rows) that are observable at the $i$th row ($j$th column). The goal of collaborative filtering (also referred to as matrix completion in the statistical learning literature [2]) is then to recover the complete matrix $M$ from few observations $M_\Omega$.

The matrix completion problem is in general ill-posed, as it is impossible to complete an arbitrary matrix from partial observations. As a result, additional assumptions are imposed on the underlying data matrix $M$. One standard assumption is that $M$ is very close to an $m \times n$ rank-$k$ matrix with $k \ll \min(m, n)$. Under such assumptions, the complete matrix $M$ can be recovered by solving the following optimization problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \|R_\Omega(M - X)\|_F^2, \quad \text{s.t. } \mathrm{rank}(X) \leq k, \quad (1)$$

where $\|A\|_F^2 = \sum_{i,j} A_{ij}^2$ denotes the squared Frobenius norm of matrix $A$ and $[R_\Omega(A)]_{ij}$ equals $A_{ij}$ if $(i, j) \in \Omega$ and 0 otherwise. Unfortunately, the feasible set in Eq. (1) is non-convex, making the optimization problem difficult to solve. There has been an extensive prior literature on approximately solving Eq. (1) and/or its surrogates, which leads to two standard approaches: alternating minimization and nuclear norm minimization. For the first approach, one considers the following problem:

$$\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \|R_\Omega(M - UV^\top)\|_F^2 + 2\lambda_U \|U\|_F^2 + 2\lambda_V \|V\|_F^2. \quad (2)$$

Eq. (2) is equivalent to Eq. (1) when $\lambda_U = \lambda_V = 0$. In practice, people usually set both regularization parameters $\lambda_U$ and $\lambda_V$ to be small positive constants in order to avoid large entries in the completed matrix and also to improve convergence. Since Eq. (2) is bi-convex in $U$ and $V$, an alternating minimization procedure can be applied. Alternatively, one solves a nuclear-norm minimization problem

$$\min_{X \in \mathbb{R}^{m \times n}} \|R_\Omega(M - X)\|_F^2 + 2\lambda \|X\|_*, \quad (3)$$

where $\lambda > 0$ is a regularization parameter and $\|X\|_* = \sum_{i=1}^{\mathrm{rank}(X)} |\sigma_i(X)|$ is the nuclear norm of $X$, which acts as a convex surrogate of the rank function. Eq. (3) is a convex optimization problem and can be solved using an iterative singular value thresholding algorithm [3]. It can be shown that both methods in Eq. (2) and (3) provably approximate the true underlying data matrix $M$ under certain conditions [6, 2].
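For reference, a minimal NumPy sketch of the alternating minimization approach to Eq. (2); each update is a small ridge regression, and the regularized normal equations match the stationarity conditions used later in Sec. 4.1. Names and default values are our own.

```python
# Sketch of alternating minimization for Eq. (2): alternately solve
# for user factors U and item factors V via regularized least squares.
import numpy as np

def altmin(M, mask, k=10, lam_u=0.1, lam_v=0.1, n_iters=20, seed=0):
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    for _ in range(n_iters):
        for i in range(m):                      # update user factors
            j = np.flatnonzero(mask[i])
            A = V[j].T @ V[j] + lam_u * np.eye(k)
            U[i] = np.linalg.solve(A, V[j].T @ M[i, j])
        for j_ in range(n):                     # update item factors
            i = np.flatnonzero(mask[:, j_])
            A = U[i].T @ U[i] + lam_v * np.eye(k)
            V[j_] = np.linalg.solve(A, U[i].T @ M[i, j_])
    return U, V                                 # prediction: U @ V.T
```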
3 The Attack Model

In this section we describe the data poisoning attack model considered in this paper. For a data matrix consisting of $m$ users and $n$ items, the attacker is capable of adding $\alpha m$ malicious users to the training data matrix, and each malicious user is allowed to report his/her preference on at most $B$ items, with each preference bounded in the range $[-\Lambda, \Lambda]$.

Before proceeding to describe the attacker's goals, we first introduce some notation to facilitate presentation. We use $M \in \mathbb{R}^{m \times n}$ to denote the original data matrix and $\widetilde{M} \in \mathbb{R}^{m' \times n}$ to denote the data matrix of all $m' = \alpha m$ malicious users. Let $\widetilde{\Omega}$ be the set of non-zero entries in $\widetilde{M}$ and $\widetilde{\Omega}_i \subseteq [n]$ be all items that the $i$th malicious user rated. According to our attack models, $|\widetilde{\Omega}_i| \leq B$ for every $i \in \{1, \ldots, m'\}$ and $\|\widetilde{M}\|_{\max} = \max |\widetilde{M}_{ij}| \leq \Lambda$. Let $\theta_\lambda(\widetilde{M}; M)$ be the optimal solution computed jointly on the original and poisoned data matrices $(M; \widetilde{M})$ using regularization parameters $\lambda$. For example, Eq. (2) becomes

$$\theta_\lambda(\widetilde{M}; M) = \arg\min_{U, \widetilde{U}, V} \|R_\Omega(M - UV^\top)\|_F^2 + \|R_{\widetilde{\Omega}}(\widetilde{M} - \widetilde{U}V^\top)\|_F^2 + 2\lambda_U \left(\|U\|_F^2 + \|\widetilde{U}\|_F^2\right) + 2\lambda_V \|V\|_F^2, \quad (4)$$

where the resulting $\theta$ consists of low-rank latent factors $U$, $\widetilde{U}$ for normal and malicious users, as well as $V$ for items. Similarly, for the nuclear norm minimization formulation in Eq. (3), we have

$$\theta_\lambda(\widetilde{M}; M) = \arg\min_{X, \widetilde{X}} \|R_\Omega(M - X)\|_F^2 + \|R_{\widetilde{\Omega}}(\widetilde{M} - \widetilde{X})\|_F^2 + 2\lambda \|(X; \widetilde{X})\|_*, \quad (5)$$

where $\theta = (X, \widetilde{X})$. Let $\widehat{M}(\theta)$ be the matrix estimated from the learnt model $\theta$. For example, for Eq. (4) we have $\widehat{M}(\theta) = UV^\top$, and for Eq. (5) we have $\widehat{M}(\theta) = X$. The goal of the attacker is to find optimal malicious users $\widetilde{M}^*$ such that

$$\widetilde{M}^* \in \arg\max_{\widetilde{M} \in \mathcal{M}} R\left(\widehat{M}(\theta_\lambda(\widetilde{M}; M)), M\right). \quad (6)$$

Here $\mathcal{M} = \{\widetilde{M} : |\widetilde{\Omega}_i| \leq B,\ \|\widetilde{M}\|_{\max} \leq \Lambda\}$ is the set of all feasible poisoning attacks discussed earlier in this section, and $R(\widehat{M}, M)$ denotes the attacker's utility for diverting the collaborative filtering algorithm to predict $\widehat{M}$ on an original data set $M$, with the help of the few malicious users $\widetilde{M}$. Below we list several typical attacker utilities:

Availability attack: the attacker wants to maximize the error of the collaborative filtering system, and eventually render the system useless. Suppose $\overline{M}$ is the prediction of the collaborative filtering system without data poisoning attacks (note that when the collaborative filtering algorithm and its parameters are set, $\overline{M}$ is a function of the observed entries $R_\Omega(M)$). The utility function is then defined as the total amount of perturbation of predictions between $\overline{M}$ and $\widehat{M}$ (predictions after poisoning attacks) on unseen entries $\Omega^C$:

$$R^{\mathrm{av}}(\widehat{M}, M) = \|R_{\Omega^C}(\widehat{M} - \overline{M})\|_F^2. \quad (7)$$

Integrity attack: in this model the attacker wants to boost (or reduce) the popularity of a subset of items. Suppose $J_0 \subseteq [n]$ is the subset of items the attacker is interested in and $w : J_0 \to \mathbb{R}$ is a weight vector pre-specified by the attacker. The utility function is

$$R^{\mathrm{in}}_{J_0, w}(\widehat{M}, M) = \sum_{i=1}^{m} \sum_{j \in J_0} w(j)\, \widehat{M}_{ij}. \quad (8)$$

Hybrid attack: a hybrid loss function can also be defined:

$$R^{\mathrm{hybrid}}_{J_0, w, \mu}(\widehat{M}, M) = \mu_1\, R^{\mathrm{av}}(\widehat{M}, M) + \mu_2\, R^{\mathrm{in}}_{J_0, w}(\widehat{M}, M), \quad (9)$$

where $\mu = (\mu_1, \mu_2)$ are coefficients that trade off the availability and integrity attack objectives. In addition, $\mu_1$ could be negative, which models the case when the attacker wants to leave a "light trace": the attacker wants to make his item more popular while making the other recommendations of the system less perturbed to avoid detection.

4 Computing Optimal Attack Strategies

We describe practical algorithms to solve the optimization problem in Eq. (6) for the optimal attack strategy $\widetilde{M}^*$ that maximizes the attacker's utility. We first consider the alternating minimization formulation in Eq. (4) and derive a projected gradient ascent method that solves for the corresponding optimal attack strategy. Similar derivations are then extended to the nuclear norm minimization formulation in Eq. (5). Finally, we discuss how to design malicious users that mimic normal user behavior in order to avoid detection.

4.1 Attacking Alternating Minimization

We use the projected gradient ascent (PGA) method for solving the optimization problem in Eq. (6) with respect to the alternating minimization formulation in Eq. (4): in iteration $t$ we update $\widetilde{M}^{(t)}$ as follows:

$$\widetilde{M}^{(t+1)} = \mathrm{Proj}_{\mathcal{M}}\left(\widetilde{M}^{(t)} + s_t \cdot \nabla_{\widetilde{M}} R(\widehat{M}, M)\right), \quad (10)$$

where $\mathrm{Proj}_{\mathcal{M}}(\cdot)$ is the projection operator onto the feasible region $\mathcal{M}$ and $s_t$ is the step size in iteration $t$. Note that the estimated matrix $\widehat{M}$ depends on the model $\theta_\lambda(\widetilde{M}; M)$ learnt on the joint data matrix, which further depends on the malicious users $\widetilde{M}$. Since the constraint set $\mathcal{M}$ is highly non-convex, we generate $B$ items uniformly at random for each malicious user to rate. The $\mathrm{Proj}_{\mathcal{M}}(\cdot)$ operator then reduces to projecting each malicious user's rating vector onto an $\ell_\infty$ ball of diameter $\Lambda$, which can be easily evaluated by truncating all entries in $\widetilde{M}$ at the level of $\pm\Lambda$.
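A minimal sketch of this projection step; the function and argument names are ours.

```python
# Sketch of Proj_M: the item support of each malicious user is fixed by the
# random initialization, so the projection reduces to clipping ratings to
# [-Lambda, Lambda] and zeroing entries outside the chosen support.
import numpy as np

def project_feasible(M_mal, item_mask, Lam):
    """M_mal: m' x n malicious ratings; item_mask: m' x n boolean support
    with at most B True entries per row; Lam: rating magnitude bound."""
    return np.clip(M_mal, -Lam, Lam) * item_mask
```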
Note that the estimated matrix M f data matrix, which further depends on the malicious users M. Since the constraint set M is highly non-convex, we generate B items uniformly at random for each malicious user to rate. The ProjM (?) operator then reduces to projecting each malicious users? rating vector onto an `? ball of diameter ?, f at the level of ??. which can be easily evaluated by truncating all entries in M c We next show how to (approximately) compute ?M f R(M, M). This is challenging because one of the arguments in the loss function involves an implicit optimization problem. We first apply chain rule to arrive at c f c ?M f R(M, M) = ?M f ?? (M; M)?? R(M, M). (11) The second gradient (with respect to ?) is easy to evaluate, as all loss functions mentioned in the c M) is deferred to previous section are smooth and differentiable. Detailed derivation of ?? R(M, Appendix A. On the other hand, the first gradient term term is much harder to evaluate because ?? (?) is an optimization procedure. Inspired by [7, 8, 9], we exploit the KKT conditions of the optimization f problem ?? (?) to approximately compute ?M f ?? (M; M). More specifically, the optimal solution e ? = (U, U, V) of Eq. (4) satisfies ? U ui = X j??i (Mij ? u> i v j )v j ; 2 Note that when the collaborative filtering algorithm and its parameters are set, M is a function of observed entries R? (M). 4 f via PGA Algorithm 1 Optimizing M 1: Input: Original partially observed m ? n data matrix M, algorithm regularization parameter ?, attack budget parameters ?, B and ?, attacker?s utility function R, step size {st }? t=1 . f (0) ? M with both ratings and rated items uniformly sampled at random; t = 0. 2: Initialization: random M f (t) does not converge do 3: while M f (t) ; M). 4: Compute the optimal solution ?? (M c 5: Compute gradient ?M f R(M, M) using Eq. (10). f (t+1) = Proj (M f (t) + st ? f R). 6: Update: M M M 7: t ? t + 1. 8: end while f (t) . 9: Output: m0 ? n malicious matrix M ?i = ?U u ?V v j = X ei j?? X i??0j f ij ? u ?> (M i v j )v j ; (Mij ? u> i v j )ui + X e0 i?? j f ij ? u ?> (M ui , i v j )? e and v j is the jth row (also of dimension k) ? i are the ith rows (of dimension k) in U or U where ui , u ? i , v j } can be expressed as functions of the original and malicious data in V. Subsequently, {ui , u f Using the fact that (a> x)a = (aa> )x and M does not change with M, f we matrices M and M. obtain  ?1 f f ?ui (M) = 0; f ij ?M (i) (j)  ?1 f ?v j (M) (j) = ? V Ik + ?V ui . f ij ?M Here ?U and ?V are defined as (i) ?U = ? i (M) ?u (i) = ?U Ik + ?U f ij ?M X (j) vj v> j , ?V = X vj ; ui u> i . (12) e0 i??0j ?? j ei j??i ?? A framework of the proposed optimization algorithm is described in Algorithm 1. 4.2 Attacking Nuclear Norm Minimization We extend the projected gradient ascent algorithm in Sec. 4.1 to compute optimal attack strategies for the nuclear norm minimization formulation in Eq. (5). Since the objective in Eq. (5) is convex, e can be obtained by conventional convex optimization the global optimal solution ? = (X, X) procedures such as proximal gradient descent (a.k.a. singular value thresholding [3] for nuclear norm e is low rank due to the nuclear norm minimization). In addition, the resulting estimation (X; X) e e V, ?) as an alternative penalty [2]. Suppose (X; X) has rank ? ? min(m, n). We use ?0 = (U, U, characterization of the learnt model with a reduced number of parameters. Here X = U?V> and > e = U?V e e that is, U ? Rm?? , U e ? Rm0 ?? , X are singular value decompositions of X and X; V ? 
4.2 Attacking Nuclear Norm Minimization

We extend the projected gradient ascent algorithm of Sec. 4.1 to compute optimal attack strategies for the nuclear norm minimization formulation in Eq. (5). Since the objective in Eq. (5) is convex, the global optimal solution $\theta = (X, \widetilde{X})$ can be obtained by conventional convex optimization procedures such as proximal gradient descent (a.k.a. singular value thresholding [3] for nuclear norm minimization). In addition, the resulting estimate $(X; \widetilde{X})$ is low rank due to the nuclear norm penalty [2]. Suppose $(X; \widetilde{X})$ has rank $\rho \leq \min(m, n)$. We use $\theta' = (U, \widetilde{U}, V, \Sigma)$ as an alternative characterization of the learnt model with a reduced number of parameters. Here $X = U \Sigma V^\top$ and $\widetilde{X} = \widetilde{U} \Sigma V^\top$ are singular value decompositions of $X$ and $\widetilde{X}$; that is, $U \in \mathbb{R}^{m \times \rho}$, $\widetilde{U} \in \mathbb{R}^{m' \times \rho}$, $V \in \mathbb{R}^{n \times \rho}$ have orthonormal columns and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_\rho)$ is a non-negative diagonal matrix.

To compute the gradient $\nabla_{\widetilde{M}} R(\widehat{M}, M)$, we again apply the chain rule to decompose the gradient into two parts:

$$\nabla_{\widetilde{M}} R(\widehat{M}, M) = \nabla_{\widetilde{M}} \theta'_\lambda(\widetilde{M}; M)\, \nabla_{\theta'} R(\widehat{M}, M). \quad (13)$$

Similar to Eq. (11), the second gradient term $\nabla_{\theta'} R(\widehat{M}, M)$ is relatively easier to evaluate. Its derivation details are deferred to the Appendix. In the remainder of this section we focus on the computation of the first gradient term, which involves partial derivatives of $\theta' = (U, \widetilde{U}, V, \Sigma)$ with respect to the malicious users $\widetilde{M}$.

We begin with the KKT condition at the optimal solution $\theta'$ of Eq. (5). Unlike the alternating minimization formulation, the nuclear norm function $\|\cdot\|_*$ is not everywhere differentiable. As a result, the KKT condition involves the subdifferential of the nuclear norm function $\partial\|\cdot\|_*$:

$$R_{\Omega, \widetilde{\Omega}}\left([M; \widetilde{M}] - [X; \widetilde{X}]\right) \in \lambda\, \partial \|[X; \widetilde{X}]\|_*. \quad (14)$$

Here $[X; \widetilde{X}]$ is the concatenated $(m + m') \times n$ matrix of $X$ and $\widetilde{X}$. The subdifferential of the nuclear norm function $\partial\|\cdot\|_*$ is also known [2]:

$$\partial \|X\|_* = \left\{ UV^\top + W \,:\, U^\top W = WV = 0,\ \|W\|_2 \leq 1 \right\},$$

where $X = U \Sigma V^\top$ is the singular value decomposition of $X$. Suppose $\{u_i\}$, $\{\widetilde{u}_i\}$ and $\{v_j\}$ are rows of $U$, $\widetilde{U}$, $V$, and $W = \{w_{ij}\}$. We can then re-formulate the KKT condition in Eq. (14) as follows:

$$\forall (i, j) \in \Omega: \quad M_{ij} = u_i^\top (\Sigma + \lambda I_\rho)\, v_j + \lambda w_{ij};$$
$$\forall (i, j) \in \widetilde{\Omega}: \quad \widetilde{M}_{ij} = \widetilde{u}_i^\top (\Sigma + \lambda I_\rho)\, v_j + \lambda \widetilde{w}_{ij}.$$

From here we derive $\nabla_{\widetilde{M}} \theta' = \nabla_{\widetilde{M}}(u, \widetilde{u}, v, \Sigma)$; the full derivation is deferred to the extended version of the paper at http://arxiv.org/abs/1608.08182.
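For completeness, a sketch of the singular value thresholding step [3] referenced above, i.e., the proximal operator of the nuclear norm; the function name is ours.

```python
# Sketch of singular value thresholding: prox of lam * ||.||_* shrinks each
# singular value by lam and drops those that fall to zero.
import numpy as np

def svt(A, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - lam, 0.0)
    return (U * s) @ Vt   # equivalent to U @ diag(s) @ Vt
```

Iterating a gradient step on the squared loss followed by this prox step solves Eq. (5) to the (near) global optimum that the gradient derivation assumes.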
R(M, Z (15) c M) = R(M(? c ? (M; f M)), M) is one of the attacker utility functions defined in Sec. 3, where R(M, Z is a normalization constant and ? > 0 is a tuning parameter that trades off attack performance and 3 http://arxiv.org/abs/1608.08182 6 (a) (b) (c) (d) Figure 1: RMSE/Average ratings for alternating minimization with different percentage of malicious profiles; (a) ?1 = 1, ?2 = 0, (b) ?1 = 1, ?2 = ?1, (c)?1 = 0, ?2 = 1, (d)?1 = ?1, ?2 = 1. f toward its prior, which makes the resulting detection avoidance. A small ? shifts the posterior of M attack strategy less effective but harder to detect, and vice versa. f can be Given both prior and likelihood functions, an effective detection-avoiding attack strategy M obtained by sampling from its posterior distribution: f p(M|M) = f f p0 (M)p(M| M)/p(M) ? ? ? m0 X n X f ij ? ?j )2 ( M c M)? . exp ?? + ?R(M, 2?j2 i=1 j=1 (16) Posterior sampling of Eq. (16) is clearly intractable due to the implicit and complicated dependency c on the malicious data M, f that is, M c = M(? c ? (M; f M))). To circumvent of the estimated matrix M this problem, we apply Stochastic Gradient Langevin Dynamics (SGLD, [10]) to approximately f from its posterior distribution in Eq. (16). More specfically, the SGLD algorithm iteratively sample M f (t) }t?0 and in iteration t the new sample M f (t+1) is computes a sequence of posterior samples {M computed as   f (t+1) = M f (t) + st ? f log p(M|M) f M + ?t , M 2 (17) where {st }t?0 are step sizes and ?t ? N (0, st I) are independent Gaussian noises injected at each f SGLD iteration. The gradient ?M f log p(M|M) can be computed as ?1 f f c ?M + ??M f log p(M|M) = ?(M ? ?)? f R(M, M), where ? = diag(?12 , ? ? ? , ?n2 ) and ? is an m0 ? n matrix with ?ij = ?j for i ? [m0 ] and j ? [n]. The c other gradient ?M f R(M, M) can be computed using the procedure in Sections 4.1 and 4.2. Finally, f (t) is projected back onto the feasible set M by selecting B items the sampled malicious matrix M per user with the largest absolute rating and truncating ratings to the level of {??}. A high-level description of the proposed method is given in Algorithm 2. 5 Experimental Results To evaluate the effectiveness of our proposed poisoning attack strategy, we use the publicly available MovieLens dataset which contains 20 millions ratings and 465,000 tag applications applied to 27,000 movies by 138,000 users [23]. We shift the rating range to [?2, 2] for computation convenience. To avoid the ?cold-start? problem, we consider users who have rated at least 20 movies. Two metrics are employed to measure the relative performance of the systems before and after data poisoning attacks: root mean square error (RMSE) for the predicted unseen entries4 and average rating for specific items. We then analyze the tradeoff between attack performance and detection avoidance, which is controled by the ? parameter in Eq. (15). This serves as a guide for how ? should be set in later experiments. We use a paired t-test to compare the distributions of rated items between normal and malicious users. We present the trend of p-value against different values of ? in the extended version of the paper. To strive for a good tradeoff, we set ? = 0.6 at which the p-value stablizes around 0.7 and the poisoning attack performance is not significantly sacrificed. We employ attack models specified in Eq. (9), where the utility parameters ?1 and ?2 balance two different malicious goals (availability and integrity) an attacker wishes to achieve. 
For the integrity qP C c 2 defined as RMSE = (i,j)??C (Mij ? Mij ) /|? |, where M is the prediction of model trained on clean data R? (M) only (i.e., without data poisoning attacks). 4 7 (a) (b) (c) (d) Figure 2: RMSE/Average ratings for nuclear norm minimization with different percentage of malicious profiles; (a) ?1 = 1, ?2 = 0, (b) ?1 = 1, ?2 = ?1, (c)?1 = 0, ?2 = 1, (d)?1 = ?1, ?2 = 1. utility RJin0 ,w , the J0 set contains only one item j0 selected randomly from all items whose average predicted ratings are around 0.8. The weight wj0 is set as wj0 = 2. Figure 1 (a) (b) plots the RMSE after data poisoning attacks. When ?1 = 1, ?2 = 0, the attacker is interested in increasing the RMSE of the collaborative filtering system and hence reducing the system?s availability. On the other hand, when ?1 = 1, ?2 = ?1 the attacker wishes to increase RMSE while at the same time keeping the rating of specific items (j0 ) as low as possible for certain malicious purposes. Figure 1 (b) shows that when the attackers consider to both objectives (?1 = 1, ?2 = ?1), the RMSE after poisoning is slightly lower than that if only availability is targeted (?1 = 1, ?2 = 0). In addition, the projected gradient ascent (PGA) strategy generates the largest RMSE score compared with the other methods. However, PGA requires malicious users to rate each item uniformly at random, which might expose the malicious profiles to an informed defender. More specifically, the paired t-test on those malicious profiles produced by PGA rejects the null hypothesis that the items rated by the attacker strategies are the same as those obtained from normal users (p < 0.05). In contrast, the SGLD method leads to slightly worse attacker utility but generates malicious users that are hard to distinguish from the normal users (for example, the paired t-test leads to inconclusive p-values (larger than 0.7) with ? = 0.6. Finally, both PGA and SGLD result in higher attacker utility compared to uniform attacks, where both ratings and rated items are sampled uniformly at random for malicious profiles. Apart from the RMSE scores, we also plot ratings of specific items against percentage of malicious profiles in Figure 1 (c) (d). We consider two additional attack utility settings: ?1 = 0, ?2 = 1, in which the attacker wishes to push the ratings of some particular items (specified in w and J0 of Rin ) as high as possible; and ?1 = ?1, ?2 = 1, where the attacker also wants to leave a ?light trace" by reducing the impact on the entire system resulted from malicious activities. It is clear that targeted attackes (both PGA and SGLD) are indeed more effective at manipulating ratings of specific items for integrity attacks. We also plot RMSE/Average ratings against malicious user percentage in Figure 2 for the nuclear norm minimization under similar settings based on a subset of 1000 users and 1700 movies (items), since it is more computationally expensive than alternating minimization. In general, we observe similar behavior of both RMSE/Average ratings under different attacking models ?1 , ?2 with alternating minimization. 6 Discussion and Concluding Remarks Our ultimate goal for the poisoning attack analysis is to develop possible defensive strategies based on the careful analysis of adversarial behaviors. Since the poisoning data is optimized based on the attacker?s malicious objectives, the correlations among features within a feature vector may change to appear different from normal instances. 
Therefore, tracking and detecting deviations in the feature correlations and other accuracy metrics can be one potential defense. Additionally, the defender can also apply combinational models or sampling strategies, such as bagging, to reduce the influence of poisoning attacks.

Acknowledgments

This research was partially supported by the NSF (CNS-1238959, IIS-1526860), ONR (N00014-15-1-2621), ARO (W911NF-16-1-0069), AFRL (FA8750-14-2-0180), Sandia National Laboratories, and a Symantec Labs Graduate Research Fellowship.

References

[1] Jun Wang, Arjen de Vries, and Marcel Reinders. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In SIGIR, 2006.
[2] Emmanuel Candès and Ben Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[3] Jian-Feng Cai, Emmanuel Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
[4] Bamshad Mobasher, Robin Burke, Runa Bhaumik, and Chad Williams. Effective attack models for shilling item-based collaborative filtering systems. In Proceedings of the 2005 WebKDD Workshop, held in conjunction with ACM SIGKDD 2005, 2005.
[5] Michael P. O'Mahony, Neil J. Hurley, and Guenole C. M. Silvestre. Promoting recommendations: An attack on collaborative filtering. In Database and Expert Systems Applications, pages 494-503. Springer, 2002.
[6] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, 2013.
[7] Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. Is feature selection secure against training data poisoning? In ICML, 2015.
[8] Shike Mei and Xiaojin Zhu. The security of latent Dirichlet allocation. In AISTATS, 2015.
[9] Shike Mei and Xiaojin Zhu. Using machine teaching to identify optimal training-set attacks on machine learners. In AAAI, 2015.
[10] Max Welling and Yee W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681-688, 2011.
[11] Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99-108. ACM, 2004.
[12] Daniel Lowd and Christopher Meek. Adversarial learning. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 641-647. ACM, 2005.
[13] Bo Li and Yevgeniy Vorobeychik. Feature cross-substitution in adversarial classification. In Advances in Neural Information Processing Systems, pages 2087-2095, 2014.
[14] Bo Li and Yevgeniy Vorobeychik. Scalable optimization of randomized operational decisions in adversarial classification settings. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 599-607, 2015.
[15] Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. Doug Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pages 16-25. ACM, 2006.
[16] Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In ICML, 2012.
[17] Scott Alfeld, Xiaojin Zhu, and Paul Barford. Data poisoning attacks against autoregressive models. In AAAI, 2016.
DISCO Nets: DISsimilarity COefficient Networks

Diane Bouchacourt (University of Oxford, diane@robots.ox.ac.uk), M. Pawan Kumar (University of Oxford, pawan@robots.ox.ac.uk), Sebastian Nowozin (Microsoft Research Cambridge, sebastian.nowozin@microsoft.com)

Abstract

We present a new type of probabilistic model which we call DISsimilarity COefficient Networks (DISCO Nets). DISCO Nets allow us to efficiently sample from a posterior distribution parametrised by a neural network. During training, DISCO Nets are learned by minimising the dissimilarity coefficient between the true distribution and the estimated distribution. This allows us to tailor the training to the loss related to the task at hand. We empirically show that (i) by modeling uncertainty on the output value, DISCO Nets outperform equivalent non-probabilistic predictive networks and (ii) DISCO Nets accurately model the uncertainty of the output, outperforming existing probabilistic models based on deep neural networks.

1 Introduction

We are interested in the class of problems that require the prediction of a structured output $y \in \mathcal{Y}$ given an input $x \in \mathcal{X}$. Complex applications often have large uncertainty on the correct value of $y$. For example, consider the task of hand pose estimation from depth images, where one wants to accurately estimate the pose $y$ of a hand given a depth image $x$. The depth image often has some occlusions and missing depth values, and this results in some uncertainty on the pose of the hand. It is, therefore, natural to use probabilistic models that are capable of representing this uncertainty. Often, the capacity of the model is restricted and cannot represent the true distribution perfectly. In this case, the choice of the learning objective influences final performance. Similar to Lacoste-Julien et al. [12], we argue that the learning objective should be tailored to the evaluation loss in order to obtain the best performance with respect to this loss. In detail, we denote by $\Delta_{\text{training}}$ the loss function employed during model training, and by $\Delta_{\text{task}}$ the loss employed to evaluate the model's performance.

We present a simple example to illustrate the point made above. We consider a data distribution that is a mixture of two bidimensional Gaussians. We now consider two models to capture the data probability distribution. Each model is able to represent a bidimensional Gaussian distribution with diagonal covariance, parametrised by $(\mu_1, \mu_2, \sigma_1, \sigma_2)$. In this case, neither of the models will be able to recover the true data distribution, since they do not have the ability to represent a mixture of Gaussians. In other words, we cannot avoid model error, similarly to the real data scenario. Each model uses its own training loss $\Delta_{\text{training}}$. Model A employs a loss that emphasises the first dimension of the data, specified for $x = (x_1, x_2), x' = (x'_1, x'_2) \in \mathbb{R}^2$ by $\Delta_A(x - x') = (10(x_1 - x'_1)^2 + 0.1(x_2 - x'_2)^2)^{1/2}$. Model B does the opposite and employs the loss function $\Delta_B(x - x') = (0.1(x_1 - x'_1)^2 + 10(x_2 - x'_2)^2)^{1/2}$. Each model performs a grid search over the best parameter values for $(\mu_1, \mu_2, \sigma_1, \sigma_2)$. Figure 1 shows the contours of the Mixture of Gaussians distribution of the data (in black), and the contour of the Gaussian fitted by each model (in red and green). The detailed setting of this example is available in the supplementary material.
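As a sketch of this toy experiment (our own illustration: the grid ranges, sample sizes and mixture parameters are assumptions, not the paper's exact protocol), the following code fits each model by grid search against its training loss and then evaluates it with a different task loss:

```python
# Minimal sketch of the toy experiment above; hyper-parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def delta(x, xp, w1, w2):
    # Weighted Euclidean loss; (w1, w2) = (10, 0.1) gives Delta_A,
    # (0.1, 10) gives Delta_B.
    return np.sqrt(w1 * (x[..., 0] - xp[..., 0]) ** 2
                   + w2 * (x[..., 1] - xp[..., 1]) ** 2)

# Data: mixture of two bidimensional Gaussians.
data = np.concatenate([rng.normal([-2, 0], 0.5, (500, 2)),
                       rng.normal([+2, 1], 0.5, (500, 2))])

def expected_loss(mu, sigma, w1, w2, n=500):
    # Monte-Carlo estimate of the expected loss between data and model samples.
    samples = rng.normal(mu, sigma, (n, 2))
    return delta(data[:, None, :], samples[None, :, :], w1, w2).mean()

def fit(w1, w2):
    # Grid search over (mu1, mu2, sigma1, sigma2), as in the toy example.
    grid = [(m1, m2, s1, s2)
            for m1 in np.linspace(-2, 2, 9) for m2 in np.linspace(-1, 2, 7)
            for s1 in (0.5, 1.0, 2.0) for s2 in (0.5, 1.0, 2.0)]
    return min(grid, key=lambda p: expected_loss(p[:2], p[2:], w1, w2))

model_a = fit(10, 0.1)   # trained with Delta_A
model_b = fit(0.1, 10)   # trained with Delta_B
# Evaluating with Delta_B: the model trained with Delta_B should score lower.
print(expected_loss(model_a[:2], model_a[2:], 0.1, 10),
      expected_loss(model_b[:2], model_b[2:], 0.1, 10))
```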
Table 1: $\Delta_{\text{task}}$ ± SEM (standard error of the mean) with respect to the $\Delta_{\text{training}}$ employed. Evaluation is done on the test set.

                                    Delta_task = Delta_A    Delta_task = Delta_B
Delta_training = Delta_A            11.6 ± 0.287            13.7 ± 0.331
Delta_training = Delta_B            12.1 ± 0.305            11.0 ± 0.257

Figure 1: Contour lines of the Gaussian distribution fitted by each model on the Mixture of Gaussians data distribution. Best viewed in color.

As expected, the fitted Gaussian distributions differ according to the $\Delta_{\text{training}}$ employed. Table 1 shows that the loss on the test set, evaluated with $\Delta_{\text{task}}$, is minimised if $\Delta_{\text{training}} = \Delta_{\text{task}}$. This simple example illustrates the advantage of being able to tailor the model's training objective function so that $\Delta_{\text{training}} = \Delta_{\text{task}}$. This is in contrast to the commonly employed learning objectives we present in Section 2, which are agnostic of the evaluation loss. In order to alleviate the aforementioned deficiency of the state of the art, we introduce DISCO Nets, a new class of probabilistic model. DISCO Nets represent $P$, the true posterior distribution of the data, with a distribution $Q$ parametrised by a neural network. We design a learning objective based on a dissimilarity coefficient between $P$ and $Q$. The dissimilarity coefficient we employ was first introduced by Rao [23] and is defined for any non-negative symmetric loss function. Thus, any such loss can be incorporated in our setting, allowing the user to tailor DISCO Nets to his or her needs. Finally, contrarily to the existing probabilistic models presented in Section 2, DISCO Nets do not require any specific architecture or training procedure, making them an efficient and easy-to-use class of model.

2 Related Work

Deep neural networks, and in particular Convolutional Neural Networks (CNNs), are comprised of several convolutional layers, followed by one or more fully connected (dense) layers, interleaved by non-linear function(s) and (optionally) pooling. Recent probabilistic models use CNNs to represent non-linear functions of the data. We observe that such models separate into two types. The first type of model does not explicitly compute the probability distribution of interest. Rather, these models allow the user to sample from this distribution by feeding the CNN with some noise $z$. Among such models, Generative Adversarial Networks (GAN), presented in Goodfellow et al. [7], are very popular and have been used in several computer vision applications, for example in Denton et al. [1], Radford et al. [22], Springenberg [25] and Yan et al. [28]. A GAN model consists of two networks, simultaneously trained in an adversarial manner. A generative model, referred to as the Generator G, is trained to replicate the data from noise, while an adversarial discriminative model, referred to as the Discriminator D, is trained to identify whether a sample comes from the true data or from G. The GAN training objective is based on a minimax game between the two networks and approximately optimizes a Jensen-Shannon divergence. However, as mentioned in Goodfellow et al. [7] and Radford et al. [22], GAN models require very careful design of the networks' architecture. Their training procedure is tedious and tends to oscillate. GAN models have been generalized to conditional GAN (cGAN) in Mirza and Osindero [16], where some additional input information can be fed to the Generator and the Discriminator. For example, in Mirza and Osindero [16] a cGAN model generates tags corresponding to an image. Gauthier [4] applies cGAN to face generation.
Reed et al. [24] propose to generate images of flowers with a cGAN model, where the conditional information is a word description of the flower to generate.¹ While the application of cGAN is very promising, little quantitative evaluation has been done. Furthermore, cGAN models suffer from the same difficulties we mentioned for GAN. Another line of work has developed towards the use of statistical hypothesis testing to learn probabilistic models. In Dziugaite et al. [2] and Li et al. [14], the authors propose to train generative deep networks with an objective function based on the Maximum Mean Discrepancy (MMD) criterion. The MMD method (see Gretton et al. [8, 9]) is a statistical hypothesis test assessing whether two probability distributions are similar. As mentioned in Dziugaite et al. [2], the MMD test can be seen as playing the role of an adversary.

¹ At the time of writing, we do not have access to the full paper of Reed et al. [24] and therefore cannot take advantage of this work in our experimental comparison.

The second type of model approximates intractable posterior distributions with the use of variational inference. The Variational Auto-Encoder (VAE) presented in Kingma and Welling [10] is composed of a probabilistic encoder and a probabilistic decoder. The probabilistic encoder is fed with the input $x \in \mathcal{X}$ and produces a posterior distribution $P(z|x)$ over the possible values of noise $z$ that could have generated $x$. The probabilistic decoder learns to map the noise $z$ back to the data space $\mathcal{X}$. The training of VAE uses an objective function based on a Kullback-Leibler divergence. VAE and GAN models have been combined in Makhzani et al. [15], where the authors propose to regularise autoencoders with an adversarial network. The adversarial network ensures that the posterior distribution $P(z|x)$ matches an arbitrary prior $P(z)$.

In hand pose estimation, imagine the user wants to obtain accurate positions of the thumb and the index finger but does not need accurate locations of the other fingers. The task loss $\Delta_{\text{task}}$ might be based on a weighted L2-norm between the predicted and the ground-truth poses, with high weights on the thumb and the index. Existing probabilistic models cannot be tailored to task-specific losses, and we propose the DISsimilarity COefficient Networks (DISCO Nets) to alleviate this deficiency.

3 DISCO Nets

We begin the description of our model by specifying how it can be used to generate samples from the posterior distribution, and how the samples can in turn be employed to provide a pointwise estimate. In the subsequent subsection, we describe how to estimate the parameters of the model.

3.1 Prediction

Sampling. A DISCO Net consists of several convolutional and dense layers (interleaved by non-linear function(s) and possibly pooling) and takes as input a pair $(x, z) \in \mathcal{X} \times \mathcal{Z}$, where $x$ is input data and $z$ is some random noise. Given one pair $(x, z)$, the DISCO Net produces a value for the output $y$. In the example of hand pose estimation, the input depth image $x$ is fed to the convolutional layers. The output of the last convolutional layer is flattened and concatenated with a noise sample $z$. The resulting vector is fed to several dense layers, and the last dense layer outputs a pose $y$. From a single depth image $x$, by using different noise samples, the DISCO Net produces different pose candidates for the depth image. This process is illustrated in Figure 2. Importantly, DISCO Nets are flexible in the choice of the architecture.
For example, the noise could be concatenated at any stage of the network, including at the start.

Figure 2: For a single depth image $x$, using 3 different noise samples $(z_1, z_2, z_3)$, DISCO Nets output 3 different candidate poses $(y_1, y_2, y_3)$ (shown superimposed on the depth image). The depth image is from the NYU Hand Pose Dataset of Tompson et al. [27], preprocessed as in Oberweger et al. [17]. Best viewed in color.

We denote by $Q$ the distribution that is parametrised by the DISCO Net's neural network. For a given input $x$, DISCO Nets provide the user with samples $y$ drawn from $Q(y|x)$ without requiring the expensive computation of the (often intractable) partition function. In the remainder of the paper we consider $x \in \mathbb{R}^{d_x}$, $y \in \mathbb{R}^{d_y}$ and $z \in \mathbb{R}^{d_z}$.

Pointwise Prediction. In order to obtain a single prediction $y$ for a given input $x$, DISCO Nets use the principle of Maximum Expected Utility (MEU), similarly to Premachandran et al. [21]. The prediction $\hat{y}_{\text{task}}$ maximises the expected utility, or rather minimises the expected task-specific loss $\Delta_{\text{task}}$, estimated using the sampled candidates. Formally, the prediction is made as follows:

$$\hat{y}_{\text{task}} = \operatorname*{argmax}_{k \in [1,K]} \mathrm{EU}(y_k) = \operatorname*{argmin}_{k \in [1,K]} \sum_{k'=1}^{K} \Delta_{\text{task}}(y_k, y_{k'}) \qquad (1)$$

where $(y_1, \ldots, y_K)$ are the candidate outputs sampled for the single input $x$. Details on the MEU method are in the supplementary material.

3.2 Learning DISCO Nets

Objective Function. We want DISCO Nets to accurately model the true probability $P(y|x)$ via $Q(y|x)$. In other words, $Q(y|x)$ should be as similar as possible to $P(y|x)$. This similarity is evaluated with respect to the loss specific to the task at hand. Given any non-negative symmetric loss function between two outputs $\Delta(y, y')$ with $(y, y') \in \mathcal{Y} \times \mathcal{Y}$, we employ a diversity coefficient that is the expected loss between two samples drawn randomly from the two distributions. Formally, the diversity coefficient is defined as:

$$\mathrm{DIV}_\Delta(P, Q, \mathcal{D}) = \mathbb{E}_{x \sim \mathcal{D}(x)}\big[\mathbb{E}_{y \sim P(y|x)}[\mathbb{E}_{y' \sim Q(y'|x)}[\Delta(y, y')]]\big] \qquad (2)$$

Intuitively, we should minimise $\mathrm{DIV}_\Delta(P, Q, \mathcal{D})$ so that $Q(y|x)$ is as similar as possible to $P(y|x)$. However, there is uncertainty on the output $y$ to predict for a given $x$. In other words, $P(y|x)$ is diverse, and $Q(y|x)$ should be diverse as well. Thus we encourage $Q(y|x)$ to provide sample outputs, for a given $x$, that are diverse, by minimising the following dissimilarity coefficient:

$$\mathrm{DISC}_\Delta(P, Q, \mathcal{D}) = \mathrm{DIV}_\Delta(P, Q, \mathcal{D}) - \gamma\,\mathrm{DIV}_\Delta(Q, Q, \mathcal{D}) - (1 - \gamma)\,\mathrm{DIV}_\Delta(P, P, \mathcal{D}) \qquad (3)$$

with $\gamma \in [0, 1]$. The dissimilarity $\mathrm{DISC}_\Delta(P, Q, \mathcal{D})$ is the difference between the diversity between $P$ and $Q$, and an affine combination of the diversity of each distribution, given $x \sim \mathcal{D}(x)$. These coefficients were introduced by Rao [23] with $\gamma = 1/2$ and used for latent variable models by Kumar et al. [11]. We do not need to consider the term $\mathrm{DIV}_\Delta(P, P, \mathcal{D})$, as it is a constant in our problem, and thus the DISCO Nets objective function is defined as follows:

$$F = \mathrm{DIV}_\Delta(P, Q, \mathcal{D}) - \gamma\,\mathrm{DIV}_\Delta(Q, Q, \mathcal{D}) \qquad (4)$$

When minimising $F$, the term $-\gamma\,\mathrm{DIV}_\Delta(Q, Q, \mathcal{D})$ encourages $Q(y|x)$ to be diverse. The value of $\gamma$ balances between the two goals of $Q(y|x)$, namely providing accurate outputs while being diverse.

Optimisation. Let us consider a training dataset composed of $N$ input-output example pairs $\mathcal{D} = \{(x_n, y_n), n = 1..N\}$. In order to train DISCO Nets, we need to compute the objective function of equation (4). We do not have knowledge of the true probability distributions $P(y, x)$ and $P(x)$. To overcome this deficiency, we construct estimators of each diversity term $\mathrm{DIV}_\Delta(P, Q)$ and $\mathrm{DIV}_\Delta(Q, Q)$.
First, we take an empirical distribution of the data, that is, we take ground-truth pairs $(x_n, y_n)$. We then estimate each distribution $Q(y|x_n)$ by sampling $K$ outputs from our model for each $x_n$. This gives us an unbiased estimate of each diversity term, defined as:

$$\widehat{\mathrm{DIV}}_\Delta(P, Q, \mathcal{D}) = \frac{1}{N}\sum_{n=1}^{N}\frac{1}{K}\sum_{k=1}^{K}\Delta(y_n, G(z_k, x_n; \theta)) \qquad (5)$$

$$\widehat{\mathrm{DIV}}_\Delta(Q, Q, \mathcal{D}) = \frac{1}{N}\sum_{n=1}^{N}\frac{1}{K(K-1)}\sum_{k=1}^{K}\sum_{k'=1, k' \neq k}^{K}\Delta(G(z_k, x_n; \theta), G(z_{k'}, x_n; \theta))$$

We have an unbiased estimate of the DISCO Nets objective function of equation (4):

$$\widehat{F}(\theta, \gamma) = \widehat{\mathrm{DIV}}_\Delta(P, Q, \mathcal{D}) - \gamma\,\widehat{\mathrm{DIV}}_\Delta(Q, Q, \mathcal{D}) \qquad (6)$$

where $y_k = G(z_k, x_n; \theta)$ is a candidate output sampled from DISCO Nets for $(x_n, z_k)$, and $\theta$ are the parameters of DISCO Nets. It is important to note that the second term of equation (6) sums over $k$ and $k' \neq k$ to obtain an unbiased estimate; therefore we compute the loss between pairs of different samples $G(z_k, x_n; \theta)$ and $G(z_{k'}, x_n; \theta)$. The parameters $\theta$ are learned by gradient descent. Algorithm 1 shows the training of DISCO Nets. In steps 4 and 5 of Algorithm 1, we draw $K$ random noise vectors $(z_{n,1}, \ldots, z_{n,K})$ per input example $x_n$, and generate $K$ candidate outputs $G(z_{n,k}, x_n; \theta)$ per input. This allows us to compute an unbiased estimate of the gradient in step 7. For clarity, in the remainder of the paper we do not explicitly write the parameters $\theta$ and write $G(z_k, x_n)$.

Algorithm 1: DISCO Nets training algorithm.
1: for t = 1, ..., T epochs do
2:   Sample a minibatch of b training example pairs {(x_1, y_1), ..., (x_b, y_b)}.
3:   for n = 1, ..., b do
4:     Sample K random noise vectors (z_{n,1}, ..., z_{n,K}) for training example x_n
5:     Generate K candidate outputs G(z_{n,k}, x_n; θ), k = 1..K, for training example x_n
6:   end for
7:   Update parameters θ^t ← θ^{t−1} by descending the gradient of equation (6): ∇_θ F̂(θ, γ)
8: end for

3.3 Strictly Proper Scoring Rules

Scoring Rule for Learning. A scoring rule $S(Q, P)$, as defined in Gneiting and Raftery [5], evaluates the quality of a predictive distribution $Q$ with respect to a true distribution $P$. When using a scoring rule, one should ensure that it is proper, which means it is maximised when $P = Q$. A scoring rule is said to be strictly proper if $P = Q$ is the unique maximiser of $S$. Hence maximising a proper scoring rule ensures that the model aims at predicting relevant forecasts. Gneiting and Raftery [5] define score divergences corresponding to a proper scoring rule $S$:

$$d(Q, P) = S(P, P) - S(Q, P) \qquad (7)$$

If $S$ is proper, $d$ is a valid non-negative divergence function, with value 0 if (and only if, in the strictly proper case) $Q = P$. For example, the MMD criterion (see Gretton et al. [8, 9]) mentioned in Section 2 is an example of this type of divergence. In our case, any loss $\Delta$ expressed with a universal kernel defines the DISCO Nets objective function as such a divergence (see Zawadzki and Lahaie [29]). For example, by Theorem 5 of Gneiting and Raftery [5], if we take as loss function $\Delta_\beta(y, y') = \|y - y'\|_2^\beta = \big(\sum_{i=1}^{d_y} |y^i - y'^i|^2\big)^{\beta/2}$, with $\beta \in [0, 2]$ excluding 0 and 2, our training objective is (the negative of) a strictly proper scoring rule, that is:

$$\widehat{F}(\theta, \gamma) = \frac{1}{N}\sum_{n=1}^{N}\Big[\frac{1}{K}\sum_{k}\|y_n - G(z_k, x_n)\|_2^\beta - \frac{1}{2}\,\frac{1}{K(K-1)}\sum_{k}\sum_{k' \neq k}\|G(z_k, x_n) - G(z_{k'}, x_n)\|_2^\beta\Big] \qquad (8)$$

This score has been referred to in the literature as the Energy Score in Gneiting and Raftery [5], Gneiting et al. [6], and Pinson and Tastu [19].
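To make equations (1), (6) and (8) concrete, here is a minimal sketch (our own, not the authors' implementation) of the MEU prediction rule and of the minibatch objective estimate, complementing the pseudocode of Algorithm 1 above; `candidates` stands for the K sampled outputs G(z_k, x) produced by an assumed generator.

```python
# Hedged sketch of eq. (1) and eq. (6) for one input x.
import numpy as np

def meu_prediction(candidates, loss):
    # Eq. (1): pick the candidate minimising its total loss to all candidates.
    K = len(candidates)
    totals = [sum(loss(candidates[k], candidates[j]) for j in range(K))
              for k in range(K)]
    return candidates[int(np.argmin(totals))]

def disco_objective(y_true, candidates, loss, gamma=0.5):
    # candidates: list of K sampled outputs G(z_k, x; theta) for one input x.
    K = len(candidates)
    div_pq = np.mean([loss(y_true, y) for y in candidates])          # eq. (5)
    div_qq = np.sum([loss(candidates[k], candidates[j])
                     for k in range(K) for j in range(K) if j != k])
    div_qq /= K * (K - 1)
    return div_pq - gamma * div_qq                                    # eq. (6)

# With loss = lambda y, yp: np.linalg.norm(y - yp) ** beta, 0 < beta < 2,
# and gamma = 0.5, this objective coincides with the Energy Score of eq. (8).
```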
By employing a (strictly) proper scoring rule, we ensure that our objective function is (only) minimised at the true distribution $P$, and we expect DISCO Nets to generalise better on unseen data. We show below that strictly proper scoring rules are also relevant to assess the quality of the distribution $Q$ captured by the model.

Discriminative power of proper scoring rules. As observed in Fukumizu et al. [3], kernel density estimation (KDE) fails in high-dimensional output spaces. Our goal is to compare the quality of the distribution captured between two models, $Q_1$ and $Q_2$. In our setting, $Q_1$ models $P$ better than $Q_2$ according to the scoring rule $S$ and its associated divergence $d$ if $d(Q_1, P) < d(Q_2, P)$. As noted in Pinson and Tastu [19], $S$ being proper does not ensure $d(Q_1, y) < d(Q_2, y)$ for all observations $y$ drawn from $P$. However, if the scoring rule is strictly proper, this property should be ensured in the neighbourhood of the true distribution.

4 Experiments: Hand Pose Estimation

Given a depth image $x$, which often contains occlusions and missing values, we wish to predict the hand pose $y$. We use the NYU Hand Pose dataset of Tompson et al. [27] to estimate the efficiency of DISCO Nets for this task.

4.1 Experimental Setup

NYU Hand Pose Dataset. The NYU Hand Pose dataset of Tompson et al. [27] contains 8,252 testing and 72,757 training frames of captured RGBD data with ground-truth hand pose information. The training set is composed of images of one person, whereas the testing set gathers samples from two persons. For each frame, the RGBD data from 3 Kinects is provided: a frontal view and 2 side views. In our experiments we use only the depth data from the frontal view. While the ground truth contains J = 36 annotated joints, we follow the evaluation protocol of Oberweger et al. [17, 18] and use the same subset of J = 14 joints. We also perform the same data preprocessing as in Oberweger et al. [17, 18], and extract a fixed-size metric cube around the hand from the depth image. We resize the depth values within the cube to a 128 × 128 patch and normalise them to [−1, 1]. Pixels deeper than the back of the cube and missing depth values are both set to a depth of 1.

Methods. We employ loss functions between two outputs of the form of the Energy Score (8), that is, $\Delta_{\text{training}} = \Delta_\beta(y, y') = \|y - y'\|_2^\beta$. Our first goal is to assess the advantages of DISCO Nets with respect to non-probabilistic deep networks. One model, referred to as DISCO$_{\beta,\gamma}$, is a DISCO Nets probabilistic model with $\gamma \neq 0$ in the dissimilarity coefficient of equation (6). When taking $\gamma = 0$, noise is still injected and the model capacity is the same as for DISCO$_{\beta,\gamma\neq 0}$. The model BASE$_\beta$ is a non-probabilistic model, obtained by taking $\gamma = 0$ in the objective function of equation (6); no noise is concatenated. This corresponds to a classic deep network which, for a given input $x$, generates a single output $y = G(x)$. Note that we write $G(x)$ and not $G(z, x)$, since no noise is concatenated.

Evaluation Metrics. We report the classic non-probabilistic metrics for hand pose estimation employed in Oberweger et al. [17, 18] and Taylor et al. [26], that is, the Mean Joint Euclidean Error (MeJEE), the Max Joint Euclidean Error (MaJEE) and the Fraction of Frames within distance (FF). We refer the reader to the supplementary material for the detailed expressions of these metrics. These metrics use the Euclidean distance between the prediction and the ground truth and require a single pointwise prediction.
This pointwise prediction is chosen with the MEU method among $K$ candidates. We added the probabilistic metric ProbLoss. ProbLoss is defined as in equation (8) with the Euclidean norm and is the divergence associated with a strictly proper scoring rule. Thus, ProbLoss ranks the ability of the models to represent the true distribution. ProbLoss is computed using $K$ candidate poses for a given depth image. For the non-probabilistic model BASE$_\beta$, only a single pointwise predicted output $y$ is available. We construct the $K$ candidates by adding Gaussian random noise of mean 0 and diagonal covariance $\Sigma = \sigma I$, with $\sigma \in \{1\,\text{mm}, 5\,\text{mm}, 10\,\text{mm}\}$, and refer to the model as BASE$_{\beta,\sigma}$.²

Loss functions. As we employ standard evaluation metrics based on the Euclidean norm, we train with the Euclidean norm (that is, $\Delta_{\text{training}}(y, y') = \|y - y'\|_2^\beta$ with $\beta = 1$). When $\gamma = 1/2$ our objective function coincides with ProbLoss.

Architecture. The novelty of DISCO Nets resides in their objective function. They do not require the use of a specific network architecture. This allows us to design a simple network architecture inspired by Oberweger et al. [18]; the architecture is shown in Figure 2. The input depth image $x$ is fed to 2 convolutional layers, each having 8 filters with kernels of size 5 × 5 and stride 1, followed by Rectified Linear Units (ReLUs) and max pooling layers of kernel size 3 × 3. A third and last convolutional layer has 8 filters with kernels of size 5 × 5 and stride 1, followed by a Rectified Linear Unit. The output of the convolution is concatenated to the random noise vector $z$ of size $d_z = 200$, drawn from a uniform distribution in [−1, 1]. The result of the concatenation is fed to 2 dense layers of output size 1024, with ReLUs, and a third dense layer that outputs the candidate pose $y \in \mathbb{R}^{3 \times J}$. For the non-probabilistic BASE$_{\beta,\sigma}$ model no noise is concatenated, as only a pointwise estimate is produced.

Training. We use 10,000 examples from the 72,757 training frames to construct a validation dataset and train only on 62,757 examples. Back-propagation is used with Stochastic Gradient Descent with a batch size of 256. The learning rate is fixed to $\eta = 0.01$ and we use a momentum of $m = 0.9$ (see Polyak [20]). We also add L2-regularisation controlled by the parameter $C$. We use $C = [0.0001, 0.001, 0.01]$, which is a relevant range, as the comparative model BASE$_\beta$ performs best with $C = 0.001$. Note that DISCO Nets report consistent performance across the different values of $C$, contrarily to BASE$_\beta$. We use 3 different random seeds to initialize each model's network parameters. We report the performance of each model with its best cross-validated seed and $C$. We train all models for 400 epochs, as this results in a change of less than 3% in the value of the loss on the validation dataset for BASE$_\beta$. We refer the reader to the supplementary material for details on the setting.

² We also evaluate the non-probabilistic model BASE$_\beta$ using its pointwise prediction rather than the MEU method. Results are consistent and detailed in the supplementary material.
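Before turning to the results, the following sketch (ours; the paper defines these metrics precisely in its supplementary material) illustrates the three pointwise metrics, assuming poses stored as arrays of shape (n_frames, J, 3) in millimetres and, for FF, the common convention that a frame counts as correct when its worst joint error is within d:

```python
# Hedged sketch of the MeJEE, MaJEE and FF evaluation metrics.
import numpy as np

def joint_errors(pred, gt):
    # Euclidean error per joint, shape (n_frames, J).
    return np.linalg.norm(pred - gt, axis=-1)

def mejee(pred, gt):
    # Mean Joint Euclidean Error over all joints and frames.
    return joint_errors(pred, gt).mean()

def majee(pred, gt):
    # Max Joint Euclidean Error, averaged over frames.
    return joint_errors(pred, gt).max(axis=1).mean()

def fraction_of_frames(pred, gt, d=80.0):
    # Fraction of frames whose worst joint error is within d mm (assumption).
    return (joint_errors(pred, gt).max(axis=1) <= d).mean()
```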
Table 2: Metric values on the test set ± SEM. Best performances in bold.

Model                 ProbLoss (mm)   MeJEE (mm)    MaJEE (mm)    FF (80mm)
BASE_{β=1,σ=1}        103.8 ± 0.627   25.2 ± 0.152  52.7 ± 0.290  86.040
BASE_{β=1,σ=5}         99.3 ± 0.620   25.5 ± 0.151  52.9 ± 0.289  85.773
BASE_{β=1,σ=10}        96.3 ± 0.612   25.7 ± 0.149  53.2 ± 0.288  85.664
DISCO_{β=1,γ=0}        92.9 ± 0.533   21.6 ± 0.128  46.0 ± 0.251  92.971
DISCO_{β=1,γ=0.25}     89.9 ± 0.510   21.2 ± 0.122  46.4 ± 0.252  93.262
DISCO_{β=1,γ=0.5}      83.8 ± 0.503   20.9 ± 0.124  45.1 ± 0.246  94.438

Table 3: Metric values on the test set ± SEM for cGAN.

Model               ProbLoss (mm)   MeJEE (mm)     MaJEE (mm)     FF (80mm)
cGAN                442.7 ± 0.513   109.8 ± 0.128  201.4 ± 0.320  0.000
cGAN_{init,fixed}   128.9 ± 0.480    31.8 ± 0.117   64.3 ± 0.230  78.454

4.2 Results

Quantitative Evaluation. Table 2 reports performances on the test dataset, with parameters cross-validated on the validation set. All versions of the DISCO Nets model outperform the BASE$_\beta$ model. Among the different values of $\gamma$, we see that $\gamma = 0.5$ better captures the true distribution (lower ProbLoss) while retaining accurate performance on the standard pointwise metrics. Interestingly, using an all-zero noise at test time gives similar performances on the pointwise metrics. We link this to the observation that both the MEAN and the MEU method perform equivalently on these metrics (see supplementary material).

Qualitative Evaluation. In Figure 3 we show candidate poses generated by DISCO$_{\beta=1,\gamma=0.5}$ for 3 testing examples. The left image shows the input depth image, and the right image shows the ground-truth pose (in grey) with 100 candidate outputs (superimposed in transparent red). The model predicts the joint locations and we interpolate the joints with edges. If an edge is thinner and more opaque, it means the different predictions overlap and that the uncertainty on the location of the edge's joints is low. We can see that DISCO$_{\beta=1,\gamma=0.5}$ captures relevant information on the structure of the hand. (a) When there are no occlusions, DISCO Nets model low uncertainty on all joints. (b) When the hand is half-fisted, DISCO Nets model the uncertainty on the location of the fingertips. (c) Here the fingertips of all fingers but the forefinger are occluded, and DISCO Nets model high uncertainty on them.

Figure 3: Visualisation of DISCO$_{\beta=1,\gamma=0.5}$ predictions for 3 examples from the testing dataset. The left image shows the input depth image, and the right image shows the ground-truth pose in grey with 100 candidate outputs superimposed in transparent red. Best viewed in color.

Figure 4 shows the matrices of Pearson product-moment correlation coefficients between joints. We note that DISCO Nets with $\gamma = 0.5$ better capture the correlation between the joints of a finger and between the fingers.

Figure 4: Pearson coefficient matrices of the joints, for $\gamma = 0$ (left) and $\gamma = 0.5$ (right): Palm (no value as the empirical variance is null), Palm Right, Palm Left, Thumb Root, Thumb Mid, Thumb Tip, Index Mid, Index Tip, Middle Mid, Middle Tip, Ring Mid, Ring Tip, Pinky Mid, Pinky Tip.

4.3 Comparison with existing probabilistic models

To the best of our knowledge, the conditional Generative Adversarial Network (cGAN) of Mirza and Osindero [16] has not been applied to pose estimation. In order to compare cGAN to DISCO Nets, several issues must be overcome. First, we must design a network architecture for the Discriminator. This is a first disadvantage of cGAN compared to DISCO Nets, which require no adversary. Second, as mentioned in Goodfellow et al. [7] and Radford et al. [22], GAN (and thus cGAN) require very careful design of the networks'
architecture and training procedure. In order to perform a fair comparison, we followed the work in Mirza and Osindero [16] and the practical advice for GAN presented in Larsen and Sønderby [13]. We try (i) cGAN, initialising all layers of D and G randomly, and (ii) cGAN$_{\text{init,fixed}}$, initialising the convolutional layers of G and D with the trained best-performing DISCO$_{\beta=1,\gamma=0.5}$ of Section 4.2 and keeping these layers fixed. That is, the convolutional parts of G and D are fixed feature extractors for the depth image. This is a setting similar to the one employed for tag annotation of images in Mirza and Osindero [16]. Details on the setting can be found in the supplementary material. Table 3 shows that the cGAN model obtains relevant results only when the convolutional layers of G and D are initialised with our trained model and kept fixed, that is, cGAN$_{\text{init,fixed}}$. These results are still worse than the DISCO Nets performances. While there may be a better architecture for cGAN, our experiments demonstrate the difficulty of training cGAN compared to DISCO Nets.

4.4 Reference state-of-the-art values

We train the best-performing DISCO$_{\beta=1,\gamma=0.5}$ of Section 4.2 on the entire dataset, and compare performances with state-of-the-art methods in Table 4 and Figure 5. These state-of-the-art methods are specifically designed for hand pose estimation. In Oberweger et al. [17] a constrained prior hand model, referred to as NYU-Prior, is refined on each hand joint position to increase accuracy, referred to as NYU-Prior-Refined. In Oberweger et al. [18], the input depth image is fed to a first network, NYU-Init, that outputs a pose used to synthesize an image with a second network. The synthesized image is used with the input depth image to derive a pose update. We refer to the whole model as NYU-Feedback. On the contrary, DISCO Nets use a single network whose architecture is similar to NYU-Prior (without constraining to a pose prior). By accurately modeling the distribution of the pose given the depth image, DISCO Nets obtain performances comparable to NYU-Prior and NYU-Prior-Refined. Without any extra effort, DISCO Nets could be embedded in the presented refinement and feedback methods, possibly boosting state-of-the-art performances.

Table 4: DISCO Nets compared to state-of-the-art performances ± SEM.

Model               MeJEE (mm)    MaJEE (mm)    FF (80mm)
NYU-Prior           20.7 ± 0.150  44.8 ± 0.289  91.190
NYU-Prior-Refined   19.7 ± 0.157  44.7 ± 0.327  88.148
NYU-Init            27.4 ± 0.152  55.4 ± 0.265  86.537
NYU-Feedback        16.0 ± 0.096  36.1 ± 0.208  97.334
DISCO_{β=1,γ=0.5}   20.7 ± 0.121  45.1 ± 0.246  93.250

Figure 5: Fractions of frames within distance d in mm (by 5 mm). Best viewed in color.

5 Discussion

We presented DISCO Nets, a new family of probabilistic models based on deep networks. DISCO Nets employ a prediction and training procedure based on the minimisation of a dissimilarity coefficient. Theoretically, this ensures that DISCO Nets accurately capture uncertainty on the correct output to predict given an input. Experimental results on the task of hand pose estimation consistently support our theoretical hypothesis, as DISCO Nets outperform non-probabilistic equivalent models and existing probabilistic models. Furthermore, DISCO Nets can be tailored to the task to perform. This allows a possible user to train them to tackle different problems of interest. As their novelty resides mainly in their objective function, DISCO Nets do not require any specific architecture and can be easily applied to new problems. We contemplate several directions for future work.
First, we will apply DISCO Nets to other prediction problems where there is uncertainty on the output. Second, we would like to extend DISCO Nets to latent variable models, allowing us to apply DISCO Nets to diverse datasets where ground-truth annotations are missing or incomplete.

6 Acknowledgements

This work is funded by the Microsoft Research PhD Scholarship Programme. We would like to thank Pankaj Pansari, Leonard Berrada and Ondra Miksik for their useful discussions and insights.

References

[1] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[2] G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015.
[3] K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. JMLR, 2013.
[4] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, 2014.
[5] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 2007.
[6] T. Gneiting, L. I. Stanberry, E. P. Grimit, L. Held, and N. A. Johnson. Assessing probabilistic forecasts of multivariate quantities, with an application to ensemble predictions of surface winds. TEST, 2008.
[7] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[8] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In NIPS, 2007.
[9] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel two-sample test. JMLR, 2012.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[11] M. P. Kumar, B. Packer, and D. Koller. Modeling latent variable uncertainty for loss-based learning. In ICML, 2012.
[12] S. Lacoste-Julien, F. Huszár, and Z. Ghahramani. Approximate inference for the loss-calibrated Bayesian. In AISTATS, 2011.
[13] A. B. L. Larsen and S. K. Sønderby. URL http://torch.ch/blog/2015/11/13/gan.html.
[14] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
[15] A. Makhzani, J. Shlens, N. Jaitly, and I. J. Goodfellow. Adversarial autoencoders. ICLR Workshop, 2015.
[16] M. Mirza and S. Osindero. Conditional generative adversarial nets. In NIPS Deep Learning Workshop, 2014.
[17] M. Oberweger, P. Wohlhart, and V. Lepetit. Hands deep in deep learning for hand pose estimation. In Computer Vision Winter Workshop, 2015.
[18] M. Oberweger, P. Wohlhart, and V. Lepetit. Training a feedback loop for hand pose estimation. In ICCV, 2015.
[19] P. Pinson and J. Tastu. Discrimination ability of the Energy score. 2013.
[20] B. T. Polyak. Some methods of speeding up the convergence of iteration methods. 1964.
[21] V. Premachandran, D. Tarlow, and D. Batra. Empirical minimum Bayes risk prediction: How to extract an extra few % performance from vision models with just three more parameters. In CVPR, 2014.
[22] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2015.
[23] C. R. Rao. Diversity and dissimilarity coefficients: A unified approach. Theoretical Population Biology, Vol. 21, No. 1, pp. 24-43, 1982.
[24] S. Reed, Z. Akata, X. Yan, L. Logeswaran, H. Lee, and B. Schiele. Generative adversarial text to image synthesis. In ICML, 2016.
[25] J. T. Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. ICLR, 2016.
[26] J. Taylor, J. Shotton, T. Sharp, and A. Fitzgibbon. The Vitruvian manifold: Inferring dense correspondences for one-shot human pose estimation. In CVPR, 2012.
[27] J. Tompson, M. Stein, Y. Lecun, and K. Perlin. Real-time continuous pose recovery of human hands using convolutional networks. ACM Transactions on Graphics, 2014.
[28] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2Image: Conditional image generation from visual attributes. 2016.
[29] E. Zawadzki and S. Lahaie. Nonparametric scoring rules. In AAAI Conference on Artificial Intelligence, 2015.
Higher-Order Factorization Machines

Mathieu Blondel, Akinori Fujino, Naonori Ueda (NTT Communication Science Laboratories, Japan); Masakazu Ishihata (Hokkaido University, Japan)

Abstract

Factorization machines (FMs) are a supervised learning approach that can use second-order feature combinations even when the data is very high-dimensional. Unfortunately, despite increasing interest in FMs, there exists to date no efficient training algorithm for higher-order FMs (HOFMs). In this paper, we present the first generic yet efficient algorithms for training arbitrary-order HOFMs. We also present new variants of HOFMs with shared parameters, which greatly reduce model size and prediction times while maintaining similar accuracy. We demonstrate the proposed approaches on four different link prediction tasks.

1 Introduction

Factorization machines (FMs) [13, 14] are a supervised learning approach that can use second-order feature combinations efficiently even when the data is very high-dimensional. The key idea of FMs is to model the weights of feature combinations using a low-rank matrix. This has two main benefits. First, FMs can achieve empirical accuracy on a par with polynomial regression or kernel methods, but with smaller and faster-to-evaluate models [4]. Second, FMs can infer the weights of feature combinations that were not observed in the training set. This second property is crucial, for instance, in recommender systems, a domain where FMs have become increasingly popular [14, 16]. Without the low-rank property, FMs would fail to generalize to unseen user-item interactions.

Unfortunately, although higher-order FMs (HOFMs) were briefly mentioned in the original work of [13, 14], there exists to date no efficient algorithm for training arbitrary-order HOFMs. In fact, even just computing predictions given the model parameters naively takes polynomial time in the number of features. For this reason, HOFMs have, to our knowledge, never been applied to any problem. In addition, HOFMs, as originally defined in [13, 14], model each degree in the polynomial expansion with a different matrix and therefore require the estimation of a large number of parameters.

In this paper, we propose the first efficient algorithms for training arbitrary-order HOFMs. To do so, we rely on a link between FMs and the so-called ANOVA kernel [4]. We propose linear-time dynamic programming algorithms for evaluating the ANOVA kernel and computing its gradient. Based on these, we propose stochastic gradient and coordinate descent algorithms for arbitrary-order HOFMs. To reduce the number of parameters, as well as prediction times, we also introduce two new kernels derived from the ANOVA kernel, allowing us to define new variants of HOFMs with shared parameters. We demonstrate the proposed approaches on four different link prediction tasks.

2 Factorization machines (FMs)

Second-order FMs. Factorization machines (FMs) [13, 14] are an increasingly popular method for efficiently using second-order feature combinations in classification or regression tasks, even when the data is very high-dimensional. Let $w \in \mathbb{R}^d$ and $P \in \mathbb{R}^{d \times k}$, where $k \in \mathbb{N}$ is a rank hyper-parameter. We denote the rows of $P$ by $\bar{p}_j$ and its columns by $p_s$, for $j \in [d]$ and $s \in [k]$,
pj , p (1) j 0 >j An important characteristic of (1) is that it considers only combinations of distinct features (i.e., the squared features x21 , . . . , x2d are ignored). The main advantage of FMs compared to naive polynomial regression is that the number of parameters to estimate is O(dk) instead of O(d2 ). In addition, we can compute predictions in O(2dk) time1 using y?FM (x) = wT x + k  1 T 2 X kP xk ? kps ? xk2 , 2 s=1 where ? indicates element-wise product [3]. Given a training set X = [x1 , . . . , xn ] ? Rd?n and y = [y1 , . . . , yn ]T ? Rn , w and P can be learned by minimizing the following non-convex objective n 1X ?1 ?2 ` (yi , y?FM (xi )) + kwk2 + kP k2 , n i=1 2 2 (2) where ` is a convex loss function and ?1 > 0, ?2 > 0 are hyper-parameters. The popular libfm library [14] implements efficient stochastic gradient and coordinate descent algorithms for obtaining a stationary point of (2). Both algorithms have a runtime complexity of O(2dkn) per epoch. Higher-order FMs (HOFMs). Although no training algorithm was provided, FMs were extended to higher-order feature combinations in the original work of [13, 14]. Let P (t) ? Rd?kt , where t ? {2, . . . , m} is the order or degree of feature combinations considered, and kt ? N is a rank (t) ? j be the j th row of P (t) . Then m-order HOFMs can be defined as hyper-parameter. Let p X (2) (2) X (m) (m) ? j 0 ixj xj 0 + ? ? ? + ? jm ixj1 xj2 . . . xjm (3) y?HOFM (x) := hw, xi + h? pj , p h? pj1 , . . . , p j 0 >j (t) jm >???>j1 (t) (t) (t) ? jt i := sum(? ? jt ) (sum of element-wise products). The where we defined h? pj1 , . . . , p pj1 ? ? ? ? ? p objective function of HOFMs can be expressed in a similar way as for (2): n m X 1X ?1 ?t ` (yi , y?HOFM (xi )) + kwk2 + kP (t) k2 , n i=1 2 2 t=2 (4) where ?1 , . . . , ?m > 0 are hyper-parameters. To avoid the combinatorial explosion of hyperparameter combinations to search, in our experiments we will simply set ?1 = ? ? ? = ?m and k2 = ? ? ? = km . While (3) looks quite daunting, [4] recently showed that FMs can be expressed from a simpler kernel perspective. Let us define the ANOVA2 kernel [19] of degree 2 ? m ? d by Am (p, x) := X m Y pjt xjt . (5) jm >???>j1 t=1 For later convenience, we also define A0 (p, x) := 1 and A1 (p, x) := hp, xi. Then it is shown that k2 km     X X y?HOFM (x) = hw, xi + A2 p(2) Am p(m) (6) s ,x + ??? + s ,x , s=1 (t) ps th s=1 (t) where is the s column of P . This perspective shows that we can view FMs and HOFMs as a type of kernel machine whose ?support vectors? are learned directly from data. Intuitively, the ANOVA kernel can be thought as a kind of polynomial kernel that uses feature combinations without replacement (i.e., of distinct features). A key property of the ANOVA kernel is multi-linearity [4]: Am (p, x) = Am (p?j , x?j ) + pj xj Am?1 (p?j , x?j ), (7) where p?j denotes the (d ? 1)-dimensional vector with pj removed and similarly for x?j . That is, everything else kept fixed, Am (p, x) is an affine function of pj ?j ? [d]. Although no training 1 2 We include the constant factor for fair later comparison with arbitrary-order HOFMs. The name comes from the ANOVA decomposition of functions. [20, 19] 2 algorithm was provided, [4] showed based on (7) that, although non-convex, the objective function of arbitrary-order HOFMs is convex in w and in each row of P (2) , . . . , P (m) , separately. Interpretability of HOFMs. An advantage of FMs and HOFMs is their interpretability. 
To see why this is the case, notice that we can rewrite (3) as
$$\hat{y}_{\mathrm{HOFM}}(x) = \langle w, x \rangle + \sum_{j' > j} \mathcal{W}^{(2)}_{j, j'} x_j x_{j'} + \dots + \sum_{j_m > \dots > j_1} \mathcal{W}^{(m)}_{j_1, \dots, j_m} x_{j_1} x_{j_2} \cdots x_{j_m},$$
where we defined $\mathcal{W}^{(t)} := \sum_{s=1}^{k_t} \underbrace{p_s^{(t)} \otimes \dots \otimes p_s^{(t)}}_{t \text{ times}}$. Intuitively, $\mathcal{W}^{(t)} \in \mathbb{R}^{d^t}$ is a low-rank $t$-way tensor which contains the weights of feature combinations of degree $t$. For instance, when $t = 3$, $\mathcal{W}^{(3)}_{i,j,k}$ is the weight of $x_i x_j x_k$. Similarly to the ANOVA decomposition of functions, HOFMs consider only combinations of distinct features (i.e., $x_{j_1} x_{j_2} \cdots x_{j_m}$ for $j_m > \dots > j_2 > j_1$).

This paper. Unfortunately, there exists to date no efficient algorithm for training arbitrary-order HOFMs. Indeed, computing (5) naively takes $O(d^m)$, i.e., polynomial time. In the following, we present linear-time algorithms. Moreover, HOFMs, as originally defined in [13, 14], require the estimation of $m - 1$ matrices $P^{(2)}, \dots, P^{(m)}$. Thus, HOFMs can produce large models when $m$ is large. To address this issue, we propose new variants of HOFMs with shared parameters.

3 Linear-time stochastic gradient algorithms for HOFMs

The kernel view presented in Section 2 allows us to focus on the ANOVA kernel as the main "computational unit" for training HOFMs. In this section, we develop dynamic programming (DP) algorithms for evaluating the ANOVA kernel and computing its gradient in only $O(dm)$ time.

Evaluation. The main observation (see also [18, Section 9.2]) is that we can use (7) to recursively remove features until computing the kernel becomes trivial. Let us denote a subvector of $p$ by $p_{1:j} \in \mathbb{R}^j$, and similarly for $x$. Let us introduce the shorthand $a_{j,t} := \mathcal{A}^t(p_{1:j}, x_{1:j})$. Then, from (7),
$$a_{j,t} = a_{j-1,t} + p_j x_j \, a_{j-1,t-1} \qquad \forall \, d \ge j \ge t \ge 1. \qquad (8)$$
For convenience, we also define $a_{j,0} = 1 \ \forall j \ge 0$, since $\mathcal{A}^0(p, x) = 1$, and $a_{j,t} = 0 \ \forall j < t$, since there does not exist any $t$-combination of features in a $j < t$ dimensional vector. The quantity we want to compute is $\mathcal{A}^m(p, x) = a_{d,m}$. Instead of naively using recursion (8), which would lead to many redundant computations, we use a bottom-up approach and organize computations in a DP table. We start from the top-left corner to initialize the recursion and go through the table to arrive at the solution in the bottom-right corner. The procedure, summarized in Algorithm 1, takes $O(dm)$ time and memory.

Table 1: Example of DP table

        j=0   j=1       j=2       ...   j=d
t=0     1     1         1         ...   1
t=1     0     a_{1,1}   a_{2,1}   ...   a_{d,1}
t=2     0     0         a_{2,2}   ...   a_{d,2}
...     ...   ...       ...       ...   ...
t=m     0     0         0         ...   a_{d,m}

Gradients. For computing the gradient of $\mathcal{A}^m(p, x)$ w.r.t. $p$, we use reverse-mode differentiation [2] (a.k.a. backpropagation in a neural network context), since it allows us to compute the entire gradient in a single pass. We supplement each variable $a_{j,t}$ in the DP table by a so-called adjoint $\tilde{a}_{j,t} := \frac{\partial a_{d,m}}{\partial a_{j,t}}$, which represents the sensitivity of $a_{d,m} = \mathcal{A}^m(p, x)$ w.r.t. $a_{j,t}$. From recursion (8), except for edge cases, $a_{j,t}$ influences $a_{j+1,t+1}$ and $a_{j+1,t}$. Using the chain rule, we then obtain
$$\tilde{a}_{j,t} = \tilde{a}_{j+1,t} \frac{\partial a_{j+1,t}}{\partial a_{j,t}} + \tilde{a}_{j+1,t+1} \frac{\partial a_{j+1,t+1}}{\partial a_{j,t}} = \tilde{a}_{j+1,t} + p_{j+1} x_{j+1} \, \tilde{a}_{j+1,t+1} \qquad \forall \, d-1 \ge j \ge t \ge 1. \qquad (9)$$
Similarly, we introduce the adjoint $\tilde{p}_j := \frac{\partial a_{d,m}}{\partial p_j}$ for all $j \in [d]$. Since $p_j$ influences $a_{j,t} \ \forall t \in [m]$, we have
$$\tilde{p}_j = \sum_{t=1}^{m} \tilde{a}_{j,t} \frac{\partial a_{j,t}}{\partial p_j} = \sum_{t=1}^{m} \tilde{a}_{j,t} \, a_{j-1,t-1} x_j.$$
We can run recursion (9) in reverse order of the DP table, starting from $\tilde{a}_{d,m} = \frac{\partial a_{d,m}}{\partial a_{d,m}} = 1$. Using this approach, we can compute the entire gradient $\nabla \mathcal{A}^m(p, x) = [\tilde{p}_1, \dots, \tilde{p}_d]^T$ w.r.t. $p$ in $O(dm)$ time and memory.
The procedure is summarized in Algorithm 2.

Algorithm 1: Evaluating $\mathcal{A}^m(p, x)$ in $O(dm)$
  Input: $p \in \mathbb{R}^d$, $x \in \mathbb{R}^d$
  $a_{j,t} \leftarrow 0 \ \forall t \in [m], j \in [d] \cup \{0\}$; $\quad a_{j,0} \leftarrow 1 \ \forall j \in [d] \cup \{0\}$
  for $t := 1, \dots, m$ do
    for $j := t, \dots, d$ do
      $a_{j,t} \leftarrow a_{j-1,t} + p_j x_j a_{j-1,t-1}$
    end for
  end for
  Output: $\mathcal{A}^m(p, x) = a_{d,m}$

Algorithm 2: Computing $\nabla \mathcal{A}^m(p, x)$ in $O(dm)$
  Input: $p \in \mathbb{R}^d$, $x \in \mathbb{R}^d$, $\{a_{j,t}\}_{j,t=0}^{d,m}$
  $\tilde{a}_{j,t} \leftarrow 0 \ \forall t \in [m+1], j \in [d]$; $\quad \tilde{a}_{d,m} \leftarrow 1$
  for $t := m, \dots, 1$ do
    for $j := d-1, \dots, t$ do
      $\tilde{a}_{j,t} \leftarrow \tilde{a}_{j+1,t} + \tilde{a}_{j+1,t+1} p_{j+1} x_{j+1}$
    end for
  end for
  $\tilde{p}_j := \sum_{t=1}^{m} \tilde{a}_{j,t} a_{j-1,t-1} x_j \ \forall j \in [d]$
  Output: $\nabla \mathcal{A}^m(p, x) = [\tilde{p}_1, \dots, \tilde{p}_d]^T$

Stochastic gradient (SG) algorithms. Based on Algorithms 1 and 2, we can easily learn arbitrary-order HOFMs using any gradient-based optimization algorithm. Here we focus our discussion on SG algorithms. If we alternatingly minimize (4) w.r.t. $P^{(2)}, \dots, P^{(m)}$, then the sub-problem associated with degree $m$ is of the form
$$F(P) := \frac{1}{n} \sum_{i=1}^{n} \ell\left(y_i, \sum_{s=1}^{k} \mathcal{A}^m(p_s, x_i) + o_i\right) + \frac{\beta}{2} \|P\|^2, \qquad (10)$$
where $o_1, \dots, o_n \in \mathbb{R}$ are fixed offsets which account for the contribution of degrees other than $m$ to the predictions. The sub-problem is convex in each row of $P$ [4]. An SG update for (10) w.r.t. $p_s$ for some instance $x_i$ can be computed by $p_s \leftarrow p_s - \eta \, \ell'(y_i, \hat{y}_i) \nabla \mathcal{A}^m(p_s, x_i) - \eta \beta p_s$, where $\eta$ is a learning rate and where we defined $\hat{y}_i := \sum_{s=1}^{k} \mathcal{A}^m(p_s, x_i) + o_i$. Because evaluating $\mathcal{A}^m(p, x)$ and computing its gradient both take $O(dm)$, the cost per epoch, i.e., of visiting all instances, is $O(mdkn)$. When $m = 2$, this is the same cost as the SG algorithm implemented in libfm.

Sparse data. We conclude this section with a few useful remarks on sparse data. Let us denote the support of a vector $x = [x_1, \dots, x_d]^T$ by $\mathrm{supp}(x) := \{j \in [d] : x_j \neq 0\}$, and let us define $x_S := [x_j : j \in S]^T$. It is easy to see from (7) that the gradient and $x$ have the same support, i.e., $\mathrm{supp}(\nabla \mathcal{A}^m(p, x)) = \mathrm{supp}(x)$. Another useful remark is that $\mathcal{A}^m(p, x) = \mathcal{A}^m(p_{\mathrm{supp}(x)}, x_{\mathrm{supp}(x)})$, provided that $m \le n_z(x)$, where $n_z(x)$ is the number of non-zero elements in $x$. Hence, when the data is sparse, we only need to iterate over non-zero features in Algorithms 1 and 2. Consequently, their time and memory cost is only $O(n_z(x) m)$, and thus the cost per epoch of SG algorithms is $O(m k \, n_z(X))$.
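To make Algorithms 1 and 2 concrete, here is a minimal NumPy sketch of the two DP passes; the function names and the zero-based index mapping are ours, not from a reference implementation:

```python
import numpy as np

def anova_kernel(p, x, m):
    """Evaluate A^m(p, x) in O(dm) with the DP recursion (8).
    Returns the whole DP table `a`, with a[j, t] = A^t(p_{1:j}, x_{1:j});
    the kernel value is a[d, m]. The table is reused by the gradient pass."""
    d = len(p)
    a = np.zeros((d + 1, m + 1))
    a[:, 0] = 1.0  # A^0 = 1 for every prefix length
    for t in range(1, m + 1):
        for j in range(t, d + 1):
            a[j, t] = a[j - 1, t] + p[j - 1] * x[j - 1] * a[j - 1, t - 1]
    return a

def anova_grad(p, x, a):
    """Compute grad_p A^m(p, x) in O(dm) by reverse-mode differentiation
    over the DP table (adjoint recursion (9))."""
    d, m = len(p), a.shape[1] - 1
    adj = np.zeros((d + 1, m + 2))  # adjoints, padded at t = m + 1
    adj[d, m] = 1.0
    for t in range(m, 0, -1):
        for j in range(d - 1, t - 1, -1):
            adj[j, t] = adj[j + 1, t] + p[j] * x[j] * adj[j + 1, t + 1]
    # p~_j = sum_t a~_{j,t} * a_{j-1,t-1} * x_j
    return np.array([
        x[j - 1] * sum(adj[j, t] * a[j - 1, t - 1] for t in range(1, m + 1))
        for j in range(1, d + 1)
    ])
```

For $m = 2$, `anova_kernel(p_s, x, 2)[-1, -1]` recovers the pairwise term $\mathcal{A}^2(p_s, x)$, so summing over the columns of $P$ reproduces the quadratic part of (6).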
4 Coordinate descent algorithm for arbitrary-order HOFMs

We now describe a coordinate descent (CD) solver for arbitrary-order HOFMs. CD is a good choice for learning HOFMs because their objective function is coordinate-wise convex, thanks to the multi-linearity of the ANOVA kernel [4]. Our algorithm can be seen as a generalization to higher orders of the CD algorithms proposed in [14, 4].

An alternative recursion. Efficient CD implementations typically require maintaining statistics for each training instance, such as the predictions at the current iteration. When a coordinate is updated, the statistics then need to be synchronized. Unfortunately, the recursion we used in the previous section is not suitable for a CD algorithm, because it would require storing and synchronizing the DP table for each training instance upon coordinate-wise updates. We therefore turn to an alternative recursion:
$$\mathcal{A}^m(p, x) = \frac{1}{m} \sum_{t=1}^{m} (-1)^{t+1} \mathcal{A}^{m-t}(p, x) \, D^t(p, x), \qquad (11)$$
where we defined $D^t(p, x) := \sum_{j=1}^{d} (p_j x_j)^t$. Note that the recursion was already known in the context of traditional kernel methods (c.f. [19, Section 11.8]), but its application to HOFMs is novel. Since we know that $\mathcal{A}^0(p, x) = 1$ and $\mathcal{A}^1(p, x) = \langle p, x \rangle$, we can use (11) to compute $\mathcal{A}^2(p, x)$, then $\mathcal{A}^3(p, x)$, and so on. The overall evaluation cost for arbitrary $m \in \mathbb{N}$ is $O(md + m^2)$.

Coordinate-wise derivatives. We can apply reverse-mode differentiation to recursion (11) in order to compute the entire gradient (c.f. Appendix C). However, in CD, since we only need the derivative of one variable at a time, we can simply use forward-mode differentiation:
$$\frac{\partial \mathcal{A}^m(p, x)}{\partial p_j} = \frac{1}{m} \sum_{t=1}^{m} (-1)^{t+1} \left[ \frac{\partial \mathcal{A}^{m-t}(p, x)}{\partial p_j} D^t(p, x) + \mathcal{A}^{m-t}(p, x) \frac{\partial D^t(p, x)}{\partial p_j} \right], \qquad (12)$$
where $\frac{\partial D^t(p, x)}{\partial p_j} = t \, p_j^{t-1} x_j^t$. The advantage of (12) is that we only need to cache $D^t(p, x)$ for $t \in [m]$. Hence the memory complexity per sample is only $O(m)$, instead of $O(dm)$ for (8).

Use in a CD algorithm. Similarly to [4], we assume that the loss function $\ell$ is $\mu$-smooth and update the elements $p_{j,s}$ of $P$ in cyclic order by $p_{j,s} \leftarrow p_{j,s} - \eta_{j,s}^{-1} \frac{\partial F(P)}{\partial p_{j,s}}$, where we defined
$$\eta_{j,s} := \frac{\mu}{n} \sum_{i=1}^{n} \left( \frac{\partial \mathcal{A}^m(p_s, x_i)}{\partial p_{j,s}} \right)^2 + \beta \qquad \text{and} \qquad \frac{\partial F(P)}{\partial p_{j,s}} = \frac{1}{n} \sum_{i=1}^{n} \ell'(y_i, \hat{y}_i) \frac{\partial \mathcal{A}^m(p_s, x_i)}{\partial p_{j,s}} + \beta p_{j,s}.$$
The update guarantees that the objective value is monotonically non-increasing, and it is the exact coordinate-wise minimizer when $\ell$ is the squared loss. Overall, the total cost per epoch, i.e., updating all coordinates once, is $O(\tau(m) \, k \, n_z(X))$, where $\tau(m)$ is the time it takes to compute (12). Assuming $D^t(p_s, x_i)$ have been previously cached for $t \in [m]$, computing (12) takes $\tau(m) = m(m+1)/2 - 1$ operations. For fixed $m$, if we unroll the two loops needed to compute (12), modern compilers can often further reduce the number of operations needed. Nevertheless, this quadratic dependency on $m$ means that our CD algorithm is best for small $m$, typically $m \le 4$.
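A minimal sketch of the CD-friendly evaluation, under the same conventions as the previous snippet (names are ours): recursion (11) only needs the $m$ cached power sums $D^t(p, x)$, which is what makes the $O(m)$ per-sample memory footprint possible.

```python
import numpy as np

def anova_all_degrees(p, x, m):
    """Evaluate A^0(p, x), ..., A^m(p, x) with recursion (11).
    Only the power sums D^t(p, x) = sum_j (p_j x_j)^t are cached,
    so per-sample memory is O(m); time is O(md + m^2)."""
    px = p * x
    D = [None] + [float(np.sum(px ** t)) for t in range(1, m + 1)]
    A = [1.0] + [0.0] * m  # A[0] = A^0 = 1
    for t in range(1, m + 1):
        A[t] = sum((-1) ** (s + 1) * A[t - s] * D[s]
                   for s in range(1, t + 1)) / t
    return A
```

In a CD solver, one would keep `D` cached per training instance and use the forward-mode formula (12) for single-coordinate derivatives, so the full-gradient route is never needed there.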
5 HOFMs with shared parameters

HOFMs, as originally defined in [13, 14], model each degree with separate matrices $P^{(2)}, \dots, P^{(m)}$. Assuming that we use the same rank $k$ for all matrices, the total model size of $m$-order HOFMs is therefore $O(kdm)$. Moreover, even when using our $O(dm)$ DP algorithm, the cost of computing predictions is $O(k(2d + \dots + md)) = O(kdm^2)$. Hence, HOFMs tend to produce large, expensive-to-evaluate models. To reduce model size and prediction times, we introduce two new kernels which allow us to share parameters between each degree: the inhomogeneous ANOVA kernel and the all-subsets kernel. Because both kernels are derived from the ANOVA kernel, they share the same appealing properties: multi-linearity, sparse gradients and sparse-data friendliness.

5.1 Inhomogeneous ANOVA kernel

It is well-known that a sum of kernels is equivalent to concatenating their associated feature maps [18, Section 3.4]. Let $\omega = [\omega_1, \dots, \omega_m]^T$. To combine different degrees, a natural kernel is therefore
$$\mathcal{A}^{1 \sim m}(p, x; \omega) := \sum_{t=1}^{m} \omega_t \, \mathcal{A}^t(p, x). \qquad (13)$$
The kernel uses all feature combinations of degrees 1 up to $m$. We call it the inhomogeneous ANOVA kernel, since it is an inhomogeneous polynomial of $x$. In contrast, $\mathcal{A}^m(p, x)$ is homogeneous. The main difference between (13) and (6) is that all ANOVA kernels in the sum share the same parameters. However, to increase modeling power, we allow each kernel to have different weights $\omega_1, \dots, \omega_m$.

Evaluation. Due to the recursive nature of Algorithm 1, when computing $\mathcal{A}^m(p, x)$, we also get $\mathcal{A}^1(p, x), \dots, \mathcal{A}^{m-1}(p, x)$ for free. Indeed, lower-degree kernels are available in the last column of the DP table, i.e., $\mathcal{A}^t(p, x) = a_{d,t} \ \forall t \in [m]$. Hence, the cost of evaluating (13) is $O(dm)$ time. The total cost for computing $\hat{y} = \sum_{s=1}^{k} \mathcal{A}^{1 \sim m}(p_s, x; \omega)$ is $O(kdm)$, instead of $O(kdm^2)$ for $\hat{y}_{\mathrm{HOFM}}(x)$.

Learning. While it is certainly possible to learn $P$ and $\omega$ by directly minimizing some objective function, here we propose an easier solution, which works well in practice. Our key observation is that we can easily turn $\mathcal{A}^m$ into $\mathcal{A}^{1 \sim m}$ by adding dummy values to feature vectors. Let us denote the concatenation of $p$ with a scalar $\gamma$ by $[\gamma, p]$, and similarly for $x$. From (7), we easily obtain
$$\mathcal{A}^m([\gamma_1, p], [1, x]) = \mathcal{A}^m(p, x) + \gamma_1 \mathcal{A}^{m-1}(p, x).$$
Similarly, if we apply (7) twice, we obtain
$$\mathcal{A}^m([\gamma_1, \gamma_2, p], [1, 1, x]) = \mathcal{A}^m(p, x) + (\gamma_1 + \gamma_2) \mathcal{A}^{m-1}(p, x) + \gamma_1 \gamma_2 \mathcal{A}^{m-2}(p, x).$$
Applying the above to $m = 2$ and $m = 3$, we obtain $\mathcal{A}^2([\gamma_1, p], [1, x]) = \mathcal{A}^{1 \sim 2}(p, x; [\gamma_1, 1])$ and $\mathcal{A}^3([\gamma_1, \gamma_2, p], [1, 1, x]) = \mathcal{A}^{1 \sim 3}(p, x; [\gamma_1 \gamma_2, \gamma_1 + \gamma_2, 1])$. More generally, by adding $m - 1$ dummy features to $p$ and $x$, we can convert $\mathcal{A}^m$ to $\mathcal{A}^{1 \sim m}$. Because $p$ is learned, this means that we can automatically learn $\gamma_1, \dots, \gamma_{m-1}$. These weights can then be converted to $\omega_1, \dots, \omega_m$ by "unrolling" recursion (7). Although simple, we show in our experiments that this approach works favorably compared to directly learning $P$ and $\omega$. The main advantage of this approach is that we can use the same software unmodified (we simply need to minimize (10) with the augmented data). Moreover, the cost of computing the entire gradient by Algorithm 2 using the augmented data is just $O(dm + m^2)$, compared to $O(dm^2)$ for HOFMs with separate parameters.

5.2 All-subsets kernel

We now consider a closely related kernel called the all-subsets kernel [18, Definition 9.5]:
$$\mathcal{S}(p, x) := \prod_{j=1}^{d} (1 + p_j x_j).$$
The main difference with the traditional use of this kernel is that we learn $p$. Interestingly, it can be shown that $\mathcal{S}(p, x) = 1 + \mathcal{A}^{1 \sim d}(p, x; \mathbf{1}) = 1 + \mathcal{A}^{1 \sim n_z(x)}(p, x; \mathbf{1})$, where $n_z(x)$ is the number of non-zero features in $x$. Hence, the kernel uses all combinations of distinct features up to order $n_z(x)$, with uniform weights. Even if $d$ is very large, the kernel can be a good choice if each training instance contains only a few non-zero elements. To learn the parameters, we simply substitute $\mathcal{A}^m$ with $\mathcal{S}$ in (10). In SG or CD algorithms, all it entails is to substitute $\nabla \mathcal{A}^m(p, x)$ with $\nabla \mathcal{S}(p, x)$. For computing $\nabla \mathcal{S}(p, x)$, it is easy to verify that $\mathcal{S}(p, x) = \mathcal{S}(p_{\neg j}, x_{\neg j})(1 + p_j x_j) \ \forall j \in [d]$, and therefore we have
$$\nabla \mathcal{S}(p, x) = \left[ x_1 \mathcal{S}(p_{\neg 1}, x_{\neg 1}), \dots, x_d \mathcal{S}(p_{\neg d}, x_{\neg d}) \right]^T = \left[ \frac{x_1 \mathcal{S}(p, x)}{1 + p_1 x_1}, \dots, \frac{x_d \mathcal{S}(p, x)}{1 + p_d x_d} \right]^T.$$
Therefore, the main advantage of the all-subsets kernel is that we can evaluate it and compute its gradient in just $O(d)$ time. The total cost for computing $\hat{y} = \sum_{s=1}^{k} \mathcal{S}(p_s, x)$ is only $O(kd)$.
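Continuing the same sketch conventions, the all-subsets kernel and its gradient are a few lines. Note that the closed-form gradient assumes $1 + p_j x_j \neq 0$; a careful implementation would fall back to the leave-one-out product otherwise.

```python
import numpy as np

def all_subsets_kernel(p, x):
    """S(p, x) = prod_j (1 + p_j x_j): all combinations of distinct
    features with uniform weights, evaluated in O(d)."""
    return float(np.prod(1.0 + p * x))

def all_subsets_grad(p, x):
    """dS/dp_j = x_j S(p_{-j}, x_{-j}) = x_j S(p, x) / (1 + p_j x_j),
    valid when no factor 1 + p_j x_j is exactly zero."""
    S = all_subsets_kernel(p, x)
    return x * S / (1.0 + p * x)
```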
6 Experimental results

6.1 Application to link prediction

Problem setting. We now demonstrate a novel application of HOFMs to predict the presence or absence of links between nodes in a graph. Formally, we assume two sets of possibly disjoint nodes of size $n_A$ and $n_B$, respectively. We assume features for the two sets of nodes, represented by matrices $A \in \mathbb{R}^{d_A \times n_A}$ and $B \in \mathbb{R}^{d_B \times n_B}$. For instance, $A$ can represent user features and $B$ movie features. We denote the columns of $A$ and $B$ by $a_i$ and $b_j$, respectively. We are given a matrix $Y \in \{0, 1\}^{n_A \times n_B}$, whose elements indicate presence (positive sample) or absence (negative sample) of a link between two nodes $a_i$ and $b_j$. We denote the number of positive samples by $n_+$. Using this data, our goal is to predict new associations. Datasets used in our experiments are summarized in Table 2. Note that for the NIPS and Enzyme datasets, $A = B$.

Table 2: Datasets used in our experiments. For a detailed description, c.f. Appendix A.

Dataset              n+       Columns of A   n_A     d_A      Columns of B   n_B     d_B
NIPS [17]            4,140    Authors        2,037   13,649   -              -       -
Enzyme [21]          2,994    Enzymes        668     325      -              -       -
GD [10]              3,954    Diseases       3,209   3,209    Genes          12,331  25,275
Movielens 100K [6]   21,201   Users          943     49       Movies         1,682   29

Conversion to a supervised problem. We need to convert the above information to a format FMs and HOFMs can handle. To predict an element $y_{i,j}$ of $Y$, we simply form $x_{i,j}$ as the concatenation of $a_i$ and $b_j$ and feed this to a HOFM in order to compute a prediction $\hat{y}_{i,j}$. Because HOFMs use feature combinations in $x_{i,j}$, they can learn the weights of feature combinations between $a_i$ and $b_j$. At training time, we need both positive and negative samples. Let us denote the set of positive and negative samples by $\Omega$. Then our training set is composed of $(x_{i,j}, y_{i,j})$ pairs, where $(i, j) \in \Omega$.
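As a sketch of this conversion (the helper name and the dense-matrix assumption are ours; the datasets here are in fact sparse), each sampled pair $(i, j) \in \Omega$ becomes one training row:

```python
import numpy as np

def make_pair_features(A, B, pairs):
    """Form x_{i,j} = [a_i; b_j] for each pair (i, j) in `pairs`.
    A has shape (d_A, n_A) and B has shape (d_B, n_B); columns are nodes."""
    return np.stack([np.concatenate([A[:, i], B[:, j]]) for i, j in pairs])

# Usage: X = make_pair_features(A, B, omega) has shape (|Omega|, d_A + d_B);
# a HOFM trained on its rows can learn weights for feature combinations
# that span a_i and b_j.
```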
Models compared.

- HOFM: $\hat{y}_{i,j} = \hat{y}_{\mathrm{HOFM}}(x_{i,j})$, as defined in (3) and as originally proposed in [13, 14]. We minimize (4) by alternating minimization of (10) for each degree.
- HOFM-shared: $\hat{y}_{i,j} = \sum_{s=1}^{k} \mathcal{A}^{1 \sim m}(p_s, x_{i,j}; \omega)$. We learn $P$ and $\omega$ using the simple augmented data approach described in Section 5.1 (HOFM-shared-augmented). Inspired by SimpleMKL [12], we also report results when learning $P$ and $\omega$ directly by minimizing $\frac{1}{|\Omega|} \sum_{(i,j) \in \Omega} \ell(y_{i,j}, \hat{y}_{i,j}) + \frac{\beta}{2} \|P\|^2$ subject to $\omega \ge 0$ and $\langle \omega, \mathbf{1} \rangle = 1$ (HOFM-shared-simplex).
- All-subsets: $\hat{y}_{i,j} = \sum_{s=1}^{k} \mathcal{S}(p_s, x_{i,j})$. As explained in Section 5.2, this model is equivalent to the HOFM-shared model with $m = n_z(x_{i,j})$ and $\omega = \mathbf{1}$.
- Polynomial network: $\hat{y}_{i,j} = \sum_{s=1}^{k} (b_s + \langle p_s, x_{i,j} \rangle)^m$. This model can be thought of as a factorization machine variant that uses a polynomial kernel instead of the ANOVA kernel (c.f. [8, 4, 22]).
- Low-rank bilinear regression: $\hat{y}_{i,j} = a_i^T U V^T b_j$, where $U \in \mathbb{R}^{d_A \times k}$ and $V \in \mathbb{R}^{d_B \times k}$. Such a model was shown to work well for link prediction in [9] and [10]. We learn $U$ and $V$ by minimizing $\frac{1}{|\Omega|} \sum_{(i,j) \in \Omega} \ell(y_{i,j}, \hat{y}_{i,j}) + \frac{\beta}{2} (\|U\|^2 + \|V\|^2)$.

Experimental setup and evaluation. In this experiment, for all models above, we use CD rather than SG to avoid the tuning of a learning-rate hyper-parameter. We set $\ell$ to be the squared loss. Although we omitted it from our notation for clarity, we also fit a bias term for all models. We evaluated the compared models using the area under the ROC curve (AUC), which is the probability that the model correctly ranks a positive sample higher than a negative sample. We split the $n_+$ positive samples into 50% for training and 50% for testing. We sample the same number of negative samples as positive samples for training and use the rest for testing. We chose $\beta$ from $\{10^{-6}, 10^{-5}, \dots, 10^{6}\}$ by cross-validation, and following [9] we empirically set $k = 30$. Throughout our experiments, we initialized the elements of $P$ randomly by $\mathcal{N}(0, 0.01)$.

Results are indicated in Table 3.

Table 3: Comparison of area under the ROC curve (AUC) as measured on the test sets.

Model                               NIPS    Enzyme   GD      Movielens 100K
HOFM (m = 2)                        0.856   0.880    0.717   0.778
HOFM (m = 3)                        0.875   0.888    0.717   0.786
HOFM (m = 4)                        0.874   0.887    0.717   0.786
HOFM (m = 5)                        0.874   0.887    0.717   0.786
HOFM-shared-augmented (m = 2)       0.858   0.876    0.704   0.778
HOFM-shared-augmented (m = 3)       0.874   0.887    0.704   0.787
HOFM-shared-augmented (m = 4)       0.836   0.824    0.663   0.779
HOFM-shared-augmented (m = 5)       0.824   0.795    0.600   0.621
HOFM-shared-simplex (m = 2)         0.716   0.865    0.721   0.701
HOFM-shared-simplex (m = 3)         0.777   0.870    0.721   0.709
HOFM-shared-simplex (m = 4)         0.758   0.870    0.721   0.709
HOFM-shared-simplex (m = 5)         0.722   0.869    0.721   0.709
All-subsets                         0.730   0.840    0.721   0.714
Polynomial network (m = 2)          0.725   0.879    0.721   0.761
Polynomial network (m = 3)          0.789   0.853    0.719   0.696
Polynomial network (m = 4)          0.782   0.873    0.717   0.708
Polynomial network (m = 5)          0.543   0.524    0.648   0.501
Low-rank bilinear regression        0.855   0.694    0.611   0.718

Overall, the two best models were HOFM and HOFM-shared-augmented, which achieved the best scores on 3 out of 4 datasets. The two models outperformed low-rank bilinear regression on 3 out of 4 datasets, showing the benefit of using higher-order feature combinations. HOFM-shared-augmented achieved similar accuracy to HOFM, despite using a smaller model. Surprisingly, HOFM-shared-simplex did not improve over HOFM-shared-augmented except on the GD dataset. We conclude that our augmented data approach is convenient, yet works well in practice. All-subsets and polynomial networks performed worse than HOFM and HOFM-shared-augmented, except on the GD dataset where they were the best. Finally, we observe that HOFMs were quite robust to increasing $m$, which is likely a benefit of modeling each degree with a separate matrix.

6.2 Solver comparison

We compared AdaGrad [5], L-BFGS and coordinate descent (CD) for minimizing (10) when varying the degree $m$ on the NIPS dataset with $\beta = 0.1$ and $k = 30$. We constructed the data in the same way as explained in the previous section and added $m - 1$ dummy features, resulting in $n = 8{,}280$ sparse samples of dimension $d = 27{,}298 + m - 1$. For AdaGrad and L-BFGS, we computed the (stochastic) gradients using Algorithm 2. All solvers used the same initialization.

[Figure 1: Solver comparison for minimizing (10) when varying the degree $m$ on the NIPS dataset with $\beta = 0.1$ and $k = 30$. Panels: (a) convergence when $m = 2$; (b) convergence when $m = 3$; (c) convergence when $m = 4$; (d) scalability w.r.t. degree $m$. Results on other datasets are in Appendix B.]

Results are indicated in Figure 1. We see that our CD algorithm performs very well when $m \le 3$ but starts to deteriorate when $m \ge 4$, in which case L-BFGS becomes advantageous. As shown in Figure 1(d), the cost per epoch of AdaGrad and L-BFGS scales linearly with $m$, a benefit of our DP algorithm for computing the gradient. However, to our surprise, we found that AdaGrad is quite sensitive to the learning rate $\eta$. AdaGrad diverged for $\eta \in \{1, 0.1, 0.01\}$, and the largest value to work well was $\eta = 0.001$. This explains why AdaGrad did not outperform CD despite the lower cost per epoch. In the future, it would be useful to create a CD algorithm with a better dependency on $m$.

7 Conclusion and future directions

In this paper, we presented the first training algorithms for HOFMs and introduced new HOFM variants with shared parameters. A popular way to deal with a large number of negative samples is to use an objective function that directly maximizes AUC [9, 15]. This is especially easy to do with SG algorithms, because we can sample pairs of positive and negative samples from the dataset upon each SG update. We therefore expect the algorithms developed in Section 3 to be especially useful in this setting. Recently, [7] proposed a distributed SG algorithm for training second-order FMs. It should be straightforward to extend this algorithm to HOFMs based on our contributions in Section 3.
Finally, it should be possible to integrate Algorithms 1 and 2 into a deep learning framework such as TensorFlow [1], in order to easily compose ANOVA kernels with other layers (e.g., convolutional).

References

[1] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
[2] A. G. Baydin, B. A. Pearlmutter, and A. A. Radul. Automatic differentiation in machine learning: a survey. arXiv preprint arXiv:1502.05767, 2015.
[3] M. Blondel, A. Fujino, and N. Ueda. Convex factorization machines. In Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2015.
[4] M. Blondel, M. Ishihata, A. Fujino, and N. Ueda. Polynomial networks and factorization machines: New insights and efficient training algorithms. In Proceedings of International Conference on Machine Learning (ICML), 2016.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
[6] GroupLens. http://grouplens.org/datasets/movielens/, 1998.
[7] M. Li, Z. Liu, A. Smola, and Y.-X. Wang. DiFacto: distributed factorization machines. In Proceedings of International Conference on Web Search and Data Mining (WSDM), 2016.
[8] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855-863, 2014.
[9] A. K. Menon and C. Elkan. Link prediction via matrix factorization. In Machine Learning and Knowledge Discovery in Databases, pages 437-452, 2011.
[10] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene-disease associations. Bioinformatics, 30(12):i60-i68, 2014.
[11] V. Y. Pan. Structured Matrices and Polynomials: Unified Superfast Algorithms. Springer-Verlag New York, Inc., 2001.
[12] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491-2521, 2008.
[13] S. Rendle. Factorization machines. In Proceedings of International Conference on Data Mining, pages 995-1000. IEEE, 2010.
[14] S. Rendle. Factorization machines with libFM. ACM Transactions on Intelligent Systems and Technology (TIST), 3(3):57-78, 2012.
[15] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 452-461, 2009.
[16] S. Rendle, Z. Gantner, C. Freudenthaler, and L. Schmidt-Thieme. Fast context-aware recommendations with factorization machines. In SIGIR, pages 635-644, 2011.
[17] S. Roweis. http://www.cs.nyu.edu/~roweis/data.html, 2002.
[18] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[19] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[20] G. Wahba. Spline Models for Observational Data, volume 59. SIAM, 1990.
[21] Y. Yamanishi, J.-P. Vert, and M. Kanehisa. Supervised enzyme network inference from the integration of genomic data and chemical information. Bioinformatics, 21:i468-i477, 2005.
[22] J. Yang and A. Gittens. Tensor machines for learning target-specific polynomial features. arXiv preprint arXiv:1504.01697, 2015.
A Multi-Batch L-BFGS Method for Machine Learning

Albert S. Berahas
Northwestern University, Evanston, IL
albertberahas@u.northwestern.edu

Jorge Nocedal
Northwestern University, Evanston, IL
j-nocedal@northwestern.edu

Martin Takáč
Lehigh University, Bethlehem, PA
takac.mt@gmail.com

Abstract

The question of how to parallelize the stochastic gradient descent (SGD) method has received much attention in the literature. In this paper, we focus instead on batch methods that use a sizeable fraction of the training set at each iteration to facilitate parallelism, and that employ second-order information. In order to improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. This can cause difficulties because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, illustrates the behavior of the algorithm in a distributed computing platform, and studies its convergence properties for both the convex and nonconvex cases.

1 Introduction

It is common in machine learning to encounter optimization problems involving millions of parameters and very large datasets. To deal with the computational demands imposed by such applications, high performance implementations of stochastic gradient and batch quasi-Newton methods have been developed [1, 11, 9]. In this paper we study a batch approach based on the L-BFGS method [20] that strives to reach the right balance between efficient learning and productive parallelism.

In supervised learning, one seeks to minimize empirical risk,
$$F(w) := \frac{1}{n} \sum_{i=1}^{n} f(w; x_i, y_i) \stackrel{\mathrm{def}}{=} \frac{1}{n} \sum_{i=1}^{n} f_i(w),$$
where $(x_i, y_i)_{i=1}^{n}$ denote the training examples and $f(\cdot\,; x, y) : \mathbb{R}^d \to \mathbb{R}$ is the composition of a prediction function (parametrized by $w$) and a loss function. The training problem consists of finding an optimal choice of the parameters $w \in \mathbb{R}^d$ with respect to $F$, i.e.,
$$\min_{w \in \mathbb{R}^d} F(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w). \qquad (1.1)$$
At present, the preferred optimization method is the stochastic gradient descent (SGD) method [23, 5], and its variants [14, 24, 12], which are implemented either in an asynchronous manner (e.g. when
To benefit from the strength of both methods some high performance systems employ SGD at the start and later switch to a batch method [1]. Multi-Batch Method. In this paper, we follow a different approach consisting of a single method that selects a sizeable subset (batch) of the training data to compute a step, and changes this batch at each iteration to improve the learning abilities of the method. We call this a multi-batch approach to differentiate it from the mini-batch approach used in conjunction with SGD, which employs a very small subset of the training data. When using large batches it is natural to employ a quasiNewton method, as incorporating second-order information imposes little computational overhead and improves the stability and speed of the method. We focus here on the L-BFGS method, which employs gradient information to update an estimate of the Hessian and computes a step in O(d) flops, where d is the number of variables. The multi-batch approach can, however, cause difficulties to L-BFGS because this method employs gradient differences to update Hessian approximations. When the gradients used in these differences are based on different data points, the updating procedure can be unstable. Similar difficulties arise in a parallel implementation of the standard L-BFGS method, if some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time ? as this again amounts to using different data points to evaluate the function and gradient at the beginning and the end of the iteration. The goal of this paper is to show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost, or special synchronization. The key is to perform quasi-Newton updating based on the overlap between consecutive batches. The only restriction is that this overlap should not be too small, something that can be achieved in most situations. Contributions. We describe a novel implementation of the batch L-BFGS method that is robust in the absence of sample consistency; i.e., when different samples are used to evaluate the objective function and its gradient at consecutive iterations. The numerical experiments show that the method proposed in this paper ? which we call the multi-batch L-BFGS method ? achieves a good balance between computation and communication costs. We also analyze the convergence properties of the new method (using a fixed step length strategy) on both convex and nonconvex problems. 2 The Multi-Batch Quasi-Newton Method In a pure batch approach, one applies a gradient based method, such as L-BFGS [20], to the deterministic optimization problem (1.1). When the number n of training examples is large, it is natural to parallelize the evaluation of F and ?F by assigning the computation of the component functions fi to different processors. If this is done on a distributed platform, it is possible for some of the computational nodes to be slower than the rest. In this case, the contribution of the slow (or unresponsive) computational nodes could be ignored given the stochastic nature of the objective function. This leads, however, to an inconsistency in the objective function and gradient at the beginning and at the end of the iteration, which can be detrimental to quasi-Newton methods. Thus, we seek to find a fault-tolerant variant of the batch L-BFGS method that is capable of dealing with slow or unresponsive computational nodes. 
A similar challenge arises in a multi-batch implementation of the L-BFGS method in which the entire training set T = {(xi , y i )ni=1 } is not employed at every iteration, but rather, a subset of the data is used to compute the gradient. Specifically, we consider a method in which the dataset is randomly divided into a number of batches ? say 10, 50, or 100 ? and the minimization is performed with respect to a different batch at every iteration. At the k-th iteration, the algorithm chooses a batch 2 Sk ? {1, . . . , n}, computes 1 X fi (wk ) , F Sk (wk ) = |Sk | i?Sk ?F Sk (wk ) = gkSk = 1 X ?fi (wk ) , |Sk | (2.2) i?Sk and takes a step along the direction ?Hk gkSk , where Hk is an approximation to ?2 F (wk )?1 . Allowing the sample Sk to change freely at every iteration gives this approach flexibility of implementation and is beneficial to the learning process, as we show in Section 4. (We refer to Sk as the sample of training points, even though Sk only indexes those points.) The case of unresponsive computational nodes and the multi-batch method are similar. The main difference is that node failures create unpredictable changes to the samples Sk , whereas a multi-batch method has control over sample generation. In either case, the algorithm employs a stochastic approximation to the gradient and can no longer be considered deterministic. We must, however, distinguish our setting from that of the classical SGD method, which employs small mini-batches and noisy gradient approximations. Our algorithm operates with much larger batches so that distributing the function evaluation is beneficial and the compute time of gkSk is not overwhelmed by communication costs. This gives rise to gradients with relatively small variance and justifies the use of a second-order method such as L-BFGS. Robust Quasi-Newton Updating. The difficulties created by the use of a different sample Sk at each iteration can be circumvented if consecutive samples Sk and Sk+1 overlap, so that Ok = Sk ?Sk+1 6= ?. One can then perform stable quasi-Newton updating by computing gradient differences based on this overlap, i.e., by defining Ok yk+1 = gk+1 ? gkOk , sk+1 = wk+1 ? wk , (2.3) in the notation given in (2.2). The correction pair (yk , sk ) can then be used in the BFGS update. When the overlap set Ok is not too small, yk is a useful approximation of the curvature of the objective function F along the most recent displacement, and will lead to a productive quasi-Newton step. This observation is based on an important property of Newton-like methods, namely that there is much more freedom in choosing a Hessian approximation than in computing the gradient [7, 3]. Thus, a smaller sample Ok can be employed for updating the inverse Hessian approximation Hk than for computing the batch gradient gkSk in the search direction ?Hk gkSk . In summary, by ensuring that unresponsive nodes do not constitute the vast majority of all working nodes in a fault-tolerant parallel implementation, or by exerting a small degree of control over the creation of the samples Sk in the multi-batch method, one can design a robust method that naturally builds upon the fundamental properties of BFGS updating. We should mention in passing that a commonly used strategy for ensuring stability of quasi-Newton updating in machine learning is to enforce gradient consistency [25], i.e., to use the same sample Sk to compute gradient evaluations at the beginning and the end of the iteration. 
Another popular remedy is to use the same batch Sk for multiple iterations [19], alleviating the gradient inconsistency problem at the price of slower convergence. In this paper, we assume that achieving such sample consistency is not possible (in the fault-tolerant case) or desirable (in a multi-batch framework), and wish to design a new variant of L-BFGS that imposes minimal restrictions in the sample changes. 2.1 Specification of the Method At the k-th iteration, the multi-batch BFGS algorithm chooses a set Sk ? {1, . . . , n} and computes a new iterate wk+1 = wk ? ?k Hk gkSk , (2.4) where ?k is the step length, gkSk is the batch gradient (2.2) and Hk is the inverse BFGS Hessian matrix approximation that is updated at every iteration by means of the formula Hk+1 = VkT Hk Vk + ?k sk sTk , ?k = 1 Ts , yk k Vk = I ? ?k yk sTk . To compute the correction vectors (sk , yk ), we determine the overlap set Ok = Sk ? Sk+1 consisting of the samples that are common at the k-th and k + 1-st iterations. We define 1 X 1 X F Ok (wk ) = fi (wk ) , ?F Ok (wk ) = gkOk = ?fi (wk ) , |Ok | |Ok | i?Ok i?Ok 3 and compute the correction vectors as in (2.3). In this paper we assume that ?k is constant. In the limited memory version, the matrix Hk is defined at each iteration as the result of applying m BFGS updates to a multiple of the identity matrix, using a set of m correction pairs {si , yi } kept in storage. The memory parameter m is typically in the range 2 to 20. When computing the matrix-vector product in (2.4) it is not necessary to form that matrix Hk since one can obtain this product via the two-loop recursion [20], using the m most recent correction pairs {si , yi }. After the step has been computed, the oldest pair (sj , yj ) is discarded and the new curvature pair is stored. A pseudo-code of the proposed method is given below, and depends on several parameters. The parameter r denotes the fraction of samples in the dataset used to define the gradient, i.e., r = |S| n . The parameter o denotes the length of overlap between consecutive samples, and is defined as a fraction of the number of samples in a given batch S, i.e., o = |O| |S| . Algorithm 1 Multi-Batch L-BFGS Input: w0 (initial iterate), T = {(xi , y i ), for i = 1, . . . , n} (training set), m (memory parameter), r (batch, fraction of n), o (overlap, fraction of batch), k ? 0 (iteration counter). 1: Create initial batch S0 . As shown in Firgure 1 2: for k = 0, 1, 2, ... do 3: Calculate the search direction pk = ?Hk gkSk . Using L-BFGS formula 4: Choose the step length ?k > 0 5: Compute wk+1 = wk + ?k pk 6: Create the next batch Sk+1 Ok 7: Compute the curvature pairs sk+1 = wk+1 ? wk and yk+1 = gk+1 ? gkOk 8: Replace the oldest pair (si , yi ) by sk+1 , yk+1 9: end for 2.2 Sample Generation We now discuss how the sample Sk+1 is created at each iteration (Line 8 in Algorithm 1). Distributed Computing with Faults. Consider a distributed implementation in which slave nodes read the current iterate wk from the master node, compute a local gradient on a subset of the dataset, and send it back to the master node for aggregation in the calculation (2.2). Given a time (computational) budget, it is possible for some nodes to fail to return a result. The schematic in Figure 1a illustrates the gradient calculation across two iterations, k and k+1, in the presence of faults. Here Bi , i = 1, ..., B denote the batches of data that each slave node i receives (where T = ?i Bi ), ? 
$\widetilde{\nabla F}(w)$ is the gradient calculation using all nodes that responded within the preallocated time.

[Figure 1: Sample and Overlap formation. Panel (a): the master node aggregates the local gradients computed by the slave nodes on batches $B_1, \dots, B_B$ at iterations $k$ and $k+1$. Panel (b): batches $S_0, S_1, S_2, \dots$ drawn in order from the shuffled data, with consecutive batches sharing the overlaps $O_0, O_1, O_2, \dots$]

Let $J_k \subset \{1, 2, \dots, B\}$ and $J_{k+1} \subset \{1, 2, \dots, B\}$ be the sets of indices of all nodes that returned a gradient at the $k$-th and $(k+1)$-st iterations, respectively. Using this notation, $S_k = \cup_{j \in J_k} B_j$ and $S_{k+1} = \cup_{j \in J_{k+1}} B_j$, and we define $O_k = \cup_{j \in J_k \cap J_{k+1}} B_j$. The simplest implementation in this setting preallocates the data on each compute node, requiring minimal data communication, i.e., only one data transfer. In this case the samples $S_k$ will be independent if node failures occur randomly. On the other hand, if the same set of nodes fail, then sample creation will be biased, which is harmful both in theory and practice. One way to ensure independent sampling is to shuffle and redistribute the data to all nodes after a certain number of iterations.

Multi-batch Sampling. We propose two strategies for the multi-batch setting. Figure 1b illustrates the sample creation process in the first strategy. The dataset is shuffled and batches are generated by collecting subsets of the training set, in order. Every set (except $S_0$) is of the form $S_k = \{O_{k-1}, N_k, O_k\}$, where $O_{k-1}$ and $O_k$ are the overlapping samples with batches $S_{k-1}$ and $S_{k+1}$ respectively, and $N_k$ are the samples that are unique to batch $S_k$. After each pass through the dataset, the samples are reshuffled, and the procedure described above is repeated. In our implementation samples are drawn without replacement, guaranteeing that after every pass (epoch) all samples are used. This strategy has the advantage that it requires no extra computation in the evaluation of $g_k^{O_k}$ and $g_{k+1}^{O_k}$, but the samples $\{S_k\}$ are not independent.

The second sampling strategy is simpler and requires less control. At every iteration $k$, a batch $S_k$ is created by randomly selecting $|S_k|$ elements from $\{1, \dots, n\}$. The overlapping set $O_k$ is then formed by randomly selecting $|O_k|$ elements from $S_k$ (subsampling). This strategy is slightly more expensive, since $g_{k+1}^{O_k}$ requires extra computation, but if the overlap is small this cost is not significant.
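A sketch of the first strategy (names ours): consecutive windows over a shuffled permutation share exactly $|O|$ indices, giving $S_k = \{O_{k-1}, N_k, O_k\}$. For simplicity, this toy version does not control the overlap across the epoch boundary.

```python
import numpy as np

def overlapping_batches(n, r, o, seed=0):
    """Generate batches S_k of size |S| = r*n from shuffled indices,
    with consecutive batches overlapping in |O| = o*|S| points."""
    rng = np.random.default_rng(seed)
    size = int(r * n)
    step = size - max(1, int(o * size))  # shift < size => overlap
    while True:
        perm = rng.permutation(n)        # reshuffle every epoch
        for start in range(0, n - size + 1, step):
            yield perm[start:start + size]
```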
3 Convergence Analysis

In this section, we analyze the convergence properties of the multi-batch L-BFGS method (Algorithm 1) when applied to the minimization of strongly convex and nonconvex objective functions, using a fixed step length strategy. We assume that the goal is to minimize the empirical risk $F$ given in (1.1), but note that a similar analysis could be used to study the minimization of the expected risk.

3.1 Strongly Convex Case

Due to the stochastic nature of the multi-batch approach, every iteration of Algorithm 1 employs a gradient that contains errors that do not converge to zero. Therefore, by using a fixed step length strategy one cannot establish convergence to the optimal solution $w^*$, but only convergence to a neighborhood of $w^*$ [18]. Nevertheless, this result is of interest, as it reflects the common practice of using a fixed step length and decreasing it only if the desired testing error has not been achieved. It also illustrates the tradeoffs that arise between the size of the batch and the step length.

In our analysis, we make the following assumptions about the objective function and the algorithm.

Assumptions A.
1. $F$ is twice continuously differentiable.
2. There exist positive constants $\hat{\lambda}$ and $\hat{\Lambda}$ such that $\hat{\lambda} I \preceq \nabla^2 F^{O}(w) \preceq \hat{\Lambda} I$ for all $w \in \mathbb{R}^d$ and all sets $O \subseteq \{1, 2, \dots, n\}$.
3. There is a constant $\gamma$ such that $\mathbb{E}_S\big[\|\nabla F^{S}(w)\|^2\big] \le \gamma^2$ for all $w \in \mathbb{R}^d$ and all sets $S \subseteq \{1, 2, \dots, n\}$.
4. The samples $S$ are drawn independently and $\nabla F^{S}(w)$ is an unbiased estimator of the true gradient $\nabla F(w)$ for all $w \in \mathbb{R}^d$, i.e., $\mathbb{E}_S[\nabla F^{S}(w)] = \nabla F(w)$.

Note that Assumption A.2 implies that the entire Hessian $\nabla^2 F(w)$ also satisfies $\lambda I \preceq \nabla^2 F(w) \preceq \Lambda I$ for all $w \in \mathbb{R}^d$, for some constants $\lambda, \Lambda > 0$. Assuming that every sub-sampled function $F^{O}(w)$ is strongly convex is not unreasonable, as a regularization term is commonly added in practice when that is not the case.

We begin by showing that the inverse Hessian approximations $H_k$ generated by the multi-batch L-BFGS method have eigenvalues that are uniformly bounded above and away from zero. The proof technique used is an adaptation of that in [8].

Lemma 3.1. If Assumptions A.1-A.2 above hold, there exist constants $0 < \mu_1 \le \mu_2$ such that the Hessian approximations $\{H_k\}$ generated by Algorithm 1 satisfy $\mu_1 I \preceq H_k \preceq \mu_2 I$ for $k = 0, 1, 2, \dots$

Utilizing Lemma 3.1, we show that the multi-batch L-BFGS method with a constant step length converges to a neighborhood of the optimal solution.

Theorem 3.2. Suppose that Assumptions A.1-A.4 hold and let $F^* = F(w^*)$, where $w^*$ is the minimizer of $F$. Let $\{w_k\}$ be the iterates generated by Algorithm 1 with $\alpha_k = \alpha \in \left(0, \frac{1}{2 \mu_1 \hat{\lambda}}\right)$, starting from $w_0$. Then for all $k \ge 0$,
$$\mathbb{E}[F(w_k) - F^*] \le \left(1 - 2\alpha \mu_1 \hat{\lambda}\right)^k \left[F(w_0) - F^*\right] + \left[1 - \left(1 - 2\alpha \mu_1 \hat{\lambda}\right)^k\right] \frac{\alpha \mu_2^2 \gamma^2 \hat{\Lambda}}{4 \mu_1 \hat{\lambda}},$$
and hence $\mathbb{E}[F(w_k) - F^*] \to \frac{\alpha \mu_2^2 \gamma^2 \hat{\Lambda}}{4 \mu_1 \hat{\lambda}}$ as $k \to \infty$.

The bound provided by this theorem has two components: (i) a term decaying linearly to zero, and (ii) a term identifying the neighborhood of convergence. Note that a larger step length yields a more favorable constant in the linearly decaying term, at the cost of an increase in the size of the neighborhood of convergence. We will consider again these tradeoffs in Section 4, where we also note that larger batches increase the opportunities for parallelism and improve the limiting accuracy in the solution, but slow down the learning abilities of the algorithm.

One can establish convergence of the multi-batch L-BFGS method to the optimal solution $w^*$ by employing a sequence of step lengths $\{\alpha_k\}$ that converge to zero according to the schedule proposed by Robbins and Monro [23]. However, that provides only a sublinear rate of convergence, which is of little interest in our context, where large batches are employed and some type of linear convergence is expected. In this light, Theorem 3.2 is more relevant to practice.

3.2 Nonconvex Case

The BFGS method is known to fail on nonconvex problems [17, 10]. Even for L-BFGS, which makes only a finite number of updates at each iteration, one cannot guarantee that the Hessian approximations have eigenvalues that are uniformly bounded above and away from zero. To establish convergence of the BFGS method in the nonconvex case, cautious updating procedures have been proposed [15]. Here we employ a cautious strategy that is well suited to our particular algorithm; we skip the update, i.e., set $H_{k+1} = H_k$, if the curvature condition
$$y_k^T s_k \ge \epsilon \|s_k\|^2 \qquad (3.5)$$
is not satisfied, where $\epsilon > 0$ is a predetermined constant.
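In code, the cautious rule is a one-line guard on the pair storage of the earlier sketch (again our naming; `eps` is the predetermined constant $\epsilon$ of (3.5)):

```python
def store_pair_cautious(pairs, s, y, eps, m=10):
    """Keep (s, y) only if the curvature condition (3.5) holds,
    i.e. y^T s >= eps * ||s||^2; otherwise skip, so H_{k+1} = H_k."""
    if y.dot(s) >= eps * s.dot(s):
        pairs.append((s, y))
        if len(pairs) > m:
            pairs.popleft()
```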
Using said mechanism, we show that the eigenvalues of the Hessian matrix approximations generated by the multi-batch L-BFGS method are bounded above and away from zero (Lemma 3.3). The analysis presented in this section is based on the following assumptions.

Assumptions B.
1. $F$ is twice continuously differentiable.
2. The gradients of $F$ are $\Lambda$-Lipschitz continuous, and the gradients of $F^{O}$ are $\Lambda_O$-Lipschitz continuous for all $w \in \mathbb{R}^d$ and all sets $O \subseteq \{1, 2, \dots, n\}$.
3. The function $F(w)$ is bounded below by a scalar $\widehat{F}$.
4. There exist constants $\gamma \ge 0$ and $\eta > 0$ such that $\mathbb{E}_S\big[\|\nabla F^{S}(w)\|^2\big] \le \gamma^2 + \eta \|\nabla F(w)\|^2$ for all $w \in \mathbb{R}^d$ and all sets $S \subseteq \{1, 2, \dots, n\}$.
5. The samples $S$ are drawn independently and $\nabla F^{S}(w)$ is an unbiased estimator of the true gradient $\nabla F(w)$ for all $w \in \mathbb{R}^d$, i.e., $\mathbb{E}[\nabla F^{S}(w)] = \nabla F(w)$.

Lemma 3.3. Suppose that Assumptions B.1-B.2 hold and let $\epsilon > 0$ be given. Let $\{H_k\}$ be the Hessian approximations generated by Algorithm 1, with the modification that $H_{k+1} = H_k$ whenever (3.5) is not satisfied. Then, there exist constants $0 < \mu_1 \le \mu_2$ such that $\mu_1 I \preceq H_k \preceq \mu_2 I$ for $k = 0, 1, 2, \dots$

We can now follow the analysis in [4, Chapter 4] to establish the following result about the behavior of the gradient norm for the multi-batch L-BFGS method with a cautious update strategy.

Theorem 3.4. Suppose that Assumptions B.1-B.5 above hold, and let $\epsilon > 0$ be given. Let $\{w_k\}$ be the iterates generated by Algorithm 1, with $\alpha_k = \alpha \in \left(0, \frac{\mu_1}{\mu_2^2 \Lambda \eta}\right)$, starting from $w_0$, and with the modification that $H_{k+1} = H_k$ whenever (3.5) is not satisfied. Then,
$$\mathbb{E}\left[\frac{1}{L} \sum_{k=0}^{L-1} \|\nabla F(w_k)\|^2\right] \le \frac{\alpha \mu_2^2 \Lambda \gamma^2}{\mu_1} + \frac{2\left[F(w_0) - \widehat{F}\right]}{\alpha \mu_1 L},$$
which tends to $\frac{\alpha \mu_2^2 \Lambda \gamma^2}{\mu_1}$ as $L \to \infty$.

This result bounds the average norm of the gradient of $F$ after the first $L - 1$ iterations, and shows that the iterates spend increasingly more time in regions where the objective function has a small gradient.

4 Numerical Results

In this section, we present numerical results that evaluate the proposed robust multi-batch L-BFGS scheme (Algorithm 1) on logistic regression problems. Figure 2 shows the performance on the webspam dataset (obtained from LIBSVM: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html), where we compare it against three methods: (i) multi-batch L-BFGS without enforcing sample consistency (L-BFGS), where gradient differences are computed using different samples, i.e., $y_k = g_{k+1}^{S_{k+1}} - g_k^{S_k}$; (ii) multi-batch gradient descent (Gradient Descent), which is obtained by setting $H_k = I$ in Algorithm 1; and (iii) serial SGD, where at every iteration one sample is used to compute the gradient. We run each method with 10 different random seeds and, where applicable, report results for different batch ($r$) and overlap ($o$) sizes. The proposed method is more stable than the standard L-BFGS method; this is especially noticeable when $r$ is small. On the other hand, serial SGD achieves similar accuracy as the robust L-BFGS method and at a similar rate (e.g., $r = 1\%$), at the cost of $n$ communications per epoch versus $\frac{1}{r(1-o)}$ communications per epoch. Figure 2 also indicates that the robust L-BFGS method is not too sensitive to the size of the overlap. Similar behavior was observed on other datasets, in regimes where $r \cdot o$ was not too small. We mention in passing that the L-BFGS step was computed using the vector-free implementation proposed in [9].
[Figure 2: webspam dataset. Comparison of Robust L-BFGS, L-BFGS (multi-batch L-BFGS without enforcing sample consistency), Gradient Descent (multi-batch gradient method) and SGD, plotting $\|\nabla F(w)\|$ against epochs for various batch ($r$) and overlap ($o$) sizes. Solid lines show average performance, and dashed lines show worst and best performance, over 10 runs (per algorithm). $K = 16$ MPI processes.]

We also explore the performance of the robust multi-batch L-BFGS method in the presence of node failures (faults), and compare it to the multi-batch variant that does not enforce sample consistency (L-BFGS). Figure 3 illustrates the performance of the methods on the webspam dataset for various probabilities of node failure $p \in \{0.1, 0.3, 0.5\}$, and suggests that the robust L-BFGS variant is more stable.

[Figure 3: webspam dataset. Comparison of Robust L-BFGS and L-BFGS (multi-batch L-BFGS without enforcing sample consistency), for various node failure probabilities $p$. Solid lines show average performance, and dashed lines show worst and best performance, over 10 runs (per algorithm). $K = 16$ MPI processes.]

Lastly, we study the strong and weak scaling properties of the robust L-BFGS method on artificial data (Figure 4). We measure the time needed to compute a gradient (Gradient) and the associated communication (Gradient+C), as well as the time needed to compute the L-BFGS direction (L-BFGS) and the associated communication (L-BFGS+C), for various batch sizes ($r$). The figure on the left shows strong scaling of multi-batch L-BFGS on a $d = 10^4$ dimensional problem with $n = 10^7$ samples. The size of the input data is 24 GB, and we vary the number of MPI processes, $K \in \{1, 2, \dots, 128\}$. The time it takes to compute the gradient decreases with $K$; however, for small values of $r$, the communication time exceeds the compute time. The figure on the right shows weak scaling on a problem of similar size, but with varying sparsity. Each sample has $10 \times K$ non-zero elements, thus for any $K$ the size of the local problem is roughly 1.5 GB (for $K = 128$ the size of the data is 192 GB). We observe almost constant time for the gradient computation, while the cost of computing the L-BFGS direction decreases with $K$; however, if communication is considered, the overall time needed to compute the L-BFGS direction increases slightly.
5 Conclusion

This paper describes a novel variant of the L-BFGS method that is robust and efficient in two settings. The first occurs in the presence of node failures in a distributed computing implementation; the second arises when one wishes to employ a different batch at each iteration in order to accelerate learning. The proposed method avoids the pitfalls of using inconsistent gradient differences by performing quasi-Newton updating based on the overlap between consecutive samples. Numerical results show that the method is efficient in practice, and a convergence analysis illustrates its theoretical properties.

Acknowledgements

The first two authors were supported by the Office of Naval Research award N000141410313, the Department of Energy grant DE-FG02-87ER25047 and the National Science Foundation grant DMS-1620022. Martin Takáč was supported by National Science Foundation grant CCF-1618717.

References
[1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. The Journal of Machine Learning Research, 15(1):1111–1133, 2014.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, volume 23. Prentice Hall, Englewood Cliffs, NJ, 1989.
[3] R. Bollapragada, R. Byrd, and J. Nocedal. Exact and inexact subsampled Newton methods for optimization. arXiv preprint arXiv:1609.08502, 2016.
[4] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
[5] L. Bottou and Y. LeCun. Large scale online learning. In NIPS, pages 217–224, 2004.
[6] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In NIPS, pages 161–168, 2008.
[7] R. H. Byrd, G. M. Chin, W. Neveitt, and J. Nocedal. On the use of stochastic Hessian information in optimization methods for machine learning. SIAM Journal on Optimization, 21(3):977–995, 2011.
[8] R. H. Byrd, S. L. Hansen, J. Nocedal, and Y. Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008–1031, 2016.
[9] W. Chen, Z. Wang, and J. Zhou. Large-scale L-BFGS using MapReduce. In NIPS, pages 1332–1340, 2014.
[10] Y.-H. Dai. Convergence properties of the BFGS algorithm. SIAM Journal on Optimization, 13(3):693–701, 2002.
[11] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In NIPS, pages 1223–1231, 2012.
[12] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, pages 1646–1654, 2014.
[13] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. Book in preparation for MIT Press, 2016.
[14] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[15] D.-H. Li and M. Fukushima.
On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM Journal on Optimization, 11(4):1054–1064, 2001.
[16] H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. arXiv preprint arXiv:1507.06970, 2015.
[17] W. F. Mascarenhas. The BFGS method with exact line searches fails for non-convex objective functions. Mathematical Programming, 99(1):49–61, 2004.
[18] A. Nedić and D. Bertsekas. Convergence rate of incremental subgradient algorithms. In Stochastic Optimization: Algorithms and Applications, pages 223–264. Springer, 2001.
[19] J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, Q. V. Le, and A. Y. Ng. On optimization methods for deep learning. In ICML, pages 265–272, 2011.
[20] J. Nocedal and S. Wright. Numerical Optimization. Springer New York, 2nd edition, 1999.
[21] M. J. Powell. Some global convergence properties of a variable metric algorithm for minimization without exact line searches. Nonlinear Programming, 9(1):53–72, 1976.
[22] B. Recht, C. Re, S. Wright, and F. Niu. HOGWILD!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, pages 693–701, 2011.
[23] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[24] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, pages 1–30, 2016.
[25] N. N. Schraudolph, J. Yu, and S. Günter. A stochastic quasi-Newton method for online convex optimization. In AISTATS, pages 436–443, 2007.
[26] M. Takáč, A. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for SVMs. In ICML, pages 1022–1030, 2013.
[27] Y. Zhang and X. Lin. DiSCO: Distributed optimization for self-concordant empirical loss. In ICML, pages 362–370, 2015.
SoundNet: Learning Sound Representations from Unlabeled Video

Yusuf Aytar* MIT yusuf@csail.mit.edu    Carl Vondrick* MIT vondrick@mit.edu    Antonio Torralba MIT torralba@mit.edu

Abstract

We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.

1 Introduction

The fields of object recognition, speech recognition, machine translation have been revolutionized by the emergence of massive labeled datasets [31, 42, 10] and learned deep representations [17, 33, 10, 35]. However, there has not yet been the same corresponding progress in natural sound understanding tasks. We attribute this partly to the lack of large labeled datasets of sound, which are often both expensive and ambiguous to collect. We believe that large-scale sound data can also significantly advance natural sound understanding.

In this paper, we leverage over one year of sounds collected in-the-wild to learn semantically rich sound representations. We propose to scale up by capitalizing on the natural synchronization between vision and sound to learn an acoustic representation from unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about sound. Recent progress in computer vision has enabled machines to recognize scenes and objects in images and videos with good accuracy. We show how to transfer this discriminative visual knowledge into sound using unlabeled video as a bridge.

We present a deep convolutional network that learns directly on raw audio waveforms, which is trained by transferring knowledge from vision into sound. Although the network is trained with visual supervision, the network has no dependence on vision during inference. In our experiments, we show that the representation learned by our network obtains state-of-the-art accuracy on three standard acoustic scene classification datasets. Since we can leverage large amounts of unlabeled sound data, it is feasible to train deeper networks without significant overfitting, and our experiments suggest deeper models perform better. Visualizations of the representation suggest that the network is also learning high-level detectors, such as recognizing bird chirps or crowds cheering, even though it is trained directly from audio without ground truth labels.

* contributed equally
[Figure 1 omitted in this extraction: schematic of the pipeline — RGB frames from unlabeled video are fed to visual recognition networks (an ImageNet CNN producing the object distribution and a Places CNN producing the scene distribution), whose outputs supervise, via KL-divergence, a deep 1-D convolutional network (conv1 through conv8 with pooling layers) applied to the raw waveform.]

Figure 1: SoundNet: We propose a deep convolutional architecture for natural sound recognition. We train the network by transferring discriminative knowledge from visual recognition networks into sound networks. Our approach capitalizes on the synchronization of vision and sound in video.

The primary contribution of this paper is the development of a large-scale and semantically rich representation for natural sound. We believe large-scale models of natural sounds can have a large impact in many real-world applications, such as robotics and cross-modal understanding. The remainder of this paper describes our method and experiments in detail. We first review related work. In section 2, we describe our unlabeled video dataset and in section 3 we present our network and training procedure. Finally, in section 4 we conclude with experiments on standard benchmarks and show several visualizations of the learned representation. Code, data, and models will be released.

1.1 Related Work

Sound Recognition: Although large-scale audio understanding has been extensively studied in the context of music [5, 37] and speech recognition [10], we focus on understanding natural, in-the-wild sounds. Acoustic scene classification, classifying sound excerpts into existing acoustic scene/object categories, is predominantly based on applying a variety of general classifiers (SVMs, GMMs, etc.) to manually crafted sound features (MFCC, spectrograms, etc.) [4, 29, 21, 30, 34, 32]. Even though there are unsupervised [20] and supervised [27, 23, 6, 12] deep learning methods applied to sound classification, the models are limited by the amount of available labeled natural sound data. We distinguish ourselves from the existing literature by training a deep fully convolutional network on a large-scale dataset (2M videos). This allows us to train much deeper networks. Another key advantage of our approach is that we supervise our sound recognition network through semantically rich visual discriminative models [33, 17] which proved their robustness on a variety of large-scale object/scene categorization challenges [31, 42]. [26] also investigates the relation between vision and sound modalities, but focuses on producing sound from image sequences. Concurrent work [11] also explores video as a form of weak labeling for audio event classification.

Transfer Learning: Transfer learning is widely studied within computer vision, such as transferring knowledge for object detection [1, 2] and segmentation [18]; however, transferring from vision to other modalities has only recently become possible with the emergence of high-performance visual models [33, 17]. Our method builds upon teacher-student models [3, 9] and dark knowledge transfer [13]. In [3, 13] the basic idea is to compress (i.e., transfer) discriminative knowledge from a well-trained complex model to a simpler model without losing considerable accuracy. In [3] and [13] both the teacher and the student are in the same modality, whereas in our approach the teacher operates on vision to train the student model in sound. [9] also transfers visual supervision into depth models.

Cross-Modal Learning and Unlabeled Video: Our approach is broadly inspired by efforts to model cross-modal relations [24, 14, 7, 26] and works that leverage large amounts of unlabeled video [25, 41, 8, 40, 39].
In this work, we leverage the natural synchronization between vision and sound to learn a deep representation of natural sounds without ground truth sound labels.

[Figure 2 omitted in this extraction: sample frames from the unlabeled video dataset, automatically categorized as Beach, Classroom, Construction, River, Club, Forrest, Hockey, Playroom, Engine and Vegetation.]

Figure 2: Unlabeled Video Dataset: Sample frames from our 2+ million video dataset. For visualization purposes, each frame is automatically categorized by object and scene vision networks.

2 Large Unlabeled Video Dataset

We seek to learn a representation for sound by leveraging massive amounts of unlabeled videos. While there are a variety of sources available on the web (e.g., YouTube, Flickr), we chose to use videos from Flickr because they are natural, not professionally edited, short clips that capture various sounds in everyday, in-the-wild situations. We downloaded over two million videos from Flickr by querying for popular tags [36] and dictionary words, which resulted in over one year of continuous natural sound and video, which we use for training. The length of each video varies from a few seconds to several minutes. We show a small sample of frames from the video dataset in Figure 2.

We wish to process sound waves in the raw. Hence, the only post-processing we did on the videos was to convert sound to MP3s, reduce the sampling rate to 22 kHz, and convert to single-channel audio. Although this slightly degrades the quality of the sound, it allows us to more efficiently operate on large datasets. We also scaled the waveform to be in the range [−256, 256]. We did not need to subtract the mean because it was naturally near zero already.

3 Learning Sound Representations

3.1 Deep Convolutional Sound Network

Convolutional Network: We present a deep convolutional architecture for learning sound representations. We propose to use a series of one-dimensional convolutions followed by nonlinearities (i.e., ReLU layers) in order to process sound. Convolutional networks are well-suited for audio signals for a couple of reasons. Firstly, like images [19], we desire our network to be invariant to translations, a property that reduces the number of parameters we need to learn and increases efficiency. Secondly, convolutional networks allow us to stack layers, which enables us to detect higher-level concepts through a series of lower-level detectors.

Variable Length Input/Output: Since sound can vary in temporal length, we desire our network to handle variable-length inputs. To do this, we use a fully convolutional network. As convolutional layers are invariant to location, we can convolve each layer depending on the length of the input. Consequently, in our architecture, we only use convolutional and pooling layers. Since the representation adapts to the input length, we must design the output layers to work with variable-length inputs as well. While we could have used a global pooling strategy [37] to down-sample variable-length inputs to a fixed dimensional vector, such a strategy may unnecessarily discard information useful for high-level representations. Since we ultimately aim to train this network with video, which is also variable length, we instead use a convolutional output layer to produce an output over multiple timesteps in video. This strategy is similar to a spatial loss in images [22], but instead temporally.

Network Depth: Since we will use a large amount of video to train, it is feasible to use deep architectures without significant over-fitting. We experiment with both five-layer and eight-layer networks.
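The network consumes raw waveforms prepared as in Section 2. A minimal sketch of that preprocessing is shown below; librosa is an assumption here (any decoder returning floats in [−1, 1] works), and the paper's actual pipeline went through MP3 re-encoding rather than this exact call:

```python
import numpy as np
import librosa  # assumption: any audio decoder that yields floats in [-1, 1]

def load_waveform(path, sr=22050):
    """Mono waveform at 22 kHz, scaled to [-256, 256] as in Section 2."""
    x, _ = librosa.load(path, sr=sr, mono=True)   # float32 in [-1, 1]
    return (256.0 * x).astype(np.float32)         # mean is already near zero
```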
Table 1: SoundNet (8 Layer): The configuration of the layers for the 8-layer SoundNet.

  Layer          conv1     pool1    conv2    pool2   conv3   conv4   conv5   pool5   conv6   conv7   conv8
  Dim.           220,050   27,506   13,782   1,722   862     432     217     54      28      15      4
  # of Filters   16        16       32       32      64      128     256     256     512     1024    1401
  Filter Size    64        8        32       8       16      8       4       4       4       4       8
  Stride         2         1        2        1       2       2       2       1       2       2       2
  Padding        32        0        16       0       8       4       2       0       2       2       0

Table 2: SoundNet (5 Layer): The configuration for the 5-layer SoundNet.

  Layer          conv1     pool1    conv2    pool2   conv3   pool3   conv4   conv5
  Dim.           220,050   27,506   13,782   1,722   862     432     217     54
  # of Filters   32        32       64       64      128     128     256     1401
  Filter Size    8         32       8        16      8       8       16      64
  Stride         2         8        2        8       2       8       2       12
  Padding        0         16       0        8       0       4       4       32

We visualize the eight-layer network architecture in Figure 1, which consists of 8 convolutional layers and 3 max-pooling layers. We show the layer configurations in Table 1 and Table 2.

3.2 Visual Transfer into Sound

The main idea in this paper is to leverage the natural synchronization between vision and sound in unlabeled video in order to learn a representation for sound. We model the learning problem from a student-teacher perspective. In our case, state-of-the-art networks for vision will teach our network for sound to recognize scenes and objects. Let x_i ∈ ℝ^D be a waveform and y_i ∈ ℝ^{3×T×W×H} be its corresponding video for 1 ≤ i ≤ N, where W, H, T are the width, height and number of sampled frames in the video, respectively. During learning, we aim to use the posterior probabilities from a teacher vision network g_k(y_i) in order to train our student network f_k(x_i) to recognize concepts given sound. As we wish to transfer knowledge from both object and scene networks, k enumerates the concepts we are transferring. During learning, we optimize

  min_θ Σ_{k=1}^{K} Σ_{i=1}^{N} D_KL( g_k(y_i) ‖ f_k(x_i; θ) ),   where   D_KL(P ‖ Q) = Σ_j P_j log(P_j / Q_j)

is the KL-divergence. While there are a variety of distance metrics we could have used, we chose the KL-divergence because the outputs from the vision network g_k can be interpreted as a distribution over categories. As the KL-divergence is differentiable, we optimize it using back-propagation [19] and stochastic gradient descent. We transfer from both scene and object visual networks (K = 2).

3.3 Sound Classification

Although we train SoundNet to classify visual categories, the categories we wish to recognize may not appear in the visual models (e.g., sneezing). Consequently, we use a different strategy to attach semantic meaning to sounds. We ignore the output layer of our network and use the internal representation as features for training classifiers, using a small amount of labeled sound data for the concepts of interest. We pick a layer in the network to use as features and train a linear SVM. For multi-class classification, we use a one-vs-all strategy. We perform cross-validation to pick the margin regularization hyperparameter. For robustness, we follow a standard data augmentation procedure where each training sample is split into overlapping fixed-length sound excerpts, which we compute features on and use for training. During inference, we average predictions across all windows.

3.4 Implementation

Our approach is implemented in Torch7. We use the Adam [16] optimizer and a fixed learning rate of 0.001 and momentum term of 0.9 throughout our experiments. We experimented with several batch sizes, and found 64 to produce good results. We initialized all the weights to zero-mean Gaussian noise with a standard deviation of 0.01. After every convolution, we use batch normalization [15] and rectified linear activation units [17]. We train the network for 100,000 iterations. Optimization typically took 1 day on a GPU.
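To make the training objective concrete, here is a minimal PyTorch sketch of a truncated SoundNet front-end (first two conv/pool blocks, with kernel sizes and paddings taken from Table 1, but with pooling strides chosen here as the pool width) together with the KL transfer loss of Section 3.2. The original implementation was in Torch7, so everything below — the truncation, the output head, the tensor shapes — is an illustrative re-expression rather than the released code; the 1401 output concepts match conv8 in Table 1 (presumably 1000 object plus 401 scene posteriors).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySoundNet(nn.Module):
    """Illustrative truncation of the 8-layer SoundNet: 1-D convolutions
    with batch norm and ReLU, fully convolutional so input length may vary."""
    def __init__(self, num_concepts=1401):   # matches conv8 in Table 1
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=2, padding=32),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=32, stride=2, padding=16),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(8),
        )
        # convolutional output head: one concept distribution per time step
        self.head = nn.Conv1d(32, num_concepts, kernel_size=8)

    def forward(self, x):                    # x: (batch, 1, num_samples)
        return self.head(self.features(x))   # (batch, concepts, time steps)

def transfer_loss(student_logits, teacher_probs):
    """KL(g(y) || f(x; theta)) from Section 3.2, averaged over the batch.
    teacher_probs must have the same (batch, concepts, time) shape."""
    log_q = F.log_softmax(student_logits, dim=1)
    return F.kl_div(log_q, teacher_probs, reduction="batchmean")
```

Here `teacher_probs` stands for the frame-level softmax outputs of the ImageNet and Places teacher networks, aligned to the student's output time steps.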
4 Experiments

Experimental Setup: We split the unlabeled video dataset into a training set and a held-out validation set. We use 2,000,000 videos for training, and the remaining 140,000 videos for validation. After training the network, we use the hidden representation as a feature extractor for learning on smaller, labeled sound-only datasets. We extract features for a given layer, and train an SVM on the task of interest. For training the SVM, we use the standard training/test splits of the datasets. We report classification accuracy.

Baselines: In addition to published baselines on standard datasets, we explored an additional baseline trained on our unlabeled videos. We experimented with a convolutional autoencoder for sound, trained over our video dataset. We use an autoencoder with 4 encoder layers and 4 decoder layers. For the encoder layers, we used the same first four convolutional layers as SoundNet. For the decoders, we used fractionally strided convolutional layers (in order to upsample instead of downsample). Note that we experimented with deeper autoencoders, but they performed worse. We used mean squared error for the reconstruction loss, and trained the autoencoders for several days.

Table 3: Acoustic Scene Classification on DCASE: We evaluate classification accuracy on the DCASE dataset. By leveraging large amounts of unlabeled video, SoundNet generally outperforms hand-crafted features by 10%.

  Method     RG [29]   LTT [21]   RNH [30]   Ensemble [34]   SoundNet
  Accuracy   69%       72%        77%        78%             88%

Table 4: Acoustic Scene Classification on ESC-50 and ESC-10: We evaluate classification accuracy on the ESC datasets. Results suggest that deep convolutional sound networks trained with visual supervision on unlabeled data outperform the baselines.

                                 Accuracy on
  Method                       ESC-50    ESC-10
  SVM-MFCC [28]                39.6%     67.5%
  Convolutional Autoencoder    39.9%     74.3%
  Random Forest [28]           44.3%     72.7%
  Piczak ConvNet [27]          64.5%     81.0%
  SoundNet                     74.2%     92.2%
  Human Performance [28]       81.3%     95.7%

4.1 Acoustic Scene Classification

We evaluate the SoundNet representation for acoustic scene classification. The aim in this task is to categorize sound clips into one of many acoustic scene categories. We use three standard, publicly available datasets: the DCASE Challenge [34], ESC-50 [28], and ESC-10 [28].

DCASE [34]: One of the tasks in the Detection and Classification of Acoustic Scenes and Events Challenge (DCASE) [34] is to recognize scenes from natural sounds. In the challenge, there are 10 acoustic scene categories, 10 training examples per category, and 100 held-out testing examples. Each example is a 30-second audio recording. The task is to categorize natural sounds into the existing 10 acoustic scene categories. Multi-class classification accuracy is used as the performance metric.

ESC-50 and ESC-10 [28]: The ESC-50 dataset is a collection of 2000 short (5 second) environmental sound recordings of 50 equally balanced categories selected from 5 major groups (animals, natural soundscapes, human non-speech sounds, interior/domestic sounds, and exterior/urban noises). Each category has 40 samples. The data is prearranged into 5 folds and the accuracy results are reported as the mean of 5 leave-one-fold-out evaluations. The performance of untrained human participants on this dataset is 81.3% [28]. ESC-10 is a subset of ESC-50 which consists of 10 classes (dog bark, rain, sea waves, baby cry, clock tick, person sneeze, helicopter, chainsaw, rooster, and fire crackling). The human performance on this dataset is 95.7%.
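On all three benchmarks, the Section 3.3 protocol reduces to a few lines; scikit-learn is an assumption here (the paper does not name its SVM package). Features are extracted per fixed-length window, a linear one-vs-all SVM is trained on windows, and test-time scores are averaged over the windows of each recording:

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumption: any linear SVM implementation

def windows(x, length, hop):
    """Overlapping fixed-length excerpts used for data augmentation."""
    return [x[i:i + length] for i in range(0, len(x) - length + 1, hop)]

def fit_classifier(window_features, window_labels, C=1.0):
    # LinearSVC is one-vs-rest by default; C is picked by cross-validation
    return LinearSVC(C=C).fit(np.stack(window_features),
                              np.asarray(window_labels))

def predict_recording(clf, window_features):
    scores = clf.decision_function(np.stack(window_features))
    return scores.mean(axis=0).argmax()   # average scores over all windows
```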
We have two major evaluations in this section: (a) a comparison with the existing state-of-the-art results, and (b) a diagnostic performance evaluation of inner layers of SoundNet as generic features for this task. In DCASE we used 5-second excerpts, and in the ESC datasets we used 1-second windows. In both evaluations a multi-class SVM (multiple one-vs-all classifiers) is trained over extracted SoundNet features. The same data augmentation procedure is also applied during testing, and the mean score over all sound excerpts is used as the final score of a test recording for any particular category.

Comparison to State-of-the-Art: Tables 3 and 4 compare the recognition performance of SoundNet features versus previous state-of-the-art features on three datasets. In all cases SoundNet features outperformed the existing results by around 10%. Interestingly, SoundNet features approach human performance on the ESC-10 dataset; however, we stress that this dataset may be easy. We report the confusion matrix across all folds on ESC-50 in Figure 3. The results suggest our approach obtains very good performance on categories such as toilet flush (97% accuracy) or door knocks (95% accuracy). Common confusions are laughing confused as hens, foot steps confused as door knocks, and insects confused as washing machines.

[Figure 3 omitted in this extraction: confusion matrix over the ESC-50 categories.]

Figure 3: SoundNet confusions on ESC-50.

4.2 Ablation Analysis

To better understand our approach, we perform an ablation analysis in Table 5 and Table 6.

Table 5: Ablation Analysis: We break down accuracy of various configurations using pool5 features from SoundNet trained with VGG. Results suggest that deeper convolutional sound networks trained with visual supervision on unlabeled data help recognition.

                                                              Accuracy on
  Comparison of SoundNet      Model                         ESC-50    ESC-10
  Loss                        8 Layer, ℓ2 Loss              47.8%     81.5%
                              8 Layer, KL Loss              72.9%     92.2%
  Teacher Net                 8 Layer, ImageNet Only        69.5%     89.8%
                              8 Layer, Places Only          71.1%     89.5%
                              8 Layer, Both                 72.9%     92.2%
  Depth and Visual Transfer   5 Layer, Scratch Init         65.0%     82.3%
                              8 Layer, Scratch Init         51.1%     75.5%
                              5 Layer, Unlabeled Video      66.1%     86.8%
                              8 Layer, Unlabeled Video      72.9%     92.2%

Table 6: Which layer and teacher network gives better features? The performance comparison of extracting features at different SoundNet layers on acoustic scene/object classification tasks.

  Dataset      Model              conv4   conv5   pool5   conv6   conv7   conv8
  DCASE [34]   8 Layer, AlexNet   84%     85%     84%     83%     78%     68%
               8 Layer, VGG       77%     88%     88%     87%     84%     74%
  ESC50 [28]   8 Layer, AlexNet   66.0%   71.2%   74.2%   74%     63.8%   45.7%
               8 Layer, VGG       66.0%   69.3%   72.9%   73.3%   59.8%   43.7%

Comparison of Loss and Teacher Net (Table 5): We tried training with different subsets of target categories. In general, performance improves with increasing visual supervision. As expected, our results suggest that using both the ImageNet and Places networks as supervision performs better than using a single one. This indicates that progress in sound understanding may be furthered by building stronger vision models. We also experimented with using an ℓ2 loss on the target outputs instead of the KL loss, which performed significantly worse.

Comparison of Network Depth (Table 5): We quantified the impact of network depth. We use the five-layer version of SoundNet (instead of the full eight) as a feature extractor instead.
The five-layer SoundNet architecture performed 8% worse than the eight-layer architecture, suggesting depth is helpful for sound understanding. Interestingly, the five-layer network still generally outperforms previous state-of-the-art baselines, but the margin is smaller. We hypothesize that even deeper networks may perform better, and that these can be trained without significant over-fitting by leveraging large amounts of unlabeled video.

Comparison of Supervision (Table 5): We also experimented with training the network without video by using only the labeled target training set, which is relatively small (thousands of examples). We simply change the network to output the class probabilities, and train it from random initialization with a cross-entropy loss. Hence, the only change is that this baseline does not use any unlabeled video, allowing us to quantify the contribution of unlabeled video. The five-layer SoundNet achieves slightly better results than [27], which is also a convolutional network trained with the same data but with a different architecture, suggesting our five-layer architecture is comparable. Increasing the depth from five layers to eight layers decreases the performance from 65% to 51%, probably because it overfits to the small training set. However, when trained with visual transfer from unlabeled video, the eight-layer SoundNet achieves a significant gain of around 20% compared to the five-layer version. This suggests that unlabeled video is a powerful signal for sound understanding, and it can be acquired at large enough scales to support training high-capacity deep networks.

Comparison of Layer and Teacher Network (Table 6): We analyze the discriminative performance of each SoundNet layer. Generally, features from the pool5 layer give the best performance. We also compared different teacher networks for visual supervision (either VGGNet or AlexNet). The results are inconclusive on which teacher network to use: VGG is a better teacher network for DCASE, while AlexNet is a better teacher network for ESC50.

4.3 Multi-Modal Recognition

In order to compare sound features with visual features on scene/object categorization, we annotated an additional 9,478 videos (vision + sound) which were not seen by the trained networks before. This new dataset consists of 44 categories from 6 major groups of concepts (i.e., urban, nature, work/home, music/entertainment, sports, and vehicles). It was annotated by Amazon Mechanical Turk workers. The frequency of categories depends on natural occurrence on the web, and is hence unbalanced.

[Figure 4 omitted in this extraction: (a) t-SNE embedding of visual features; (b) t-SNE embedding of sound features.]

Figure 4: t-SNE embeddings using visual features and sound features (SoundNet conv7). The visual features are concatenated fc7 features from the VGG networks for ImageNet and Places2. Note that t-SNE embeddings do not use the class labels. Labels are only used during the final visualization.

Table 7: Multi-Modal Recognition: We report classification accuracy on ≈ 4K labeled test videos over 44 categories.

  Feature          sound    vision   vision+sound
  8 Layer, conv7   32.4%    49.4%    51.4%
  8 Layer, conv8   32.3%    49.4%    50.5%

Vision vs. Sound Embeddings: In order to show the semantic relevance of the features, we performed a two-dimensional t-SNE [38] embedding and visualized our dataset in Figure 4. The visual features are concatenated fc7 features of the two VGG networks trained using the ImageNet and Places2 datasets. We computed the visual features from 4 uniformly selected frames for each video and took the mean feature as the final visual representation. The sound features are the conv7 features extracted using SoundNet trained with VGG supervision. This visualization suggests that sound features alone also contain a considerable amount of semantic information.
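The embedding in Figure 4 is straightforward to reproduce once features are extracted; the call below assumes scikit-learn's t-SNE, which implements [38], with the feature matrix and perplexity being placeholders:

```python
import numpy as np
from sklearn.manifold import TSNE  # implementation of [38]; an assumption

def embed_2d(features, perplexity=30.0, seed=0):
    """features: (num_videos, dim) array of conv7 (sound) or concatenated
    fc7 (vision) activations; returns a (num_videos, 2) layout for plotting."""
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed).fit_transform(np.asarray(features))
```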
Object and Scene Classification: We also performed a quantitative comparison between sound features and visual features. We used 60% of our dataset for training and the rest for testing. The chance level of the task is 2.2%, and always choosing the most common category (i.e., music performance) yields 14% accuracy. Similar to the acoustic scene classification methods, we trained a multi-class SVM over both sound and visual features, individually and then jointly. The results are displayed in Table 7. Visual features alone obtained an accuracy of 49.4%. The SoundNet features obtained 32.4% accuracy. This suggests that even though sound is not as informative as vision, it still contains a considerable amount of discriminative information. Furthermore, sound and vision together resulted in a modest improvement of 2% over vision-only models.

4.4 Visualizations

In order to gain better insight into what the network learned, we visualize its representation. Figure 5 displays the first 16 convolutional filters applied to the raw input audio. The learned filters are diverse, including low and high frequencies, wavelet-like patterns, and increasing and decreasing amplitude filters.

[Figure 5 omitted in this extraction: the 16 first-layer filters.]

Figure 5: Learned filters in conv1: We visualize the filters for raw audio in the first layer of the deep convolutional network.

We also visualize some of the hidden units in the last hidden layer (conv7) of our sound representation by finding inputs that maximally activate a hidden unit. These visualizations are displayed in Figure 6. Note that visual frames are not used in the computation of activations; they are only included in the figure for visualization purposes.

[Figure 6 omitted in this extraction: video frames for units responding to Baby Talk, Bubbles, Cheering, and Bird Chirps.]

Figure 6: What emerges in sound hidden units? We visualize some of the hidden units in the last hidden layer of our sound representation by finding inputs that maximally activate a hidden unit. Above, we illustrate what these units capture by showing the corresponding video frames. No vision is used in this experiment; we only show frames for visualization purposes.

5 Conclusion

We propose to train deep sound networks (SoundNet) by transferring knowledge from established vision networks and large amounts of unlabeled video. The synchronous nature of videos (sound + vision) allows us to perform such a transfer, which resulted in semantically rich audio representations for natural sounds. Our results show that transfer with unlabeled video is a powerful paradigm for learning sound representations. All of our experiments suggest that one may obtain better performance simply by downloading more videos, creating deeper networks, and leveraging richer vision models.

Acknowledgements: We thank MIT TIG, especially Garrett Wollman, for helping store 26 TB of video. We are grateful for the GPUs donated by NVidia. This work was supported by NSF grant #1524817 to AT and the Google PhD fellowship to CV.

References
[1] Yusuf Aytar and Andrew Zisserman. Tabula rasa: Model transfer for object category detection. In ICCV, 2011.
[2] Yusuf Aytar and Andrew Zisserman. Part level transfer regularization for enhancing exemplar SVMs. CVIU, 2015.
[3] Jimmy Ba and Rich Caruana.
Do deep nets really need to be deep? In NIPS, 2014.
[4] Daniele Barchiesi, Dimitrios Giannoulis, Dan Stowell, and Mark D Plumbley. Acoustic scene classification: Classifying environments from the sounds they produce. SPM, 2015.
[5] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In ISMIR, 2011.
[6] Emre Cakir, Toni Heittola, Heikki Huttunen, and Tuomas Virtanen. Polyphonic sound event detection using multi label deep neural networks. In IJCNN, 2015.
[7] Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning aligned cross-modal representations from weakly aligned data. In CVPR, 2016.
[8] Chao-Yeh Chen and Kristen Grauman. Watching unlabeled video helps learn new human actions from very few labeled snapshots. In CVPR, 2013.
[9] Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. arXiv preprint arXiv:1507.00448, 2015.
[10] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
[11] Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. CNN architectures for large-scale audio classification. arXiv, 2016.
[12] Lars Hertel, Huy Phan, and Alfred Mertins. Comparing time and frequency domain for audio event recognition using deep learning. arXiv, 2016.
[13] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv, 2015.
[14] Jing Huang and Brian Kingsbury. Audio-visual deep learning for noise robust speech recognition. In ICASSP, 2013.
[15] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015.
[16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[18] Daniel Kuettel and Vittorio Ferrari. Figure-ground segmentation by transferring window masks. In CVPR, 2012.
[19] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. IEEE, 1998.
[20] Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS, 2009.
[21] David Li, Jason Tam, and Derek Toub. Auditory scene classification using machine learning techniques. AASP Challenge, 2013.
[22] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[23] Ian McLoughlin, Haomin Zhang, Zhipeng Xie, Yan Song, and Wei Xiao. Robust sound event classification using deep neural networks. ASL, 2015.
[24] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In ICML, 2011.
[25] Phuc Xuan Nguyen, Gregory Rogez, Charless Fowlkes, and Deva Ramanan. The open world of micro-videos. arXiv, 2016.
[26] Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. arXiv preprint arXiv:1512.08512, 2015.
[27] Karol J Piczak.
Environmental sound classification with convolutional neural networks. In MLSP, 2015.
[28] Karol J Piczak. ESC: Dataset for environmental sound classification. In ACM Multimedia, 2015.
[29] Alain Rakotomamonjy and Gilles Gasso. Histogram of gradients of time-frequency representations for audio scene classification. TASLP, 2015.
[30] Guido Roma, Waldo Nogueira, and Perfecto Herrera. Recurrence quantification analysis features for environmental sound recognition. In WASPAA, 2013.
[31] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 2015.
[32] Justin Salamon and Juan Pablo Bello. Unsupervised feature learning for urban sound classification. In ICASSP, 2015.
[33] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[34] Dan Stowell, Dimitrios Giannoulis, Emmanouil Benetos, Mathieu Lagrange, and Mark D Plumbley. Detection and classification of acoustic scenes and events. TM, 2015.
[35] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[36] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 2016.
[37] Aaron van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In NIPS, 2013.
[38] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 2008.
[39] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating visual representations from unlabeled video. CVPR, 2016.
[40] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. NIPS, 2016.
[41] Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense optical flow prediction from a static image. In ICCV, 2015.
[42] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
Towards Unifying Hamiltonian Monte Carlo and Slice Sampling

Yizhe Zhang, Xiangyu Wang, Changyou Chen, Ricardo Henao, Kai Fan, Lawrence Carin
Duke University, Durham, NC 27708
{yz196, xw56, changyou.chen, ricardo.henao, kf96, lcarin}@duke.edu

Abstract

We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling, demonstrating their connection via the Hamilton-Jacobi equation from Hamiltonian mechanics. This insight enables extension of HMC and slice sampling to a broader family of samplers, called Monomial Gamma Samplers (MGS). We provide a theoretical analysis of the mixing performance of such samplers, proving that in the limit of a single parameter, the MGS draws decorrelated samples from the desired target distribution. We further show that as this parameter tends toward this limit, performance gains are achieved at a cost of increasing numerical difficulty and some practical convergence issues. Our theoretical results are validated with synthetic data and real-world applications.

1 Introduction

Markov Chain Monte Carlo (MCMC) sampling [1] stands as a fundamental approach for probabilistic inference in many computational statistical problems. In MCMC one typically seeks to design methods to efficiently draw samples from an unnormalized density function. Two popular auxiliary-variable sampling schemes for this task are Hamiltonian Monte Carlo (HMC) [2, 3] and the slice sampler [4]. HMC exploits gradient information to propose samples along a trajectory that follows Hamiltonian dynamics [3], introducing momentum as an auxiliary variable. Extending the random proposal associated with Metropolis-Hastings sampling [4], HMC is often able to propose large moves with acceptance rates close to one [2]. Recent attempts toward improving HMC have leveraged geometric manifold information [5] and have used better numerical integrators [6]. Limitations of HMC include being sensitive to parameter tuning and being restricted to continuous distributions. These issues can be partially solved by using adaptive approaches [7, 8], and by transforming sampling from discrete distributions into sampling from continuous ones [9, 10].

Seemingly distinct from HMC, the slice sampler [4] alternates between drawing conditional samples based on a target distribution and a uniformly distributed slice variable (the auxiliary variable). One problem with the slice sampler is the difficulty of solving for the slice interval, i.e., the domain of the uniform distribution, especially in high dimensions. As a consequence, adaptive methods are often applied [4]. Alternatively, one recent attempt to perform efficient slice sampling on latent Gaussian models samples from a high-dimensional elliptical curve parameterized by a single scalar [11]. It has been shown that in some cases slice sampling is more efficient than Gibbs sampling and Metropolis-Hastings, due to the adaptability of the sampler to the scale of the region currently being sampled [4].

Despite the success of slice sampling and HMC, little research has been performed to investigate their connections. In this paper we use the Hamilton-Jacobi equation from classical mechanics to show that slice sampling is equivalent to HMC with a (simply) generalized kinetic function. Further, we also show that different settings of the HMC kinetic function correspond to generalized slice
Based on this relationship, we develop theory to analyze the newly proposed broad family of auxiliary-variable-based samplers. We prove that under this special family of distributions for the momentum in HMC, as the distribution becomes more heavy-tailed, the one-step autocorrelation of samples from the target distribution converges asymptotically to zero, leading to potentially decorrelated samples. While of limited practical impact, this theoretical result provides insights into the properties of the proposed family of samplers. We also elaborate on the practical tradeoff between the increased computational complexity and the improved theoretical sampling efficiency. In the experiments, we validate our theory on both synthetic data and real-world problems, including Bayesian Logistic Regression (BLR) and Independent Component Analysis (ICA), for which we compare the mixing performance of our approach with that of standard HMC and slice sampling.

2 Solving Hamiltonian dynamics via the Hamilton-Jacobi equation

A Hamiltonian system consists of a kinetic function $K(p)$ with momentum variable $p \in \mathbb{R}$, and a potential energy function $U(x)$ with coordinate $x \in \mathbb{R}$. We elaborate on multivariate cases in the Appendix. The dynamics of a Hamiltonian system are completely determined by a set of first-order Partial Differential Equations (PDEs) known as Hamilton's equations [12]:

$$\frac{\partial p}{\partial \tau} = -\frac{\partial H(x, p, \tau)}{\partial x}, \qquad \frac{\partial x}{\partial \tau} = \frac{\partial H(x, p, \tau)}{\partial p}, \qquad (1)$$

where $H(x, p, \tau) = K(p(\tau)) + U(x(\tau))$ is the Hamiltonian, and $\tau$ is the system time. Solving (1) gives the dynamics of $x(\tau)$ and $p(\tau)$ as a function of system time $\tau$. In a Hamiltonian system governed by (1), $H(\cdot)$ is a constant for every $\tau$ [12]. A specified $H(\cdot)$, together with the initial point $\{x(0), p(0)\}$, defines a Hamiltonian trajectory $\{\{x(\tau), p(\tau)\} : \forall \tau\}$ in $\{x, p\}$ space.

It is well known that in many practical cases, a direct solution to (1) may be difficult [13]. Alternatively, one might seek to transform the original HMC system $\{H(\cdot), x, p, \tau\}$ to a dual space $\{H'(\cdot), x', p', \tau\}$, in the hope that the transformed PDEs in the dual space become simpler than the original PDEs in (1). One promising approach consists of using the Legendre transformation [12]. This family of transformations defines a unique mapping between primed and original variables, where the system time, $\tau$, is identical. In the transformed space, the resulting dynamics are often simpler than the original Hamiltonian system. An important property of the Legendre transformation is that the form of (1) is preserved in the new space [14], i.e., $\partial p'/\partial \tau = -\partial H'(x', p', \tau)/\partial x'$ and $\partial x'/\partial \tau = \partial H'(x', p', \tau)/\partial p'$.

To guarantee a valid Legendre transformation between the original Hamiltonian system $\{H(\cdot), x, p, \tau\}$ and the transformed Hamiltonian system $\{H'(\cdot), x', p', \tau\}$, both systems should satisfy Hamilton's principle [13], which equivalently expresses Hamilton's equations (1). The form of this Legendre transformation is not unique. One possibility is to use a generating function approach [13], which requires the transformed variables to satisfy $p \cdot \partial x/\partial \tau - H(x, p, \tau) = p' \cdot \partial x'/\partial \tau - H'(x', p', \tau) + dG(x, x', p', \tau)/d\tau$, where $dG(x, x', p', \tau)/d\tau$ follows from the chain rule and $G(\cdot)$ is a Type-2 generating function defined as $G(\cdot) \triangleq -x' \cdot p' + S(x, p', \tau)$ [14], with $S(x, p', \tau)$ being Hamilton's principal function [15], defined below. The following holds due to the independence of $x$, $x'$ and $p'$ in the previous transformation (after replacing $G(\cdot)$ by its definition):
$$p = \frac{\partial S(x, p', \tau)}{\partial x}, \qquad x' = \frac{\partial S(x, p', \tau)}{\partial p'}, \qquad H'(x', p', \tau) = H(x, p, \tau) + \frac{\partial S(x, p', \tau)}{\partial \tau}. \qquad (2)$$

We then obtain the desired Legendre transformation by setting $H'(x', p', \tau) = 0$. The resulting (2) is known as the Hamilton-Jacobi equation (HJE). We refer the reader to [13, 12] for extensive discussions on the Legendre transformation and the HJE.

Recall from above that the Legendre transformation preserves the form of (1). Since $H'(x', p', \tau) = 0$, $\{x', p'\}$ are time-invariant (constant for every $\tau$). Importantly, the time-invariant point $\{x', p'\}$ corresponds to a Hamiltonian trajectory in the original space, and it defines the initial point $\{x(0), p(0)\}$ in the original space $\{x, p\}$; hence, given $\{x', p'\}$, one may update the point along the trajectory by specifying the time $\tau$. A new point $\{x(\tau), p(\tau)\}$ in the original space along the Hamiltonian trajectory, with system time $\tau$, can be determined from the transformed point $\{x', p'\}$ by solving (2).

One typically specifies the kinetic function as $K(p) = p^2$ [2], and Hamilton's principal function as $S(x, p', \tau) = W(x) - p'\tau$, where $W(x)$ is a function to be determined (defined below). From (2) and the definition of $S(\cdot)$, we can write

$$H(x, p, \tau) + \frac{\partial S}{\partial \tau} = H(x, p, \tau) - p' = U(x) + \left(\frac{\partial S}{\partial x}\right)^2 - p' = U(x) + \left(\frac{dW(x)}{dx}\right)^2 - p' = 0, \qquad (3)$$

where the second equality is obtained by replacing $H(x, p, \tau) = U(x(\tau)) + K(p(\tau))$, and the third equality by substituting $p$ from (2) into $K(p(\tau))$. From (3), $p' = H(x, p, \tau)$ represents the total Hamiltonian in the original space $\{x, p\}$, and uniquely defines a Hamiltonian trajectory in $\{x, p\}$. Define $X \triangleq \{x : H(\cdot) - U(x) \geq 0\}$ as the slice interval, which for constant $p' = H(x, p, \tau)$ corresponds to a set of valid coordinates in the original space $\{x, p\}$. Solving (3) for $W(x)$ gives

$$W(x) = \int_{x_{\min}}^{x(\tau)} f(z)^{\frac{1}{2}} \, dz + C, \qquad f(z) = \begin{cases} H(\cdot) - U(z), & z \in X \\ 0, & z \notin X \end{cases}, \qquad (4)$$

where $x_{\min} = \min\{x : x \in X\}$ and $C$ is a constant. In addition, from (2) we have

$$x' = \frac{\partial S(x, p', \tau)}{\partial p'} = \frac{\partial W(x)}{\partial H} - \tau = \frac{1}{2}\int_{x_{\min}}^{x(\tau)} f(z)^{-\frac{1}{2}} \, dz - \tau, \qquad (5)$$

where the second equality is obtained by substituting $S(\cdot)$ by its definition and the third equality is obtained by applying Fubini's theorem to (4). Hence, for constant $\{x', p' = H(x, p, \tau)\}$, equation (5) uniquely defines $x(\tau)$ in the original space, for a specified system time $\tau$.

3 Formulating HMC as a Slice Sampler

3.1 Revisiting HMC and Slice Sampling

Suppose we are interested in sampling a random variable $x$ from an unnormalized density function $f(x) \propto \exp[-U(x)]$, where $U(x)$ is the potential energy function. Hamiltonian Monte Carlo (HMC) augments the target density with an auxiliary momentum random variable $p$ that is independent of $x$. The distribution of $p$ is specified as $\propto \exp[-K(p)]$, where $K(p)$ is the kinetic energy function. Define $H(x, p) = U(x) + K(p)$ as the Hamiltonian. We have omitted the dependency of $H(\cdot)$, $x$ and $p$ on the system time $\tau$ for simplicity. HMC iteratively performs dynamic evolving and momentum resampling steps, by sampling $x_t$ from the target distribution and $p_t$ from the momentum distribution (Gaussian when $K(p) = p^2$), respectively, for iterations $t = 1, 2, \ldots$

[Figure 1: Representation of HMC sampling. Points $\{x_t(0), p_t(0)\}$ and $\{x_{t+1}(0), p_{t+1}(0)\}$ represent HMC samples at iterations $t$ and $t+1$, respectively. The trajectories for $t$ and $t+1$ correspond to distinct Hamiltonian levels $H_t(\cdot)$ and $H_{t+1}(\cdot)$, denoted as black and red lines, respectively.]
Figure 1 illustrates two iterations of this procedure. Starting from point $\{x_t(0), p_t(0)\}$ at the $t$-th (discrete) iteration, HMC leverages the Hamiltonian dynamics, governed by Hamilton's equations in (1), to propose the next sample $\{x_t(\tau_t), p_t(\tau_t)\}$ at system time $\tau_t$. The position in HMC at iteration $t+1$ is updated as $x_{t+1}(0) = x_t(\tau_t)$ (dynamic evolving). A new momentum $p_{t+1}(0)$ is resampled independently from a Gaussian distribution (assuming $K(p) = p^2$), establishing the next initial point $\{x_{t+1}(0), p_{t+1}(0)\}$ for iteration $t+1$ (momentum resampling). The latter point corresponds to the initial point of a new trajectory because the Hamiltonian $H(\cdot)$ is commensurately updated. This means that trajectories correspond to distinct values of $H(\cdot)$. Typically, numerical integrators such as the leap-frog method [2] are employed to numerically approximate the Hamiltonian dynamics. In practice, a random number (uniformly drawn from a fixed range) of discrete numerical integration steps (leap-frog steps) is often used (corresponding to a random time $\tau_t$ along the trajectory), which has been shown to have better convergence properties than a single leap-frog step [16]. The discretization error introduced by the numerical integration is corrected by a Metropolis-Hastings (MH) step.

Slice sampling is conceptually simpler than HMC. It augments the target unnormalized density $f(x)$ with a random variable $y$, with joint distribution expressed as $p(x, y) = \frac{1}{Z_1}$, s.t. $0 < y < f(x)$, where $Z_1 = \int f(x) \, dx$ is the normalization constant, and the marginal distribution of $x$ exactly recovers the target normalized distribution $f(x)/Z_1$. To sample from the target density, slice sampling iteratively alternates between a conditional sampling step from $p(x|y)$ and sampling a slice from $p(y|x)$. At iteration $t$, starting from $x_t$, a slice $y_t$ is uniformly drawn from $(0, f(x_t))$. Then, the next sample $x_{t+1}$, at iteration $t+1$, is uniformly drawn from the slice interval $\{x : f(x) > y_t\}$. HMC and slice sampling both augment the target distribution with auxiliary variables and can propose long-range moves with high acceptance probability.

3.2 Formulating HMC as a Slice Sampler

Consider the dynamic evolving step in HMC, i.e., $\{x_t(0), p_t(0)\} \mapsto \{x_t(\tau), p_t(\tau)\}$ in Figure 1. From Section 2, the Hamiltonian dynamics in $\{x, p\}$ space with initial point $\{x(0), p(0)\}$ can be performed by mapping to $\{x', p'\}$ space and updating $\{x(\tau), p(\tau)\}$ via selecting a $\tau$ and solving (5). As we show in the Appendix, from (5) and in univariate cases† the Hamiltonian dynamics has period $\int_X [H(\cdot) - U(z)]^{-\frac{1}{2}} \, dz$ and is symmetric about $p = 0$ (due to the symmetric form of the kinetic function). Also from (5), the system time $\tau$ is uniformly sampled from a half-period of the Hamiltonian dynamics, i.e., $\tau \sim \mathrm{Uniform}\big(x',\ x' + \frac{1}{2}\int_X [H(\cdot) - U(z)]^{-\frac{1}{2}} \, dz\big)$. Intuitively, $x'$ is the "anchor" of the initial point $\{x(0), p(0)\}$ w.r.t. the start of the first half-period, i.e., when $\int [H(\cdot) - U(z)]^{-\frac{1}{2}} \, dz = 0$. Further, we only need to consider half a period because, for a symmetric kinetic function $K(p) = p^2$, the Hamiltonian dynamics for the two half-periods are mirrored [14]. For the same reason, Figure 1 only shows half of the $\{x, p\}$ space, where $p \geq 0$. Given the sampled $\tau$ and the constant $\{x', p'\}$, equation (5) can be solved for $x_\tau \triangleq x(\tau)$, i.e., the value of $x$ at time $\tau$. Interestingly, the integral in (5) can be interpreted as (up to a normalization constant) a cumulative density function (CDF) of $x(\tau)$.
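This CDF interpretation is easy to verify numerically. The following sketch is our own illustration (not code from the paper): for the harmonic potential $U(x) = x^2$ with $K(p) = p^2$, Hamilton's equations give the exact trajectory $x(\tau) = \sqrt{H}\cos(2\tau)$ with period $\pi$, so drawing $\tau$ uniformly over a half-period should make $x(\tau)$ follow the density $\propto [H - U(x)]^{-1/2}$ that appears in (6) below.

```python
import numpy as np

# Harmonic example: U(x) = x^2, K(p) = p^2, so H = x^2 + p^2 and Hamilton's
# equations (1) give the exact trajectory x(tau) = sqrt(H) * cos(2 * tau),
# with period pi. Sample tau uniformly over a half-period.
H = 2.0
rng = np.random.default_rng(0)
tau = rng.uniform(0.0, np.pi / 2.0, size=200_000)
x = np.sort(np.sqrt(H) * np.cos(2.0 * tau))

# The density p(x) ∝ [H - U(x)]^{-1/2} has the closed-form (arcsine) CDF
# F(x) = 1/2 + arcsin(x / sqrt(H)) / pi; compare it with the empirical CDF.
emp_cdf = np.arange(1, x.size + 1) / x.size
ana_cdf = 0.5 + np.arcsin(x / np.sqrt(H)) / np.pi
print(np.max(np.abs(emp_cdf - ana_cdf)))  # ~1e-3, i.e. the two agree
```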
From the inverse CDF transform sampling method, uniformly sampling $\tau$ from half of a period and solving for $x_\tau$ from (5) are equivalent to directly sampling $x_\tau$ from the following density:

$$p(x_\tau | H(\cdot)) \propto [H(\cdot) - U(x_\tau)]^{-\frac{1}{2}}, \quad \text{s.t. } H(\cdot) - U(x_\tau) \geq 0. \qquad (6)$$

We note that this transformation does not make the analytic solution of $x(\tau)$ generally tractable. However, it provides the basic setup to reveal the connection between the slice sampler and HMC.

In the momentum resampling step of HMC, i.e., $\{x_t(\tau), p_t(\tau)\} \mapsto \{x_{t+1}(0), p_{t+1}(0)\}$ in Figure 1, and using the previously described kinetic function $K(p) = p^2$, resampling corresponds to drawing $p$ from a Gaussian distribution [2]. The algorithm to analytically sample from HMC (analytic HMC) proceeds as follows: at iteration $t$, momentum $p_t$ is drawn from a Gaussian distribution. The previously sampled value of $x_{t-1}$ and the newly sampled $p_t$ yield a Hamiltonian $H_t(\cdot)$. Then, the next sample $x_t$ is drawn from (6).

This procedure relates HMC to the slice sampler. To see the connection clearly, denote $y_t = e^{-H_t(\cdot)}$. Instead of directly sampling $\{p, x\}$ as just described, we sample $\{y, x\}$ instead. By substituting $H_t(\cdot)$ with $y_t$ in (6), the conditional updates for this new sampling procedure can be rewritten as below, yielding the HMC slice sampler (HMC-SS), with conditional distributions defined as

$$\text{Sampling a slice:} \quad p(y_t | x_t) = \frac{1}{\Gamma(a) f(x_t)} [\log f(x_t) - \log y_t]^{a-1}, \quad \text{s.t. } 0 < y_t < f(x_t), \qquad (7)$$

$$\text{Conditional sampling:} \quad p(x_{t+1} | y_t) = \frac{1}{Z_2(y_t)} [\log f(x_{t+1}) - \log y_t]^{a-1}, \quad \text{s.t. } f(x_{t+1}) > y_t, \qquad (8)$$

where $a = 1/2$ (other values of $a$ are considered below), $f(x) = e^{-U(x)}$ is an unnormalized density, and $Z_1 \triangleq \int f(x) \, dx$ and $Z_2(y) \triangleq \int_{f(x) > y} [\log f(x) - \log y]^{a-1} \, dx$ are the normalization constants. Comparing these two procedures, analytic HMC and HMC-SS, we see that resampling the momentum in analytic HMC corresponds to sampling a slice in HMC-SS. Further, the dynamic evolving in HMC corresponds to the conditional sampling in MG-SS. We have thus shown that HMC can be equivalently formulated as a slice sampler procedure via (7) and (8).

3.3 Reformulating the Standard Slice Sampler from HMC-SS

In standard slice sampling (described in Section 3.1), both conditional sampling and sampling a slice are drawn from uniform distributions. However, those for HMC-SS in (7) and (8) are non-uniform distributions. Interestingly, if we change $a$ in (7) and (8) from $a = 1/2$ to $a = 1$, we obtain the desired uniform distributions for standard slice sampling. This key observation leads us to consider a generalized form of the kinetic function for HMC, described below.

(† For multidimensional cases, the Hamiltonian dynamics are semi-periodic, yet a similar conclusion still holds. Details are discussed in the Appendix.)

Consider the generalized family of kinetic functions $K(p) = |p|^{1/a}$ with $a > 0$. One may re-derive equations (3)-(8) using this generalized kinetic energy. As shown in the Appendix, these equations remain unchanged, with the update that each isolated $2$ in these equations is replaced by $1/a$, and each $-1/2$ is replaced by $a - 1$. Sampling $p$ (for the momentum resampling step) with the generalized kinetics corresponds to drawing $p$ from $\pi(p; m, a) = \frac{m^{-a}}{2\,\Gamma(a+1)} \exp[-|p|^{1/a}/m]$, with $m = 1$. All the formulation in the paper still holds for arbitrary $m$; see the Appendix for details. We denote this distribution the monomial Gamma (MG) distribution, MG($a$, $m$), where $m$ is the mass parameter and $a$ is the monomial parameter.
Note that this is equivalent to the exponential power distribution with zero mean, described in [17]. We summarize some properties of the MG distribution in the Appendix. To generate random samples from the MG distribution, one can draw $G \sim \mathrm{Gamma}(a, m)$ and a uniform sign variable $S \sim \mathrm{Uniform}\{-1, 1\}$; then $S \cdot G^a$ follows the MG($a$, $m$) distribution. We call the HMC sampler based on the generalized kinetic function $K(p; a, m)$ the Monomial Gamma Hamiltonian Monte Carlo (MG-HMC). The algorithm to analytically sample from MG-HMC is shown in Algorithm 1. The only difference between this procedure and the one previously described is the momentum resampling step: in analytic HMC, $p$ is drawn Gaussian instead of MG($a$, $m$). However, note that the Gaussian distribution is a special case of MG($a$, $m$) when $a = 1/2$.

Algorithm 1: MG-HMC with HJE
for $t = 1$ to $T$ do
    Resample momentum: $p_t \sim \mathrm{MG}(m, a)$.
    Compute Hamiltonian: $H_t = U(x_{t-1}) + K(p_t)$.
    Find $X \triangleq \{x : x \in \mathbb{R};\ U(x) \leq H_t(\cdot)\}$.
    Dynamic evolving: $x_t | H_t(\cdot) \propto [H_t(\cdot) - U(x_t)]^{a-1}$, $x_t \in X$.

Algorithm 2: MG-SS
for $t = 1$ to $T$ do
    Sampling a slice: sample $y_t$ from (7).
    Conditional sampling: sample $x_t$ from (8).

Interestingly, when $a = 1$, the Monomial Gamma Slice sampler (MG-SS) in Algorithm 2 recovers exactly the same update formulas as standard slice sampling, described in Section 3.1, where the conditional distributions in (7) and (8) are both uniform. When $a \neq 1$, we have to iteratively alternate between sampling from the non-uniform distributions (7) and (8), for both the auxiliary (slicing) variable $y$ and the target variable $x$. Using the same argument as in the convergence analysis of standard slice sampling [4], the iterative sampling procedure in (7) and (8) converges to an invariant joint distribution (detailed in the Appendix). Further, the marginal distribution of $x$ recovers the target distribution $f(x)/Z_1$, while the marginal distribution of $y$ is given by $p(y) = Z_2(y)/[\Gamma(a) Z_1]$.

The MG-SS can be divided into three broad regimes: $0 < a < 1$, $a = 1$ and $a > 1$ (illustrated in the Appendix). When $0 < a < 1$, the conditional distribution $p(y_t | x_t)$ is skewed towards the current unnormalized density value $f(x_t)$, and the conditional draw of $p(x_{t+1} | y_t)$ encourages taking samples with smaller density value (inefficient moves) within the domain of the slice interval $X$. On the other hand, when $a > 1$, draws of $y_t$ tend to take smaller values, while draws of $x_{t+1}$ favor points with large density function values (efficient moves). The case $a = 1$ corresponds to the conventional slice sampler. Intuitively, setting $a$ to be small makes the auxiliary variable $y_t$ stay close to $f(x_t)$, so $f(x_{t+1})$ stays close to $f(x_t)$. As a result, a larger $a$ seems more desirable. This intuition is justified in the following sections.

4 Theoretical analysis

We analyze theoretical properties of the MG sampler. All the proofs, as well as the ergodicity properties of analytic MG-SS, are given in the Appendix.

One-step autocorrelation of analytic MG-SS We present results for the univariate distribution case: $p(x) \propto e^{-U(x)}$. We first investigate the impact of the monomial parameter $a$ on the one-step autocorrelation function (ACF), $\rho_x(1) \triangleq \rho(x_t, x_{t+1}) = [\mathbb{E}(x_t x_{t+1}) - (\mathbb{E}x)^2]/\mathrm{Var}(x)$, as $a \to \infty$. Theorem 1 characterizes the limiting behavior of $\rho(x_t, x_{t+1})$.

Theorem 1 For a univariate target distribution, i.e., $\exp[-U(x)]$ has finite integral over $\mathbb{R}$, under certain regularity conditions, the one-step autocorrelation of the MG-SS parameterized by $a$ asymptotically approaches zero as $a \to \infty$, i.e., $\lim_{a \to \infty} \rho_x(1) = 0$.
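As a concrete instance of Algorithm 2, consider the Exp(1) target $U(x) = x$, $x \geq 0$, for which both conditionals are available in closed form: sampling a slice via (7) reduces to drawing $K = \log f(x_t) - \log y_t \sim \mathrm{Gamma}(a, 1)$, and (8) becomes a density $\propto (L - x)^{a-1}$ on $[0, L]$ with $L = -\log y_t = x_t + K$, which has an analytic inverse CDF. The sketch below is our own illustration (not the authors' code); note that $a = 1$ recovers the standard slice sampler.

```python
import numpy as np

def mg_ss_exponential(n_samples, a=1.0, x0=1.0, rng=None):
    """Monomial Gamma slice sampler (Algorithm 2) for the Exp(1) target
    f(x) = exp(-x), x >= 0, where both (7) and (8) are closed-form."""
    rng = np.random.default_rng(rng)
    x = x0
    out = np.empty(n_samples)
    for t in range(n_samples):
        # Sampling a slice (7): K = log f(x) - log y ~ Gamma(a, 1), so the
        # slice interval is [0, L] with L = -log y = x + K.
        L = x + rng.gamma(shape=a, scale=1.0)
        # Conditional sampling (8): p(x | y) ∝ (L - x)^(a-1) on [0, L];
        # its inverse CDF gives x = L * (1 - u^(1/a)) for u ~ Uniform(0, 1).
        x = L * (1.0 - rng.uniform() ** (1.0 / a))
        out[t] = x
    return out

for a in (0.5, 1.0, 2.0):
    xs = mg_ss_exponential(200_000, a=a, rng=0)
    rho1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
    print(a, xs.mean(), rho1)  # mean ≈ 1; rho1 ≈ 1/(a+1), cf. the case study below
```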
In the Appendix we also show that $\lim_{a \to \infty} \rho(y_t, y_{t+1}) = 0$. In addition, we show that $\rho(y_t, y_{t+h})$ is a non-negative decreasing function of the time lag in discrete steps $h$.

Effective sample size The variance of a Monte Carlo estimator is determined by its Effective Sample Size (ESS) [18], defined as $\mathrm{ESS} = N/(1 + 2\sum_{h=1}^{\infty} \rho_x(h))$, where $N$ is the total number of samples and $\rho_x(h)$ is the $h$-step autocorrelation function, which can be calculated in a recursive manner. We prove in the Appendix that $\rho_x(h)$ is non-negative. Further, assuming the MG sampler is uniformly ergodic and $\rho_x(h)$ is monotonically decreasing, it can be shown that $\lim_{a \to \infty} \mathrm{ESS} = N$. When the ESS approaches the full sample size $N$, the resulting sampler delivers excellent mixing efficiency [5]. Details and further discussion are provided in the Appendix.

Case study To examine a specific 1D example, we consider sampling from the exponential distribution, Exp($\theta$), with energy function $U(x) = x/\theta$, where $x \geq 0$. This case has analytic $\rho_x(h)$ and ESS. After some algebra (details in the Appendix),

$$\rho_x(1) = \frac{1}{a+1}, \qquad \rho_x(h) = \frac{1}{(a+1)^h}, \qquad \mathrm{ESS} = \frac{Na}{a+2}, \qquad \bar{x}_h(x_0) \triangleq \mathbb{E}_{\pi_h}(x_h | x_0) = \theta + \frac{x_0 - \theta}{(a+1)^h}.$$

These results are in agreement with Theorem 1 and the related arguments on ESS and the monotonicity of the autocorrelation w.r.t. $a$. Here $\bar{x}_h(x_0)$ denotes the expectation of the $h$-lag sample, starting from any $x_0$. The relative difference $\frac{\bar{x}_h(x_0) - \theta}{x_0 - \theta}$ decays exponentially in $h$, with a factor of $\frac{1}{a+1}$. In fact, $\rho_x(1)$ for the exponential family class of models introduced in [19], with potential energy $U(x) = x^\omega/\theta$, where $x \geq 0$ and $\omega, \theta > 0$, can be analytically calculated. The result, provided in the Appendix, indicates that for this family, $\rho_x(1)$ decays at a rate of $O(a^{-1})$.

MG-HMC mixing performance In theory, the analytic MG-HMC (where the dynamics in (5) can be solved exactly) is expected to have the same theoretical properties as the analytic MG-SS for unimodal cases, since they are derived from the same setup. However, the mixing performance of the two methods can differ significantly when sampling from a multimodal distribution, because the Hamiltonian dynamics may get "trapped" in a single closed trajectory (one of the modes) with low energy, whereas the analytic MG-SS does not suffer from this problem, as it is able to sample from disjoint slice intervals (one per mode). This is a well-known property of slice sampling [4] that arises from (7) and (8). However, if $a$ is large enough, as we show in the Appendix, the probability of entering a low-energy level associated with more than one Hamiltonian trajectory, which restricts movement between modes, is arbitrarily small. As a result, the analytic MG-HMC with a large value of $a$ is able to approach the stationary mixing performance of MG-SS.

5 MG sampling in practice

MG-HMC with numerical integrator In practice, MG-SS (performing Algorithm 2) requires: 1) analytically solving for the slice interval $X$, which is typically infeasible in multivariate cases [4]; or 2) analytically computing the integral $Z_2(y)$ over $X$, implied by the non-uniform conditionals of MG-SS. These are usually computationally infeasible, though adaptive estimation of $X$ could be done using schemes like "doubling" and "shrinking" strategies from the slice sampling literature [4].
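As an aside before turning to the integrator: the ESS reported throughout the experiments must, in practice, be estimated from a finite chain, which means truncating the infinite autocorrelation sum. A minimal estimator of our own (not the authors' evaluation code), using the common heuristic of stopping at the first non-positive autocorrelation estimate, is:

```python
import numpy as np

def effective_sample_size(chain):
    """ESS = N / (1 + 2 * sum_h rho_x(h)), with the empirical autocorrelation
    sum truncated at the first lag whose estimate is non-positive."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    # Empirical autocorrelations rho(h) for h = 0, ..., n - 1.
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    s = 0.0
    for h in range(1, n):
        if acf[h] <= 0.0:
            break
        s += acf[h]
    return n / (1.0 + 2.0 * s)

# For the Exp(1) MG-SS chains above, ESS / N should be close to a / (a + 2).
```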
It is more convenient to perform approximate MG-HMC using a numerical integrator, as in traditional HMC: in each iteration, the momentum $p$ is first initialized by sampling from MG($m$, $a$), then second-order Störmer-Verlet integration [2] is performed for the Hamiltonian dynamics updates:

$$p_{t+1/2} = p_t - \frac{\epsilon}{2}\nabla U(x_t), \qquad x_{t+1} = x_t + \epsilon \nabla K(p_{t+1/2}), \qquad p_{t+1} = p_{t+1/2} - \frac{\epsilon}{2}\nabla U(x_{t+1}), \qquad (9)$$

where $\nabla K(p) = \mathrm{sign}(p) \odot \frac{1}{ma}|p|^{1/a - 1}$. When $a = 1$, $[\nabla K(p)]_d = 1/m$ for any dimension $d$, independent of $x$ and $p$. To avoid moving on a grid when $a = 1$, we employ a random step-size $\epsilon$ drawn from a uniform distribution on a non-negative range $(r_1, r_2)$, as suggested in [2].

No free lunch With a numerical integrator for MG-HMC, however, the argument for choosing a large $a$ (of great theoretical advantage, as discussed in the previous section) may face practical issues. First, a large value of $a$ will lead to a less accurate numerical integrator. This is because, as $a$ gets larger, the trajectory of the total Hamiltonian becomes "stiffer", i.e., the maximum curvature becomes larger. When $a > 1/2$, the Hamiltonian trajectory in the phase space $(x, p)$ has at least $2D$ ($D$ denotes the total dimension) non-differentiable points ("turnovers"), at each intersection point with the hyperplane $p^{(d)} = 0$, $d \in \{1, \ldots, D\}$. As a result, directly applying Störmer-Verlet integration would lead to high integration error as $D$ becomes large.

Second, if the sampler is initialized in the tail region of a light-tailed target distribution, MG-HMC with $a > 1$ may converge arbitrarily slowly to the true target distribution, i.e., the burn-in period could take an arbitrarily long time. For example, with $a > 1$, $\nabla U(x_0)$ can be very large when $x_0$ is in the light-tailed region, leading the update $x_0 + \epsilon \nabla K\!\big(p_0 - \frac{\epsilon}{2}\nabla U(x_0)\big)$ to be arbitrarily close to $x_0$, i.e., the sampler does not move.

To ameliorate these issues, we provide mitigating strategies. For the first (numerical) issue, we propose two possibilities: 1) As an analog of the "reflection" action of [2], in (9), whenever the $d$-th dimension(s) of the momentum changes sign, we "recoil" the point of these dimension(s) to the previous iteration and negate the momentum of these dimension(s), i.e., $x_{t+1}^{(d)} = x_t^{(d)}$, $p_{t+1}^{(d)} = -p_t^{(d)}$. 2) Substituting the kinetic function $K(p)$ with a "softened" kinetic function, and using importance sampling to sample the momentum. The details, and a comparison between the "reflection" action and the "softened" kinetics, are discussed in the Appendix. For the second (convergence) issue, we suggest using a step-size decay scheme, e.g., $\epsilon = \max(\epsilon_1 \kappa^t, \epsilon_0)$. In our experiments we use $(\epsilon_1, \kappa) = (10^6, 0.9)$, where $\epsilon_0$ is problem-specific. This approach empirically alleviates the slow convergence problem; however, we note that a more principled way would be to adaptively select $a$ during sampling, which is left for further investigation. As a compromise between theoretical gains and practical issues, we suggest setting $a = 1$ (an HMC implementation of a slice sampler) when the dimension is relatively large. This is because, in our experiments, when $a > 1$, numerical errors and convergence issues tend to overwhelm the theoretical mixing performance gains described in Section 4.
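Combining the MG momentum draw with the updates in (9) gives a compact approximate MG-HMC sampler. The following is our own minimal sketch (mass $m = 1$, no reflection action, no softened kinetics, no step-size decay), with the usual MH correction in which the MG momentum enters through $K(p) = |p|^{1/a}$:

```python
import numpy as np

def mg_hmc(U, grad_U, x0, n_samples, a=1.0, eps_range=(0.04, 0.06), n_leap=100, rng=None):
    """Approximate MG-HMC: momentum p = S * G^a with G ~ Gamma(a, 1),
    Stormer-Verlet updates (9) with K(p) = |p|^{1/a} (m = 1), and an MH step."""
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    K = lambda p: np.sum(np.abs(p) ** (1.0 / a))
    grad_K = lambda p: np.sign(p) * np.abs(p) ** (1.0 / a - 1.0) / a
    out = np.empty((n_samples, d))
    for t in range(n_samples):
        p = rng.choice([-1.0, 1.0], size=d) * rng.gamma(a, 1.0, size=d) ** a
        eps = rng.uniform(*eps_range)  # random step size, as suggested for a = 1
        x_new, p_new = x.copy(), p.copy()
        for _ in range(n_leap):
            p_new = p_new - 0.5 * eps * grad_U(x_new)
            x_new = x_new + eps * grad_K(p_new)
            p_new = p_new - 0.5 * eps * grad_U(x_new)
        # MH correction for the discretization error of the integrator.
        if np.log(rng.uniform()) < U(x) + K(p) - U(x_new) - K(p_new):
            x = x_new
        out[t] = x
    return out

# 1D bimodal target from Sec. 6.1: U(x) = x^4 - 2 x^2.
U = lambda x: np.sum(x**4 - 2.0 * x**2)
grad_U = lambda x: 4.0 * x**3 - 4.0 * x
samples = mg_hmc(U, grad_U, x0=0.5, n_samples=5000, a=1.0, rng=0)
```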
[Figure 2: Theoretical and empirical $\rho_x(1)$ and ESS of the exponential distribution (a, b), $\mathcal{N}_+$ (c, d) and Gamma (e), plotted against the monomial parameter $a$; each panel compares theoretical values, MG-SS and MG-HMC.]

6 Experiments

6.1 Simulation studies

1D unimodal problems We first evaluate the performance of the MG sampler on several univariate distributions: 1) exponential distribution, $U(x) = \theta x$, $x \geq 0$; 2) truncated Gaussian, $U(x) = \theta x^2$, $x \geq 0$; 3) Gamma distribution, $U(x) = -(r-1)\log x + \theta x$. Note that the performance of the sampler does not depend on the scale parameter $\theta > 0$. We compare the empirical $\rho_x(1)$ and ESS of the analytic MG-SS and MG-HMC with their theoretical values. In the Gamma distribution case, analytic derivations of the autocorrelations and ESS are difficult, so we resort to a numerical approach to compute $\rho_x(1)$ and ESS. Details are provided in the Appendix. Each method is run for 30,000 iterations with 10,000 burn-in samples. The number of leap-frog steps is uniformly drawn from $(100 - l, 100 + l)$ with $l = 20$, as suggested by [16]. We also compared MG-HMC ($a = 1$) with standard slice sampling using the doubling and shrinking scheme [4]; as expected, the resulting ESS (not shown) for these two methods is almost identical. The experiment settings and results are provided in the Appendix. The acceptance rates decrease from around 0.98 to around 0.77 for each case as $a$ grows from 0.5 to 4. As shown in Figure 2(a)-(d), the results for analytic MG-SS match the theoretical results well; however, MG-HMC seems to suffer from practical difficulties when $a$ is large, evidenced by results gradually deviating from the theoretical values. This issue is more evident in the Gamma case (see Figure 2(e)), where $\rho_x(1)$ first decreases and then increases; meanwhile, the acceptance rates decrease from 0.9 to 0.5.

1D and 2D bimodal problems We further conduct simulation studies to evaluate the efficiency of MG-HMC when sampling 1D and 2D multimodal distributions. For the univariate case, the potential energy is $U(x) = x^4 - 2x^2$, whereas $U(x) = 0.2(x_1 + x_2)^2 + 0.01(x_1 + x_2)^4 - 0.4(x_1 - x_2)^2$ in the bivariate case. We show in the Appendix that if the energy function is symmetric about $x = C$, where $C$ is a constant, then in theory the analytic MG-SS will have ESS equal to the total sample size. However, as shown in Section 4, the analytic MG-HMC is expected to have an ESS less than that of its corresponding analytic MG-SS, and the gap between the analytic MG-HMC and the analytic MG-SS counterpart should decrease with $a$. As a result, despite the numerical difficulties, we expect the MG-HMC based on numerical integration to have better mixing performance with large $a$. To verify our theory, we run MG-HMC for $a \in \{0.5, 1, 2\}$ for 30,000 iterations with 10,000 burn-in samples. The parameter settings and the acceptance rates are detailed in the Appendix. Empirically, we find that the efficiency of HMC is significantly improved with a large $a$, as shown in Table 1, which coincides with the theory in Section 4.
From Figure 3, we observe that the MG-HMC sampler with monomial parameter $a \in \{1, 2\}$ performs better at jumping between modes of the target distribution than standard HMC, which confirms the theory in Section 4. We also compared MG-HMC ($a = 1$) with standard SS [4]. As expected, in the 1D case the standard SS yields ESS close to the full sample size, while in the 2D case the resulting ESS is lower than that of MG-HMC ($a = 1$) (details are provided in the Appendix).

[Figure 3: MC samples drawn by MG-HMC from the 2D bimodal distribution for different $a$ (legend: MG-HMC $a = 0.5$, $a = 1$, $a = 2$; density contour shown).]

Table 1: ESS of MG-HMC for the 1D and 2D bimodal distributions.

1D       ESS     $\rho_x(1)$
a = 0.5  5175    0.60
a = 1    10157   0.43
a = 2    24298   0.11

2D       ESS     $\rho_x(1)$
a = 0.5  4691    0.67
a = 1    16349   0.60
a = 2    18007   0.53

6.2 Real data

Bayesian logistic regression We evaluate our methods on 6 real-world datasets from the UCI repository [20]: German credit (G), Australian credit (A), Pima Indian (P), Heart (H), Ripley (R) and Caravan (C) [21]. Feature dimensions range from 7 to 87, and the total number of data instances is between 250 and 5822. All datasets are normalized to have zero mean and unit variance. Gaussian priors $\mathcal{N}(0, 100 I)$ are imposed on the regression coefficients. We draw 5000 iterations with 1000 burn-in samples for each experiment. The leap-frog steps are uniformly drawn from $(100 - l, 100 + l)$ with $l = 20$. Other experimental settings ($m$ and $\epsilon$) are provided in the Appendix.

Results in terms of minimum ESS are summarized in Table 2. Prediction accuracies estimated via cross-validation are almost identical across all settings (reported in the Appendix). It can be seen that MG-HMC with $a = 1$ outperforms (in terms of ESS) the other two settings, $a = 0.5$ and $a = 2$, indicating that increased numerical difficulties counter the theoretical gains when $a$ becomes large. This can also be seen by noting that the acceptance rates drop from around 0.9 to around 0.7 as $a$ increases from 0.5 to 2. The dimensionality also seems to have an impact on the optimal setting of $a$: on the high-dimensional Caravan dataset, the improvement of MG-HMC with $a = 1$ is less significant than on the other datasets, and $a = 2$ seems to suffer more from numerical difficulties.

Table 2: Minimum ESS for each method (dimensionality indicated in parentheses). Left: BLR; right: ICA.

Dataset (dim)  A (15)  G (25)  H (14)  P (8)  R (7)  C (87)             ICA (25)
a = 0.5        3124    3447    3524    3434   3317   33 (median 3987)   2677
a = 1          4308    4353    4591    4664   4226   36 (median 4531)   3029
a = 2          1490    3646    4315    4424   1490   7 (median 740)     1534

ICA We finally evaluate our methods on the MEG [22] dataset for Independent Component Analysis (ICA), with 17,730 time points and 25 feature dimensions. All experiments are based on 5000 MCMC samples. The acceptance rates for $a = (0.5, 1, 2)$ are $(0.98, 0.97, 0.77)$. The running time is almost identical for different $a$. Settings (including $m$ and $\epsilon$) are provided in the Appendix. As shown in Table 2, when $a = 1$, MG-HMC has better mixing performance than the other settings.

7 Conclusion

We demonstrated the connection between HMC and slice sampling, introducing a new method for implementing a slice sampler via an augmented form of HMC.
With few modifications to standard HMC, our MG-HMC can be seen as a drop-in replacement in any scenario where HMC and its variants apply, for example, Hamiltonian Variational Inference (HVI) [23]. We showed the theoretical advantages of our method over standard HMC, as well as the numerical difficulties associated with it. Several future extensions can be explored to mitigate the numerical issues, e.g., performing MG-HMC on a Riemann manifold [5] so that step-sizes can be chosen adaptively, and using a high-order symplectic numerical method [24, 25] to reduce the discretization error introduced by the integrator.

References

[1] Christian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2004.
[2] Radford M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
[3] Simon Duane, Anthony D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2), 1987.
[4] Radford M. Neal. Slice sampling. Annals of Statistics, 2003.
[5] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 2011.
[6] Wei-Lun Chao, Justin Solomon, Dominik Michels, and Fei Sha. Exponential integration for Hamiltonian Monte Carlo. In ICML, 2015.
[7] Matthew D. Hoffman and Andrew Gelman. The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1), 2014.
[8] Ziyu Wang, Shakir Mohamed, and Nando de Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo. In ICML, 2013.
[9] Ari Pakman and Liam Paninski. Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions. In NIPS, 2013.
[10] Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, and Charles A. Sutton. Continuous relaxations for discrete Hamiltonian Monte Carlo. In NIPS, 2012.
[11] Iain Murray, Ryan Prescott Adams, and David J. C. MacKay. Elliptical slice sampling. ArXiv, 2009.
[12] Vladimir Igorevich Arnol'd. Mathematical methods of classical mechanics, volume 60. Springer Science & Business Media, 2013.
[13] Herbert Goldstein. Classical mechanics. Pearson Education India, 1965.
[14] John Robert Taylor. Classical mechanics. University Science Books, 2005.
[15] L. D. Landau and E. M. Lifshitz. Mechanics, 1st edition. Pergamon Press, Oxford, 1976.
[16] Samuel Livingstone, Michael Betancourt, Simon Byrne, and Mark Girolami. On the geometric ergodicity of Hamiltonian Monte Carlo. ArXiv, January 2016.
[17] Saralees Nadarajah. A generalized normal distribution. Journal of Applied Statistics, 32(7), 2005.
[18] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[19] Gareth O. Roberts and Richard L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 1996.
[20] Kevin Bache and Moshe Lichman. UCI machine learning repository, 2013.
[21] Peter van der Putten and Maarten van Someren. COIL challenge 2000: The insurance company case. Sentient Machine Research, 9, 2000.
[22] Ricardo Vigário, Veikko Jousmäki, M. Hämäläinen, R. Haft, and Erkki Oja. Independent component analysis for identification of artifacts in magnetoencephalographic recordings. In NIPS, 1998.
[23] Tim Salimans, Diederik P. Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. ArXiv, 2014.
[24] Michael Striebel, Michael Günther, Francesco Knechtli, and Michèle Wandelt. Accuracy of symmetric partitioned Runge-Kutta methods for differential equations on Lie-groups. ArXiv, December 2011.
[25] Chengxiang Jiang and Yuhao Cong. A sixth order diagonally implicit symmetric and symplectic Runge-Kutta method for solving Hamiltonian systems. Journal of Applied Analysis and Computation, 5(1), 2015.
[26] Ivar Ekeland and Jean-Michel Lasry. On the number of periodic trajectories for a Hamiltonian flow on a convex energy surface. Annals of Mathematics, 1980.
[27] Luke Tierney and Antonietta Mira. Some adaptive Monte Carlo methods for Bayesian inference. Statistics in Medicine, 18(17-18), 1999.
[28] Richard Isaac. A general version of Doeblin's condition. The Annals of Mathematical Statistics, 1963.
[29] Eric Cancès, Frédéric Legoll, and Gabriel Stoltz. Theoretical and numerical comparison of some sampling methods for molecular dynamics. ESAIM: Mathematical Modelling and Numerical Analysis, 41(2), 2007.
[30] Alicia A. Johnson. Geometric ergodicity of Gibbs samplers. PhD thesis, University of Minnesota, 2009.
[31] Gareth O. Roberts and Jeffrey S. Rosenthal. Markov-chain Monte Carlo: Some practical implications of theoretical results. Canadian Journal of Statistics, 26(1), 1998.
[32] Jeffrey S. Rosenthal. Minorization conditions and convergence rates for Markov chain Monte Carlo. Journal of the American Statistical Association, 90(430), 1995.
[33] Michael Betancourt, Simon Byrne, and Mark Girolami. Optimizing the integrator step size for Hamiltonian Monte Carlo. ArXiv, 2014.
[34] Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4), 2000.
[35] Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. ArXiv, 2013.
Interpretable Distribution Features with Maximum Testing Power

Wittawat Jitkrittum, Zoltán Szabó, Kacper Chwialkowski, Arthur Gretton
wittawatj@gmail.com, zoltan.szabo.m@gmail.com, kacper.chwialkowski@gmail.com, arthur.gretton@gmail.com
Gatsby Unit, University College London

Abstract

Two semimetrics on probability distributions are proposed, given as the sum of differences of expectations of analytic functions evaluated at spatial or frequency locations (i.e., features). The features are chosen so as to maximize the distinguishability of the distributions, by optimizing a lower bound on test power for a statistical test using these features. The result is a parsimonious and interpretable indication of how and where two distributions differ locally. We show that the empirical estimate of the test power criterion converges with increasing sample size, ensuring the quality of the returned features. In real-world benchmarks on high-dimensional text and image data, linear-time tests using the proposed semimetrics achieve comparable performance to the state-of-the-art quadratic-time maximum mean discrepancy test, while returning human-interpretable features that explain the test results.

1 Introduction

We address the problem of discovering features of distinct probability distributions, with which they can most easily be distinguished. The distributions may be in high dimensions, can differ in non-trivial ways (i.e., not simply in their means), and are observed only through i.i.d. samples. One application for such divergence measures is to model criticism, where samples from a trained model are compared with a validation sample: in the univariate case, through the KL divergence (Cinzia Carota and Polson, 1996), or in the multivariate case, by use of the maximum mean discrepancy (MMD) (Lloyd and Ghahramani, 2015). An alternative, interpretable analysis of a multivariate difference in distributions may be obtained by projecting onto a discriminative direction, such that the Wasserstein distance on this projection is maximized (Mueller and Jaakkola, 2015). Note that both recent works require low dimensionality, either explicitly (in the case of Lloyd and Ghahramani, the function becomes difficult to plot in more than two dimensions), or implicitly in the case of Mueller and Jaakkola, in that a large difference in distributions must occur in projection along a particular one-dimensional axis. Distances between distributions in high dimensions may be more subtle, however, and it is of interest to find interpretable, distinguishing features of these distributions.

In the present paper, we take a hypothesis testing approach to discovering features which best distinguish two multivariate probability measures $P$ and $Q$, as observed by samples $X := \{x_i\}_{i=1}^n$ drawn independently and identically (i.i.d.) from $P$, and $Y := \{y_i\}_{i=1}^n \subset \mathbb{R}^d$ from $Q$. Nonparametric two-sample tests based on RKHS distances (Eric et al., 2008; Fromont et al., 2012; Gretton et al., 2012a) or energy distances (Székely and Rizzo, 2004; Baringhaus and Franz, 2004) have as their test statistic an integral probability metric, the Maximum Mean Discrepancy (Gretton et al., 2012a; Sejdinovic et al., 2013). For this metric, a smooth witness function is computed, such that the amplitude is largest where the probability mass differs most (e.g. Gretton et al., 2012a,
Lloyd and Ghahramani (2015) used this witness function to compare the model output of the Automated Statistician (Lloyd et al., 2014) with a reference sample, yielding a visual indication of where the model fails. In high dimensions, however, the witness function cannot be plotted, and is less helpful. Furthermore, the witness function does not give an easily interpretable result for distributions with local differences in their characteristic functions. A more subtle shortcoming is that it does not provide a direct indication of the distribution features which, when compared, would maximize test power - rather, it is the witness function norm, and (broadly speaking) its variance under the null, that determine test power. Our approach builds on the analytic representations of probability distributions of Chwialkowski et al. (2015), where differences in expectations of analytic functions at particular spatial or frequency locations are used to construct a two-sample test statistic, which can be computed in linear time. Despite the differences in these analytic functions being evaluated at random locations, the analytic tests have greater power than linear time tests based on subsampled estimates of the MMD (Gretton et al., 2012b; Zaremba et al., 2013). Our first theoretical contribution, in Sec. 3, is to derive a lower bound on the test power, which can be maximized over the choice of test locations. We propose two novel tests, both of which significantly outperform the random feature choice of Chwialkowski et al.. The (ME) test evaluates the difference of mean embeddings at locations chosen to maximize the test power lower bound (i.e., spatial features); unlike the maxima of the MMD witness function, these features are directly chosen to maximize the distinguishability of the distributions, and take variance into account. The Smooth Characteristic Function (SCF) test uses as its statistic the difference of the two smoothed empirical characteristic functions, evaluated at points in the frequency domain so as to maximize the same criterion (i.e., frequency features). Optimization of the mean embedding kernels/frequency smoothing functions themselves is achieved on a held-out data set with the same consistent objective. As our second theoretical contribution in Sec. 3, we prove that the empirical estimate of the test power criterion asymptotically converges to its population quantity uniformly over the class of Gaussian kernels. Two important consequences follow: first, in testing, we obtain a more powerful test with fewer features. Second, we obtain a parsimonious and interpretable set of features that best distinguish the probability distributions. In Sec. 4, we provide experiments demonstrating that the proposed linear-time tests greatly outperform all previous linear time tests, and achieve performance that compares to or exceeds the more expensive quadratic-time MMD test (Gretton et al., 2012a). Moreover, the new tests discover features of text data (NIPS proceedings) and image data (distinct facial expressions) which have a clear human interpretation, thus validating our feature elicitation procedure in these challenging high-dimensional testing scenarios. 2 ME and SCF tests In this section, we review the ME and SCF tests (Chwialkowski et al., 2015) for two-sample testing. In Sec. 3, we will extend these approaches to learn features that optimize the power of these tests. Given two samples X := {xi }ni=1 , Y := {yi }ni=1 ? Rd independently and identically distributed (i.i.d.) 
according to $P$ and $Q$, respectively, the goal of a two-sample test is to decide whether $P$ is different from $Q$ on the basis of the samples. The task is formulated as a statistical hypothesis test proposing a null hypothesis $H_0 : P = Q$ (samples are drawn from the same distribution) against an alternative hypothesis $H_1 : P \neq Q$ (the sample-generating distributions are different). A test calculates a test statistic $\hat{\lambda}_n$ from $X$ and $Y$, and rejects $H_0$ if $\hat{\lambda}_n$ exceeds a predetermined test threshold (critical value). The threshold is given by the $(1-\alpha)$-quantile of the (asymptotic) distribution of $\hat{\lambda}_n$ under $H_0$, i.e., the null distribution, and $\alpha$ is the significance level of the test.

ME test The ME test uses as its test statistic $\hat{\lambda}_n$, a form of Hotelling's T-squared statistic, defined as $\hat{\lambda}_n := n \bar{z}_n^\top S_n^{-1} \bar{z}_n$, where $\bar{z}_n := \frac{1}{n}\sum_{i=1}^n z_i$, $S_n := \frac{1}{n-1}\sum_{i=1}^n (z_i - \bar{z}_n)(z_i - \bar{z}_n)^\top$, and $z_i := \big(k(x_i, v_j) - k(y_i, v_j)\big)_{j=1}^J \in \mathbb{R}^J$. The statistic depends on a positive definite kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (with $\mathcal{X} \subseteq \mathbb{R}^d$), and a set of $J$ test locations $V = \{v_j\}_{j=1}^J \subset \mathbb{R}^d$. Under $H_0$, $\hat{\lambda}_n$ asymptotically follows $\chi^2(J)$, a chi-squared distribution with $J$ degrees of freedom. The ME test rejects $H_0$ if $\hat{\lambda}_n > T_\alpha$, where the test threshold $T_\alpha$ is given by the $(1-\alpha)$-quantile of the asymptotic null distribution $\chi^2(J)$. Although the distribution of $\hat{\lambda}_n$ under $H_1$ was not derived, Chwialkowski et al. (2015) showed that if $k$ is analytic, integrable and characteristic (in the sense of Sriperumbudur et al. (2011)), then under $H_1$, $\hat{\lambda}_n$ can be arbitrarily large as $n \to \infty$, allowing the test to correctly reject $H_0$. One can intuitively think of the ME test statistic as a squared, normalized (by the inverse covariance $S_n^{-1}$) $L^2(\mathcal{X}, V_J)$ distance between the mean embeddings (Smola et al., 2007) of the empirical measures $P_n := \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ and $Q_n := \frac{1}{n}\sum_{i=1}^n \delta_{y_i}$, where $V_J := \frac{1}{J}\sum_{i=1}^J \delta_{v_i}$, and $\delta_x$ is the Dirac measure concentrated at $x$. The unnormalized counterpart (i.e., without $S_n^{-1}$) was shown by Chwialkowski et al. (2015) to be a metric on the space of probability measures for any $V$. Both variants behave similarly for two-sample testing, with the normalized version being a semimetric having a more computationally tractable null distribution, i.e., $\chi^2(J)$.

SCF test The SCF test uses a statistic of the same form as the ME test statistic, with a modified $z_i := [\hat{l}(x_i)\sin(x_i^\top v_j) - \hat{l}(y_i)\sin(y_i^\top v_j),\ \hat{l}(x_i)\cos(x_i^\top v_j) - \hat{l}(y_i)\cos(y_i^\top v_j)]_{j=1}^J \in \mathbb{R}^{2J}$, where $\hat{l}(x) = \int_{\mathbb{R}^d} \exp(-i u^\top x)\, l(u) \, du$ is the Fourier transform of $l(x)$, and $l : \mathbb{R}^d \to \mathbb{R}$ is an analytic translation-invariant kernel, i.e., $l(x - y)$ defines a positive definite kernel for $x$ and $y$. In contrast to the ME test, which defines the statistic in terms of spatial locations, the locations $V = \{v_j\}_{j=1}^J \subset \mathbb{R}^d$ in the SCF test are in the frequency domain. As a brief description, let $\phi_P(w) := \mathbb{E}_{x \sim P} \exp(i w^\top x)$ be the characteristic function of $P$. Define a smooth characteristic function as $\hat{\phi}_P(v) = \int_{\mathbb{R}^d} \phi_P(w)\, l(v - w) \, dw$ (Chwialkowski et al., 2015, Definition 2). Then, similar to the ME test, the statistic defined by the SCF test can be seen as a normalized (by $S_n^{-1}$) version of the $L^2(\mathcal{X}, V_J)$ distance between the empirical $\hat{\phi}_P(v)$ and $\hat{\phi}_Q(v)$. The SCF test statistic has asymptotic distribution $\chi^2(2J)$ under $H_0$. We will use $J'$ to refer to the degrees of freedom of the chi-squared distribution, i.e., $J' = J$ for the ME test and $J' = 2J$ for the SCF test.

In this work, we modify the statistic with a regularization parameter $\gamma_n > 0$, giving $\hat{\lambda}_n := n \bar{z}_n^\top (S_n + \gamma_n I)^{-1} \bar{z}_n$, for stability of the matrix inverse.
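For concreteness, the regularized statistic and its $\chi^2(J)$ rejection threshold take only a few lines to compute; the sketch below is our own (Gaussian kernel, fixed test locations), not the authors' released implementation:

```python
import numpy as np
from scipy.stats import chi2

def me_test_statistic(X, Y, V, sigma, gamma=1e-5, alpha=0.01):
    """Regularized ME statistic lambda_hat = n * zbar^T (S + gamma I)^{-1} zbar,
    with z_i = (k(x_i, v_j) - k(y_i, v_j))_{j=1..J} and a Gaussian kernel."""
    def k(A, B):  # Gaussian kernel matrix between the rows of A and of B
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / (2.0 * sigma**2))
    Z = k(X, V) - k(Y, V)                 # (n, J) feature differences z_i
    n, J = Z.shape
    zbar = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)           # (J, J), 1/(n-1) normalization
    lam = n * zbar @ np.linalg.solve(S + gamma * np.eye(J), zbar)
    return lam, chi2(df=J).ppf(1.0 - alpha)

# Toy usage on a GMD-like problem in d = 5, with J = 2 random test locations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
Y = rng.normal(size=(500, 5)); Y[:, 0] += 1.0
lam, T = me_test_statistic(X, Y, V=rng.normal(size=(2, 5)), sigma=1.0)
print(lam, T, lam > T)                    # reject H0 when lam > T_alpha
```

Here `gamma` plays the role of $\gamma_n$; rejecting when the statistic exceeds the $(1-\alpha)$-quantile of $\chi^2(J)$ implements the test at level $\alpha$.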
Using the multivariate Slutsky theorem, under $H_0$, $\hat{\lambda}_n$ still asymptotically follows $\chi^2(J')$, provided that $\gamma_n \to 0$ as $n \to \infty$.

3 Lower bound on test power, consistency of empirical power statistic

This section contains our main results. We propose to optimize the test locations $V$ and kernel parameters (jointly referred to as $\theta$) by maximizing a lower bound on the test power in Proposition 1. This criterion offers a simple objective function for fast parameter tuning. The bound may be of independent interest in other Hotelling's T-squared statistics, since apart from the Gaussian case (e.g. Bilodeau and Brenner, 2008, Ch. 8), the characterization of such statistics under the alternative distribution is challenging. The optimization procedure is given in Sec. 4. We use $\mathbb{E}_{xy}$ as a shorthand for $\mathbb{E}_{x \sim P}\mathbb{E}_{y \sim Q}$ and let $\|\cdot\|_F$ be the Frobenius norm.

Proposition 1 (Lower bound on ME test power). Let $\mathcal{K}$ be a uniformly bounded (i.e., $\exists B < \infty$ such that $\sup_{k \in \mathcal{K}} \sup_{(x,y) \in \mathcal{X}^2} |k(x, y)| \leq B$) family of measurable kernels $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Let $\mathcal{V}$ be a collection in which each element is a set of $J$ test locations. Assume that $\tilde{c} := \sup_{V \in \mathcal{V}, k \in \mathcal{K}} \|\Sigma^{-1}\|_F < \infty$. Then, the test power $\mathbb{P}(\hat{\lambda}_n \geq T_\alpha)$ of the ME test satisfies $\mathbb{P}(\hat{\lambda}_n \geq T_\alpha) \geq L(\lambda_n)$, where

$$L(\lambda_n) := 1 - 2e^{-\xi_1 (\lambda_n - T_\alpha)^2 / n} - 2e^{-\frac{[\gamma_n (\lambda_n - T_\alpha)(n-1) - \xi_2 n]^2}{\xi_3 n (2n-1)^2}} - 2e^{-[(\lambda_n - T_\alpha)/3 - c_3 n \gamma_n]^2 \gamma_n^2 / \xi_4},$$

and $c_3, \xi_1, \ldots, \xi_4$ are positive constants depending only on $B$, $J$ and $\tilde{c}$. The parameter $\lambda_n := n \mu^\top \Sigma^{-1} \mu$ is the population counterpart of $\hat{\lambda}_n := n \bar{z}_n^\top (S_n + \gamma_n I)^{-1} \bar{z}_n$, where $\mu = \mathbb{E}_{xy}[z_1]$ and $\Sigma = \mathbb{E}_{xy}[(z_1 - \mu)(z_1 - \mu)^\top]$. For large $n$, $L(\lambda_n)$ is increasing in $\lambda_n$.

Proof (sketch). The idea is to construct a bound for $|\hat{\lambda}_n - \lambda_n|$, which involves bounding $\|\bar{z}_n - \mu\|_2$ and $\|S_n - \Sigma\|_F$ separately using Hoeffding's inequality. The result follows after a reparameterization of the bound on $\mathbb{P}(|\hat{\lambda}_n - \lambda_n| \geq t)$ to obtain $\mathbb{P}(\hat{\lambda}_n \geq T_\alpha)$. See Sec. F for details.

Proposition 1 suggests that for large $n$ it is sufficient to maximize $\lambda_n$ in order to maximize a lower bound on the ME test power. The same conclusion holds for the SCF test (result omitted due to space constraints). Assume that $k$ is characteristic (Sriperumbudur et al., 2011). It can be shown that $\lambda_n = 0$ if and only if $P = Q$, i.e., $\lambda_n$ is a semimetric for $P$ and $Q$. In this sense, one can see $\lambda_n$ as encoding the ease of rejecting $H_0$: the higher $\lambda_n$, the easier for the test to correctly reject $H_0$ when $H_1$ holds. This observation justifies the use of $\lambda_n$ as a maximization objective for parameter tuning.

Contributions The statistic $\hat{\lambda}_n$ for both the ME and SCF tests depends on a set of test locations $V$ and a kernel parameter $\sigma$. We propose to set $\theta := \{V, \sigma\} = \arg\max_\theta \lambda_n = \arg\max_\theta \mu^\top \Sigma^{-1} \mu$. The optimization of $\theta$ brings two benefits: first, it significantly increases the probability of rejecting $H_0$ when $H_1$ holds; second, the learned test locations act as discriminative features, allowing an interpretation of how the two distributions differ. We note that optimizing parameters by maximizing a test power proxy (Gretton et al., 2012b) is valid under both $H_0$ and $H_1$, as long as the data used for parameter tuning and for testing are disjoint. If $H_0$ holds, then $\theta = \arg\max_\theta 0$ is arbitrary. Since the test statistic asymptotically follows $\chi^2(J')$ for any $\theta$, the optimization does not change the null distribution. Also, the rejection threshold $T_\alpha$ depends only on $J'$ and is independent of $\theta$. To avoid creating a dependency between $\theta$
Let $\mathcal{D} := (X, Y)$ and $\mathcal{D}^{tr}, \mathcal{D}^{te} \subset \mathcal{D}$ be such that $\mathcal{D}^{tr} \cap \mathcal{D}^{te} = \emptyset$ and $\mathcal{D}^{tr} \cup \mathcal{D}^{te} = \mathcal{D}$. In practice, since μ and Σ are unknown, we use $\hat\lambda^{tr}_{n/2}$ in place of $\lambda_n$, where $\hat\lambda^{tr}_{n/2}$ is the test statistic computed on the training set $\mathcal{D}^{tr}$. For simplicity, we assume that each of $\mathcal{D}^{tr}$ and $\mathcal{D}^{te}$ contains half of the samples in $\mathcal{D}$. We perform the optimization of θ with a gradient ascent algorithm on $\hat\lambda^{tr}_{n/2}(\theta)$. The actual two-sample test is performed using the test statistic $\hat\lambda^{te}_{n/2}(\theta^*)$ computed on $\mathcal{D}^{te}$. The full procedure, from tuning the parameters to the actual two-sample test, is summarized in Sec. A.

Since we use an empirical estimate $\hat\lambda^{tr}_{n/2}$ in place of $\lambda_n$ for parameter optimization, we give a finite-sample bound in Theorem 2 guaranteeing the convergence of $\bar z_n^\top(S_n + \gamma_n I)^{-1}\bar z_n$ to $\mu^\top\Sigma^{-1}\mu$ as n increases, uniformly over all kernels $k \in \mathcal{K}$ (a family of uniformly bounded kernels) and all test locations in an appropriate class. Kernel classes satisfying the conditions of Theorem 2 include the widely used isotropic Gaussian kernel class $\mathcal{K}_g = \{k_\sigma : (x,y) \mapsto \exp(-(2\sigma^2)^{-1}\|x-y\|^2) \mid \sigma > 0\}$, and the more general full Gaussian kernel class $\mathcal{K}_{full} = \{k_A : (x,y) \mapsto \exp(-(x-y)^\top A(x-y)) \mid A \text{ positive definite}\}$ (see Lemma 5 and Lemma 6).

Theorem 2 (Consistency of $\hat\lambda_n$ in the ME test). Let $\mathcal{X} \subseteq \mathbb{R}^d$ be a measurable set, and $\mathcal{V}$ be a collection in which each element is a set of J test locations. All suprema over $\mathcal{V}$ and k are to be understood as $\sup_{V\in\mathcal{V}}$ and $\sup_{k\in\mathcal{K}}$, respectively. For a class of kernels $\mathcal{K}$ on $\mathcal{X}\times\mathcal{X}$, define

$\mathcal{F}_1 := \{x \mapsto k(x,v) \mid k\in\mathcal{K}, v\in\mathcal{X}\}$, $\mathcal{F}_2 := \{x \mapsto k(x,v)k(x,v') \mid k\in\mathcal{K},\ v,v'\in\mathcal{X}\}$, (1)
$\mathcal{F}_3 := \{(x,y) \mapsto k(x,v)k(y,v') \mid k\in\mathcal{K},\ v,v'\in\mathcal{X}\}$. (2)

Assume that (1) $\mathcal{K}$ is a uniformly bounded (by B) family of measurable kernels $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$, (2) $\tilde c := \sup_{V,k}\|\Sigma^{-1}\|_F < \infty$, and (3) $\mathcal{F}_i = \{f_{\theta^i} \mid \theta^i \in \Theta^i\}$ is VC-subgraph with VC-index $VC(\mathcal{F}_i)$, and $\theta \mapsto f_{\theta^i}(m)$ is continuous ($\forall m$, i = 1, 2, 3). Let $c_1 := 4B^2 J\sqrt{J}\,\tilde c$, $c_2 := 4B\sqrt{J}\,\tilde c$, and $c_3 := 4B^2 J\,\tilde c^2$. Let the $C_i$ (i = 1, 2, 3) be the universal constants associated to the $\mathcal{F}_i$ according to Theorem 2.6.7 in van der Vaart and Wellner (2000). Then for any $\delta \in (0,1)$, with probability at least $1-\delta$,

$\sup_{V,k}\left|\bar z_n^\top (S_n + \gamma_n I)^{-1}\bar z_n - \mu^\top\Sigma^{-1}\mu\right| \le \frac{2n-1}{n-1}\,\frac{c_1 B^2 J\, T_{\mathcal{F}_1}}{\gamma_n} + \frac{c_1 B J\, T_{\mathcal{F}_1}^2 + c_2\sqrt{J}\, T_{\mathcal{F}_1} + c_1 J\,(T_{\mathcal{F}_2} + T_{\mathcal{F}_3})}{\gamma_n} + \frac{8}{n-1} + c_3\,\gamma_n$,

where

$T_{\mathcal{F}_j} := \frac{16\sqrt{2}\,B^{\rho_j}}{\sqrt{n}}\left(\sqrt{2\log\!\big(C_j\,VC(\mathcal{F}_j)\,(16e)^{VC(\mathcal{F}_j)}\big)} + \sqrt{2\pi\,[VC(\mathcal{F}_j)-1]}\right) + B^{\rho_j}\sqrt{\frac{2\log(5/\delta)}{n}}$

for j = 1, 2, 3, with $\rho_1 = 1$ and $\rho_2 = \rho_3 = 2$.

Proof (sketch). The idea is to upper bound the difference by an expression involving $\sup_{V,k}\|\bar z_n - \mu\|_2$ and $\sup_{V,k}\|S_n - \Sigma\|_F$. These two quantities can be seen as suprema of empirical processes, and can be bounded by Rademacher complexities of their respective function classes (i.e., $\mathcal{F}_1$, $\mathcal{F}_2$, and $\mathcal{F}_3$). Finally, the Rademacher complexities can be upper bounded using the Dudley entropy bound and the VC-subgraph properties of the function classes. Proof details are given in Sec. D.

Theorem 2 implies that if we set $\gamma_n = O(n^{-1/4})$, then we have $\sup_{V,k}\left|\bar z_n^\top(S_n + \gamma_n I)^{-1}\bar z_n - \mu^\top\Sigma^{-1}\mu\right| = O_p(n^{-1/4})$ as the rate of convergence.

Both Proposition 1 and Theorem 2 require $\tilde c := \sup_{V\in\mathcal{V},k\in\mathcal{K}}\|\Sigma^{-1}\|_F < \infty$ as a precondition. To guarantee that $\tilde c < \infty$, a concrete construction of $\mathcal{K}$ is the isotropic Gaussian kernel class $\mathcal{K}_g$, where σ is constrained to lie in a compact set. Also, consider $\mathcal{V} := \{V \mid \text{any two locations are at least } \varepsilon \text{ apart, and all test locations have their norms bounded by } \zeta\}$ for some ε, ζ > 0. Then, for any non-degenerate P, Q, we have $\tilde c < \infty$, since $(\sigma, V) \mapsto \lambda_n$ is continuous and thus attains its supremum over the compact sets $\mathcal{K}$ and $\mathcal{V}$.
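Putting the pieces together, the overall train/test protocol of this section can be sketched as follows (ours, not the reference implementation). It reuses me_statistic from the earlier sketch and defers the tuning step, tune — gradient ascent on the training statistic — to the sketch following the Optimization paragraph below.

```python
import numpy as np
from scipy.stats import chi2

def me_two_sample_test(X, Y, J=5, alpha=0.01, seed=0):
    """Split the data in half, tune theta = (sigma, V) on the training half,
    and run the ME test with the tuned parameters on the held-out half.
    `me_statistic` and `tune` are from the neighboring sketches."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    perm = rng.permutation(n)
    tr, te = perm[: n // 2], perm[n // 2:]
    sigma, V = tune(X[tr], Y[tr], J)              # maximize lambda_hat on D^tr
    lam_te = me_statistic(X[te], Y[te], V, sigma) # statistic on D^te only
    T_alpha = chi2.ppf(1 - alpha, df=J)           # (1 - alpha)-quantile of chi2(J')
    return lam_te > T_alpha, lam_te, V            # reject?, statistic, learned features
```

Because θ is tuned on $\mathcal{D}^{tr}$ and the statistic is evaluated on the disjoint $\mathcal{D}^{te}$, the $\chi^2(J')$ null distribution is unaffected by the optimization, exactly as argued above.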
4 Experiments

In this section, we demonstrate the effectiveness of the proposed methods on both toy and real problems. We consider the isotropic Gaussian kernel class $\mathcal{K}_g$ in all kernel-based tests. We study seven two-sample test algorithms. For the SCF test, we set $\hat l(x) = k(x, 0)$. Denote by ME-full and SCF-full the ME and SCF tests whose test locations and Gaussian width σ are fully optimized using gradient ascent on a separate training sample ($\mathcal{D}^{tr}$) of the same size as the test set ($\mathcal{D}^{te}$). ME-grid and SCF-grid are as in Chwialkowski et al. (2015), where only the Gaussian width is optimized by a grid search[1] and the test locations are randomly drawn from a multivariate normal distribution. MMD-quad (quadratic-time) and MMD-lin (linear-time) refer to the nonparametric tests based on maximum mean discrepancy of Gretton et al. (2012a); to ensure a fair comparison, their Gaussian kernel width is also chosen so as to maximize a criterion for the test power on training data, following the same principle as Gretton et al. (2012b). For MMD-quad, since its null distribution is given by an infinite sum of weighted chi-squared variables (no closed-form quantiles), in each trial we randomly permute the two samples 400 times to approximate the null distribution. Finally, $T^2$ is the standard two-sample Hotelling T-squared test, which serves as a baseline with Gaussian assumptions on P and Q.

[1] Chwialkowski et al. (2015) choose the Gaussian width that minimizes the median of the p-values, a heuristic that does not directly address test power. Here, we perform a grid search to choose the best Gaussian width by maximizing $\hat\lambda^{tr}_{n/2}$, as done in ME-full and SCF-full.

Table 1: Four toy problems. H0 holds only in SG.
Data    P              Q
SG      N(0_d, I_d)    N(0_d, I_d)
GMD     N(0_d, I_d)    N((1, 0, ..., 0)^T, I_d)
GVD     N(0_d, I_d)    N(0_d, diag(2, 1, ..., 1))
Blobs   Gaussian mixtures in R^2 as studied in Chwialkowski et al. (2015); Gretton et al. (2012b)

[Figure: Blobs data — a sample from P (left) and a sample from Q (right).]

In all the following experiments, each problem is repeated for 500 trials. For toy problems, new samples are generated from the specified P, Q distributions in each trial. For real problems, samples are partitioned randomly into training and test sets in each trial. In all of the simulations, we report an empirical estimate of $\mathbb{P}(\hat\lambda^{te}_{n/2} \ge T_\alpha)$, the proportion of the number of times the statistic $\hat\lambda^{te}_{n/2}$ is above $T_\alpha$. This quantity is an estimate of the type-I error under H0, and corresponds to the test power when H1 is true. We set α = 0.01 in all the experiments. All the code and preprocessed data are available at https://github.com/wittawatj/interpretable-test.

Optimization. The parameter tuning objective $\hat\lambda^{tr}_{n/2}(\theta)$ is a function of θ consisting of one real-valued σ and J test locations of d dimensions each. The parameters θ can thus be regarded as a (Jd + 1)-dimensional Euclidean vector. We take the derivative of $\hat\lambda^{tr}_{n/2}(\theta)$ with respect to θ, and use gradient ascent to maximize it. J is pre-specified and fixed. For the ME test, we initialize the test locations with realizations from two multivariate normal distributions fitted to the samples from P and Q; this ensures that the initial locations are well supported by the data. For the SCF test, initialization using the standard normal distribution is found to be sufficient.
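A minimal version of this tuning step can be written with finite-difference gradients; the released code uses analytic gradients, so the finite-difference loop, step sizes, and the median-distance initialization of σ below are simplifications of ours, for illustration only. It assumes the me_statistic helper from the earlier sketch and d ≥ 2.

```python
import numpy as np

def ascend(objective, theta0, step=0.05, iters=200, h=1e-4):
    """Plain gradient ascent with numerical gradients on a flat parameter vector.
    (Slow: 2 * len(theta) objective evaluations per iteration; illustration only.)"""
    theta = theta0.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            e = np.zeros_like(theta); e[k] = h
            grad[k] = (objective(theta + e) - objective(theta - e)) / (2 * h)
        theta += step * grad
    return theta

def pack(sigma, V):
    return np.concatenate([[np.log(sigma)], V.ravel()])

def unpack(theta, J, d):
    return np.exp(theta[0]), theta[1:].reshape(J, d)   # log-sigma keeps sigma > 0

def tune(Xtr, Ytr, J, seed=0):
    """Maximize lambda_hat^{tr}(theta) over theta = (sigma, V) by gradient ascent."""
    d = Xtr.shape[1]
    rng = np.random.default_rng(seed)
    # initialize locations from normals fitted to the P and Q samples (as in the paper)
    V0 = np.vstack([
        rng.multivariate_normal(Xtr.mean(0), np.cov(Xtr.T), size=(J + 1) // 2),
        rng.multivariate_normal(Ytr.mean(0), np.cov(Ytr.T), size=J // 2)])
    sigma0 = np.median(np.abs(Xtr - Ytr))              # heuristic initial width (ours)
    obj = lambda th: me_statistic(Xtr, Ytr, *reversed(unpack(th, J, d)))
    theta = ascend(obj, pack(sigma0, V0))
    return unpack(theta, J, d)                         # tuned (sigma, V)
```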
The parameter $\gamma_n$ is not optimized; we set the regularization parameter $\gamma_n$ to be as small as possible while being large enough to ensure that $(S_n + \gamma_n I)^{-1}$ can be stably computed. We emphasize that both the optimization and the testing are linear in n: the testing costs $O(J^3 + J^2 n + dJn)$, and the optimization costs $O(J^3 + dJ^2 n)$ per gradient ascent iteration. Runtimes of all methods are reported in Sec. C in the appendix.

1. Informative features: simple demonstration. We begin with a demonstration that the proxy $\hat\lambda^{tr}_{n/2}(\theta)$ for the test power is informative for revealing the difference of the two samples in the ME test. We consider the Gaussian Mean Difference (GMD) problem (see Table 1), where both P and Q are two-dimensional normal distributions differing in their means. We use J = 2 test locations $v_1$ and $v_2$, where $v_1$ is fixed to the location indicated by the black triangle in Fig. 1, and the contour plot shows $v_2 \mapsto \hat\lambda^{tr}_{n/2}(v_1, v_2)$.

Fig. 1 (top) suggests that $\hat\lambda^{tr}_{n/2}$ is maximized when $v_2$ is placed in either of the two regions that capture the difference of the two samples, i.e., the regions in which the probability masses of P and Q have less overlap. In Fig. 1 (bottom), we consider placing $v_1$ in one of these two key regions. In this case, the contour plot shows that $v_2$ should be placed in the other region to maximize $\hat\lambda^{tr}_{n/2}$, implying that placing multiple test locations in the same neighborhood will not increase the discriminability. The two modes on the left and right suggest two ways to place a test location in a region that reveals the difference. The non-convexity of $\hat\lambda^{tr}_{n/2}$ is an indication of many informative ways to detect differences between P and Q, rather than a drawback; a convex objective would not capture this multimodality.

[Figure 1: A contour plot of $\hat\lambda^{tr}_{n/2}$ as a function of $v_2$ when J = 2 and $v_1$ is fixed (black triangle). The objective $\hat\lambda^{tr}_{n/2}$ is high in the regions that reveal the difference of the two samples.]

2. Test power vs. sample size n. We now demonstrate the rate of increase of test power with sample size. When the null hypothesis holds, the type-I error should stay at the specified level α. We consider the following four toy problems: Same Gaussian (SG), Gaussian mean difference (GMD), Gaussian variance difference (GVD), and Blobs. The specifications of P and Q are summarized in Table 1. In the Blobs problem, P and Q are defined as mixtures of Gaussian distributions arranged on a 4 × 4 grid in $\mathbb{R}^2$. This problem is challenging, as the difference between P and Q is encoded at a much smaller length scale than the global structure (Gretton et al., 2012b). Specifically, the eigenvalue ratio for the covariance of each Gaussian distribution is 2.0 in P and 1.0 in Q. We set J = 5 in this experiment. The results are shown in Fig. 2, where the type-I error (for the SG problem) and the test power (for the GMD, GVD and Blobs problems) are plotted against the test sample size.

[Figure 2: Plots of type-I error/test power against the test sample size n in the four toy problems: (a) SG, d = 50; (b) GMD, d = 100; (c) GVD, d = 50; (d) Blobs. Methods: ME-full, ME-grid, SCF-full, SCF-grid, MMD-quad, MMD-lin, T².]
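The four toy distributions are easy to simulate. The sketch below is our construction from the descriptions above; in particular, the Blobs grid spacing and base scale are our choices for illustration, not values taken from the paper.

```python
import numpy as np

def sample_sg(n, d, rng):    # H0 holds: P = Q = N(0, I_d)
    return rng.normal(size=(n, d)), rng.normal(size=(n, d))

def sample_gmd(n, d, rng):   # Q shifts the first coordinate mean by 1
    X = rng.normal(size=(n, d))
    Y = rng.normal(size=(n, d)); Y[:, 0] += 1.0
    return X, Y

def sample_gvd(n, d, rng):   # Q doubles the variance of the first coordinate
    X = rng.normal(size=(n, d))
    Y = rng.normal(size=(n, d)); Y[:, 0] *= np.sqrt(2.0)
    return X, Y

def sample_blobs(n, rng, spacing=5.0, eps=1.0):
    """4 x 4 grid of Gaussians; component covariance has eigenvalue ratio
    2.0 under P and 1.0 under Q, per the description above."""
    centers = spacing * np.array([[i, j] for i in range(4) for j in range(4)], float)
    def draw(ratio):
        idx = rng.integers(len(centers), size=n)
        noise = rng.normal(size=(n, 2)) * np.sqrt([ratio * eps, eps])
        return centers[idx] + noise
    return draw(2.0), draw(1.0)

rng = np.random.default_rng(0)
X, Y = sample_blobs(1000, rng)   # e.g., feed these to me_two_sample_test above
```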
A number of observations are worth noting. In the SG problem, we see that the type-I error roughly stays at the specified level: the rate of rejection of H0 when it is true is roughly at the specified level α = 0.01. GMD with 100 dimensions turns out to be an easy problem for all the tests except MMD-lin. In the GVD and Blobs cases, ME-full and SCF-full achieve substantially higher test power than ME-grid and SCF-grid, respectively, suggesting a clear advantage from optimizing the test locations. Remarkably, ME-full consistently outperforms the quadratic-time MMD across all test sample sizes in the GVD case. When the difference between P and Q is subtle, as in the Blobs problem, ME-grid, which uses randomly drawn test locations, can perform poorly (see Fig. 2d), since it is unlikely that randomly drawn locations will land in the key regions that reveal the difference. In this case, optimization of the test locations can considerably boost the test power (see ME-full in Fig. 2d). Note also that the SCF variants perform significantly better than the ME variants on the Blobs problem, as the difference between P and Q is localized in the frequency domain; ME-full and ME-grid would require many more test locations in the spatial domain to match the test powers of the SCF variants. For the same reason, SCF-full does much better than the quadratic-time MMD across most sample sizes, as the latter represents a weighted distance between characteristic functions integrated across the entire frequency domain (Sriperumbudur et al., 2010, Corollary 4).

[Figure 3: Plots of type-I error/test power against the dimension d in the four toy problems of Table 1: (a) SG; (b) GMD; (c) GVD.]

3. Test power vs. dimension d. We next investigate how the dimension d of the problem affects the type-I errors and test powers of the ME and SCF tests. We consider the same artificial problems: SG, GMD and GVD. This time, we fix the test sample size to 10000, set J = 5, and vary the dimension. The results are shown in Fig. 3. Due to the large dimensions and sample size, it is computationally infeasible to run MMD-quad. We observe that all the tests except the T-test can maintain the type-I error at roughly the specified significance level α = 0.01 as the dimension increases.
The type-I performance of the T-test is incorrect at large d because of the difficulty of accurately estimating the covariance matrix in high dimensions. It is interesting to note the high performance of ME-full in the GMD problem in Fig. 3b: ME-full achieves the maximum test power of 1.0 throughout and matches the power of the T-test, in spite of being nonparametric and making no assumptions on P and Q (the T-test is further advantaged by its excessive type-I error). However, this is true only with optimization of the test locations. This is reflected in the test power of ME-grid in Fig. 3b, which drops monotonically as the dimension increases, highlighting the importance of test location optimization. The performance of MMD-lin degrades quickly with increasing dimension, as expected from Ramdas et al. (2015).

4. Distinguishing articles from two categories. We now turn to performance on real data. We first consider the problem of distinguishing two categories of publications at the conference on Neural Information Processing Systems (NIPS). Out of 5903 papers published at NIPS from 1988 to 2015, we manually select disjoint subsets related to Bayesian inference (Bayes), neuroscience (Neuro), deep learning (Deep), and statistical learning theory (Learn) (see Sec. B). Each paper is represented as a bag of words using TF-IDF (Manning et al., 2008) features. We perform stemming, remove all stop words, and retain only nouns. A further filtering by document frequency (DF), keeping words with 5 ≤ DF ≤ 2000, yields approximately 5000 words, from which 2000 words (i.e., d = 2000 dimensions) are randomly selected. See Sec. B for more details on the preprocessing. For the ME and SCF tests, we use only one test location, i.e., we set J = 1. We perform 1000 permutations to approximate the null distribution of MMD-quad in this and the following experiments. Type-I errors and test powers are summarized in Table 2. The first column indicates the categories of the papers in the two samples. In the Bayes-Bayes problem, papers on Bayesian inference are randomly partitioned into two samples in each trial; this task represents a case in which H0 holds.

Table 2: Type-I errors and powers of various tests in the problem of distinguishing NIPS papers from two categories. α = 0.01, J = 1. n^te denotes the test sample size of each of the two samples.
Problem       n^te   ME-full  ME-grid  SCF-full  SCF-grid  MMD-quad  MMD-lin
Bayes-Bayes   215    .012     .018     .012      .004      .022      .008
Bayes-Deep    216    .954     .034     .688      .180      .906      .262
Bayes-Learn   138    .990     .774     .836      .534      1.00      .238
Bayes-Neuro   394    1.00     .300     .828      .500      .952      .972
Learn-Deep    149    .956     .052     .656      .138      .876      .500
Learn-Neuro   146    .960     .572     .590      .360      1.00      .538

Among all the linear-time tests, we observe that ME-full has the highest test power in all the tasks, attaining a maximum test power of 1.0 in the Bayes-Neuro problem. This high performance assures that although different test locations V may be selected in different trials, these locations are each informative. It is interesting to observe that ME-full has performance close to or better than MMD-quad, which requires $O(n^2)$ runtime. Besides the clear advantages in interpretability and linear runtime of the proposed tests, these results suggest that evaluating the differences in expectations of analytic functions at particular locations can yield an equally powerful test at a much lower cost, as opposed to computing the RKHS norm of the witness function as done in MMD. Unlike in the Blobs problem, however, Fourier features are less powerful in this setting.

We further investigate the interpretability of the ME test by the following procedure. For the learned test location $v^t \in \mathbb{R}^d$ (d = 2000) in trial t, we construct $\hat v^t = (\hat v_1^t, \ldots, \hat v_d^t)$ such that $\hat v_j^t = |v_j^t|$. Let $\tau_j^t \in \{0, 1\}$ be an indicator variable taking value 1 if $\hat v_j^t$ is among the top five largest over all $j \in \{1, \ldots, d\}$, and 0 otherwise. Define $\eta_j := \sum_t \tau_j^t$ as a proxy indicating the significance of word j, i.e., $\eta_j$ is high if word j is frequently among the top five largest entries as measured by $\hat v_j^t$.
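A sketch of this word-scoring proxy (ours; V_trials and words are hypothetical names for the stacked learned locations across trials and the vocabulary):

```python
import numpy as np

def top_word_scores(V_trials, top=5):
    """eta_j = number of trials in which |v_j| is among the `top` largest entries."""
    A = np.abs(np.asarray(V_trials))          # shape (num_trials, d)
    top_idx = np.argsort(-A, axis=1)[:, :top] # indices of the largest coordinates
    eta = np.zeros(A.shape[1], dtype=int)
    for row in top_idx:
        eta[row] += 1
    return eta

# Usage (hypothetical inputs): rank words by how often they dominate the location.
# eta = top_word_scores(V_trials)
# print([words[j] for j in np.argsort(-eta)[:7]])
```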
The top seven words, sorted in descending order by $\eta_j$, in the Bayes-Neuro problem are spike, markov, cortex, dropout, recurr, iii, gibb, showing that the learned test locations are highly interpretable. Indeed, "markov" and "gibb" (i.e., the stem of Gibbs) are discriminative terms in the Bayesian inference category, and "spike" and "cortex" are key terms in neuroscience. We give full lists of discriminative terms learned in all the problems in Sec. B.1. To show that not all of the randomly selected 2000 terms are informative, if the definition of $\tau_j^t$ is modified to consider the least important words (i.e., $\eta_j$ is high if word j is frequently among the top five smallest as measured by $\hat v_j^t$), we instead obtain circumfer, bra, dominiqu, rhino, mitra, kid, impostor, which are not discriminative.

5. Distinguishing positive and negative emotions. In the final experiment, we study how well the ME and SCF tests can distinguish two samples of photos of people showing positive and negative facial expressions. Our emphasis is on the discriminative features of the faces identified by the ME test, showing how the two groups differ. For this purpose, we use the Karolinska Directed Emotional Faces (KDEF) dataset (Lundqvist et al., 1998), containing 5040 aligned face images of 70 amateur actors, 35 females and 35 males. We use only photos showing front views of the faces. In the dataset, each actor displays seven expressions: happy (HA), neutral (NE), surprised (SU), sad (SA), afraid (AF), angry (AN), and disgusted (DI). We assign the HA, NE, and SU faces to the positive emotion group (i.e., samples from P), and the AF, AN and DI faces to the negative emotion group (samples from Q). We denote this problem by "+ vs. −". Examples of the six facial expressions from one actor are shown in Fig. 4. Photos of the SA group are unused, to keep the sizes of the two samples equal. Each image, of size 562 × 762 pixels, is cropped to exclude the background, resized to 48 × 34 = 1632 pixels (= d), and converted to grayscale.

[Figure 4: (a)–(f): six facial expressions (HA, NE, SU, AF, AN, DI) of actor AM05 in the KDEF data. (g): average across trials of the learned test locations $v_1$.]

We run the tests 500 times with the same settings used previously, i.e., Gaussian kernels and J = 1. The type-I errors and test powers are shown in Table 3. In the table, "± vs. ±" is a problem in which all faces expressing the six emotions are randomly split into two samples of equal sizes, i.e., H0 is true.

Table 3: Type-I errors and powers in the problem of distinguishing positive (+) and negative (−) facial expressions. α = 0.01, J = 1.
Problem    n^te   ME-full  ME-grid  SCF-full  SCF-grid  MMD-quad  MMD-lin
± vs. ±    201    .010     .012     .014      .002      .018      .008
+ vs. −    201    .998     .656     1.00      .750      1.00      .578

Both ME-full and SCF-full achieve high test powers while maintaining the correct type-I errors. As a way to interpret how positive and negative emotions differ, we take the average across trials of the learned test locations of ME-full in the "+ vs. −" problem. This average is shown in Fig. 4g. We see that the test locations faithfully capture the difference between positive and negative emotions by giving more weight to the regions of the nose, upper lip, and nasolabial folds (smile lines), confirming the interpretability of the test in a high-dimensional setting.

Acknowledgement. We thank the Gatsby Charitable Foundation for the financial support.

References
L. Baringhaus and C. Franz. On a new multivariate two-sample test. Journal of Multivariate Analysis, 88:190–206, 2004.
M. Bilodeau and D. Brenner. Theory of Multivariate Statistics. Springer Science & Business Media, 2008.
S. Bird, E. Klein, and E. Loper. Natural Language Processing with Python. O'Reilly Media, 1st edition, 2009.
O. Bousquet. New approaches to statistical learning theory. Annals of the Institute of Statistical Mathematics, 55:371–389, 2003.
K. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton. Fast two-sample testing with analytic representations of probability measures. In NIPS, pages 1972–1980, 2015.
C. Carota, G. Parmigiani, and N. G. Polson. Diagnostic measures for model criticism. Journal of the American Statistical Association, 91(434):753–762, 1996.
M. Eric, F. R. Bach, and Z. Harchaoui. Testing for homogeneity with kernel Fisher discriminant analysis. In NIPS, pages 609–616, 2008.
M. Fromont, B. Laurent, M. Lerasle, and P. Reynaud-Bouret. Kernels based tests with non-asymptotic bootstrap approaches for two-sample problems. In COLT, pages 23.1–23.22, 2012.
A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012a.
A. Gretton, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu, and B. K. Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In NIPS, pages 1205–1213, 2012b.
M. R. Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer, 2008.
J. R. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In NIPS, pages 829–837, 2015.
J. R. Lloyd, D. Duvenaud, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Automatic construction and natural-language description of nonparametric regression models. In AAAI, pages 1242–1250, 2014.
D. Lundqvist, A. Flykt, and A. Öhman. The Karolinska directed emotional faces — KDEF. Technical report, ISBN 91-630-7164-9, 1998.
C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
J. Mueller and T. Jaakkola. Principal differences analysis: Interpretable characterization of differences between distributions. In NIPS, pages 1693–1701, 2015.
A. Ramdas, S. Jakkam Reddi, B. Póczos, A. Singh, and L. Wasserman. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions. In AAAI, pages 3571–3577, 2015.
D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Annals of Statistics, 41(5):2263–2291, 2013.
A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In ALT, pages 13–31, 2007.
N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In COLT, pages 169–183, 2006.
B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
B. K. Sriperumbudur, K. Fukumizu, and G. R. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389–2410, 2011.
I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
G. Székely and M. Rizzo. Testing for equal distributions in high dimension. InterStat, (5), 2004.
A. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer, 2000.
W. Zaremba, A. Gretton, and M. Blaschko. B-test: A non-parametric, low variance kernel two-sample test. In NIPS, pages 755–763, 2013.
Threshold Bandit, With and Without Censored Feedback

Jacob Abernethy, Department of Computer Science, University of Michigan, Ann Arbor, MI 48109, jabernet@umich.edu
Kareem Amin, Department of Computer Science, University of Michigan, Ann Arbor, MI 48109, amkareem@umich.edu
Ruihao Zhu, AeroAstro & CSAIL, MIT, Cambridge, MA 02139, rzhu@mit.edu

Abstract

We consider the Threshold Bandit setting, a variant of the classical multi-armed bandit problem in which the reward on each round depends on a piece of side information known as a threshold value. The learner selects one of K actions (arms), this action generates a random sample from a fixed distribution, and the action then receives a unit payoff in the event that this sample exceeds the threshold value. We consider two versions of this problem, the uncensored and censored case, which determine whether the sample is always observed or only when the threshold is not met. Using new tools to understand the popular UCB algorithm, we show that the uncensored case is essentially no more difficult than the classical multi-armed bandit setting. Finally we show that the censored case exhibits more challenges, but we give guarantees in the event that the sequence of threshold values is generated optimistically.

1 Introduction

The classical Multi-armed Bandit (MAB) problem provides a framework to reason about sequential decision settings, specifically ones where the learner's chosen decision is intimately tied to the information content received as feedback. MAB problems have generated much interest in the machine learning research literature in recent years, particularly as a result of the changing nature in which learning and estimation algorithms are employed in practice. More and more we encounter scenarios in which the procedure used to make and exploit algorithmic predictions is exactly the same procedure used to capture new data to improve prediction performance. In other words, it is increasingly harder to view training and testing as distinct entities.

MAB problems generally involve repeatedly making a choice between one of a finite (or even infinite) set of actions, and these actions have historically been referred to as arms of the bandit. If we "pull" arm i at round t, then we receive a reward $R_i^t \in [0, 1]$, which is frequently assumed to be a stochastic quantity drawn according to a distribution $D_i$. Typically we assume that the $D_i$ are heterogeneous across the arms i, whereas the samples $\{R_i^t\}_{t=1,\ldots,T}$ are independently and identically distributed according to the fixed $D_i$ across all times t. (Note that in much of our notation we use the superscript t to denote the time period rather than an exponent.) Of course, were the learner to have full knowledge of the distributions $D_i$ from the outset, she would presumably choose to pull the arm whose expected reward $\mu_i$ is highest. With that in mind, we tend to consider the (expected) regret of the learner, defined to be the (expected) reward of the best arm minus the (expected) reward of the actual arms selected by the learner.

Early work on MAB problems (Robbins, 1952; Lai and Robbins, 1985; Gittins et al., 2011) tended to be more focused on asymptotic guarantees, whereas more recent work (Auer et al., 2002; Auer, 2003)
One of the best-known and well-studied techniques is known as the Upper Confidence Bound (UCB) algorithm (Auer et al., 2002; Auer and Ortner, 2010). The magic of UCB relies on a very intuitive policy framework, that a learner should select decisions by maximizing over rewards estimated from previous data but only after biasing each estimate according to its uncertainty. Simply put, one should choose the arm that maximizes the ?mean plus confidence interval,? hence the name Upper Confidence Bound. In the present paper we focus on the Threshold Bandit setting, described as follows. On each round t, a piece of side information is given to the learner in the form of a real number ct , the learner must then choose arm i out of K arms, and this arm produces a value Xit drawn from a survival distribution with survival function Fi (x) = Pr(Xit x). The reward to the learner is not Xit itself but is instead the binary value Rti = I[Xit ct ]; that is, we receive a unit reward when the sample Xit exceeds the threshold value ct , and otherwise we receive no reward. For a fixed value of ct , each arm i has expected payoff E[Rti ] = Fi (ct ). Notice, crucially, that the arm with the greatest expected payoff can vary significantly across different threshold values. This abstract model has a number of very natural applications: 1. Packet Delivery with Deadlines: FedEx receives a stream of packages that need to be shipped from source to destination, and each package is supplied with a delivery deadline. The goal of the FedEx routing system is to select a transportation route (via air or road or ship, etc.) that has the highest probability of on-time arrival. Of course some transportation schemes are often faster (e.g. air travel) but have higher volatility (e.g. due to poor weather). 2. Supplier Selection: Customers approach a manufacturing firm to produce a product with specific quality demands. The firm must approach one of several suppliers to contract out the work, but the firm is uncertain as to the capabilities and variabilities of the products each supplier produces. 3. Dark Pool Brokerage: A financial brokerage firm is asked to buy or sell various sized bundles of shares, and the brokerage aims to offload the transactions onto one of many dark pools, i.e. financial exchanges that match buyers and sellers in a confidential manner (Ganchev et al., 2010; Amin et al., 2012; Agarwal et al., 2010). A standard dark pool mechanism will simply execute the transaction if there is suitable liquidity, or will reject the transaction when no match is made. Of course the brokerage gets paid on commission, and simply wants to choose the pool that has the highest probability of completion. What distinguishes the Threshold Bandit problem from the standard stochastic multi-armed bandit setting are two main features: 1. The regret of the learner will be measured in comparison to the best policy rather than to simply the best arm. Note that the optimal offline policy may incorporate the threshold value ct before selecting an arm I t . 2. Whereas the standard stochastic bandit setting assumes that we observe the reward RtI t of the chosen arm I t , in the Threshold Bandit setting we consider two types of feedback. (a) Uncensored Feedback: After playing arm I t , the learner observes the sample XIt t regardless of the threshold value ct . This is a natural model for the FedEx routing problem above, wherein one learns the travel time of a package regardless of the deadline having been met. 
(b) Censored Feedback: after playing $I^t$, the learner observes a null value when $X_{I^t}^t \ge c^t$, and otherwise observes $X_{I^t}^t$. This is a natural model for the Supplier Selection problem above, as we would only learn the product's quality value when the customer rejects what is received from the supplier.

In the present paper we present roughly three primary results. First, we provide a new perspective on the classical UCB algorithm, giving an alternative proof that relies on an interesting potential function argument; we believe this technique may be of independent interest. Second, we analyze the Threshold Bandit setting when given uncensored feedback, and we give a novel algorithm called DKWUCB based on the Dvoretzky-Kiefer-Wolfowitz inequality (Dvoretzky et al., 1956). We show, somewhat surprisingly, that with uncensored feedback the regret bound is no worse than the standard
Note the key property that V(N, d) ? e/2 for any N ](e, d). We can now define our variant of the UCB algorithm for a fixed choice of d > 0. UCB Algorithm: on round t play I t = arg max ?? ti + V(Nit , d) i (1) We will make the simplifying assumption that the largest ?i is unique and, without loss of generality, let us assume that the coordinates are permuted in order that ?1 is the largest mean reward. Furthermore, define Di := ?1 ?i for i = 2, . . . , K. A central piece of the analysis relies on the following potential function, which depends on the current number of plays of each arm i = 2, . . . , K. F(N2t , . . . , NKt ) t K Ni 1 := 2 ? ? V(N, d) i=2 N=0 Lemma 1. The expected regret of UCB is bounded as T +1 E[RegretT (UCB)] ? E[F(N2 3 , . . . , NKT +1 )] + O(T d) (2) Proof. The (random) additional regret suffered on round t of UCB is exactly ?1 ?I t . By virtue of our given deviation bound, we know that ?1 ? ?? t1 + V(N1t , d) and ?? tI t ? ?I t + V(NIt t , d), each w.p. > 1 d. (3) t Also, let x be the indicator variable that one of the above two inequalities fails to hold. Of course we chose V(?) in order that E[xt ] ? 2d via a simple union bound. Note that, by virtue of using the UCB selection rule for I t , it is clear that we have ?? t1 + V(N1t , d) ? ?? tI t + V(NIt t , d) If we combine Equations 3 and 4, and consider the event that xt = 0, then we obtain ?1 ? ?? t1 + V(N1t , d) ? ?? tI t + V(NIt t , d) ? ?I t + 2V(NIt t , d). Even in the event that xt = 1 we have that ?1 ?I t ? 1. Hence, it follows immediately that ?1 2V(NIt t , d) + xt . (4) ?I t ? Finally, we observe that the potential function was chosen so that F(N2t+1 , . . . , NKt+1 ) F(N2t , . . . , NKt ) = 2V(NIt t , d). Recalling that F(0, . . . , 0) = 0, a simple telescoping argument gives that " # T T +1 T +1 t T +1 T +1 E[RegretT (UCB)] ? E F(N2 , . . . , NK ) + ? x = E[F(N2 , . . . , NK )] + 2T d. t=1 The final piece we need to establish is that the number of pulls Nit of arm i, for i = 2, . . . , K, is unlikely to exceed ](Di , d). This result uses some more standard techniques from the original UCB analysis (Auer et al., 2002), and we defer it to the appendix. Lemma 2. For any T > 0 we have E[F(N2T +1 , . . . , NKT +1 )] ? F(](D2 , d), . . . , ](DK , d)) + O(T 2 d). We are now able to combine the above results for the final bound. Theorem 1. If we set d = T 2 /2, the expected regret of UCB is bounded as K log(T ) + O(1). Di i=2 E[RegretT (UCB)] ? 8 ? Proof. Note that a very standard deviation bound that holds for all distributions supported on [0, 1] is the Hoeffding-Azuma inequality (Cesa-Bianchi and Lugosi, 2006), wherelthe bound m is 2 log 2/d 2 given by f (N, e) = 2 exp( 2Ne ). Utilizing Hoeffding-Azuma we have ](e, d) = and e2 q p V(N, d) = log(2/d) for N > 0. If we utilize the fact that ?Yy=1 p1y ? 2 Y , then we see that 2N r K ](Di ,d) K K log(2/d)](Di , d) log(2/d) F(](D2 , d), . . . , ](DK , d)) = 2 ? ? V(N, d) = 2 ? 2 =4? . 2 Di i=2 N=0 i=2 i=2 Combining the Lemma 1 and Lemma 2, setting d = T 3 2 /2, we conclude the theorem. The Threshold Bandits Model In the preceding, we described a potential-based proof for the UCB algorithm in the classic stochastic bandit problem. We now return to the Threshold Bandit setting, our problem of interest. A K-armed Threshold Bandit problem is defined by random variables Xit and a sequence of threshold values ct for 1 ? i ? K and 1 ? t ? T, where i is the index for arms. Successive pulling of arm i generates the values Xi1 , Xi2 , . . . , XiT , which are drawn i.i.d. 
from an unknown distribution. The threshold values c1 , c2 , . . . , cT are drawn from M = {1, 2, . . . , m} (according to rules specified later). The threshold value ct is observed at the beginning of round t, and the learner follows a policy P to choose the arm to play based on its past selections and previously observed feedbacks. Suppose the arm pulled at round t is I t , the observed reward is then RtI t = I[XIt t ct ]; that is, we receive a unit reward when the sample XIt t exceeds the threshold value ct , and otherwise we receive no reward. We distinguish two different types of feedback. 4 1. Uncensored Feedback: After playing arm I t , the learner observes the sample XIt t regardless of the threshold value ct . ? 0/ if XIt t ct , 2. Censored Feedback: After playing I t , the learner observes2 . t XI t otherwise In this case, we refer to the threshold value as a censor value. Let Fi (x) denote the survival function of the distribution on arm i. That is, Fi (x) = Pr(Xit x). We measure regret against the optimal policy with full knowledge of F1 , . . . , Fn i.e., " ? " ? ?# ?# T T t t t t t t RegretT (P) = E ? max Ri RI t = E ? max I Xi c I XI t c . t=1 i2[n] t=1 i2[n] Notice that for a fixed value of ct , each arm i has expected payoff E[Rti ] = Fi (ct ), the regret can also be written as ? T ? RegretT (P) = E ?t=1 maxi2[n] Fi (ct ) FI t (ct ) . Our goal is to design a policy that minimizes the regret. 4 DKWUCB: Dvoretzky-Kiefer-Wolfowitz Inequality based Upper Confidence Bound algorithm In this section, we study the uncensored feedback setting in which the value XIt t is always observed regardless of ct . We assume that the largest Fi ( j) is unique for all j 2 M, and define i? ( j) = arg maxi Fi ( j), Di ( j) = Fi? ( j) ( j) Fi ( j) for all i = 1, 2, . . . , K and j 2 M. Under this setting, the algorithm will use the empirical distribution as an estimate for the true distribution. Specifically, we want to estimate the true survival function Fi via: ?t 1 I[X tt j, I t = i] F?it ( j) = t=1 I t 8j 2 M Ni (5) The key tool in our analysis is a deviation bound on the empirical CDF of a distribution, and we note that this bound holds uniformly over the support of the distribution. The Dvoretzky-KieferWolfowitz (DKW) inequality (Dvoretzky et al., 1956) allows us to bound the error on F?it ( j) : Lemma 3. At a time t, let F?it be the empirical distribution function of Fi as given in equation 5. The probability that the maximum of the difference between F?it and Fi over all j 2 M is at least e is less than 2 exp 2e2 Nit , i.e., Pr sup j2M |F?it ( j) Fi ( j)| e Nit N ? 2 exp 2e2 N . The proof of the lemma can be found in Dvoretzky et al. (1956). The key insight is that the estimate F?i converges to Fi point-wise at the same rate as the Hoeffding-Asumza inequality. That is, one does not pay an additional M factor from applying a union bound. The fact that we have uniform convergence of the CDF with the same rate as the Hoeffding-Azuma inequality allows us to immediately apply the potential function argument from Section 2. In particular, we define f (N, e) = 2 exp 2e2 N , as well as the pair of functions ](e, d) and V(N, d) exactly the same as the previous section, i.e., ? ? 2 log 2/d ](e, d) := , e2 ( q log(2/d) if N > 0; 2N V(N, d) := 1 otherwise. We are now ready to define our DKWUCB algorithm for a fixed choice of parameter d > 0 to solve the problem. DKWUCB Algorithm: on round t play I t arg max F?it (ct ) + V(Nit , d) . (6) i 2 Existing literature often refers to this as right-censoring. 
With right-censored feedback, samples from playing arms at high threshold values can inform decisions at low threshold values but not vice versa. 5 To analyze DKWUCB, we use a slight variant of the potential function defined in Section 2. Let i? ( j) = arg maxi Fi ( j) denote the optimal arm for threshold value j, and N? it denote the number of rounds arm i is pulled when it is not optimal, N? it = ?tt=11 I[I t = i, I t 6= i? (ct )]. Notice that N? it ? Nit . Define the potential function as: t K N? i 1 F(N? 1t , . . . , N? Kt ) := 2 ? ? V(N, d) (7) i=1 N=0 Theorem 2. Setting d = T 2 /2, the expected regret of DKWUCB is bounded as K log T + O(1), min j2M Di ( j) i=1 E[RegretT (DKWUCB)] ? 8 ? We defer the proof of this theorem to the appendix. We pause now to comment on some of the strengths of this type of analysis. At a high-level, the typical analysis to the UCB algorithm for the standard multi-armed bandit problem (Auer ? et al., ? ) 2002) is the following: (1) at some finite time T , the number of pulls of a bad arm i is O log(T D2 i with high probability, and (2) the ? regret ? suffered by any such pull is O(Di ). The contribution of arm log(T ) i to total regret is therefore O . In contrast, we analyzed the UCB algorithm in Section 2 Di by observing that the expected regret suffered on round t is bounded by the difference between the empirical mean estimator and the true mean for the payoff of arm I t . Of course by design this quantity is almost certainly (w.p. at least 1 d) less than V(NItt ). The potential function F(?, . . . , ?) tracks the accumulation of these values V(Nit ) for each arm i, and the final regret bound is a consequence of the summation properties of V, ] for the particular estimator being used. While these two approaches lead to the same bound in the standard multi-armed bandit problem, the potential function approach bears fruit in the Threshold Bandit setting. Because the uniform convergence rate promised by the DKW inequality matches that of the Hoeffding-Azume inequality, Theorem 2 should not be surprising; the ith arm?s contribution to DKWUCB?s regret should be idenitical to UCB, but with the suboptimality gap now equal to min j Di ( j). However, following the program analysis of UCB, one would naively argue that ? for the standard ? arm i is incorrectly pulled O (min log(TD)( j))2 times. These pulls might come in the face of any j2M i t number of threshold ? ? values c , suffering as much as max j2M Di ( j) regret,? yielding?a bound of max j2M Di ( j) log(T ) max D ( j) O (min on the ith arm?s regret contribution, which is a factor O min jj Dii( j) worse than 2 j2M Di ( j)) the derived result. By tracking the convergence of the underlying estimator, we circumvent this problem entirely. 5 KMUCB: Kaplan-Meier based Upper Confidence Bound Algorithm We now turn to the censored feedback setting, in which the feedback of pulling arm I t is observed only when XIt t is less than ct . For ease of presentation, we assume that the largest Fi ( j) is unique for all j 2 M, and define i? ( j) = arg maxi Fi ( j), Di ( j) = Fi? ( j) ( j) Fi ( j) for all i = 1, 2, . . . , K and j 2 M. One prevalent non-parametric estimator for censored data is the Kaplan-Meier maximum likelihood estimator Kaplan and Meier (1958); Peterson (1983). Most of existing works have studied the uniform error bound of Kaplan-Meier estimator in the case that the threshold values are drawn i.i.d. from a known distribution Foldes and Rejto (1981) or asymptotic error bound for the non-i.i.d. 
case Huh et al. (2009). The only known uniform error bound of Kaplan-Meier estimator is proposed in Ganchev et al. (2010). Noting that for a given threshold value, all the feedbacks from larger threshold values are useful, we propose a new estimator with tighter uniform error bound based on the Kaplan-Meier estimator as following: Dt ( j) F?it = ti (8) Ni ( j) 6 where Dti ( j) and Nit ( j) is defined as follows At := min{XIt t , ct }, Dti ( j) := t 1 ? I[At j, I t = i], t=1 Nit ( j) := t 1 ? I[ct t=1 j, I t = i]. We first present an error bound for the modified Kaplan-Meier estimate of Fi ( j) : Lemma 4. At time t, let F?it be the modified Kaplan-Meier estimate of Fi as given in equation 8. For any j 2 M, the probability that the difference between F?it ( j) and Fi ( j) is at least e is less than ? 2 t ? e Ni ( j) 2 exp , i.e., 2 ? 2 t ? e Ni ( j) t ? Pr |Fi ( j) Fi ( j)| e ? 2 exp . 2 We defer the proof of this lemma to the appendix. Different to the stochastic uncensored MAB setting, we show that the cost of learning with censored feedback depends significantly on the order of the threshold values. To illustrate this point, we first show a comparison between the regret of adversarial setting and optimistic setting. In the adversarial setting, the threshold values are chosen to arrive in a non-decreasing order 1, 1, . . . , 1, 2, . . . , 2, 3, . . . , m, the problem becomes playing m independent copies of bandits, and the regret scales with m; while in the optimistic setting, the threshold values are chosen to arrive in a non-increasing order m, m, . . . , m, m 1, . . . , m 1, . . . , 1, . . . , 1, which means the learner can make full use of the samples, and can thus perform significantly better. Afterwards, we show that if the order of the threshold values is close to uniformly random, the regret only scales with log m. 5.1 Adversarial vs. Optimistic Setting For the simplicity of presentation, we assume that in both settings, the time horizon could be divided in to m stages, each with length bT /mc.. In the adversarial setting, threshold value j comes during stage j; while in the optimistic setting threshold value m j + 1 comes during stage j. For the adversarial setting, due to the censored feedback structure, only the samples observed within the same stage can help to inform decision making. From the perspective of the learner, this is equivalent to facing m independent copies of stochastic MAB problems, and thus, the regret scales with m. Making use of the lower bound of stochastic MAB problems Lai and Robbins (1985), we can conclude the following theorem. Theorem 3. If the threshold values arrive according to the adversarial order specified above, no /m) learning algorithm can achive a regret bound better than ?mj=1 ?Ki=1 KL(B(Filog(T ( j)||B(F ? ( j))) , where i ( j) KL(?||?) is the Kullback-Leibler divergence Lai and Robbins (1985) and B(?) is the probability distribution function of Bernoulli distribution. For the optimistic setting, although the feedbacks are right censored, we note that every sample observed in the previous rounds are useful in later rounds. This is because the threshold values arrive in non-increasing order. Therefore, we can reduce the optimistic setting to the Threshold Bandit problem with uncensored feedback, and use the DKWUCB proposed in Section 4 to solve it. More specifically, we can set f (N, e) := 2 exp( e2 N/2), ? ? 8 log 2/d ](e, d) := , e2 ( q 2 log(2/d) N if N 1; , 1 otherwise. and on every round, the learner plays the same strategy as DKWUCB. 
In contrast to the stochastic uncensored MAB setting, we show that the cost of learning with censored feedback depends significantly on the order of the threshold values. To illustrate this point, we first compare the regret of an adversarial setting with that of an optimistic setting. In the adversarial setting, the threshold values are chosen to arrive in non-decreasing order $1, 1, \ldots, 1, 2, \ldots, 2, 3, \ldots, m$; the problem becomes playing $m$ independent copies of bandits, and the regret scales with $m$. In the optimistic setting, the threshold values are chosen to arrive in non-increasing order $m, m, \ldots, m, m-1, \ldots, m-1, \ldots, 1, \ldots, 1$, which means the learner can make full use of the samples and can thus perform significantly better. Afterwards, we show that if the order of the threshold values is close to uniformly random, the regret scales only with $\log m$.

5.1 Adversarial vs. Optimistic Setting

For simplicity of presentation, we assume that in both settings the time horizon can be divided into $m$ stages, each of length $\lfloor T/m \rfloor$. In the adversarial setting, threshold value $j$ arrives during stage $j$; in the optimistic setting, threshold value $m-j+1$ arrives during stage $j$. In the adversarial setting, due to the censored feedback structure, only the samples observed within the same stage can help inform decision making. From the perspective of the learner, this is equivalent to facing $m$ independent copies of stochastic MAB problems, and thus the regret scales with $m$. Making use of the lower bound for stochastic MAB problems (Lai and Robbins, 1985), we can conclude the following theorem.

Theorem 3. If the threshold values arrive according to the adversarial order specified above, no learning algorithm can achieve a regret bound better than $\sum_{j=1}^{m} \sum_{i=1}^{K} \frac{\log(T/m)}{\mathrm{KL}(B(F_i(j)) \,\|\, B(F_{i^*(j)}(j)))}$, where $\mathrm{KL}(\cdot \| \cdot)$ is the Kullback-Leibler divergence (Lai and Robbins, 1985) and $B(\cdot)$ is the probability distribution function of a Bernoulli distribution.

For the optimistic setting, although the feedback is right-censored, we note that every sample observed in previous rounds remains useful in later rounds. This is because the threshold values arrive in non-increasing order. Therefore, we can reduce the optimistic setting to the Threshold Bandit problem with uncensored feedback, and use the DKWUCB algorithm proposed in Section 4 to solve it. More specifically, we can set
$$f(N, \epsilon) := 2\exp(-\epsilon^2 N / 2), \qquad \natural(\epsilon, \delta) := \Big\lceil \frac{8 \log(2/\delta)}{\epsilon^2} \Big\rceil, \qquad V(N, \delta) := \begin{cases} \sqrt{\frac{2 \log(2/\delta)}{N}} & \text{if } N \geq 1, \\ 1 & \text{otherwise,} \end{cases}$$
and on every round, the learner plays the same strategy as DKWUCB. We call this strategy OPTIM. Following the same procedure as in Section 4, we can provide a regret bound for OPTIM.

Theorem 4. Let $\delta = T^{-2}/2$ and assume $T \geq mK$. The regret of the optimistic setting satisfies
$$\mathbb{E}[\mathrm{Regret}_T(\mathrm{OPTIM})] \leq 32 \sum_{i=1}^{K} \frac{\log T}{\min_{j \in M} \Delta_i(j)} + O(1).$$

5.2 Cyclic Permutation Setting

In this subsection, we show that if the order of the threshold values is close to uniformly random, we can perform significantly better than in the adversarial setting. To be precise, we assume that the threshold values form a cyclic permutation of $1, 2, \ldots, m$; that is, $M = \{c^{km}, c^{km+1}, \ldots, c^{k(m+1)-1}\}$ for every non-negative integer $k \leq T/m$.

We are now ready to present KMUCB, a modified Kaplan-Meier-based UCB algorithm. KMUCB divides the time horizon into epochs of length $Km$ and, within each epoch, pulls each arm once for each threshold value. KMUCB then performs an "arm elimination" process, and once all but one arm has been eliminated, it proceeds to pull the single remaining arm for the given threshold value. KMUCB's estimation procedure leverages information across threshold values: observations from higher thresholds are utilized to estimate mean payoffs for lower thresholds; information does not flow in the other direction, however, as a result of the censoring assumption. Specifically, for a given threshold index $j$, KMUCB tracks the arm elimination process as follows: for any threshold value below $j$, KMUCB believes that it has determined the best arm, and plays that arm constantly. For threshold values greater than or equal to $j$, KMUCB explores all arms uniformly. Note that with uniform exploration over all arms for threshold value $j$, all sub-optimal arms can be detected with probability at least $1 - O(1/T)$ after $O\big(\frac{\log T}{(m-j+1) \min_{i \in [K]} \Delta_i^2(j)}\big)$ epochs. KMUCB then removes all the sub-optimal arms for threshold value $j$ and increments $j$ by 1. Denoting the last time unit of epoch $k$ as $t_k = kKm$, the detailed description of KMUCB is given in Algorithm 1.

Algorithm 1 KMUCB
1: Input: a set of arms $1, 2, \ldots, K$.
2: Initialization: $L_j \leftarrow [K]$ for all $j \in M$; $k \leftarrow 1$; $j \leftarrow 1$.
3: for epoch $k = 1, 2, \ldots, T/(Km)$ do
4:   $\mathrm{count}[j'] \leftarrow 0$ for all $j' \in M$
5:   for $t$ from $t_{k-1}+1$ to $t_k$ do
6:     Observe $c^t = j'$ and set $\mathrm{count}[j'] \leftarrow \mathrm{count}[j'] + 1$
7:     if $j' < j$ then
8:       $I^t \leftarrow$ index of the single arm remaining in $L_{j'}$
9:     else
10:      $I^t \leftarrow \mathrm{count}[j']$
11:    end if
12:  end for
13:  if $j \leq m$ and $\max_{i' \in [K]} \hat{F}_{i'}^{t_k}(j) - \hat{F}_i^{t_k}(j) \geq \sqrt{\frac{16 \log(Tk)}{(m-j+1)k}}$ for all $i \in L_j \setminus \{\arg\max_{i' \in [K]} \hat{F}_{i'}^{t_k}(j)\}$ then
14:    $L_j \leftarrow \{\arg\max_{i' \in [K]} \hat{F}_{i'}^{t_k}(j)\}$; $j \leftarrow j + 1$
15:  end if
16: end for
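The Python sketch below paraphrases the structure of Algorithm 1 for a single epoch; it is our own illustration, not the authors' implementation. The `est` object is assumed to expose the modified Kaplan-Meier estimate of Equation (8), and `pull`/`observe_threshold` are hypothetical environment callbacks.

```python
import math

def kmucb_epoch(k, j, L, counts, est, K, m, T, observe_threshold, pull):
    """One epoch of KMUCB (cf. Algorithm 1). `est.estimate(i, j)` returns the
    modified Kaplan-Meier estimate F_hat_i(j); `pull(i, c)` plays arm i at
    threshold c and feeds the censored outcome back into `est`."""
    for j_seen in counts:
        counts[j_seen] = 0
    for _ in range(K * m):                      # an epoch has length Km
        c = observe_threshold()                 # c^t = j'
        counts[c] += 1
        if c < j:
            arm = next(iter(L[c]))              # single surviving arm
        else:
            arm = counts[c] - 1                 # round-robin exploration
        pull(arm, c)
    # elimination test at the end of the epoch
    if j <= m:
        best = max(range(K), key=lambda i: est.estimate(i, j))
        radius = math.sqrt(16 * math.log(T * k) / ((m - j + 1) * k))
        gaps = [est.estimate(best, j) - est.estimate(i, j)
                for i in L[j] if i != best]
        if gaps and all(g >= radius for g in gaps):
            L[j] = {best}                       # best arm determined for level j
            j += 1
    return j
```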
Theorem 5. The expected regret of KMUCB is bounded as
$$\mathbb{E}[\mathrm{Regret}_T(\mathrm{KMUCB})] \leq \sum_{i=1}^{K} \frac{128 \max_{j \in M} \Delta_i(j) \log m}{\min_{i \in [K],\, j \in M} \Delta_i^2(j)} \log T + O(1).$$
We defer the proof of this theorem to the appendix. We note two directions for future research. First, we believe the above bound can likely be strengthened, by improving either the minimization in the denominator or the maximization in the numerator. Second, we believe the cyclic permutation assumption can be weakened to a uniformly random sequence of thresholds, but we were unable to make progress in this direction. We welcome further investigation along these lines.

References

Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. 2012. Interior-point methods for full-information and bandit online learning. IEEE Transactions on Information Theory 58, 7 (2012), 4164–4175.
Jacob D. Abernethy, Chansoo Lee, and Ambuj Tewari. 2015. Fighting bandits with a new kind of smoothness. In Advances in Neural Information Processing Systems. 2188–2196.
Alekh Agarwal, Peter L. Bartlett, and Max Dama. 2010. Optimal allocation strategies for the dark pool problem. In International Conference on Artificial Intelligence and Statistics. 9–16.
Kareem Amin, Michael Kearns, Peter Key, and Anton Schwaighofer. 2012. Budget optimization for sponsored search: Censored learning in MDPs. arXiv preprint arXiv:1210.4847 (2012).
Jean-Yves Audibert and Sébastien Bubeck. 2009. Minimax policies for adversarial and stochastic bandits. In COLT. 217–226.
Peter Auer. 2003. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research 3 (2003), 397–422.
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. 2002. Finite-time analysis of the multiarmed bandit problem. Machine Learning 47, 2-3 (2002), 235–256.
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. 2003. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing 32, 1 (2003), 48–77.
Peter Auer and Ronald Ortner. 2010. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica 61, 1-2 (2010), 55–65.
Nicolò Cesa-Bianchi and Gábor Lugosi. 2006. Prediction, Learning, and Games. Cambridge University Press.
A. Dvoretzky, J. Kiefer, and J. Wolfowitz. 1956. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Annals of Mathematical Statistics.
A. Foldes and L. Rejto. 1981. Strong uniform consistency for nonparametric survival curve estimators from randomly censored data. The Annals of Statistics 9(1), 122–129.
Kuzman Ganchev, Michael Kearns, Yuriy Nevmyvaka, and Jennifer Wortman Vaughan. 2010. Censored exploration and the dark pool problem. In UAI.
John Gittins, Kevin Glazebrook, and Richard Weber. 2011. Multi-armed Bandit Allocation Indices. John Wiley & Sons.
W. T. Huh, R. Levi, P. Rusmevichientong, and J. Orlin. 2009. Adaptive data-driven inventory control policies based on Kaplan-Meier estimator. http://legacy.orie.cornell.edu/paatrus/psfiles/kmmyopic.pdf.
E. L. Kaplan and P. Meier. 1958. Nonparametric estimation from incomplete observations. JASA.
T. L. Lai and Herbert Robbins. 1985. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6 (1985), 4–22.
Gergely Neu and Gábor Bartók. 2013. An efficient algorithm for learning with semi-bandit feedback. In Algorithmic Learning Theory. Springer, 234–248.
A. V. Peterson. 1983. Kaplan-Meier estimator. In Encyclopedia of Statistical Sciences.
Herbert Robbins. 1952. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc. 58, 5 (1952), 527–535.
Neural Network On-Line Learning Control of Spacecraft Smart Structures

Dr. Christopher Bowman
Ball Aerospace Systems Group
P.O. Box 1062
Boulder, CO 80306

Abstract

The overall goal is to reduce spacecraft weight, volume, and cost by on-line adaptive non-linear control of flexible structural components. The objective of this effort is to develop an adaptive Neural Network (NN) controller for the Ball C-SIDE 1m x 3m antenna with embedded actuators and the RAMS sensor system. A traditional optimal controller for the major modes is provided with perturbations by the NN to compensate for unknown residual modes. On-line training of recurrent and feed-forward NN architectures has achieved adaptive vibration control with unknown modal variations and noisy measurements. On-line training feedback to each actuator NN output is computed via Newton's method to reduce the difference between desired and achieved antenna positions.

1 ADAPTIVE CONTROL BACKGROUND

The two traditional approaches to adaptive control are 1) direct control (such as performed in direct model-reference adaptive controllers) and 2) indirect control (such as performed by explicit self-tuning regulators). Direct control techniques (e.g. model-reference adaptive control) provide good stability but are susceptible to noise, whereas indirect control techniques (e.g. explicit self-tuning regulators) have low noise susceptibility and good convergence rate; however, they require more control effort, have worse stability, and are less robust to mismodeling. NNs synergistically augment traditional adaptive control techniques by providing improved mismodeling robustness, both adaptively on-line for time-varying dynamics and in a learned control mode at a slower rate. The NN control approaches that correspond to direct and indirect adaptive control are commonly known as inverse and forward modeling, respectively. More specifically, a NN which maps the plant state and its desired performance to the control command is called an inverse model; a NN mapping both the current plant state and control to the next state and its performance is called the forward model. When given a desired performance and the current state, the inverse model generates the control, see Figure 1. The actual performance is observed and is used to train/update the inverse model. A significant problem occurs when the desired and achieved performance differ greatly, since the model near the desired state is not changed. This condition is corrected by adding random noise to the control outputs so as to extend the state space being explored. However, this correction has the effect of slowing the learning and reducing broadband stability.

Figure 1: Direct Adaptive Control Using Inverse Modeling Neural Network Controller
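As a minimal illustration of the inverse-modeling loop depicted in Figure 1, the Python sketch below trains a tiny linear "network" to map (current state, desired performance) to a control command and updates it from the observed outcome. The network shape, the stand-in plant, and the noise scale are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 1))         # tiny linear "inverse model"

def inverse_model(state, desired):
    return float(np.array([state, desired]) @ W)

def plant(state, control):                     # unknown dynamics (stand-in)
    return 0.9 * state + 0.5 * control

state, lr = 0.0, 0.05
for t in range(200):
    desired = 1.0
    u = inverse_model(state, desired) + rng.normal(scale=0.05)  # exploration noise
    new_state = plant(state, u)
    # train the inverse model on the observed (state, achieved outcome) -> control pair
    x = np.array([state, new_state])
    err = u - float(x @ W)
    W += lr * err * x.reshape(2, 1)
    state = new_state
```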
Figure 2: Dual (Indirect and Direct) Adaptive Control Using Forward Modeling Neural Network State Predictor To Aid Inverse Model Convergence

For forward modeling, the map from the current control and state to the resulting state and performance is learned, see Figure 2. For cases where the performance is evaluated at a future time (i.e. distal in time), a predictive critic [Barto and Sutton, 1989] NN model is learned. In both cases the Jacobian of this performance can be computed to iteratively generate the next control action. However, this differentiating of the critic NN for backpropagation training of the controller network is very slow, and in some cases steers the search in the wrong direction due to initial erroneous forward model estimates. As the NN adapts itself, the performance flattens, which results in the slow halting of learning at an unacceptable solution. Adding noise to the controller's output [Jordan and Jacobs, 1990] breaks the redundancy but forces the critic to predict the effects of future noise. This problem has been solved by using a separately trained intermediate plant model to predict the next state from the prior state and control, while having an independent predictor model generate the performance evaluation from the plant model's predicted state [Werbos, 1990] and [Brody, 1991]. The result is a 50-100 fold learning speed improvement over reinforcement training of the forward model controller NN. However, this method still relies on a "good" forward model to incrementally train the inverse model. These incremental changes can still lead to undesirable solutions. For control systems which follow the stage 1, 2 or 3 models given in [Narendra, 1991], the control can be analytically computed from a forward-only model. For the most general, non-linear (stage 4) systems, an alternative is the memory-based forward model [Moore, 1992]. Using only a forward NN model, a direct hill-climbing or Newton's-method search of candidate actions can be applied until a control decision is reached. The resulting state and its performance are used for on-line training of the forward model. Judicious random control actions are applied to improve behavior only where the forward model error is predicted to be large (e.g. via cross-validation). Also, using robust regression, experiences can be deweighted according to their quality and their age. The high computational burden of these cross-validation techniques can be reduced by parallel on-line processing providing the "policy" parameters for fast on-line NN control. For control problems which are distal in time and space, a hybrid of these two forward-modeling approaches can be used. Namely, a NN plant model is added which is trained off-line in real-time and updated as necessary, at a slower rate than the on-line forward model which predicts performance based upon the current plant model. This slower-rate trained forward-model NN supports learned control (e.g. via numerical inversion), whereas the on-line forward model provides the faster-response adaptive control.
Other NN control techniques, such as using a Hopfield net to solve the optimal-control quadratic-programming problem or the supervised training of ART II off-line with adaptive vigilance for on-line pole placement, have been proposed. However, their on-line robustness appears limited due to their sensitivity to a priori parameter assumptions. A forward model NN which augments a traditional controller for unmodeled modes and unforeseen situations is presented in the following section. Performance results for both feed-forward and recurrent learning versions are compared in Section 3.

2 RESIDUAL FORWARD MODEL NEURAL NETWORK (RFM-NN) CONTROLLER

A type of forward model NN which acts as a residual mode filter to support a reduced-order model (ROM) traditional optimal state controller has been evaluated, see Figure 3. The ROM determines the control based upon its modal-coordinate approximate representation of the structure. Modal coordinates are obtained by a transformation using known primary vibration modes [Young, 1990]. The transformation operator is a set of eigenvectors (mode shapes) generated by finite-element modeling. The ROM controller is traditionally augmented by a residual-mode filter (RMF). Ball's RFM-NN replaces the RMF in order to better capture the mismodeled, unmodeled and changing modes. The objective of the RFM-NN is to provide the ROM controller with ROM derivative-state perturbations, so that the ROM controls the structure as desired by the user. The RFM-NN is trained on-line using scored supervised feedback to generate these desired ROM state perturbations. The scored supervised training provides a score for each perturbation output based upon the measured position of the structure. The measured deviations, $Y^*(t)$, from the desired structure position are converted to errors in the estimated ROM state using the ROM transformation. Specifically, the training score $S(t)$ for each ROM derivative state $\dot{x}_N(t)$ is expressed in the following discrete equation:
$$S(t) = B_N Y^*(t) - \dot{x}_N(t), \quad \text{where} \quad \dot{x}_N(t) = [A_N + B_N G_N - K_N C_N]\, x_N(t-1) + K_N Y(t-1).$$

Figure 3: Residual Forward Model Neural Network Adaptive Controller Replaces Traditional Residual Mode Filter

Newton's method is then applied to find the ROM state perturbations $\delta \dot{x}^*_N(t)$ which zero the score. First, the score is smoothed, $\bar{S}(t) = \sigma \bar{S}(t-1) + (1-\sigma) S(t)$, and the neural network output is smoothed similarly. Second, Newton's method computes the adjustments needed to zero the scores:
$$\Delta\big(\delta \dot{x}^*_N(t)\big) = -\bar{S}(t)\, \frac{\delta \dot{x}_N(t) - \delta \dot{x}_N(t-1)}{\bar{S}(t) - \bar{S}(t-1)} = -\epsilon\, \delta \dot{x}_N(t) \ \text{(if either difference is 0)}. \qquad (1)$$
Third, the NN is trained, $\delta \dot{x}^*_N(t+1) = \alpha\, \Delta\big(\delta \dot{x}_N(t)\big) + \delta \dot{x}_N(t)$, with an appropriate learning rate $\alpha$ (e.g. an approximation to the inverse of the largest eigenvalue of the Hessian weight matrix).
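A compact sketch of this scored-supervised Newton update is given below. The secant form follows the reconstruction above, and the fallback step of size ε when a difference vanishes is our own reading of the garbled original, so both should be taken as an interpretation rather than the paper's exact formulation.

```python
def newton_adjust(s_bar, s_bar_prev, out, out_prev, eps=1e-3):
    """Secant (Newton) step that drives the smoothed score toward zero:
    delta = -S(t) * (out(t) - out(t-1)) / (S(t) - S(t-1)),
    falling back to a small proportional step if either difference is ~0."""
    d_out, d_s = out - out_prev, s_bar - s_bar_prev
    if abs(d_out) < 1e-12 or abs(d_s) < 1e-12:
        return -eps * out
    return -s_bar * d_out / d_s

def smooth(prev, new, sigma=0.9):
    # exponential smoothing of the score (and, similarly, of the NN output)
    return sigma * prev + (1.0 - sigma) * new

# usage: target(t+1) = alpha * newton_adjust(...) + out(t), then train the
# RFM-NN output toward this target by backpropagation.
```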
3 RFM-NN ON-LINE LEARNING RESULTS

Both feed-forward and recurrent RFM-NNs have been incorporated into an interactive simulation of Ball's Control-Structure Interaction Demonstration Experiment (C-SIDE), see Figure 4. This 1m x 3m lightweight antenna facesheet has 8 embedded actuators plus three auxiliary input actuators, and uses 8 remote angular measurement sensors (RAMS) plus 4 displacement and 3 velocity auxiliary sensors. In order to evaluate the on-line performance of the RFM-NNs, the ROM controller was given insufficient and partially incorrect modes. The ROM without the RFM-NN grew unstable (i.e. greater than 10 millimeter C-SIDE displacements) in 13 seconds. The initial feed-forward RFM-NN used 8 sensor and 6 ROM state feedback estimate inputs, as well as 5 hidden units and 3 ROM velocity state perturbation outputs. This RFM-NN had random initial weights, logistic activation functions, and back-propagation training using one sixth the learning rate for the output layers (e.g. .06 and .01). The Newton RFM-NN training search used a step size of one with a smoothing factor of one tenth.

Figure 4: 1m x 3m C-SIDE Antenna Facesheet With Embedded Actuators.

This RFM-NN learned on-line to stabilize and reduce vibration to less than ±1 mm within 20 seconds, see Figure 5. A five-Newton force applied a few seconds later is compensated for within nine seconds, see Figure 6. This is accomplished with learning off as well as when on. To test the necessity of the RFM-NN, the ROM was given the scored supervised training (i.e. Newton's search estimates) directly, instead of the RFM-NN outputs. This caused immediate unstable behavior. To test the RFM-NN's sensitivity to measurement accuracy, a uniform error of ±5% was added. Starting from the same random weight start, the RFM-NN required 25 seconds to learn to stabilize the antenna, see Figure 7. The best stability was achieved when the product of the Newton and BPN steps was approximately .01. This feed-forward NN was compared to an Elman-type recurrent NN (i.e. hidden layer feedback to itself with one-step BP training). The recurrent RFM-NN's on-line learning stability was much less sensitive to initial weights. The recurrent RFM-NN stabilized C-SIDE with up to 10%-20% measurement noise, versus a 5%-10% limit for the feed-forward RFM-NN.

4 SUMMARY AND RECOMMENDATIONS

Adaptive smart structures promise to reduce spacecraft weight and dependence on extensive ground monitoring. A recurrent forward model NN is used as a residual mode filter to augment a traditional reduced-order model (ROM) controller. It was more robust than the feed-forward NN and the traditional-only controller in the presence of unmodeled modes and noisy measurements. Further analyses and hardware implementations will be performed to better quantify this robustness, including the sensitivity to the ROM controller mode fidelity, number of output modes, learning rates, measurement-to-state errors, and time quantization effects. To improve robustness to ROM mode changes, a comparison to the dual forward/inverse NN control approach is recommended. The forward model will adjust the search used to train an inverse model which provides control augmentations to the ROM controller. This will enable control searches to occur both off-line, faster than real-time, using the forward model (i.e. imagination), and on-line using direct search trials with varying noise levels. The forward model will adapt using quality experiences (e.g. via cross-validation), which improves inverse model searches. The inverse model's reliance on the forward model will reduce until forward model prediction errors increase.

Figure 5: RFM-NN On-Line Learning To Achieve Stable Control
Figure 5 (concluded): RFM-NN On-Line Learning To Achieve Stable Control

Future challenges include solving the temporal credit assignment problem, partitioning to restricted chip sizes, combining with incomplete a priori knowledge, and balancing adaptivity of response with long-term learning. The goal is to extend stability-dominated, fixed-goal traditional control with adaptive robotic-type neural control, to enable better autonomous control where fully justified fixed models and complete system knowledge are not required. The resultant robust autonomous control will capitalize on the speed of massively parallel analog neural-like computations (e.g. with NN pulse stream chips).

Figure 6: 5 Newton Force Vibration Removed Using RFM-NN Learned Forward Model
Figure 7: RFM-NN Learning to Remove Vibrations in C-SIDE With ±15% Noisy Displacement Measurements

5 REFERENCES

Barto, A.G., Sutton, R.S., and Watkins, C.J.C.H., Learning and Sequential Decision Making. Univ. of Mass. at Amherst COINS Technical Report 89-95, September 1989.
Bowman, C.L., Adaptive Neural Networks Applied to Signal Recognition, 3rd Tri-Service Data Fusion Symposium, May 1989.
Brody, Carlos, Fast Learning With Predictive Forward Models. Neural Information Processing Systems 4 (NIPS 4), 1992.
Jordan, M.I., and Jacobs, R.A., Learning to Control an Unstable System with Forward Modeling, in D.S. Touretzky, ed., Advances in NIPS 2, Morgan Kaufmann, 1990.
Moore, A.W., Fast, Robust Adaptive Control by Learning Only Forward Models. NIPS 4, 1992.
Mukhopadhyay, S. and Narendra, K.S., Disturbance Rejection in Nonlinear Systems Using Neural Networks, Yale University Report No. 9114, December 1991.
Werbos, P., Architectures for Reinforcement Learning, in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press, 1990.
Young, D.O., Distributed Finite-Element Modeling and Control Approach for Large Flexible Structures, J. of Guidance, Control and Dynamics, Vol. 13 (4), 703-713, 1990.
Learning from Rational Behavior: Predicting Solutions to Unknown Linear Programs

Shahin Jabbari, Ryan Rogers, Aaron Roth, Zhiwei Steven Wu
University of Pennsylvania
{jabbari@cis, ryrogers@sas, aaroth@cis, wuzhiwei@cis}.upenn.edu

Abstract

We define and study the problem of predicting the solution to a linear program (LP) given only partial information about its objective and constraints. This generalizes the problem of learning to predict the purchasing behavior of a rational agent who has an unknown objective function, which has been studied under the name "Learning from Revealed Preferences". We give mistake-bound learning algorithms in two settings: in the first, the objective of the LP is known to the learner but there is an arbitrary, fixed set of constraints which are unknown. Each example is defined by an additional known constraint, and the goal of the learner is to predict the optimal solution of the LP given the union of the known and unknown constraints. This models the problem of predicting the behavior of a rational agent whose goals are known, but whose resources are unknown. In the second setting, the objective of the LP is unknown and changing in a controlled way. The constraints of the LP may also change every day, but are known. An example is given by a set of constraints and partial information about the objective, and the task of the learner is again to predict the optimal solution of the partially known LP.

1 Introduction

We initiate the systematic study of a general class of multi-dimensional prediction problems, where the learner wishes to predict the solution to an unknown linear program (LP), given some partial information about either the set of constraints or the objective. In the special case in which there is a single known constraint that is changing and the objective is unknown and fixed, this problem has been studied under the name learning from revealed preferences [1, 2, 3, 16], and captures the following scenario: a buyer, with an unknown linear utility function over $d$ goods $u : \mathbb{R}^d \to \mathbb{R}$ defined as $u(x) = c \cdot x$, faces a purchasing decision every day. On day $t$, she observes a set of prices $p^t \in \mathbb{R}^d_{\geq 0}$ and buys the bundle of goods that maximizes her unknown utility, subject to a budget $b$:
$$x^{(t)} = \arg\max_x \; c \cdot x \quad \text{such that} \quad p^t \cdot x \leq b.$$
In this problem, the goal of the learner is to predict the bundle that the buyer will buy, given the prices that she faces. Each example at day $t$ is specified by the vector $p^t \in \mathbb{R}^d_{\geq 0}$ (which fixes the constraint), and the goal is to accurately predict the purchased bundle $x^{(t)} \in [0, 1]^d$ that is the result of optimizing the unknown linear objective. It is also natural to consider the class of problems in which the goal is to predict the outcome of a LP more broadly; e.g. suppose the objective $c \cdot x$ is known but there is an unknown set of constraints $Ax \leq b$. An instance is again specified by a changing known constraint $(p^t, b^t)$, and the goal is to predict:
$$x^{(t)} = \arg\max_x \; c \cdot x \quad \text{such that} \quad Ax \leq b \ \text{ and } \ p^t \cdot x \leq b^t. \qquad (1)$$
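For concreteness, here is a small Python sketch of the prediction target in (1), using scipy's LP solver; the particular objective, constraint matrix, and prices are made-up numbers for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0])                 # known objective (maximize c.x)
A = np.array([[1.0, 2.0], [2.0, 1.0]])   # unknown to the learner
b = np.array([4.0, 4.0])
p, budget = np.array([1.0, 1.0]), 2.5    # day-t known constraint p.x <= b^t

res = linprog(-c,                        # linprog minimizes, so negate
              A_ub=np.vstack([A, p]),
              b_ub=np.append(b, budget),
              bounds=[(0, 1)] * 2)       # bundles live in [0, 1]^d
print(res.x)                             # the x^(t) the learner must predict
```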
. . , vn . An instance on day t is specified by a subset of the unknown objective functions, S t ? [n] := {1, . . . , n} and a convex feasible region P t , and the goal is to predict: X x(t) = argmax vi ? x such that x 2 P t . (2) x i2S t When the changing feasible regions P t correspond simply to varying prices as in the revealed preferences problem, this models a setting in which at different times, purchasing decisions are made by different members of an organization, with heterogeneous preferences ? but are still bound by an organization-wide budget. The learner?s problem is, given the subset of decision makers and the prices at day t, to predict which bundle they will purchase. This generalizes some of the preference learning problems recently studied by Blum et al [6]. Of course, in this generality, we may also consider a richer set of changing constraints which represent things beyond prices and budgets. In all of the settings we study, the problem can be viewed as the task of predicting the behavior of a rational decision maker, who always chooses the action that maximizes her objective function subject to a set of constraints. Some part of her optimization problem is unknown, and the goal is to learn, through observing her behavior, that unknown part of her optimization problem sufficiently so that we may reliably predict her future actions. 1.1 Our Results We study both variants of the problem (see below) in the strong mistake bound model of learning [13]. In this model, the learner encounters an arbitrary adversarially chosen sequence of examples online and must make a prediction for the optimal solution in each example before seeing future examples. Whenever the learner?s prediction is incorrect, the learner encounters a mistake, and the goal is to prove an upper bound on the number of mistakes the learner can make, in the worst case over the sequence of examples. Mistake bound learnability is stronger than (and implies) PAC learnability [15]. Known Objective and Unknown Constraints We first study this problem under the assumption that there is a uniform upper bound on the number of bits of precision used to specify the constraint defining each example. In this case, we show that there is a learning algorithm with both running time and mistake bound linear in the number of edges of the polytope formed by the unknown constraint matrix Ax ? b. We note that this is always polynomial in the dimension d when the number of unknown constraints is at most d + O(1). (In the supplementary material, we show that by allowing the learner to run in time exponential in d, we can give a mistake bound that is always linear in the dimension and the number of rows of A, but we leave as an open question whether or not this mistake bound can be achieved by an efficient algorithm.) We then show that our bounded precision assumption is necessary ? i.e. we show that when the precision to which constraints are specified need not be uniformly upper bounded, then no algorithm for this problem in dimension d 3 can have a finite mistake bound. This lower bound motivates us to study a PAC style variant of the problem, where the examples are not chosen in an adversarial manner, but instead are drawn independently at random from an arbitrary unknown distribution. In this setting, we show that even if the constraints can be specified to arbitrary (even infinite) precision, there is a learner that requires sample complexity only linear in the number of edges of the unknown constraint polytope. 
This learner can be implemented efficiently when the constraints are specified with finite precision. Known Constraints and Unknown Objective For the variant of the problem in which the objective is unknown and changing and the constraints are known but changing, we give an algorithm that has a mistake bound and running time polynomial in the dimension d. Our algorithm uses the Ellipsoid algorithm to learn the coefficients of the unknown objective by implementing a separation oracle that generates separating hyperplanes given examples on which our algorithm made a mistake. 2 We leave the study of either of our problems under natural relaxations (e.g. under a less demanding loss function) and whether it is possible to substantially improve our results in these relaxations as an interesting open problem. 1.2 Related Work Beigman and Vohra [3] were the first to study revealed preference problems (RPP) as a learning problems and to relate them to multi-dimensional classification. They derived sample complexity bounds for such problems by computing the fat shattering dimension of the class of target utility functions, and showed that the set of Lipschitz-continuous valuation functions had finite fat-shattering dimension. Zadimoghaddam and Roth [16] gave efficient algorithms with polynomial sample complexity for PAC learning of the RPP over the class of linear (and piecewise linear) utility functions. Balcan et al. [2] showed a connection between RPP and the structured prediction problem of learning d-dimensional linear classes [7, 8, 12], and use an efficient variant of the compression techniques given by Daniely and Shalev-Shwartz [9] to give efficient PAC algorithms with optimal sample complexity for various classes of economically meaningful utility functions. Amin et al. [1] study the RPP for linear valuation functions in the mistake bound model, and in the query model in which the learner gets to set prices and wishes to maximize profit. Roth et al. [14] also study the query model of learning and give results for strongly concave objective functions, leveraging an algorithm of Belloni et al. [4] for bandit convex optimization with adversarial noise. All of the works above focus on the setting of predicting the optimizer of a fixed unknown objective function, together with a single known, changing constraint representing prices. This is the primary point of departure for our work ? we give algorithms for the more general settings of predicting the optimizer of a LP when there may be many unknown constraints, or when the unknown objective function is changing. Finally, the literature on preference learning (see e.g. [10]) has similar goals, but is technically quite distinct: the canonical problem in preference learning is to learn a ranking on distinct elements. In contrast, the problem we consider here is to predict the outcome of a continuous optimization problem as a function of varying constraints. 2 Model and Preliminaries We first formally define the geometric notions used throughout this paper. A hyperplane and a halfspace in Rd are the set of points satisfying the linear equation a1 x1 + . . . ad xd = b and the linear inequality a1 x1 + . . . + ad xd ? b for a set of ai s respectively, assuming that not all ai ?s are simultaneously zero. A set of hyperplanes are linearly independent if the normal vectors to the hyperplanes are linearly independent. A polytope (denoted by P ? Rd ) is the bounded intersection of finitely many halfspaces, written as P = {x | Ax ? b}. 
An edge-space e of a polytope P is a one dimensional subspace that is the intersection of d 1 linearly independent hyperplanes of P, and an edge is the intersection between an edge-space e and the polytope P.We denote the set of edges of polytope P by EP . A vertex of P is a point where d linearly independent hyperplanes of P intersect. Equivalently, P can be written as the convex hull of its vertices V denoted by Conv(V ). Finally, we define a set of points to be collinear if there exists a line that contains all the points in the set. We study an online prediction problem with the goal of predicting the optimal solution of a changing LP whose parameters are only partially known. Formally, in each day t = 1, 2, . . . an adversary chooses a LP specified by a polytope P (t) (a set of linear inequalities) and coefficients c(t) 2 Rd of the linear objective function. The learner?s goal is to predict the solution x(t) where x(t) = ? (t) , the learner observes the optimal x(t) and argmaxx2P (t) c(t) ? x. After making the prediction x (t) (t) learns whether she has made a mistake (? x 6= x ). The mistake bound is defined as follows. Definition 1. Given a LP with feasible polytope P and objective function c, let (t) denote the parameters of the LP that are revealed to the learner on day t. A learning algorithm A takes as input the sequence { (t) }t , the known parameters of an adaptively chosen sequence {(P (t) , c(t) )}t of LPs and outputs a sequence predictions {? x(t) }t . We say that A has mistake bound M if ? (t) of (t) ? 1 ? 6= x max{(P (t) ,c(t) )}t ?t=1 1 x ? M, where x(t) = argmaxx2P (t) c(t) ? x on day t. We consider two different instances of the problem described above. First, in Section 3, we study the problem given in (1) in which c(t) = c is fixed and known to the learner but the polytope P (t) = 3 P \ N (t) consists of an unknown fixed polytope P and a new constraint N (t) = {x | p(t) ? x ? b(t) } which is revealed to the learner on day t i.e. (t) = (N (t) , c). We refer to this as the Known Objective problem. Then, in Section 4, we study in which the polytope P (t) is changing and known P the problem (t) i but the objective function c = i2S (t) v is unknown and changing as in (2) where the set S (t) is known i.e. (t) = (P (t) , S (t) ). We refer to this as the Known Constraints problem. In order for our prediction problem to be well defined, we make Assumption 1 about the observed solution x(t) in each day. Assumption 1 guarantees that each solution is on a vertex of P (t) . Assumption 1. The optimal solution to the LP: maxx2P (t) c(t) ? x is unique for all t. 3 The Known Objective Problem In this section, we focus on the Known Objective Problem where the coefficients of the objective function c are fixed and known to the learner but the feasible region P (t) on day t is unknown and changing. In particular, P (t) is the intersection of a fixed and unknown polytope P = {x | Ax ? b, A ? Rm?d } and a known halfspace N (t) = {x | p(t) ? x ? b(t) } i.e. P (t) = P \ N (t) . Throughout this section we make the following assumptions. First, we assume w.l.o.g. (up to scaling) that the points in P have `1 -norm bounded by 1. Assumption 2. The unknown polytope P lies inside the unit `1 -ball i.e. P ? {x | ||x||1 ? 1}. We also assume that the coordinates of the vertices in P can be written with finite precision (this is implied if the halfspaces defining P can be described with finite precision). 1 Assumption 3. The coordinates of each vertex of P can be written with N bits of precision. 
We show in Section 3.3 that Assumption 3 is necessary ? without any upper bound on precision, there is no algorithm with a finite mistake bound. Next, we make some non-degeneracy assumptions on polytopes P and P (t) , respectively. We require these assumptions to hold on each day. Assumption 4. Any subset of d 1 rows of A have rank d 1 where A is the constraint matrix in P = {x | Ax ? b}. Assumption 5. Each vertex of P (t) is the intersection of exactly d-hyperplanes of P (t) . The rest of this section is organized as follows. We present LearnEdge for the Known Objective Problem and analyze its mistake bound in Sections 3.1 and 3.2, respectively. Then in Section 3.3, we prove the necessity of Assumption 3 to get a finite mistake bound. Finally in Section 3.4, we present the LearnHull in a PAC style setting where the new constraint each day is drawn i.i.d. from an unknown distribution, rather than selected adversarially. 3.1 LearnEdge Algorithm In this section we introduce LearnEdge and show in Theorem 1 that the number of mistakes of LearnEdge depends linearly on the number of edges EP and the precision parameter N and only logarithmically on the dimension d. We defer all the missing proofs to the supplementary material. Theorem 1. The number of mistakes and per day running time of LearnEdge in the Known Objective Problem are O(|EP |N log(d)) and poly(m, d, |EP |) respectively when A ? Rm?d . At a high level, LearnEdge maintains a set of prediction information I (t) about the prediction history up to day t, and makes prediction in each day based on I (t) and a set of prediction rules (P.1 P.4). After making a mistake, LearnEdge updates the information with a set of update rules (U.1 U.4). Prediction Information It is natural to ask ?What information is useful for prediction?" Lemma 2 establishes the importance of the set of edges EP by showing that all the observed solutions will be on an element of EP . 1 Lemma 6.2.4 from Grotschel et al. [11] states that if each constraint in P ? Rd has encoding length at most N then each vertex of P has encoding length at most 4d2 N . Typically the finite precision assumption is made on the constraints of the LP. However, since this assumption implies that the vertices can be described with finite precision, for simplicity, we make our assumption directly on the vertices. 4 Lemma 2. On any day t, the observed solution x(t) lies on an edge in EP . In the proof of Lemma 2 we also show that when x(t) does not bind the new constraint N (t) , then x(t) is the solution for the underlying LP: argmaxx2P c ? x. Corollary 1. If x(t) 2 {x | p(t) x < b(t) } then x(t) = x? ? argmaxx2P c ? x. We then show how an edge-space e of P can be recovered after seeing 3 collinear observed solutions. Lemma 3. Let x, y, z be 3 distinct collinear points on edges of P. Then they are all on the same edge of P and the 1-dimensional subspace containing them is an edge-space of P. Given the relation between observed solutions and edges, the information I (t) is stored as follows: Me0 Me1 } } } } } Ye0 Q e0 Fe Q e1 Ye1 Figure 1: Regions on an edge-space e: feasible region Fe (blue), questionable intervals Q0e and Q1e (green) with their mid-points Me0 and Me1 and infeasible regions Ye0 and Ye1 (dashed). I.1 (Observed Solutions) LearnEdge keeps track of the set of observed solutions that were ? (? ) 6= x(? ) } and also the solution for predicted incorrectly so far X (t) = {x(? ) : ? ? t x the underlying unknown polytope x? ? argmaxx2P c ? x if it is observed. 
I.2 (Edges) LearnEdge keeps track of the set of edge-spaces E (t) given by any 3 collinear points in X (t) . For each e 2 E (t) , it also maintains the regions on e that are certainly feasible or infeasible. The remaining parts of e called the questionable region is where LearnEdge cannot classify as infeasible or feasible with certainty (see Figure 1). Formally, 1. (Feasible Interval) The feasible interval Fe is an interval along e that is identified to be on the boundary of P. More formally, Fe = Conv(X (t) \ e). 2. (Infeasible Region) The infeasible region Ye = Ye0 [ Ye1 is the union of two disjoint intervals Ye0 and Ye1 that are identified to be outside of P. By Assumption 2, we initialize the infeasible region Ye to {x 2 e | kxk1 > 1} for all e. 3. (Questionable Region) The questionable region Qe = Q0e [ Q1e on e is the union of two disjoint questionable intervals along e. Formally, Qe = e \ (Fe [ Ye ). The points in Qe cannot be certified to be either inside or outside of P by LearnEdge. 4. (Midpoints in Qe ) For each questionable interval Qie , let Mei denote the midpoint of Qie . We add the superscript (t) to show the dependence of these quantities on days. Furthermore, we S (t) eliminate the subscript e when taking the union over all elements in E (t) , e.g. F (t) = e2E (t) Fe . (t) (t) (t) (t) (t) (t) (t) (t) So the information I can be written as follows: I = X , E , F , Y , Q , M . e (t) = {x | Prediction Rules We now focus on the prediction rules of LearnEdge. On day t, let N (t) (t) (t) (t) e (t) , then p ? x = b } be the hyperplane specified by the additional constraint N . If x 2 /N (t) ? ? ? x = x by Corollary 1. So whenever the algorithm observes x , it will store x and predict it in the future days when x? 2 N (t) . This is case P.1. So in the remaining cases we know x? 2 / N (t) . e (t) and the edges EP , The analysis of Lemma 2 shows that x(t) must be in the intersection between N (t) so x = argmaxx2Ne (t) \EP c ? x. Hence, LearnEdge can restrict its prediction to the following ? (t) } \ N e (t) where E ? (t) = {e 2 E (t) | e ? N e (t) }. As candidate set: Cand(t) = {(E (t) [ X (t) ) \ E (t) (t) (t) (t) ? ? we show in Lemma 4, x will not be in E , so it is safe to remove E from Cand . e (t) , then x(t) 62 e. Lemma 4. Let e be an edge-space of P such that e ? N However, Cand(t) can be empty or only contain points in the infeasible regions of the edge-spaces. If so, then there is simply not enough information to predict a feasible point in P. Hence, LearnEdge predicts an arbitrary point outside of Cand(t) . This is case P.2. 5 Otherwise Cand(t) contains points from the feasible and questionable regions of the edge-spaces. LearnEdge predicts from a subset of Cand(t) called the extended feasible region Ext(t) instead of directly predicting from Cand(t) . Ext(t) contains the whole feasible region and only parts of the ? (t) . We will show later that this guarantees questionable region on all the edge-spaces in E (t) \ E LearnEdge makes progress in learning the true feasible region on some edge-space upon making a e (t) with the union of intervals between the mistake. More formally, Ext(t) is the intersection of N 0 (t) 1 (t) ? (t) and all points in X (t) : two mid-points (Me ) and (Me ) on every edge-space e 2 E (t) \ E e (t) . Ext(t) = X (t) [ [e2E (t) \E? (t) Conv (Me0 )(t) , (Me1 )(t) \N In P.3, if Ext(t) 6= ; then LearnEdge predicts the point with the highest objective value in Ext(t) . 
e (t) only intersects within the questionable regions of the Finally, if Ext(t) = ;, then we know N learned edge-spaces. In this case, LearnEdge predicts the intersection point with the lowest objective value, which corresponds to P.4. Although it might seem counter-intuitive to predict the point with the lowest objective value, this guarantees that LearnEdge makes progress in learning the true feasible region on some edge-space upon making a mistake. The prediction rules are summarized as follows: ? (t) P.1 First, if x? is observed and x? 2 N (t) , then predict x x? ; S (t) P.2 Else if Cand = ; or Cand(t) ? e2E (t) Ye , then predict any point outside Cand(t) ; ? (t) = argmaxx2Ext(t) c ? x; P.3 Else if Ext(t) 6= ;, then predict x ? (t) = argminx2Cand(t) c ? x. P.4 Else, predict x Update Rules Next we describe how LearnEdge updates its information. Upon making a mistake, LearnEdge adds x(t) to the set of previously observed solutions X (t) i.e. X (t+1) X (t) [ {x(t) }. Then it performs one of the following four mutually exclusive update rules (U.1-U.4) in order. e (t) , then LearnEdge records x(t) as the unconstrained optimal solution x? . U.1 If x(t) 2 /N U.2 Then if x(t) is not on any learned edge-space in E (t) , LearnEdge will try to learn a new edge-space by checking the collinearity of x(t) and any couple of points in X (t) . So after this update LearnEdge might recover a new edge-space of the polytope. If the previous updates were not invoked, then x(t) was on some learned edge-space e. LearnEdge ? (t) and x(t) (we know c ? x ? (t) 6= c ? x(t) by Assumption 1): then compares the objective values of x ? (t) > c ? x(t) , then x ? (t) must be infeasible and LearnEdge then updates the questionU.3 If c ? x able and infeasible regions for e. ? (t) < c ? x(t) then x(t) was outside of the extended feasible region of e. LearnEdge U.4 If c ? x then updates the questionable region and feasible interval on e. In both of U.3 and U.4, LearnEdge will shrink some questionable interval substantially till the interval has length less than 2 N in which case Assumption 3 implies that the interval contains no points. So LearnEdge can update the adjacent feasible region and infeasible interval accordingly. 3.2 Analysis of LearnEdge Whenever LearnEdge makes a mistake, one of the update rules U.1 - U.4 is invoked. So the number of mistakes of LearnEdge is bounded by the number of times each update rule is invoked. The mistake bound of LearnEdge in Theorem 1 is hence the sum of mistakes bounds in Lemmas 5-7. Lemma 5. Update U.1 is invoked at most 1 time. Lemma 6. Update U.2 is invoked at most 3|EP | times. 2 Lemma 7. Updates U.3 and U.4 are invoked at most O(|EP |N log(d)) times. 2 The dependency on |EP | can be improved by replacing it with the set of edges of P on which an optimal solution is observed. This applies to all the dependencies on |EP | in our bounds. 6 3.3 Necessity of the Precision Bound We show the necessity of Assumption 3 by showing that the dependence on the precision parameter N in our mistake bound is tight. We show that subject to Assumption 3, there exist a polytope and a sequence of additional constraints such that any learning algorithm will make ?(N ) mistakes. This implies that without any upper bound on precision, it is impossible to learn with finite mistakes. Theorem 8. For any learning algorithm A in the Known Objective Problem and any d 3, there exists a polytope P and a sequence of additional constraints {N (t) }t such that the number of mistakes made by A is at least ?(N ). 
3.4 Stochastic Setting

Given the lower bound in Theorem 8, we ask "In what settings can we still learn without an upper bound on the precision to which constraints are specified?" The lower bound implies we must abandon the adversarial setting, so we consider a PAC style variant. In this variant, the additional constraint at each day t is drawn i.i.d. from some fixed but unknown distribution D over ℝ^d × ℝ such that each point (p, b) drawn from D corresponds to the halfspace N = {x | p · x ≤ b}. We make no assumption on the form of D and require our bounds to hold in the worst case over all choices of D.

We describe LearnHull, an algorithm based on the following high level idea: LearnHull keeps track of the convex hull C^(t−1) of all the solutions observed up to day t. LearnHull then behaves as if this convex hull is the entire feasible region. So at day t, given the constraint N^(t) = {x | p^(t) · x ≤ b^(t)}, LearnHull predicts x̂^(t) where x̂^(t) = argmax_{x ∈ C^(t−1) ∩ N^(t)} c · x.

LearnHull's hypothetical feasible region is therefore always a subset of the true feasible region; i.e. it can never make a mistake because its prediction was infeasible, but only because its prediction was sub-optimal. Hence, whenever LearnHull makes a mistake, it must have observed a point that expands the convex hull. Hence, whenever it fails to predict x^(t), LearnHull will enlarge its feasible region by adding the point x^(t) to the convex hull: C^(t) ← Conv(C^(t−1) ∪ {x^(t)}); otherwise it will simply set C^(t) ← C^(t−1) for the next day. We show that the expected number of mistakes of LearnHull over T days is linear in the number of edges of P and only logarithmic in T.⁴

Theorem 9. For any T > 0 and any constraint distribution D, the expected number of mistakes of LearnHull after T days is bounded by O(|E_P| log(T)).

To prove Theorem 9, first in Lemma 10 we bound the probability that the solution observed at day t falls outside of the convex hull of the previously observed solutions. This is the only event that can cause LearnHull to make a mistake. In Lemma 10, we abstract away the fact that the point observed at each day is the solution to some optimization problem.

Lemma 10. Let P be a polytope and D a distribution over points on E_P. Let X = {x_1, . . . , x_{t−1}} be t − 1 i.i.d. draws from D and x_t an additional independent draw from D. Then Pr[x_t ∉ Conv(X)] ≤ 2|E_P|/t, where the probability is taken over the draws of points x_1, . . . , x_t from D.

Finally, in Theorem 11 we convert the bound on the expected number of mistakes of LearnHull in Theorem 9 to a high probability bound.⁵

Theorem 11. There exists a deterministic procedure such that after T = O(|E_P| log(1/δ)) days, the probability (over the randomness of the additional constraint) that the procedure makes a mistake on day T + 1 is at most δ, for any δ ∈ (0, 1/2).

³ We point out that the condition d ≥ 3 is necessary in the statement of Theorem 8, since there exist learning algorithms for d = 1 and d = 2 with finite mistake bounds independent of N. See the supplementary material.
⁴ LearnHull can be implemented efficiently in time poly(T, N, d) if all of the coefficients in the unknown constraints in P are represented in N bits. Note that given the observed solutions so far and a new point, a separation oracle can be implemented in time poly(T, N, d) using an LP solver.
⁵ LearnEdge fails to give any non-trivial mistake bound in the adversarial setting.
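As footnote 4 indicates, LearnHull admits a polynomial-time implementation via linear programming. The following is a minimal sketch of one such implementation (the class interface and the use of scipy are assumptions of this sketch, not part of the paper): points of Conv(X) are parameterized by convex-combination weights over the observed solutions, so the prediction step is itself a small LP.

```python
import numpy as np
from scipy.optimize import linprog

class LearnHull:
    """Sketch of LearnHull: treat the convex hull of observed solutions
    as the feasible region; grow it only when a mistake is observed."""

    def __init__(self, c):
        self.c = np.asarray(c, dtype=float)
        self.X = []  # observed solutions, used as the hull's vertex set

    def predict(self, p, b):
        # argmax_{x in Conv(X), p.x <= b} c.x, written over convex weights
        # lam: maximize c.(V^T lam) s.t. lam >= 0, sum(lam) = 1,
        # p.(V^T lam) <= b. Returns None if the hull is empty/infeasible.
        if not self.X:
            return None
        V = np.array(self.X)                      # (m, d) hull points
        p = np.asarray(p, dtype=float)
        res = linprog(c=-(V @ self.c),            # linprog minimizes
                      A_ub=(V @ p).reshape(1, -1), b_ub=[float(b)],
                      A_eq=np.ones((1, len(self.X))), b_eq=[1.0],
                      bounds=[(0, None)] * len(self.X))
        return V.T @ res.x if res.success else None

    def update(self, x_observed):
        # Called only on a mistake: the observed optimum lies outside
        # Conv(X), so adding it enlarges the hypothetical feasible region.
        self.X.append(np.asarray(x_observed, dtype=float))
```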
4 The Known Constraints Problem

We now consider the Known Constraints Problem, in which the learner observes the changing constraint polytope P^(t) at each day, but does not know the changing objective function, which we assume to be written as c^(t) = Σ_{i∈S^(t)} v_i, where the {v_i}_{i∈[n]} are fixed but unknown. Given P^(t) and the subset S^(t) ⊆ [n], the learner must make a prediction x̂^(t) on each day. Inspired by Bhaskar et al. [5], we use the Ellipsoid algorithm to learn the coefficients {v_i}_{i∈[n]}, and show that the mistake bound of the resulting algorithm is bounded by the (polynomial) running time of the Ellipsoid. We use V ∈ ℝ^{d×n} to denote the matrix whose columns are the v_i and make the following assumption on V.

Assumption 6. Each entry in V can be written with N bits of precision. Also, w.l.o.g., ||V||_F ≤ 1.

Similar to Section 3, we assume the coordinates of P^(t)'s vertices can be written with finite precision.⁶

Assumption 7. The coordinates of each vertex of P^(t) can be written with N bits of precision.

We first observe that the coefficients of the objective function represent a point that is guaranteed to lie in a region F (described below) which may be written as the intersection of possibly infinitely many halfspaces. Given a subset S ⊆ [n] and a polytope P, let x^{S,P} denote the optimal solution to the instance defined by S and P. Informally, the halfspaces defining F ensure that for any problem instance defined by arbitrary choices of S and P, the objective value of the optimal solution x^{S,P} must be at least as high as the objective value of any feasible point in P. Since the convergence rate of the Ellipsoid algorithm depends on the precision to which constraints are specified, we do not in fact consider a hyperplane for every feasible solution but only for those solutions that are vertices of the feasible polytope P. This is not a relaxation, since LPs always have vertex-optimal solutions. We denote the set of all vertices of polytope P by vert(P), and the set of polytopes P satisfying Assumption 7 by Π. We then define F as follows:

F = { W = (w^1, . . . , w^n) ∈ ℝ^{n×d} | ∀S ⊆ [n], ∀P ∈ Π, (Σ_{i∈S} w^i) · (x^{S,P} − x) ≥ 0, ∀x ∈ vert(P) }

The idea behind our LearnEllipsoid algorithm is that we will run a copy of the Ellipsoid algorithm with variables w ∈ ℝ^{n×d}, as if we were solving the feasibility LP defined by the constraints defining F. We will always predict according to the centroid of the ellipsoid maintained by the Ellipsoid algorithm (i.e. its candidate solution). Whenever a mistake occurs, we are able to find one of the constraints that define F such that our prediction violates the constraint; this is exactly what is needed to take a step in solving the feasibility LP. Since we know F is non-empty (at least the true objective function V lies within it), we know that the LP we are solving is feasible. Given the polynomial convergence time of the Ellipsoid algorithm, this gives a polynomial mistake bound for our algorithm.

The Ellipsoid algorithm will generate a sequence of ellipsoids with decreasing volume such that each one contains the feasible region F. Given the ellipsoid E^(t) at day t, LearnEllipsoid uses the centroid of E^(t) as its hypothesis for the objective function W^(t) = ((w^1)^(t), . . . , (w^n)^(t)). Given the subset S^(t) and polytope P^(t), LearnEllipsoid predicts x̂^(t) ∈ argmax_{x∈P^(t)} { Σ_{i∈S^(t)} (w^i)^(t) · x }. When a mistake occurs, LearnEllipsoid finds the hyperplane H^(t) = { W = (w^1, . . . , w^n) ∈ ℝ^{n×d} : Σ_{i∈S^(t)} w^i ·
(x^(t) − x̂^(t)) > 0 } that separates the centroid of the current ellipsoid (the current candidate objective) from F. After the update, we use the Ellipsoid algorithm to compute the minimum-volume ellipsoid E^(t+1) that contains H^(t) ∩ E^(t). On day t + 1, LearnEllipsoid sets W^(t+1) to be the centroid of E^(t+1).

We have left the procedure used to solve the LP in the prediction rule of LearnEllipsoid unspecified. To simplify our analysis, we use a specific LP solver to obtain a prediction x̂^(t) which is a vertex of P^(t).

Theorem 12 (Theorem 6.4.12 and Remark 6.5.2 [11]). There exists an LP solver that runs in time polynomial in the length of its input and returns an exact solution that is a vertex of P^(t).

In Theorem 13, we show that the number of mistakes made by LearnEllipsoid is at most the number of updates that the Ellipsoid algorithm makes before it finds a point in F, and the number of updates of the Ellipsoid algorithm can be bounded by well-known results from the literature on LP.

Theorem 13. The total number of mistakes and the running time of LearnEllipsoid in the Known Constraints Problem is at most poly(n, d, N).

⁶ We again point out that this is implied if the halfspaces defining the polytope are described with finite precision [11].

References
[1] Amin, K., Cummings, R., Dworkin, L., Kearns, M., and Roth, A. Online learning and profit maximization from revealed preferences. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (2015), pp. 770–776.
[2] Balcan, M., Daniely, A., Mehta, R., Urner, R., and Vazirani, V. Learning economic parameters from revealed preferences. In Proceedings of the 10th International Conference on Web and Internet Economics (2014), pp. 338–353.
[3] Beigman, E., and Vohra, R. Learning from revealed preference. In Proceedings of the 7th ACM Conference on Electronic Commerce (2006), pp. 36–42.
[4] Belloni, A., Liang, T., Narayanan, H., and Rakhlin, A. Escaping the local minima via simulated annealing: Optimization of approximately convex functions. In Proceedings of the 28th Conference on Learning Theory (2015), pp. 240–265.
[5] Bhaskar, U., Ligett, K., Schulman, L., and Swamy, C. Achieving target equilibria in network routing games without knowing the latency functions. In Proceedings of the 55th IEEE Annual Symposium on Foundations of Computer Science (2014), pp. 31–40.
[6] Blum, A., Mansour, Y., and Morgenstern, J. Learning what's going on: Reconstructing preferences and priorities from opaque transactions. In Proceedings of the 16th ACM Conference on Economics and Computation (2015), pp. 601–618.
[7] Collins, M. Discriminative reranking for natural language parsing. In Proceedings of the 17th International Conference on Machine Learning (2000), Morgan Kaufmann, pp. 175–182.
[8] Collins, M. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (2002), pp. 1–8.
[9] Daniely, A., and Shalev-Shwartz, S. Optimal learners for multiclass problems. In Proceedings of the 27th Conference on Learning Theory (2014), pp. 287–316.
[10] Fürnkranz, J., and Hüllermeier, E. Preference Learning. Springer, 2010.
[11] Grötschel, M., Lovász, L., and Schrijver, A. Geometric Algorithms and Combinatorial Optimization, second corrected ed., vol. 2 of Algorithms and Combinatorics. Springer, 1993.
[12] Lafferty, J., McCallum, A., and Pereira, F.
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (2001), pp. 282–289.
[13] Littlestone, N. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2, 4 (1988), 285–318.
[14] Roth, A., Ullman, J., and Wu, Z. Watch and learn: Optimizing from revealed preferences feedback. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (2016), pp. 949–962.
[15] Valiant, L. A theory of the learnable. Communications of the ACM 27, 11 (1984), 1134–1142.
[16] Zadimoghaddam, M., and Roth, A. Efficiently learning from revealed preference. In Proceedings of the 8th International Workshop on Internet and Network Economics (2012), pp. 114–127.
A Forward Model at Purkinje Cell Synapses Facilitates Cerebellar Anticipatory Control Ivan Herreros-Alonso SPECS lab Universitat Pompeu Fabra Barcelona, Spain ivan.herreros@upf.edu Xerxes D. Arsiwalla SPECS lab Universitat Pompeu Fabra Barcelona, Spain Paul F.M.J. Verschure SPECS, UPF Catalan Institution of Research and Advanced Studies (ICREA) Barcelona, Spain Abstract How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system, that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with counterfactual error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond co-incidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system. 1 Introduction Learning and anticipation are central features of cerebellar computation and function (Bastian, 2006): the cerebellum learns from experience and is able to anticipate events, thereby complementing a reactive feedback control by an anticipatory feed-forward one (Hofstoetter et al., 2002; Herreros and Verschure, 2013). This interpretation is based on a series of anticipatory motor behaviors that originate in the cerebellum. For instance, anticipation is a crucial component of acquired behavior in eye-blink conditioning (Gormezano et al., 1983), a trial by trial learning protocol where an initially neutral stimulus such as a tone or a light (the conditioning stimulus, CS) is followed, after a fixed delay, by a noxious one, such as an air puff to the eye (the unconditioned stimulus, US). During early trials, a protective unconditioned response (UR), a blink, occurs reflexively in a feedback manner following the US. After training though, a well-timed anticipatory blink (the conditioned response, CR) precedes the US. Thus, learning results in the (partial) transference from an initial feedback action to an anticipatory (or predictive) feed-forward one. Similar responses occur during anticipatory postural adjustments, which are postural changes that precede voluntary motor movements, such as raising an arm while standing (Massion, 1992). The goal of these anticipatory adjustments is to counteract the postural and equilibrium disturbances that voluntary movements introduce. 
These behaviors can be seen as feedback reactions to events that, after learning, have been transferred to feed-forward actions anticipating the predicted events. Anticipatory feed-forward control can yield high performance gains over feedback control whenever the feedback loop exhibits transmission (or transport) delays (Jordan, 1996). However, even if a plant has negligible transmission delays, it may still have sizable inertial latencies. For example, if we apply a force to a visco-elastic plant, its peak velocity will be achieved after a certain delay; i.e. the velocity itself will lag the force. An efficient way to counteract this lag will be to apply forces anticipating changes in the desired velocity. That is, anticipation can be beneficial even when one can act instantaneously on the plant.

Given that, here we address two questions: what is the optimal strategy to learn anticipatory actions in a cerebellar-based architecture? And how could it be implemented in the cerebellum? To answer these questions, we design the counterfactual predictive control (CFPC) scheme, a cerebellar-based adaptive-anticipatory control architecture that learns to anticipate performance errors from experience. The CFPC scheme is motivated by the neuro-anatomy and physiology of eye-blink conditioning. It includes a reactive controller, which is an output-error feedback controller that models brain stem reflexes actuating on eyelid muscles, and a feed-forward adaptive component that models the cerebellum and learns to associate its inputs with the error signals driving the reactive controller. With CFPC we propose a generic scheme in which a feed-forward module enhances the performance of a reactive error feedback controller by steering it with signals that facilitate anticipation, namely, with counterfactual errors. However, within CFPC, even if these counterfactual errors that enable predictive control are learned based on past errors in behavior, they do not reflect any current or forthcoming error in the ongoing behavior.

In addition to eye-blink conditioning and postural adjustments, the interaction between reactive and cerebellar-dependent acquired anticipatory behavior has also been studied in paradigms such as visually-guided smooth pursuit eye movements (Lisberger, 1987). All these paradigms can be abstracted as tasks in which the same predictive stimuli and disturbance or reference signal are repeatedly experienced. In accordance with that, we operate our control scheme in trial-by-trial (batch) mode. With that, we derive a learning rule for anticipatory control that modifies the well-known least-mean-squares/Widrow-Hoff rule with an eligibility trace. More specifically, our model predicts that, to facilitate learning, parallel fiber to Purkinje cell synapses implement a forward model that generates an eligibility trace. Finally, to stress that CFPC is not specific to eye-blink conditioning, we demonstrate its application with a smooth pursuit task.

2 Methods

2.1 Cerebellar Model

[Figure 1: schematic of a Purkinje cell with parallel fiber inputs x_1, . . . , x_N, synaptic weights w_1, . . . , w_N, output o and error signal e.]

Figure 1: Anatomical scheme of a cerebellar Purkinje cell. The x_j denote parallel fiber inputs to Purkinje synapses (in red) with weights w_j. o denotes the output of the Purkinje cell. The error signal e, through the climbing fibers (in green), modulates synaptic weights.
We follow the simplifying approach of modeling the cerebellum as a linear adaptive filter, while focusing on computations at the level of the Purkinje cells, which are the main output cells of the cerebellar cortex (Fujita, 1982; Dean et al., 2010). Over the mossy fibers, the cerebellum receives a wide range of inputs. Those inputs reach Purkinje cells via parallel fibers (Fig. 1), which cross the dendritic trees of Purkinje cells in a ratio of up to 1.5 × 10⁶ parallel fiber synapses per cell (Eccles et al., 1967). We denote the signal carried by a particular fiber as x_j, j ∈ [1, G], with G equal to the total number of input fibers. These inputs from the mossy/parallel fiber pathway carry contextual information (interoceptive or exteroceptive) that allows the Purkinje cell to generate a functional output. We refer to these inputs as cortical bases, indicating that they are localized at the cerebellar cortex and that they provide a repertoire of states and inputs that the cerebellum combines to generate its output o. As we will develop a discrete time analysis of the system, we use n to indicate time (or time-step). The output of the cerebellum at any time point n results from a weighted sum of those cortical bases. w_j indicates the weight or synaptic efficacy associated with the fiber j. Thus, we have x[n] = [x_1[n], . . . , x_G[n]]ᵀ and w[n] = [w_1[n], . . . , w_G[n]]ᵀ (where the transpose, ᵀ, indicates that x[n] and w[n] are column vectors) containing the set of inputs and synaptic weights at time n, respectively, which determine the output of the cerebellum according to

o[n] = x[n]ᵀ w[n]    (1)

The adaptive feed-forward control of the cerebellum stems from updating the weights according to a rule of the form

Δw_j[n + 1] = f(x_j[n], . . . , x_j[1], e[n], θ)    (2)

where θ denotes the global parameters of the learning rule; x_j[n], . . . , x_j[1], the history of presynaptic inputs of synapse j; and e[n], an error signal that is the same for all synapses, corresponding to the difference between the desired, r, and the actual output, y, of the controlled plant. Note that in drawing an analogy with the eye-blink conditioning paradigm, we use the simplifying convention of considering the noxious stimulus (the air-puff) as a reference, r, that indicates that the eyelids should close; the closure of the eyelid as the output of the plant, y; and the sensory response to the noxious stimulus as an error, e, that encodes the difference between the desired, r, and the actual eyelid closures, y. Given this, we advance a new learning rule, f, that achieves optimal performance in the context of eye-blink conditioning and other cerebellar learning paradigms.

2.2 Cerebellar Control Architecture

[Figure 2: left, block diagram mapping the CFPC signals onto the eye-blink conditioning circuit (cerebellar cortex and nuclei, inferior olive, pons, trigeminal and facial nuclei, eyelids); right, the CFPC architecture with feed-forward module FF, feedback controller C and plant P.]

Figure 2: Neuroanatomy of eye-blink conditioning and the CFPC architecture. Left: Mapping of signals to anatomical structures in eye-blink conditioning (De Zeeuw and Yeo, 2005); regular arrows indicate external inputs and outputs, arrows with inverted heads indicate neural pathways. Right: CFPC architecture. Note that the feedback controller, C, and the feed-forward module, FF, belong to the control architecture, while the plant, P, denotes the object controlled. Other abbreviations: r, reference signal; y, plant's output; e, output error; x, basis signals; o, feed-forward signal; and u, motor command.
We embed the adaptive filter cerebellar module in a layered control architecture, namely the CFPC architecture, based on the interaction between brain stem motor nuclei driving motor reflexes and the cerebellum, such as the one established between the cerebellar microcircuit responsible for conditioned responses and the brain stem reflex circuitry that produces unconditioned eye-blinks (Hesslow and Yeo, 2002) (Fig. 2 left). Note that in our interpretation of this anatomy we assume that the cerebellar output, o, feeds the lower reflex controller (Fig. 2 right). Put in control theory terms, within the CFPC scheme an adaptive feed-forward layer supplements a negative feedback controller, steering it with feed-forward signals.

Our architecture uses a single-input single-output negative-feedback controller. The controller receives as input the output error e = r − y. For the derivation of the learning algorithm, we assume that both plant and controller are linear and time-invariant (LTI) systems. Importantly, the feedback controller and the plant form a reactive closed-loop system that mathematically can be seen as a system that maps the reference, r, into the plant's output, y. A feed-forward layer that contains the above-mentioned cerebellar model provides the negative feedback controller with an additional input signal, o. We refer to o as a counterfactual error signal, since although it mechanistically drives the negative feedback controller analogously to an error signal, it is not an actual error. The counterfactual error is generated by the feed-forward module that receives an output error, e, as its teaching signal. Notably, from the point of view of the reactive layer closed-loop system, o can also be interpreted as a signal that offsets r. In other words, even if r remains the reference that sets the target of behavior, r + o functions as the effective reference that drives the closed-loop system.

3 Results

3.1 Derivation of the gradient descent update rule for the cerebellar control architecture

We apply the CFPC architecture defined in the previous section to a task that consists in following a finite reference signal r ∈ ℝᴺ that is repeated trial-by-trial. To analyze this system, we use the discrete time formalism and assume that all components are linear time-invariant (LTI). Given this, both reactive controller and plant can be lumped together into a closed-loop dynamical system that can be described with the dynamics A, input B, measurement C and feed-through D matrices. In general, these matrices describe how the state of a dynamical system autonomously evolves with time, A; how inputs affect system states, B; how states are mapped into outputs, C; and how inputs instantaneously affect the system's output, D (Astrom and Murray, 2012). As we consider a reference of a finite length N, we can construct the N-by-N transfer matrix T as follows (Boyd, 2008):

T = \begin{bmatrix} D & 0 & 0 & \cdots & 0 \\ CB & D & 0 & \cdots & 0 \\ CAB & CB & D & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ CA^{N-2}B & CA^{N-3}B & CA^{N-4}B & \cdots & D \end{bmatrix}

With this transfer matrix we can map any given reference r into an output y_r using y_r = Tr, obtaining what would have been the complete output trajectory of the plant on an entirely feedback-driven trial. Note that the first column of T contains the impulse response curve of the closed-loop system, while the rest of the columns are obtained by shifting that impulse response down. Therefore, we can build the transfer matrix T either in a model-based manner, deriving the state-space characterization of the closed-loop system, or in a measurement-based manner, measuring the impulse response curve. Additionally, note that (I − T)r yields the error of the feedback control in following the reference, a signal which we denote with e₀.

Let o ∈ ℝᴺ be the entire feed-forward signal for a given trial. Given commutativity, we can consider that from the point of view of the closed-loop system o is added directly to the reference r (Fig. 2 right). In that case, we can use y = T(r + o) to obtain the output of the closed-loop system when it is driven by both the reference and the feed-forward signal. The feed-forward module only outputs linear combinations of a set of bases. Let X ∈ ℝ^{N×G} be a matrix with the content of the G bases during all the N time steps of a trial. The feed-forward signal becomes o = Xw, where w ∈ ℝᴳ contains the mixing weights. Hence, the output of the plant given a particular w becomes y = T(r + Xw).

We implement learning as the process of adjusting the weights w of the feed-forward module in a trial-by-trial manner. At each trial the same reference signal, r, and bases, X, are repeated. Through learning we want to converge to the optimal weight vector w* defined as

w* = argmin_w c(w) = argmin_w ½ eᵀe = argmin_w ½ (r − T(r + Xw))ᵀ (r − T(r + Xw))    (3)

where c indicates the objective function to minimize, namely the L2 norm or sum of squared errors. With the substitution X̃ = TX and using e₀ = (I − T)r, the minimization problem can be cast as a canonical linear least-squares problem:

w* = argmin_w ½ (e₀ − X̃w)ᵀ (e₀ − X̃w)    (4)
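As an illustration, here is a minimal numpy sketch (an assumption of this presentation, not code from the paper) of the measurement-based construction of T from the closed-loop impulse response, together with the least-squares solution of Eq. 4:

```python
import numpy as np

def transfer_matrix(impulse):
    """N-by-N lower-triangular transfer matrix T of the closed loop:
    its first column is the measured impulse response h[0..N-1], and
    the remaining columns shift that response down."""
    h = np.asarray(impulse, dtype=float)
    N = h.size
    T = np.zeros((N, N))
    for i in range(N):
        T[i:, i] = h[:N - i]
    return T

def batch_weights(T, X, r):
    """Least-squares solution of Eq. 4: w* = (T X)^+ e0, with
    e0 = (I - T) r the error of a purely feedback-driven trial."""
    e0 = r - T @ r
    return np.linalg.pinv(T @ X) @ e0  # Moore-Penrose pseudo-inverse
```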
Note that the first column of T contains the impulse response curve of the closed-loop system, while the rest of the columns are obtained shifting that impulse response down. Therefore, we can build the transfer matrix T either in a model-based manner, deriving the state-space characterization of the closed-loop system, or in measurement-based manner, measuring the impulse response curve. Additionally, note that (I ? T )r yields the error of the feedback control in following the reference, a signal which we denote with e0 . Let o ? RN be the entire feed-forward signal for a given trial. Given commutativity, we can consider that from the point of view of the closed-loop system o is added directly to the reference r, (Fig. 2 right). In that case, we can use y = T (r + o) to obtain the output of the closed-loop system when it is driven by both the reference and the feed-forward signal. The feed-forward module only outputs linear combinations of a set of bases. Let X ? RN ?G be a matrix with the content of the G bases during all the N time steps of a trial. The feed-forward signal becomes o = Xw, where w ? RG contains the mixing weights. Hence, the output of the plant given a particular w becomes y = T (r + Xw). We implement learning as the process of adjusting the weights w of the feed-forward module in a trial-by-trial manner. At each trial the same reference signal, r, and bases, X, are repeated. Through learning we want to converge to the optimal weight vector w? defined as 1 1 w? = arg min c(w) = arg min e| e = arg min (r ? T (r + Xw))| (r ? T (r + Xw)) 2 2 w w w (3) where c indicates the objective function to minimize, namely the L2 norm or sum of squared errors. ? = T X and using e0 = (I ? T )r, the minimization problem can be cast as a With the substitution X 4 canonical linear least-squares problem: 1 ? | (e0 ? Xw) ? w? = arg min (e0 ? Xw) 2 w (4) ? ? e0 , One the one hand, this allows to directly find the least squares solution for w? , that is, w? = X where ? denotes the Moore-Penrose pseudo-inverse. On the other hand, and more interestingly, with ? w[k] being the weights at trial k and having e[k] = e0 ? Xw[k], we can obtain the gradient of the error function at trial k with relation to w as follows: ? | e[k] = ?X| T | e[k] ?w c = ?X Thus, setting ? as a properly scaled learning rate (the only global parameter ? of the rule), we can derive the following gradient descent strategy for the update of the weights between trials: w[k + 1] = w[k] + ?X| T | e[k] (5) This solves for the learning rule f in eq. 2. Note that f is consistent with both the cerebellar anatomy (Fig. 2left) and the control architecture (Fig. 2right) in that the feed-forward module/cerebellum only requires two signals to update its weights/synaptic efficacies: the basis inputs, X, and error signal, e. 3.2 T | facilitates a synaptic eligibility trace The standard least mean squares (LMS) rule (also known as Widrow-Hoff or decorrelation learning rule) can be represented in its batch version as w[k + 1] = w[k] + ?X| e[k]. Hence, the only difference between the batch LMS rule and the one we have derived is the insertion of the matrix factor T | . Now we will show how this factor acts as a filter that computes an eligibility trace at each weight/synapse. Note that the update of a single weight, according Eq. 5 becomes wj [k + 1] = wj [k] + ?x|j T | e[k] (6) where xj contains the sequence of values of the cortical basis j during the entire trial. This can be rewritten as wj [k + 1] = wj [k] + ?h|j e[k] (7) with hj ? 
Tx_j. The above inner product can be expressed as a sum of scalar products

w_j[k + 1] = w_j[k] + β Σ_{n=1}^{N} h_j[n] e[k, n]    (8)

where n indexes the within-trial time-step. Note that e[k] in Eq. 7 refers to the whole error signal at trial k, whereas e[k, n] in Eq. 8 refers to the error value in the n-th time-step of trial k. It is now clear that each h_j[n] weighs how much an error arriving at time n should modify the weight w_j, which is precisely the role of an eligibility trace. Note that since T contains in its columns/rows shifted repetitions of the impulse response curve of the closed-loop system, the eligibility trace codes, at any time n, the convolution of the sequence of previous inputs with the impulse-response curve of the reactive layer closed-loop. Indeed, in each synapse, the eligibility trace is generated by a forward model of the closed-loop system that is exclusively driven by the basis signal. Consequently, our main result is that by deriving a gradient descent algorithm for the CFPC cerebellar control architecture we have obtained an exact definition of the suitable eligibility trace. That definition guarantees that the set of weights/synaptic efficacies is updated in a locally optimal manner in the weights' space.

3.3 On-line gradient descent algorithm

The trial-by-trial formulation above allowed for a straightforward derivation of the (batch) gradient descent algorithm. As it lumped together all computations occurring in a same trial, it accounted for time within the trial implicitly rather than explicitly: one-dimensional time-signals were mapped onto points in a high-dimensional space. However, after having established the gradient descent algorithm, we can implement the same rule in an on-line manner, dropping the repetitiveness assumption inherent to trial-by-trial learning and performing all computations locally in time. Each weight/synapse must have a process associated to it that outputs the eligibility trace. That process passes the incoming (unweighted) basis signal through a (forward) model of the closed-loop as follows:

s_j[n + 1] = A s_j[n] + B x_j[n]
h_j[n] = C s_j[n] + D x_j[n]

where the matrices A, B, C and D refer to the closed-loop system (they are the same matrices that we used to define the transfer matrix T), and s_j[n] is the state vector of the forward model of the synapse j at time-step n. In practice, each "synaptic" forward model computes what would have been the effect of having driven the closed-loop system with each basis signal alone. Given the superposition principle, the outcome of that computation can also be interpreted as saying that h_j[n] indicates what would have been the displacement over the current output of the plant, y[n], achieved feeding the closed-loop system with the basis signal x_j. The process of weight update is completed as follows:

w_j[n + 1] = w_j[n] + β h_j[n] e[n]    (9)

At each time step n, the error signal e[n] is multiplied by the current value of the eligibility trace h_j[n], scaled by the learning rate β, and added to the current weight w_j[n]. Therefore, whereas the contribution of each basis to the output of the adaptive filter depends only on its current value and weight, the change in weight depends on the current and past values passed through a forward model of the closed-loop dynamics.
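For concreteness, a minimal numpy sketch of this on-line scheme follows (the class name and interface are illustrative assumptions, not the paper's notation; A, B, C, D are the state-space matrices of the reactive closed loop, taken here with 1-D B and C and scalar D for the single-input single-output case):

```python
import numpy as np

class Synapse:
    """One parallel-fiber synapse. A forward model of the closed-loop
    system (A, B, C, D), driven only by the unweighted basis signal,
    generates the eligibility trace h_j[n] for the local update of Eq. 9."""

    def __init__(self, A, B, C, D, beta, w0=0.0):
        self.A, self.B, self.C, self.D = A, B, C, D  # B, C: 1-D; D: scalar
        self.beta = beta                # learning rate
        self.w = w0                     # synaptic weight w_j
        self.s = np.zeros(A.shape[0])   # forward-model state s_j[n]

    def step(self, x_n, e_n):
        # Eligibility trace at time n: h_j[n] = C s_j[n] + D x_j[n].
        h_n = float(self.C @ self.s + self.D * x_n)
        # Local weight update (Eq. 9): w_j <- w_j + beta * h_j[n] * e[n].
        self.w += self.beta * h_n * e_n
        # Advance the forward model: s_j[n+1] = A s_j[n] + B x_j[n].
        self.s = self.A @ self.s + self.B * x_n
        return self.w * x_n  # this synapse's contribution to o[n]
```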
3.4 Simulation of a visually-guided smooth pursuit task

We demonstrate the CFPC approach in an example of a visual smooth pursuit task in which the eyes have to track a target moving on a screen. Even though the simulation does not capture all the complexity of a smooth pursuit task, it illustrates our anticipatory control strategy. We model the plant (eye and ocular muscles) with a two-dimensional linear filter that maps motor commands into angular positions. Our model is an extension of the model in (Porrill and Dean, 2007), even though in that work the plant was considered in the context of the vestibulo-ocular reflex. In particular, we use a chain of two leaky integrators: a slow integrator with a relaxation constant of 100 ms drives the eyes back to the rest position; the second integrator, with a fast time constant of 3 ms, ensures that the change in position does not occur instantaneously. To this basic plant, we add a reactive control layer modeled as a proportional-integral (PI) error-feedback controller, with proportional gain k_p and integral gain k_i. The control loop includes a 50 ms delay in the error feedback, to account for both the actuation and the sensing latency. We choose gains such that reactive tracking lags the target by approximately 100 ms. This gives k_p = 20 and k_i = 100. To complete the anticipatory and adaptive control architecture, the closed-loop system is supplemented by the feed-forward module.

[Figure 3: two panels of angular position (a.u.) versus time (s).]

Figure 3: Behavior of the system. Left: Reference (r) and output of the system before (y[1]) and after learning (y[50]). Right: Error before (e[1]) and after learning (e[50]), and the output acquired by the cerebellar/feed-forward component (o[50]).

The architecture implementing the forward model-based gradient descent algorithm is applied to a task structured in trials of 2.5 sec duration. Within each trial, a target remains still at the center of the visual scene for a duration of 0.5 sec, next it moves rightwards for 0.5 sec with constant velocity, remains still for 0.5 sec and repeats the sequence of movements in reverse, returning to the center. The cerebellar component receives 20 Gaussian basis signals (X) whose receptive fields are defined in the temporal domain, relative to trial onset, with a width (standard deviation) of 50 ms and spaced by 100 ms. The whole system is simulated using a 1 ms time-step. To construct the matrix T we computed the closed-loop system's impulse response.
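The basis matrix X of this simulation is easy to reproduce. A small sketch under the stated parameters (the exact placement of the first receptive-field center is not given in the text and is an assumption here):

```python
import numpy as np

dt = 0.001                           # 1 ms time-step
t = np.arange(0.0, 2.5, dt)          # 2.5 s trial, N = 2500 samples
sigma = 0.05                         # 50 ms receptive-field width (std)
centers = 0.1 + 0.1 * np.arange(20)  # 20 centers, 100 ms apart (assumed start)
# Columns of X are Gaussian bases over time, shape (N, 20).
X = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / sigma) ** 2)
```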
At the first trial, before any learning, the output of the plant lags the reference signal by approximately 100 ms, converging to the position only when the target remains still for about 300 ms (Fig. 3 left). As a result of learning, the plant's behavior shifts from a reactive to an anticipatory mode, being able to track the reference without any delay. Indeed, the error that is sizable during the target displacement before learning almost completely disappears by the 50th trial (Fig. 3 right). That cancellation results from learning the weights that generate a feed-forward predictive signal that leads the changes in the reference signal (onsets and offsets of target movements) by approximately 100 ms (Fig. 3 right). Indeed, convergence of the algorithm is remarkably fast and by trial 7 it has almost converged to the optimal solution (Fig. 4).

[Figure 4: rRMSE versus trial number for the four learning rules (WH, WH+50 ms, WH+70 ms, FM-ET).]

Figure 4: Performance achieved with different learning rules. Representative learning curves of the forward model-based eligibility trace gradient descent (FM-ET), the simple Widrow-Hoff (WH) and the Widrow-Hoff algorithm with a delta-eligibility trace matched to the error feedback delay (WH+50 ms) or with an eligibility trace exceeding that delay by 20 ms (WH+70 ms). Error is quantified as the relative root mean-squared error (rRMSE), scaled proportionally to the error in the first trial. Error of the optimal solution, obtained with w* = (TX)†e₀, is indicated with a dashed line.

To assess how much our forward-model-based eligibility trace contributes to performance, we test three alternative algorithms. In all cases we employ the same control architecture, changing the plasticity rule such that we either use no eligibility trace, thus implementing the basic Widrow-Hoff learning rule, or use the Widrow-Hoff rule extended with a delta-function eligibility trace that matches the latency of the error feedback (50 ms) or slightly exceeds it (70 ms). Performance with the basic WH model worsens rapidly, whereas performance with the WH learning rule using a "pure delay" eligibility trace matched to the transport delay improves, but not as fast as with the forward-model-based eligibility trace (Fig. 4). Indeed, in this case, the best strategy for implementing a delayed delta eligibility trace is setting a delay exceeding the transport delay by around 20 ms, thus matching the peak of the impulse response. In that case, the system performs almost as well as with the forward-model eligibility trace (70 ms). This last result implies that, even though the literature usually emphasizes the role of transport delays, eligibility traces also account for response lags due to intrinsic dynamics of the plant. To summarize our results, we have shown with a basic simulation of a visual smooth pursuit task that generating the eligibility trace by means of a forward model ensures convergence to the optimal solution and accelerates learning by guaranteeing that it follows a gradient descent.

4 Discussion

In this paper we have introduced a novel formulation of cerebellar anticipatory control, consistent with experimental evidence, in which a forward model has emerged naturally at the level of Purkinje cell synapses. From a machine learning perspective, we have also provided an optimality argument for the derivation of an eligibility trace, a construct that was often thought of in more heuristic terms as a mechanism to bridge time-delays (Barto et al., 1983; Shibata and Schaal, 2001; McKinstry et al., 2006). The first seminal works of cerebellar computational models emphasized its role as an associative memory (Marr, 1969; Albus, 1971). Later, the cerebellum was investigated as a device processing correlated time signals (Fujita, 1982; Kawato et al., 1987; Dean et al., 2010). In this latter framework,
This view has however neglected the fact that beyond transport delays, response dynamics of physical plants also influence how past pre-synaptic signals could have related to the current output of the plant. Indeed, for a linear plant, the impulse-response function of the plant provides the complete description of how inputs will drive the system, and as such, integrates transmission delays as well as the dynamics of the plant. Recently, Even though cerebellar microcircuits have been used as models for building control architectures, e.g., the feedback-error learning model (Kawato et al., 1987), our CFPC is novel in that it links the cerebellum to the input of the feedback controller, ensuring that the computational features of the feedback controller are exploited at all times. Within the domain of adaptive control, there are remarkable similarities at the functional level between CFPC and iterative learning control (ILC) (Amann et al., 1996), which is an input design technique for learning optimal control signals in repetitive tasks. The difference between our CFPC and ILC lies in the fact that ILC controllers directly learn a control signal, whereas, the CFPC learns a conterfactual error signal that steers a feedback controller. However the similarity between the two approaches can help for extending CFPC to more complex control tasks. With our CFPC framework, we have modeled the cerebellar system at a very high level of abstraction: we have not included bio-physical constraints underlying neural computations, obviated known anatomical connections such as the cerebellar nucleo-olivary inhibition (Bengtsson and Hesslow, 2006; Herreros and Verschure, 2013) and made simplifications such as collapsing cerebellar cortex and nuclei into the same computational unit. On the one hand, such a choice of high-level abstraction may indeed be beneficial for deriving general-purpose machine learning or adaptive control algorithms. On the other hand, it is remarkable that in spite of this abstraction our framework makes fine-grained predictions at the micro-level of biological processes. Namely, that in a cerebellar microcircuit (Apps and Garwicz, 2005), the response dynamics of secondary messengers (Wang et al., 2000) regulating plasticity of Purkinje cell synapses to parallel fibers must mimic the dynamics of the motor system being controlled by that cerebellar microcircuit. Notably, the logical consequence of this prediction, that different Purkinje cells should display different plasticity rules according to the system that they control, has been validated recording single Purkinje cells in vivo (Suvrathan et al., 2016). In conclusion, we find that a normative interpretation of plasticity rules in Purkinje cell synapses emerges from our systems level CFPC computational architecture. That is, in order to generate optimal eligibility traces, synapses must include a forward model of the controlled subsystem. This conclusion, in the broader picture, suggests that synapses are not merely components of multiplicative gains, but rather the loci of complex dynamic computations that are relevant from a functional perspective, both, in terms of optimizing storage capacity (Benna and Fusi, 2016; Lahiri and Ganguli, 2013) and fine-tuning learning rules to behavioral requirements. 
Acknowledgments

The research leading to these results has received funding from the European Commission's Horizon 2020 socSMC project (socSMC-641321H2020-FETPROACT-2014) and from the European Research Council's CDAC project (ERC-2013-ADG 341196).

References

Albus, J. S. (1971). A theory of cerebellar function. Mathematical Biosciences, 10(1):25–61.
Amann, N., Owens, D. H., and Rogers, E. (1996). Iterative learning control for discrete-time systems with exponential rate of convergence. IEE Proceedings - Control Theory and Applications, 143(2):217–224.
Apps, R. and Garwicz, M. (2005). Anatomical and physiological foundations of cerebellar information processing. Nature Reviews Neuroscience, 6(4):297–311.
Astrom, K. J. and Murray, R. M. (2012). Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press.
Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983). Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):834–846.
Bastian, A. J. (2006). Learning to predict the future: the cerebellum adapts feedforward movement control. Current Opinion in Neurobiology, 16(6):645–649.
Bengtsson, F. and Hesslow, G. (2006). Cerebellar control of the inferior olive. Cerebellum (London, England), 5(1):7–14.
Benna, M. K. and Fusi, S. (2016). Computational principles of synaptic memory consolidation. Nature Neuroscience.
Boyd, S. (2008). Introduction to linear dynamical systems. Online lecture notes.
De Zeeuw, C. I. and Yeo, C. H. (2005). Time and tide in cerebellar memory formation. Current Opinion in Neurobiology, 15(6):667–674.
Dean, P., Porrill, J., Ekerot, C.-F., and Jörntell, H. (2010). The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nature Reviews Neuroscience, 11(1):30–43.
Eccles, J., Ito, M., and Szentágothai, J. (1967). The Cerebellum as a Neuronal Machine. Springer, Berlin.
Fujita, M. (1982). Adaptive filter model of the cerebellum. Biological Cybernetics, 45(3):195–206.
Gormezano, I., Kehoe, E. J., and Marshall, B. S. (1983). Twenty years of classical conditioning with the rabbit.
Herreros, I. and Verschure, P. F. M. J. (2013). Nucleo-olivary inhibition balances the interaction between the reactive and adaptive layers in motor control. Neural Networks, 47:64–71.
Hesslow, G. and Yeo, C. H. (2002). The functional anatomy of skeletal conditioning. In A Neuroscientist's Guide to Classical Conditioning, pages 86–146. Springer.
Hofstoetter, C., Mintz, M., and Verschure, P. F. (2002). The cerebellum in action: a simulation and robotics study. European Journal of Neuroscience, 16(7):1361–1376.
Jordan, M. I. (1996). Computational aspects of motor control and motor learning. In Handbook of Perception and Action, volume 2, pages 71–120. Academic Press.
Kawato, M., Furukawa, K., and Suzuki, R. (1987). A hierarchical neural-network model for control and learning of voluntary movement. Biological Cybernetics, 57(3):169–185.
Kettner, R. E., Mahamud, S., Leung, H. C., Sitkoff, N., Houk, J. C., Peterson, B. W., and Barto, A. G. (1997). Prediction of complex two-dimensional trajectories by a cerebellar model of smooth pursuit eye movement. Journal of Neurophysiology, 77:2115–2130.
Lahiri, S. and Ganguli, S. (2013). A memory frontier for complex synapses. In Advances in Neural Information Processing Systems, pages 1034–1042.
Lisberger, S. (1987). Visual motion processing and sensory-motor integration for smooth pursuit eye movements.
Annual Review of Neuroscience, 10(1):97–129.
Marr, D. (1969). A theory of cerebellar cortex. The Journal of Physiology, 202(2):437–470.
Massion, J. (1992). Movement, posture and equilibrium: Interaction and coordination. Progress in Neurobiology, 38(1):35–56.
McKinstry, J. L., Edelman, G. M., and Krichmar, J. L. (2006). A cerebellar model for predictive motor control tested in a brain-based device. Proceedings of the National Academy of Sciences of the United States of America, 103(9):3387–3392.
Porrill, J. and Dean, P. (2007). Recurrent cerebellar loops simplify adaptive control of redundant and nonlinear motor systems. Neural Computation, 19(1):170–193.
Shibata, T. and Schaal, S. (2001). Biomimetic smooth pursuit based on fast learning of the target dynamics. In Intelligent Robots and Systems, 2001. Proceedings. 2001 IEEE/RSJ International Conference on, volume 1, pages 278–285. IEEE.
Suvrathan, A., Payne, H. L., and Raymond, J. L. (2016). Timing rules for synaptic plasticity matched to behavioral function. Neuron, 92(5):959–967.
Wang, S. S.-H., Denk, W., and Häusser, M. (2000). Coincidence detection in single dendritic spines mediated by calcium release. Nature Neuroscience, 3(12):1266–1273.
Learning Tree Structured Potential Games

Vikas K. Garg (CSAIL, MIT, vgarg@csail.mit.edu), Tommi Jaakkola (CSAIL, MIT, tommi@csail.mit.edu)

Abstract

Many real phenomena, including behaviors, involve strategic interactions that can be learned from data. We focus on learning tree structured potential games where equilibria are represented by local maxima of an underlying potential function. We cast the learning problem within a max margin setting and show that the problem is NP-hard even when the strategic interactions form a tree. We develop a variant of dual decomposition to estimate the underlying game and demonstrate with synthetic and real decision/voting data that the game theoretic perspective (carving out local maxima) enables meaningful recovery.

1 Introduction

Structured prediction methods [1; 2; 3; 4; 5] are widely adopted techniques for learning mappings between context descriptions x ∈ X and configurations y ∈ Y. The variables specifying each configuration y (e.g., arcs in natural language parsing) are typically mutually dependent, and it is therefore beneficial to predict them jointly rather than individually. The predicted y often arises as the highest scoring configuration with respect to a parameterized scoring function that decomposes into terms that couple two or more variables together to model their interactions. Structured prediction methods have been broadly useful across areas, from computational biology (e.g., molecular arrangements, alignments) and natural language processing (e.g., parsing, tagging) to computer vision (e.g., segmentation, matching), and many others. However, the setting is less suitable for modeling strategic interactions that are better characterized in terms of local consistency constraints.

We consider the problem of predicting configurations y that represent game theoretic equilibria. Such configurations are unlikely to coincide with the maximum of a global scoring function as in structured prediction. Indeed, there may be many possible equilibria in a specific context, and the particular choice may vary considerably. Each possible configuration is nevertheless characterized by local constraints that represent myopic optimizations of individual players. For example, senators can be thought to vote relative to give and take deals with other closely associated senators.

Several assumptions are necessary to make the game theoretic setting feasible. We abstract the setting as a potential game [6; 7; 8] among the players, and define a stochastic process to model the dynamics of the game. A game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global potential function. Every potential game is guaranteed to have at least one (possibly multiple) pure strategy Nash equilibria [9], and we will exploit this property in modeling and analyzing several real world scenarios. Note that each pure Nash equilibrium corresponds to a local optimum of the underlying potential function rather than the global optimum as in structured prediction. We further restrict the setting by permitting the payoff of each player to depend only on their own action and the actions of their neighbors (a subset of the other players). Thus, we may view our setting as a graphical game [10; 11]. In this work, we investigate potential games where the graphical structure of the interactions forms a tree.
The goal is to recover the tree structured potential function that supports observed configurations of actions as locally optimal solutions. We prove that it is NP-hard to recover such games under a max-margin setting. We then propose a variant of dual decomposition (cf. [12; 13]) to learn the tree structure and the associated parameters.

2 Setting

We commence with the game theoretic setting. There are n players indexed by a position in [n] := {1, 2, ..., n}. These players can be visualized as nodes of a tree-structured graph T with undirected edges E. We denote the set of neighbors of node i by N_i, i.e., (i, j) ∈ E ⟺ j ∈ N_i and i ∈ N_j, and abbreviate (i, j) ∈ E as ij ∈ T without introducing ambiguity. Each player i has a finite discrete set of strategies Y_i. A strategy profile or label configuration is an n-dimensional vector of the form y = (y_1, y_2, ..., y_n) ∈ Y = ∏_{i=1}^n Y_i. We denote the parametric potential function associated with the tree by f(y; x, T, θ), where y is a strategy profile, θ the set of parameters, and x ∈ X is a context [14]. We obtain an (n−1)-dimensional vector y_{−i} = (y_1, ..., y_{i−1}, y_{i+1}, ..., y_n) by considering the strategies of all players other than i. Thus, we may equivalently write y = (y_i, y_{−i}). Moreover, we use y_{N_i} to denote the strategy profile pertaining to the neighbors of node i. We can extract from f(y; x, T, θ) individual payoff (or cost) functions f_i(y_i, y_{N_i}; x, T, θ), i ∈ [n], which merely include all the terms that pertain to the strategy of the i-th player, y_i.

The choice of a particular equilibrium (local optimum) in a context results from a stochastic process. Starting with an initial configuration y at time t = 0 (e.g., chosen at random), the game proceeds in an iterative fashion: during each subsequent iteration t = 1, 2, ..., a player p_t ∈ [n] is chosen uniformly at random. The player p_t then computes the best response candidate set

Z_{p_t} = argmax_{z ∈ Y_{p_t}} f_{p_t}(z, y_{N_{p_t}}; x, T, θ),

and switches to a strategy within this set uniformly at random if their current strategy does not already belong to this set, i.e., a player changes their strategy only if a better option presents itself. The game finishes when a locally optimal configuration y* ∈ Y has been reached, i.e., when no player can improve their payoff unilaterally. Since many locally optimal configurations could have been reached in the given context x, the stochastic process induces a distribution over the strategy profiles. We assume that our training data S = {(x^1, y^1), ..., (x^M, y^M)} is generated by some distribution over contexts and the induced conditional distribution over strategy profiles with respect to some tree structured potential function.
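As a concrete illustration of the equilibrium-sampling process just described, the following is a minimal sketch (not the authors' code) for tabular potentials on a fixed tree. The representation — theta_node[i] as a per-strategy vector and theta_edge[(i, j)] as a table indexed by the two strategies — and all function names are our own illustrative choices.

```python
import random

def local_payoff(i, y, edges, theta_edge, theta_node):
    # terms of the potential f that involve player i's strategy y[i]
    total = theta_node[i][y[i]]
    for (a, b) in edges:
        if i in (a, b):
            total += theta_edge[(a, b)][y[a]][y[b]]
    return total

def best_response_set(i, y, edges, theta_edge, theta_node, strategies):
    payoffs = {z: local_payoff(i, y[:i] + [z] + y[i + 1:],
                               edges, theta_edge, theta_node)
               for z in strategies}
    best = max(payoffs.values())
    return [z for z, p in payoffs.items() if p == best]

def sample_equilibrium(n, edges, theta_edge, theta_node,
                       strategies=(0, 1), seed=0):
    rng = random.Random(seed)
    y = [rng.choice(strategies) for _ in range(n)]        # random initial profile
    while True:
        i = rng.randrange(n)                              # player p_t, chosen uniformly
        Z = best_response_set(i, y, edges, theta_edge, theta_node, strategies)
        if y[i] not in Z:
            y[i] = rng.choice(Z)                          # switch only if it improves
        # stop when no player can improve unilaterally (pure Nash equilibrium)
        if all(y[j] in best_response_set(j, y, edges, theta_edge, theta_node, strategies)
               for j in range(n)):
            return y
```

Because improving moves strictly increase the potential and the strategy space is finite, the loop reaches a pure Nash equilibrium with probability 1; the final check simply makes the stopping condition explicit.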
Our objective is to learn both the underlying tree structure T and the parameters θ using a max-margin setting. Specifically, given S, we are interested in finding T and θ such that

∀ m ∈ [M], i ∈ [n], y_i ∈ Y_i:  f(y^m; x^m, T, θ) ≥ f((y^m_{−i}, y_i); x^m, T, θ) + e(y, y^m),

where y = (y^m_{−i}, y_i) and e(y, y^m) is a non-negative loss (e.g., Hamming loss), which is 0 if and only if y = y^m. Note that the maximum margin framework does not make explicit use of the assumed induced distribution over equilibria.

The setting here is superficially similar to relaxations of structured prediction tasks such as pseudolikelihood [15] or decomposed learning [16]. These methods are, however, designed to provide computationally efficient approximations of the original structured prediction task by using fewer constraints during learning. Instead, we are specifically interested in modeling the observations as locally optimal solutions with respect to the potential function. We only state the results of our theorems in the main text, and defer all the proofs to the Supplementary.

3 Learning Tree Structured Potential Games

We first show that it is NP-hard to learn a tree structured potential game in a discriminative max-margin setting. Previous hardness results are available about learning structured prediction models under global constraints and arbitrary graphs [15], and under global constraints and tree structured models [17], also in a max-margin setting.

Theorem 1. Given a set of training examples S = {(x^m, y^m)}_{m=1}^M and a family of potential functions of the form

f(y; x, T, θ) = Σ_{ij ∈ T} θ_{ij}(y_i, y_j) + Σ_i θ_i(y_i) + Σ_i x_i(y_i),

it is NP-hard to decide whether there exists a tree T and parameters θ (up to model equivalence) such that the following holds:

∀ m, i, y_i:  f(y^m; x^m, T, θ) ≥ f((y^m_{−i}, y_i); x^m, T, θ) + e(y, y^m).

3.1 Dual decomposition algorithm

The remainder of this section is concerned with developing an approximate method for learning the potential function by appeal to dual decomposition. Dual decomposition methods are typically employed to solve inference tasks over combinatorial structures (e.g., [12; 13]). In contrast, we decompose the problem on two levels. On one hand, we break the problem into independent local neighborhood choices and use dual variables to reconcile these choices across the players so as to obtain a single tree-structured model. On the other hand, we ensure that initially disjoint parameters mediating the interactions between a player and its neighbors are in agreement across the edges in the resulting structure. The two constraints ensure that there is a single tree-structured global potential function.

For each node i, let N_i be the set of neighbors of i represented in terms of indicator variables such that N_ij = 1 if i selects j as a neighbor. N_ij can be chosen independently from N_ji, but the two will be enforced to agree at the solution. We will use N_i as a set of neighbors and as a set of indicator variables interchangeably. Similarly, we decompose the parameters into node potentials θ_i · φ(y_i; x) = θ_i(y_i; x) and edge potentials θ_ij · φ(y_i, y_j; x) = θ_{i,j}(y_i, y_j; x), where again θ_ij may be chosen separately from θ_ji but will be encouraged to agree. The set of parameters associated with each player then consists of locally controllable parameters θ_i = {θ_i, θ_{i·}} and N_i, where N_i selects the relevant subset of interaction terms:

f(y; x, N_i, θ_i) = θ_i(y_i; x) + Σ_{j ≠ i} N_ij θ_{i,j}(y_i, y_j; x).

Given a training set S = {(x^1, y^1), ..., (x^M, y^M)}, the goal is to learn the set of neighbors N = {N_1, ..., N_n} and weights θ = {θ_1, ..., θ_n} so as to minimize

(1/2)||θ||² + (C/(Mn)) Σ_{i=1}^n Σ_{m=1}^M R_{mi}(N_i, θ_i),   where
R_{mi}(N_i, θ_i) = max_{y_i} [ f((y^m_{−i}, y_i); x^m, N_i, θ_i) − f(y^m; x^m, N_i, θ_i) + e(y_i, y_i^m) ],   (1)

subject to N forming a tree and θ agreeing across the players. Let R_i(N_i, θ_i) = (C/(Mn)) Σ_m R_{mi}(N_i, θ_i).
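To unpack the objective, the inner max in R_mi is a one-player structured hinge: it scans player i's strategies for a violation of the local-optimality margin. A minimal sketch under the same tabular parameterization as before (context absorbed into the node tables for brevity; names illustrative):

```python
def player_potential(i, y, neighbors, theta_node, theta_edge):
    # terms of f(y; x, N_i, theta_i) that involve player i: the node table
    # plus the edge tables selected by i's neighborhood indicators
    val = theta_node[i][y[i]]
    for j in neighbors[i]:
        val += theta_edge[(i, j)][y[i]][y[j]]
    return val

def R_mi(i, y_m, neighbors, theta_node, theta_edge, n_strategies=2):
    """max over y_i of f((y^m_{-i}, y_i)) - f(y^m) + e(y_i, y_i^m)."""
    f_obs = player_potential(i, y_m, neighbors, theta_node, theta_edge)
    best = float("-inf")
    for z in range(n_strategies):
        y_alt = list(y_m)
        y_alt[i] = z
        loss = 0.0 if z == y_m[i] else 1.0 / len(y_m)   # scaled 0-1 loss, 1{y != y^m}/n
        best = max(best, player_potential(i, y_alt, neighbors,
                                          theta_node, theta_edge) - f_obs + loss)
    return best  # always >= 0, since z = y_i^m contributes exactly 0
```

Only the terms that involve y_i matter, so the difference of full potentials reduces exactly to the difference of these local terms.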
We force the neighbor choices to agree with a global tree structure represented by indicators N′. Similarly, we enforce parameters θ_i to agree across neighbors. The resulting Lagrangian can be written as

Σ_{i=1}^n [ (1/2)||θ_i||² + R_i(N_i, θ_i) + Σ_{j ≠ i} (λ_ij N_ij + δ_ij · θ_ij) ]  −  Σ_{i, j ≠ i} λ_ij N′_ij + G(N′),

where the per-player bracket is L(θ_i, N_i; λ, δ), the remaining terms constitute G(N′, λ), G(N′) = 0 if N′ forms a tree and ∞ otherwise, and δ_ij = −δ_ji.

For the dual decomposition algorithm, we must be able to solve min_{θ_i} L(θ_i, N_i; λ, δ) to obtain θ_i* and min_{N_i} L(θ_i, N_i; λ, δ) to get N_i*. The former is a QP, while the latter is more challenging, though it may permit efficient solutions via additional relaxations, exploiting combinatorial properties in restricted cases (sub-modularity), or even brute force for smaller problems. G(N′, λ) corresponds to a minimum weighted spanning tree, and thus can be efficiently solved using any standard algorithm like Borůvka's, Kruskal's or Prim's.

The basic dual decomposition alternately solves for θ_i*, N_i*, and N′, resulting in updates of the dual variables based on disagreements. While the method has been successful for enforcing structural constraints (e.g., parsing), it is less appropriate for constraints involving continuous variables. To address this, we employ the alternating direction method of multipliers (ADMM) [18; 19; 20] for parameter agreements. Specifically, we encourage θ_{i·} and θ_{·i} to agree with their mean u_{i·} by introducing an additional term to the Lagrangian:

L_A(θ_i, N_i; u_{i·}, λ, δ) = L(θ_i, N_i; λ, δ) + (ρ/2)||θ_{i·} − u_{i·}||²,

where u_{i·} is updated as an independent parameter.

There are many ways to schedule the updates. We employ a two-phase algorithm that learns the structure of the game tree and the parameters separately. The algorithm is motivated by the following theorem. Since the result applies broadly to the dual decomposition paradigm, we state the theorem in a slightly more generic form than that required for our purpose. The theorem applies to our setting with f(N′) = −G(N′), A = [n], and g_i(θ_i, N_i) = Σ_{j ≠ i} λ_ij N_ij − L(θ_i, N_i; λ, δ).

We now set up the conditions of the theorem. Consider the following combinatorial problem:

Opt = max_z { f(z) + Σ_{α ∈ A} g_α(z_α) },

where f(z) specifies global constraints on admissible z, and the g_α(z_α) represent local terms guiding the assignment of values to different subsets of variables z_α = {z_j}_{j ∈ α}. Let the problem be minimized with respect to the dual coefficients {δ_{j,α}(z_j)} by following a dual decomposition approach. Suppose we can find a global assignment ẑ and dual coefficients such that this assignment nearly attains the local maxima for all α ∈ A, i.e.,

g_α(ẑ_α) + Σ_{j ∈ α} δ_{j,α}(ẑ_j)  ≥  max_{z_α} [ g_α(z_α) + Σ_{j ∈ α} δ_{j,α}(z_j) ] − ε.

Assume further, without loss of generality (we can adjust the bound with a term that depends on the difference between the value of the optimal global structure and the value of the global structure under consideration, if these values do not coincide), that the assignment attains the max for the global constraint. Then, we have the following result.

Theorem 2. If there exists an assignment ẑ and associated dual coefficients such that the assignment obtains an ε-maximum of each term in the decomposition, for some ε > 0, then the objective value for ẑ lies in [Opt − ε|A|, Opt].

The theorem implies that if a global structure nearly attains the optima for the local neighborhoods, then we might as well shift our focus to finding the global structure rather than optimize for the parameters corresponding to the exact local optima. The result guarantees that the value of such a global structure cannot be too far from that of the optimal global structure.

We outline our two-phase approach in Algorithm 1. The first phase is concerned only with iteratively finding a globally consistent structure. It is possible that at the conclusion of this phase, the local structures do not fully agree (the relaxation is not tight). For this reason, the procedure runs for a specified maximum number of iterations and selects the global tree corresponding to an iteration that is least inconsistent with the local neighborhoods.
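As noted above, the global step argmin_N G(N, λ) reduces to a minimum weighted spanning tree over dual-derived edge costs; since N′_ij is symmetric for an undirected edge, a natural weight for edge {i, j} is −(λ_ij + λ_ji). A minimal Kruskal sketch (our own helper, not the paper's code):

```python
def minimum_spanning_tree(n, weight):
    """Kruskal's algorithm over the complete graph on n nodes.
    weight[(i, j)] for i < j, e.g. -(lam[i][j] + lam[j][i]) from the duals."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    edges = sorted((weight[(i, j)], i, j)
                   for i in range(n) for j in range(i + 1, n))
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # joining two components keeps it a tree
            parent[ri] = rj
            tree.append((i, j))
            if len(tree) == n - 1:
                break
    return tree
```

Any off-the-shelf MST routine would do equally well here; the point is that the global constraint never requires explicit search over trees.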
Note that this phase does not precisely solve the original problem we posed earlier. Instead, the structure is obtained without constraining parameters to agree. In this sense, the first phase does not consider strictly potential games, as the interactions between players can remain intrinsic to the players themselves. The second phase simply optimizes the parameters for the already specified global tree. This step realizes a potential game, as the parameters and the structure will be in agreement. We note that such parameters could be optimized directly for the selected tree without the need of dual decomposition. However, Algorithm 1 remains suitable in a distributed setting, since each player is required to solve only local problems during the entire execution of the algorithm.

3.2 Scaling the algorithm

As already noted, Algorithm 1 exhaustively enumerates all neighborhoods for each local optimization problem. This makes the algorithm computationally prohibitive in realistic settings. We now outline an approximation procedure that restricts the candidate neighborhood assignments. Specifically, for a local optimization at any node i, we may restrict the possible local neighborhoods at any iteration t to only those configurations that are at most Hamming distance h away from the best local configuration for i in iteration t−1. That is, we update each local max-structure incrementally, still guided by the overall tree within the same dual decomposition framework. Note that we recover Algorithm 1 as a special case when h = n. A small h corresponds to searching over a much smaller space compared to the brute force algorithm. For instance, if we take h = 1, then the total complexity of the approximate algorithm reduces to O(n² · MaxIter), since in each iteration we need to solve n local problems, each having O(n) candidate neighborhoods (see the sketch after Algorithm 1).

Algorithm 1 Learning tree structured potential games
1: procedure LearnTreePotentialGame
2:   Input: parameters α, ρ, MaxIter, and ε > 0.
3:
4:   Phase 1: Learn Tree Structure
5:   Initialize t = 1, λ_ij = 0, δ_ij = 0, MinGap = ∞.
6:   repeat
7:     Find N′ = argmin_N G(N, λ) using a minimum spanning tree algorithm
8:     for each i ∈ [n] do
9:       for each N_i do
10:        Compute θ_i^{t+1} = argmin_{θ_i} L(θ_i, N_i; λ, 0)
11:      Find N_i* = argmin_{N_i} L(θ_i^{t+1}, N_i; λ, 0)
12:    Compute gap: Gap = Σ_{i,j} I(N_ij* ≠ N′_ij)
13:    if Gap < MinGap then MinGap = Gap, Global = N′
14:    Update ∀ i, j ≠ i: λ_ij = λ_ij + α_t (N′_ij − N_ij*)
15:    t ← t + 1
16:  until MinGap = 0 or t > MaxIter.
17:  Set N′* = Global.
18:
19:  Phase 2: Learn Parameters
20:  Set N = N′*
21:  Compute θ_i^{t+1} = argmin_{θ_i} L(θ_i, N_i; 0, δ)
22:  repeat
23:    Compute ∀ i, j ≠ i: u_ij^{t+1} = (θ_ij^{t+1} + θ_ji^{t+1}) / 2
24:    Update ∀ i, j ≠ i: δ_ij^{t+1} = δ_ij^t + ρ (θ_ij^{t+1} − u_ij^{t+1})
25:    Compute θ_i^{t+1} = argmin_{θ_i} L_A(θ_i, N_i; u_{i·}, 0, δ)
26:    t ← t + 1
27:  until Σ_{i, j ≠ i} ||θ_ij^{t+1} − θ_ji^{t+1}||² < ε
28:  Set θ_ij*, θ_ji* = (θ_ij^{t+1} + θ_ji^{t+1}) / 2
29:  Output: N′*, θ*
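The h-restricted candidate set from Section 3.2 can be enumerated directly; a small sketch (illustrative names, assuming binary neighborhood indicators):

```python
from itertools import combinations

def hamming_ball(N_prev, h):
    """All binary neighborhood indicator vectors within Hamming distance h of N_prev."""
    n = len(N_prev)
    candidates = []
    for d in range(h + 1):
        for flips in combinations(range(n), d):
            cand = list(N_prev)
            for j in flips:
                cand[j] = 1 - cand[j]   # toggle the chosen indicators
            candidates.append(tuple(cand))
    return candidates
```

For h = 1 this yields n + 1 candidates per node (the previous neighborhood plus its single-flip variants), which is the source of the O(n² · MaxIter) count above.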
4 Experimental Results

We now describe the results of our experiments on both synthetic and real data to demonstrate the efficacy of our algorithm. We found the algorithm to perform well for a wide range of C and ρ across different data. We report below the results of our experiments with the following setting of parameters: ρ = 1, α_t = 0.005 (for all t), C = 10, ε = 0.1, and MaxIter = 100. For each local optimization problem, the configurations were constrained to share the slack variable in order to reduce the total number of optimization variables. Moreover, we used a scaled 0-1 loss [15], e(y, y^m) = 1{y ≠ y^m}/n, for each local optimization. We set h = 1 for the approximate method.

We conducted different sets of experiments to underscore the different aspects of our approach. Our experiments with toy synthetic data highlight recovery of an underlying true structure under controlled conditions (pertaining to the data generation process). The results on a real, but toy, dataset, Supreme Court, vindicate the applicability of the exhaustive approach to unraveling the interactions latent in real datasets. Finally, we address the scalability issues inherent in the exhaustive search by demonstrating the approximate version on the larger Congressional Votes real dataset.

4.1 Synthetic Dataset

We will now describe how the brute force method recovered the true structure on a synthetic dataset. For this, data were assumed to come from the underlying model

f(y; x, θ) = Σ_{ij ∈ E} θ_ij(y_i, y_j) + Σ_i x_i θ_i(y_i),

where x represents the context that varies. The parameters were set as follows. We designed an n-node degenerate or pathological tree, n = 6, with edges between nodes i and i+1, i ∈ {1, 2, ..., n−1}. On each edge (i, j) ∈ E, we sampled θ_ij(y_i, y_j), y_i, y_j ∈ {0, 1}, uniformly at random from [−1, 1], independently of the other edges. For each node i, we also sampled θ_i(y_i), y_i ∈ {0, 1}, independently from the same range.

Each training example pair (x_m, y_m) was sampled in two steps. First, each x_mj, j ∈ [n], was set uniformly at random in the range [−10, 10], independently of each other. The associated y_m was then generated according to the stochastic process described in Section 2. Briefly, starting with y_m ∈ {0, 1}^n sampled uniformly at random, we successively updated the configuration by changing a randomly chosen coordinate of y_m, and accepting the move only if the associated score was higher. Since there are 2^n possible configurations of binary vectors, we were guaranteed that, in finite time, this procedure ended in a locally stable configuration. Once this locally stable configuration was reached, we checked if the score of this configuration exceeded all the other configurations within Hamming distance one by at least 1/n. If yes, then we included the pair (x_m, y_m) in our synthetic data set; otherwise we discarded the pair. Starting with 100 examples, this procedure resulted in a total of 78 stable configurations that scored higher than each configuration one Hamming distance away by at least 1/n. These configurations formed our synthetic data set.
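A minimal sketch of this generation procedure for the path graph (our own code; for brevity, the single-coordinate ascent picks uniformly among improving flips, which reaches the same set of locally stable configurations as the accept/reject walk described above):

```python
import numpy as np

def score(y, x, theta_edge, theta_node):
    # f(y; x, theta) for the path graph: edge tables plus context-scaled node terms
    n = len(y)
    return (sum(theta_edge[i][y[i], y[i + 1]] for i in range(n - 1))
            + sum(x[i] * theta_node[i][y[i]] for i in range(n)))

def sample_locally_stable(n, theta_edge, theta_node, rng):
    """Draw (x, y); keep y only if it beats every Hamming-1 neighbor by 1/n."""
    x = rng.uniform(-10, 10, size=n)
    y = rng.integers(0, 2, size=n)
    while True:  # single-coordinate ascent until no flip improves the score
        flips = [y.copy() for _ in range(n)]
        for i in range(n):
            flips[i][i] = 1 - flips[i][i]
        s0 = score(y, x, theta_edge, theta_node)
        improving = [i for i in range(n)
                     if score(flips[i], x, theta_edge, theta_node) > s0]
        if not improving:
            break
        y = flips[rng.choice(improving)]
    margin_ok = all(s0 >= score(f, x, theta_edge, theta_node) + 1.0 / n
                    for f in flips)
    return (x, y) if margin_ok else None   # discarded pairs fail the margin test
```

Here theta_edge[i] would be a 2×2 array for edge (i, i+1) and theta_node[i] a length-2 array, both drawn uniformly from [−1, 1] as in the text, with rng = np.random.default_rng().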
We were able to exactly recover the tree structure at the end of Phase 1 of our algorithm using the training data. Fig. 1 shows the evolution of the global tree structure (i.e., N′ in the iterations that resulted in a decrease of Gap). Note how the algorithm corrects for incorrect edges, starting from a star tree until it recovers the pathological tree structure. Fig. 2 elucidates the synergy between the global tree and local neighborhoods toward recovering the correct structure.

Figure 1: Recovery on synthetic data. Evolution of the tree structure is shown from left to right. Each incorrect edge is indicated by coloring one of the end nodes in red. After the first iteration, only the edge (1, 2) is identified correctly. At termination, all edges in the underlying structure are recovered.

Figure 2: Global-Local Synergy. (Center & Right) Spanning trees formed from two separate local neighborhoods (in different iterations). (Left) The common global tree structure. The global tree structure reappears during the execution of the algorithm. On first occurrence, the global tree is misaligned from the chain 2-3-4 of the local neighborhood tree at node 5, as indicated by the tree in the center. The algorithm takes corrective action, and on the next occurrence, node 5 moves to the desired position, as seen from the tree on the right. The algorithm proceeds to correct the positioning of node 6.

We show in Fig. 3 the evolution of the tree when the observations were falsely treated as globally optimal points. Clearly, structured prediction failed to recover the underlying tree structure.

Figure 3: Evolution of structured prediction. Structured prediction fails to recover the true structure.

4.2 Real Dataset 1: Supreme Court Rulings

For both real datasets, we assumed the following decomposition:

f(y; θ) = Σ_{ij ∈ E} θ_ij(y_i, y_j) + Σ_i θ_i(y_i).

For our first real dataset (publicly available at http://scdb.wustl.edu/), we considered the rulings of a Supreme Court bench comprising Justices Alito (A), Breyer (B), Ginsburg (G), Kennedy (K), Roberts (R), Scalia (S), and Thomas (T) during the year 2013. Justices Alito, Roberts, Scalia, and Thomas are known to be conservatives, while Justices Breyer and Ginsburg belong to the liberal side of the Court. Justice Kennedy generally takes a moderate stand on most issues. On every case under their jurisdiction, each Justice chose an integer from the set {1, 2, ..., 8}. We considered all the rulings of this bench that had at least one "dissent". For our purposes, we created a dataset from those rulings that did not register a value of 6, 7, or 8 from any of the Justices, since these values seem to have a complex interpretation instead of a simple yes/no. For all other values, we used the interpretation by [21]: dissent value 2 was treated as 0 (no), and the others as 1 (yes). Fig. 4 shows that we were able to recover the known ideology of the Justices by correctly treating the rulings as locally optimal, whereas structured prediction failed to identify a qualitatively correct structure.

Figure 4: (Left) Tree recovered from Supreme Court data. The tree is consistent with the widely known ideology of the justices: Justice Kennedy (K) is considered largely moderate, while the others espouse a more conservative or liberal jurisprudence. The thickness of an edge indicates the strength of interaction in terms of the (scaled) l2-norm of the edge parameters. (Right) Enforcing global constraints (structured prediction) resulted in a qualitatively incorrect structure.
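The vote recoding just described is a one-liner in practice; a hypothetical sketch (the file and column names are ours, not the SCDB's):

```python
import pandas as pd

# one row per (case, justice) pair, with the SCDB's 1-8 vote code
votes = pd.read_csv("scdb_votes.csv")
cases_to_drop = votes.loc[votes["vote"].isin([6, 7, 8]), "case_id"].unique()
votes = votes[~votes["case_id"].isin(cases_to_drop)]   # any 6/7/8 removes the case
votes["y"] = (votes["vote"] != 2).astype(int)          # dissent code 2 -> 0, else 1
```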
4.3 Real Dataset 2: Congressional Voting Records

We also experimented with a dataset (publicly available at http://www.senate.gov/) obtained by compiling the votes on all the bills of the 110th United States Congress (Session 2). The US Congress records the voting proceedings of the legislative branch of the US federal government [11]. The U.S. Senate consists of 100 senators: each of the 50 U.S. states is represented by two senators. We compiled all the votes of the first 30 senators (in data order) over this period on bills without unanimity. Each vote takes one of two values, +1 or −1, to denote whether the vote was in favor of or against the proposed bill. We mapped vote values of −1 to 0 to create a binary dataset. Fig. 5 shows how the approximate algorithm is able to recover a qualitatively correct structure in which Democrats and Republicans typically vote along their respective party ideologies (note that there might be more than one qualitatively correct structure). Specifically, we obtain a structure where no Democrat is sandwiched between two Republicans, or vice-versa.

Figure 5: (Congressional Votes.) The recovered tree is consistent with the expected voting pattern that, in general, Democrats and Republicans vote along their respective party principles.

Discussion

A primary goal of this work is to argue that complex strategic interactions are better modeled as locally optimal solutions instead of globally optimal assignments (as done, for instance, in structured prediction). We believe this local versus global distinction has not been accorded due significance in the literature, and we hope our work fosters more research in that direction. The work opens up several interesting avenues. All the results presented in this paper are qualitative in nature, primarily because quantitative evaluation is non-trivial in our setting since a strategic game may have multiple equilibria (local optima). The incremental method proposed in this paper does not come with any certificate of optimality, unlike most dual decomposition settings. We assumed the dynamics of the underlying game follow a stochastic process, whereas players typically take deterministic turns in real game settings. From a statistical learning perspective, it will be interesting to estimate generalization bounds in terms of the number of local equilibria samples. Learning across (repeated) games and exploring sub-modular potential functions are other directions.

Acknowledgments

Jean Honorio provided the Congressional Votes dataset for our experiments. We would also like to thank the anonymous reviewers for their helpful comments.

References
[1] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables, JMLR, 6(2), pp. 1453-1484, 2005.
[2] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks, NIPS, 2003.
[3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data, ICML, 2001.
[4] J. K. Bradley and C. Guestrin. Learning tree conditional random fields, ICML, 2010.
[5] S. Nowozin and C. H. Lampert. Structured Learning and Prediction in Computer Vision, Foundations and Trends in Computer Graphics and Vision, 2011.
[6] P. Dubey, O. Haimanko, and A. Zapechelnyuk. Strategic complements and substitutes, and potential games, Games and Economic Behavior, 54, pp. 77-94, 2006.
[7] D. Monderer and L. Shapley. Potential Games, Games and Economic Behavior, 14, pp. 124-143, 1996.
[8] Y. Song, S. H. Y. Wong, and K.-W. Lee. Optimal gateway selection in multi-domain wireless networks: a potential game perspective, MobiCom, 2011.
[9] T. Ui. Robust equilibria of potential games, Econometrica, 69, pp. 1373-1380, 2000.
[10] M. Kearns, M. L. Littman, and S. P. Singh. Graphical Models for Game Theory, UAI, 2001.
[11] J. Honorio and L. Ortiz. Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data, JMLR, 16, pp. 1157-1210, 2015.
[12] A. M. Rush and M. Collins. A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing, JAIR, 45, pp. 305-362, 2012.
[13] A. M. Rush, D. Sontag, M. Collins, and T. Jaakkola. On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing, EMNLP, 2010.
[14] M. Hoefer and A. Skopalik. Social Context in Potential Games, Internet and Network Economics, pp. 364-377, 2012.
[15] D. Sontag, O. Meshi, T. Jaakkola, and A. Globerson. More data means less inference: A pseudo-max approach to structured learning, NIPS, 2010.
[16] R. Samdani and D. Roth. Efficient Decomposed Learning for Structured Prediction, ICML, 2012.
[17] O. Meshi, E. Eban, G. Elidan, and A. Globerson. Learning Max-Margin Tree Predictors, UAI, 2013.
[18] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Foundations and Trends in Machine Learning, 3(1), pp. 1-122, 2010.
[19] A. F. T. Martins, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, and M. A. T. Figueiredo. Augmenting Dual Decomposition for MAP Inference, NIPS, 2010.
[20] A. F. T. Martins, N. A. Smith, P. M. Q. Aguiar, and M. A. T. Figueiredo. Dual Decomposition with Many Overlapping Components, EMNLP, 2011.
[21] M. T. Irfan and L. E. Ortiz. On influence, stable behavior, and the most influential individuals in networks: A game-theoretic approach, Artificial Intelligence, 215, pp. 79-119, 2014.
Estimating Nonlinear Neural Response Functions using GP Priors and Kronecker Methods

Cristina Savin (IST Austria, Klosterneuburg, AT 3400, csavin@ist.ac.at), Gašper Tkačik (IST Austria, Klosterneuburg, AT 3400, tkacik@ist.ac.at)

Abstract

Jointly characterizing neural responses in terms of several external variables promises novel insights into circuit function, but remains computationally prohibitive in practice. Here we use Gaussian process (GP) priors and exploit recent advances in fast GP inference and learning based on Kronecker methods to efficiently estimate multidimensional nonlinear tuning functions. Our estimator requires considerably less data than traditional methods and further provides principled uncertainty estimates. We apply these tools to hippocampal recordings during open field exploration and use them to characterize the joint dependence of CA1 responses on the position of the animal and several other variables, including the animal's speed, direction of motion, and network oscillations. Our results provide an unprecedentedly detailed quantification of the tuning of hippocampal neurons. The model's generality suggests that our approach can be used to estimate neural response properties in other brain regions.

1 Introduction

An important facet of neural data analysis concerns characterizing the tuning properties of neurons, defined as the average firing rate of a cell conditioned on the value of some external variables, for instance the orientation of an image patch for a V1 cell, or the position of the animal within an environment for hippocampal cells. As experiments become more complex and more naturalistic, the number of variables that modulate neural responses increases. These include not only experimentally targeted inputs but also variables that are no longer under the experimenter's control but which can be (to a certain extent) measured, either external (the behavior of the animal) or internal (attentional level, network oscillations, etc.). Characterizing these complex dependencies is very difficult, yet it could provide important insights into neural circuit computation and function.

Traditional estimates of a cell's tuning properties often manipulate one variable at a time or consider simple dependencies between inputs and the neural responses (e.g., Generalized Linear Models, GLMs [1, 2]). There is comparatively little work that allows for complex input-output functional relationships on multidimensional input spaces [3-5]. The reasons for this are twofold. On one hand, dealing with complex nonlinearities is computationally challenging; on the other hand, constraints on experimental duration lead to a potentially very sparse sampling of the stimulus space, requiring additional assumptions for a sensible interpolation. This problem is further exacerbated in experiments in awake animals, where the sampling of the stimulus space is driven by the animal's behavior. The few solutions for nonlinear tuning properties rely on spline-based approximation of one-dimensional functions (for position on a linear track) [6] or assume a log-Gaussian Cox process generative model as a way to enforce smoothness of 2D functional maps [3-5]. These methods are usually restricted to at most two input dimensions (but see [4]).
Here we take advantage of recent advances in scaling GP inference and learning using Kronecker methods [7] to extend the approach in [3] to the multidimensional setting, while keeping the computational and memory requirements almost linear in dataset size N: O(dN^((d+1)/d)) and O(dN^(2/d)), respectively, for d dimensions [8]. Our formulation requires a discretization of the input space (in practice many input dimensions are discrete to begin with, e.g., measurements of an animal's position, so this is a weak requirement; the coarseness of the discretization depends on the application), but allows for a flexible selection of the kernels specifying different assumptions about the nature of the functional dependencies we are looking for in the data, with hyperparameters inferred by maximizing marginal likelihood. We deal with the non-Gaussian likelihood in the traditional way, by using a Laplace approximation of the posterior [8]. The critical ingredient for our approach is the particular form of the covariance matrix, which decomposes into a Kronecker product over covariances corresponding to individual input dimensions, dramatically simplifying computations.

The focus here is not on the methods per se but rather on their previously unacknowledged utility for estimating multidimensional nonlinear tuning functions. The inferred tuning functions are probabilistic. The estimator is adaptive, in the sense that it relies strongly on the prior in regions of the input space where data is scarce, but can flexibly capture complex input-output relations where enough data is available. It naturally comes equipped with error bars, which can be used for instance for detecting shifts in receptive field properties due to learning. Using artificial data we show that inference and learning in our model can robustly recover the underlying structure of neural responses even in the experimentally realistic setting where the sampling of the input space is sparse and strongly non-uniform (due to stereotyped animal behavior). We further argue for the utility of spectral mixture kernels as a powerful tool for detecting complex functional relationships beyond simple smoothing/interpolation. We go beyond artificial data that follows the assumptions of the model exactly, and show robust estimation of tuning properties in several experimental recordings. For illustration purposes we focus here on data from the CA1 region of the hippocampus of rats during an open field exploration task. We characterize several 3D tuning functions as a function of the animal's position but also additional internal (the overall activity in the network at the time) or external variables (speed or direction of motion, time within experiment), and use these to derive new insights into the distribution of spatial and non-spatial information at the level of CA1 principal cell activity.

2 Methods

Generative model. Given data in the form of spike count-input pairs D = {y^(i), x^(i)}_{i=1:N}, we model neural activity as an inhomogeneous Poisson process with input-dependent firing rate λ (as in [3], see Fig. 1a):

P(y|x) = ∏_i Poisson(y^(i); λ(x^(i))),  where  Poisson(y; λ) = (1/y!) λ^y e^{−λ}.   (1)

The inputs x are defined on a d-dimensional lattice, and the spike counts are measured within a time window Δt for which the input is roughly constant (25.6 ms, given by the frequency of positional tracking; input noise is ignored here, but could be explicitly incorporated in the generative model [9]). We formalize assumptions about neural tuning as a GP prior f ∼ GP(μ, k_θ), with f = log λ(x), a constant mean μ_i = μ (for the overall scale of neural responses), and a covariance function k(·, ·) with hyperparameters θ.
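A minimal sketch of this generative process on a 1D lattice (our own illustration, not the authors' code; the SE kernel and all constants are arbitrary choices, and the Poisson mean is the rate integrated over the Δt counting window):

```python
import numpy as np

def se_kernel(x, rho=1.0, ell=0.1):
    d2 = (x[:, None] - x[None, :]) ** 2
    return rho ** 2 * np.exp(-d2 / (2 * ell ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                        # discretized input lattice
K = se_kernel(x) + 1e-8 * np.eye(x.size)          # jitter for numerical stability
mu = np.log(5.0)                                  # constant mean: ~5 Hz baseline
f = rng.multivariate_normal(mu * np.ones(x.size), K)   # f ~ GP(mu, k)
dt = 0.0256                                       # 25.6 ms counting window
lam = np.exp(f)                                   # tuning function (Hz)
y = rng.poisson(lam * dt)                         # spike counts per visited bin
```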
This covariance function defines our assumptions about what kind of functional dependencies are expected in the data (smoothness, periodicity, etc.). The exponential linking f to λ provides a mathematically convenient way to enforce positivity of the mean firing while keeping the posterior log-concave in f, justifying the use of Laplace methods for approximating the posterior (see also [3]). For computational tractability we restrict our model to the class of product kernels k(x, x′) = ∏_d k_d(x_d, x′_d), for which the covariance matrix decomposes as a Kronecker product K = K_1 ⊗ K_2 ⊗ ... ⊗ K_d, allowing for efficient computation of determinants, matrix multiplications and eigendecompositions in terms of the individual factors K_i (see Suppl. Info. and [7]). The individual kernels can be tailored to the specific application, allowing for a flexible characterization of individual input dimensions (inputs need not live in the same space, e.g., space-time, or can be periodic, e.g., the phase of theta oscillations).

Figure 1: Model overview and estimator validation. a) Generative model: spike counts arise as Poisson draws with an input-dependent mean, f(x), with an exponential linkage function. b) A GP prior specifies the assumptions concerning the properties of this function (smoothness, periodicity, etc.). c) Place field estimates from artificial data; left to right: the position of the animal modelled as a bounded random walk, ground truth, traditional estimate (without smoothing), posterior mean of the inferred functional. d) Vertical slice through the posterior with shaded area showing the 2 sd confidence region. e) Estimates of place field selectivity in an example CA1 recording during open field exploration in a cross-shaped box; separate estimates for 6 min subsets.

Here we use a classic squared-exponential (SE) kernel for simple interpolation/smoothing tasks,

k_d(x, x′) = ρ_d² exp( −(x − x′)² / (2σ_d²) ),

with parameters θ = {ρ, σ} specifying the output variance and lengthscale [9]. For tasks involving extrapolation or discovering complex patterns we use spectral mixture (SM) kernels, as a powerful and mathematically tractable route towards automated kernel design [10]. SMs are stationary kernels defined as a linear mixture of basis functions in the spectral domain:

k_d(x, x′) = Σ_{q=1}^Q w_q exp( −2π² (x − x′)² v_q ) cos( 2π (x − x′) μ_q ),   (2)

with parameters θ = {w, μ, v} defining the weights, spectral means and variances for each of the mixture components. Assuming Q is large enough, such a spectral mixture can approximate any arbitrary kernel (the same way Gaussian mixtures can be used to approximate an arbitrary density). Moreover, many traditional kernels can be recovered as special cases; for instance, the SE kernel corresponds to a single-component spectral density with zero mean (see also [10]).
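A direct transcription of eq. (2) as a covariance-matrix builder (our own sketch; in practice the hyperparameters would be optimized on a log scale to keep w_q and v_q positive):

```python
import numpy as np

def spectral_mixture_kernel(x1, x2, w, mu, v):
    """Eq. (2): k(x, x') = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q),
    with tau = x - x'. Arrays w, mu, v hold the Q mixture weights, means, variances."""
    tau = x1[:, None] - x2[None, :]                  # pairwise differences
    K = np.zeros_like(tau, dtype=float)
    for wq, muq, vq in zip(w, mu, v):
        K += wq * np.exp(-2 * np.pi**2 * tau**2 * vq) * np.cos(2 * np.pi * tau * muq)
    return K

# a single zero-mean component reduces to an SE kernel (with sigma = 1/(2 pi sqrt(v)))
x = np.linspace(0, 1, 5)
K_sm = spectral_mixture_kernel(x, x, w=[1.0], mu=[0.0], v=[1.0])
sigma = 1.0 / (2 * np.pi)
K_se = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
assert np.allclose(K_sm, K_se)
```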
Inference and learning. We sketch the main steps of the derivation here and provide the details in the Suppl. Info. Our goal is to find the hyperparameters θ (the constant mean μ together with the kernel parameters) that maximize P(θ|y) ∝ P(y|θ) · P(θ). We follow common practice in using a point estimate θ̂ = argmax_θ P(θ|y) for the hyperparameters, and leave a fully probabilistic treatment to future work (e.g., using [11]). We use θ̂ to infer a predictive distribution P(f*|D, x*, θ̂) for a set of test inputs x*. Because of the Poisson observation noise these quantities do not have simple closed form solutions, and some approximations are required. As is customary [9], we use the Laplace method to approximate the log posterior, log P(f|D) = log P(y|f) + log P(f) + const, with its second-order Taylor expansion around the maximum f̂. This results in a multivariate Gaussian approximate posterior, with mean f̂ and covariance (H + K^{−1})^{−1}, where H = −∇∇ log P(y|f)|_{f̂} is the (negative) Hessian of the log likelihood and K is the covariance matrix. Substituting the approximate posterior, we obtain the Laplace approximate marginal likelihood of the form

log P(y|θ) ≈ log P(y|f̂) − 0.5 zᵀ K z − 0.5 log |I + KH|,   (3)

with z = K^{−1}(f̂ − μ).
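For intuition, a dense, small-N sketch of finding the mode f̂ by Newton's method for the Poisson likelihood (our own illustration; the paper's implementation replaces the direct solve with Kronecker-structured linear conjugate gradients):

```python
import numpy as np

def laplace_mode(K, mu, y, dt=1.0, n_iter=100, tol=1e-8):
    """Newton iterations for f_hat = argmax_f [log P(y|f) + log N(f; mu, K)],
    with y | f ~ Poisson(dt * exp(f))."""
    n = y.size
    f = np.full(n, mu, dtype=float)
    for _ in range(n_iter):
        lam = dt * np.exp(f)                 # Poisson mean per bin
        W = np.diag(lam)                     # negative Hessian of the log likelihood
        grad = y - lam                       # gradient of the log likelihood
        # stable Newton step: f_new = mu + K (I + W K)^{-1} (W (f - mu) + grad)
        f_new = mu + K @ np.linalg.solve(np.eye(n) + W @ K, W @ (f - mu) + grad)
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f
```

The (I + WK) parameterization avoids ever forming K^{−1}, which matters once K is only available through fast matrix-vector products.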
Our approach was to check the robustness of the GP-based estimates on subsets of the data constructed by combining every 5th data point (see left panel in Fig. 1 d). This partitioning was designed to ensure that subsets are as statistically similar as possible, sharing slow fluctuations in responses (e.g. due to variations in attentional levels, or changes in behavior). An example cell?s response is shown in Fig. 1 d. Our analysis revealed robust field estimation in most cells, provided they were reasonably active during the session (with mean firing rates >0.1Hz; we discarded the non-responsive cells from subsequent analyses). stereotyped behaviour extrapolation true field estimate histogram estimate GP (kSM) trajectory estimate histogram estimate GP (kSM) Figure 2: Spectral mixture kernels for modelling complex structure. We use artificial data with hexagonal grid structure mimicking MEC responses. Extrapolation task: the animal?s position is restricted to the orange delimited region of the environment. Stereotyped behavior: the simulated animal performs a bounded random walk within an annulus . In both cases, we recover the full field, beyond these borders (GP estimate) using a spectral mixture kernel (kSM). 3 Here we show a 2D example for simplicity; we obtained very similar results with 3D artificial inputs. 4 Spectral mixture kernels for complex functional dependencies Place field estimation is relatively easy in a traditional open field exploration session (30min). The main challenge is getting robust estimates on the time scale of a few minutes (e.g. in order to be able to detect changes due to learning), which we have seen a GP-based estimator can do well. A much more difficult problem is detecting tuning properties in a cheeseboard memory task [12]. What distinguishes this setup is that fact that the animal quickly discovers the location of the wells containing rewards, after which its running patterns become highly stereotypical, close to the shortest path that traverses the reward locations. While it is hard to figure out place field selectivity for locations that the animal never visits, GP-based estimators may have an advantage compared to traditional methods when functional dependencies are structured, as is the case for grid cells in the medial enthorinal cortex (MEC) [13, 14]. When tuning properties are complex and structured we can exploit the expressive power of spectral mixture kernels (SM) to make the most of very limited data. We simulated two versions of this scenario. First, we defined an extrapolation task in which the animal?s behaviour is restricted to a subregion of the environment (marked by orange lines in the 2nd panel of Fig. 2) but we want to infer the spatial selectivity outside these borders. The second scenario attempts to mimic the animal running patterns in a cheeseboard maze (after learning) by restricting the trajectory within a ring (random walk with reflective boundaries in both cases). Using a 5 component spectral mixture kernel we were able to fully reconstruct the hexagonal lattice structure of the true field despite the size of the observed region covering only about 2 times the length scale of the periodic pattern. In contrast, traditional methods (including GP-based inference with standard SE kernels) would fail completely at such extrapolation. 
While such complex patterns of spatial dependence are restricted to MEC (and the estimator is probably best suited for ventral MEC, where grids have a small length scale [15]) it is conceivable that such extrapolation may also be useful in the temporal domain, or more generally for cortical responses in neurons which have so far eluded a simple functional characterization. Spatial and non-spatial modulation of CA1 responses To explore the multidimensional characterization of principal cell responses in CA1 we constructed several 3D estimators where the input combines the position of the animal within a 2D environment with an additional non-spatial variable.4 The first non-spatial variable we considered is the network PNneurons state, quantified as the population spike count, k = yi (naturally a discrete variable i=1 between 0 and some kmax ). This quantity provides a computationally convenient proxy for network oscillations and has been recently used in a series of studies on the statistics of population activity in the retina and cortex [16?19]. Second, we considered the animal?s speed and direction of motion (with a coarse discretization), motivated by past work on non-spatial modulation of place fields on linear tracks [20]. Third, we also considered input variable t measuring time within a session (SE kernel; 3-5 min windows), as a way to examine the stability of spatial tuning over time. For all analyses, positional information was discretized on a 32 ? 32 grid, corresponding to a spacing of 2.5cm, comparable to the binning resolution used in traditional place field estimates. The animal speed (estimated from the positional information with 250ms temporal smoothing) varied between 0 and about 25cm/sec, with a very skewed distribution (not shown). Small to medium variations in the coarseness of the discretization did not qualitatively affect the results although the choice of prior becomes more important on the tail of the speed distribution, where data is scarce. The resulting 3D tuning functions are shown in Fig. 3 for a few example neurons. First, network state modulates the place field selectivity in most CA1 neurons in our recordings. The typical modulation pattern is a monotonic increase in firing with k (Fig. 3, a, top), although we also found k-dependent flickering in a minority of the cells (Fig. 3a, middle), and very rarely k invariance (Fig. 3a, bottom). Rate remapping is also the dominant pattern of speed-dependent modulation in our data set (Fig. 3b). In terms of place field stability over time, about half the cells were stable during a 30min session in a familiar environment, with occasionally higher firing rates at the very beginning of the trial (Fig. 3c, top), while the rest showed fluctuation in representations (Fig. 3c, bottom). Results shown for 5min windows, but results very similar for 3min. 4 We chose to estimate multiple 3D fields rather than jointly conditioning on all variables mainly for simplicity; this strategy has the added bonus of providing sanity checks for the quality of the different estimates. 
Figure 3: Estimating 3D response dependences in CA1 cells. a) Conditional place fields when constraining the network state, defined by the average population activity k. b) Conditional place fields as a function of running speed. c) Conditional place fields as a function of the time within a 30min session, used to assess the stability of the representation. In all cases, the rightmost field corresponds to the traditional place field ignoring the 3rd dimension. d) Sanity check: marginal statistics of the place field selectivity obtained independently from the 3D fields in 5 example cells. e) Population summary of the degree of modulation of spatial selectivity by non-spatial variables; see text for details. f) Within-cell comparison of cell properties during the exploration of a familiar vs. a novel environment. [Panel residue omitted; the panels show GP 3D estimates and traditional place fields across network state (k = 0 to 26), speed (0 to >10cm/s), and time (5min to 30min) for example cells 1-5, plus population scatter plots of spatial information MI(y,x) and temporal instability MI(y,t).]

As a sanity check of our 3D estimators' quality, we independently computed the traditional place field by marginalizing out the 3rd dimension for each of our 3D estimates. We used the empirical distribution as a prior for the non-spatial dimensions, and a uniform prior for space. Reassuringly, we find that the estimates computed after marginalization are very close to the simple 2D place field map in all but 2 cells, which we exclude from the next analysis (examples in Fig. 3d). This provides additional confidence in the robustness of the estimator in the multidimensional case.

Since we have a closed form expression for the map between stimulus dimensions and neural responses, we can estimate the mutual information between neural activity and various input variables as a way to dissect their contribution to coding. First, we visualize the modulation of spatial selectivity by the non-spatial variable as the spatial information conditioned on the 3rd variable, normalized by the marginal spatial information, MI(x,y|z)/MI(x,y), with z generically denoting any of the non-spatial variables (an approximate closed form expression is available given f and Poisson observation noise). We see monotonic increases in spatial information with k and with speed at the level of the population (Fig. 3e), and a weak decrease in spatial information over time (possibly due to higher speeds at the beginning of the session, combined with heightened attention/motivation levels). In terms of the division of spatial vs. non-spatial information across cells, we found that space-selective cells have weaker k-modulation (Spearman corr(MI(y,x), MI(y,k)) = -0.17). This however does not exclude the possibility that theta-coupled cells have additional spatial information at the fine temporal scale. Additionally, there is little correlation between the coding of position and speed (corr(MI(y,x), MI(y,speed)) = -0.03), suggesting that the encoding of the two is relatively orthogonal at the level of the population. Somewhat unexpectedly, we found a cell's temporal stability to be largely independent of its spatial selectivity (corr(MI(y,x), MI(y,t)) = -0.04).
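The mutual information computation referred to above can be sketched as follows, assuming a discretized tuning map f and Poisson observation noise. Here we sum the Poisson mixture out numerically rather than using the approximate closed form, and the occupancy, tuning map, and bin counts are toy values.

```python
import numpy as np
from scipy.stats import poisson

def mutual_information(f, p_x, y_max=50):
    """MI (bits) between spike count y and stimulus bin x, assuming
    y | x ~ Poisson(f[x]) and stimulus occupancy p_x over the bins."""
    f = np.asarray(f, dtype=float).ravel()
    p_x = np.asarray(p_x, dtype=float).ravel()
    y = np.arange(y_max + 1)
    p_y_given_x = poisson.pmf(y[None, :], f[:, None])   # (bins, y_max+1)
    p_y = p_x @ p_y_given_x                             # marginal spike-count dist.
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_y_given_x > 0, p_y_given_x / p_y[None, :], 1.0)
        return np.sum(p_x[:, None] * p_y_given_x * np.log2(ratio))

# Toy example: a sharply tuned map carries more information than a flat one.
bins = 100
occupancy = np.full(bins, 1.0 / bins)
tuned = 0.1 + 5.0 * np.exp(-0.5 * ((np.arange(bins) - 50) / 5.0) ** 2)
flat = np.full(bins, tuned.mean())
print(mutual_information(tuned, occupancy), mutual_information(flat, occupancy))
```

The flat map gives exactly zero bits, providing a quick check that the estimator behaves sensibly before applying it to fitted 3D fields.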
Motivated by recent observations that the overall excitability of cells may be predictive of both their spatial selectivity and of the rigidity of their representation [21], we compared the overall firing rate of the cells with their spatial and non-spatial selectivity. We found relatively strong dependencies, with positive correlations between firing rate and spatial information (cc = 0.21), network influence (cc = 0.43) and the cell's stability (cc = 0.38). When comparing these quantities in the same cells as the animal visits a familiar or a novel environment (93 cells, 20min in each environment) we found additional nontrivial dependences between spatial and non-spatial tuning. Although the overall firing rates of the cells are remarkably preserved across conditions (reflecting general cell excitability, cc = 0.66), the subpopulation of cells with strong spatial selectivity is largely non-overlapping across environments (corr(MI_fam(y,x), MI_nov(y,x)) = 0.07). Moreover, the temporal stability of the representation is also environment specific (corr(MI_fam(y,t), MI_nov(y,t)) = -0.04). Overall, these results paint a complex picture of hippocampal coding, the implications of which need further empirical and theoretical investigation.

Lastly, we studied the dependence of CA1 responses on the animal's direction of motion. Although directional selectivity is well documented on a linear track [20], it remains unclear if a similar behavior occurs in a 2D environment. The main challenge comes from the poor sampling of the position × direction-of-motion input space, something which our methods can handle readily. To construct directionally selective place field estimates in 2D we took inspiration from recent analyses of 2D phase precession [22], conditioning the responses on the main direction of motion within the place field. Specifically, we used our estimate of a traditional 2D place field to define a region of interest (ROI) that covers 90% of the field for each cell (Fig. 4a). We isolated all trajectory segments that traverse this ROI and classified them based on the primary direction of motion along the cardinal orientations. We then computed place field estimates for each direction, with data outside the ROI shared across conditions. To avoid artefacts due to the stereotypical pattern of running along the box borders, we restricted this analysis to cells with fields in the central part of the environment (10 cells). A set of representative examples of the resulting directional fields is shown in Fig. 4d. We found the fields to be largely invariant to the direction of motion in our setup, with small displacements in peak firing possibly due to differences between the perceived vs. the camera-based measurements of position (see also [22]). Overall, these results suggest that, in contrast to linear track behavior, CA1 responses are largely invariant to the direction of motion in an open field exploration task.

Figure 4: Directional selectivity in CA1 cells. a) Cell-specific ROI that covers the classic place field (example corresponding to cell 6). b) Classification of the traversals of the region of interest as a function of the primary direction of motion along the cardinal directions. Out-of-ROI data shared across conditions. c) Traditional place field estimates for example CA1 cells and d) their corresponding direction-specific tuning. [Panel residue omitted; panels show GP and histogram versions of the traditional and directional fields for example cells 1 and 6.]
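The traversal classification described above (cf. Fig. 4b) can be sketched as follows. The ROI test, the trajectory format, and the net-displacement rule for picking the cardinal direction are our own minimal assumptions for illustration.

```python
import numpy as np

def primary_direction(segment):
    """Classify a trajectory segment (T x 2 array of positions) by its net
    displacement along the cardinal directions: E, N, W or S."""
    d = segment[-1] - segment[0]
    if abs(d[0]) >= abs(d[1]):
        return "E" if d[0] >= 0 else "W"
    return "N" if d[1] >= 0 else "S"

def split_traversals(traj, in_roi):
    """Cut a full trajectory (T x 2) into the segments lying inside the ROI.
    in_roi: boolean mask of length T, e.g. from the 90% place-field ROI."""
    segments, start = [], None
    for t, inside in enumerate(in_roi):
        if inside and start is None:
            start = t
        elif not inside and start is not None:
            segments.append(traj[start:t])
            start = None
    if start is not None:
        segments.append(traj[start:])
    return segments

# Toy usage: one eastward and one northward pass through a circular ROI.
traj = np.array([[0, 5], [2, 5], [4, 5], [6, 5],
                 [5, 0], [5, 2], [5, 4], [5, 6]], dtype=float)
roi = np.linalg.norm(traj - np.array([4.0, 4.0]), axis=1) < 3.0
for seg in split_traversals(traj, roi):
    print(primary_direction(seg), len(seg))
```

Each labeled segment then contributes its spikes and occupancy only to the field estimate of its own direction, while out-of-ROI data is pooled.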
6 Discussion

Strong constraints on experiment duration, poor sampling of the stimulus space and additional sources of variability that are not under direct experimental control make the estimation of tuning properties during awake behavior particularly challenging. Here we have shown that recent advances in fast GP inference based on Kronecker methods allow for a robust characterization of multidimensional nonlinear tuning functions, which was inaccessible to traditional methods. Furthermore, our estimators inherit all the advantages of a probabilistic approach, including a principled way of dealing with the non-uniform sampling of the input space and natural uncertainty estimates.

Our methods can robustly estimate place fields with one order of magnitude fewer data points. Furthermore, they allow for more than two-dimensional inputs. While one could imagine it would suffice to estimate separate place fields conditioned on each value of the non-spatial dimension z, the joint estimator has the advantage that it allows for smoothing across z values, borrowing strength from well-sampled regions of the z space to make better estimates for poorly sampled z values.

Several related algorithms have been proposed in the literature [3-5], which vary primarily in how they handle the tradeoff between kernel flexibility and the computational time required for inference and learning (see Table 1). At one extreme, [3] strongly restricts the nature of the covariance matrix to nearest-neighbour interactions on a 2D grid (resulting in a band-diagonal inverse covariance matrix), which allows them to exploit sparse matrix techniques to estimate the posterior mean in linear time. At the other extreme, [4, 5] allow for an arbitrary covariance structure, but are computationally prohibitive, O(N^3). Our proposal sits between these extremes in that it achieves close-to-linear computational and memory costs without significantly restricting the flexibility of the covariance structure (for a better intuition of the effect of different covariances, see also Fig. S1). In particular, it can be combined with powerful spectral mixture kernels to extract complex functional dependencies that go beyond simple smoothing. This opens the door to a variety of previously inaccessible tasks such as extrapolation. Moreover, it allows for an agnostic exploration of the neural responses' functional space, which could be used to discover novel tuning properties in cells for which coding is poorly understood.

When applied to CA1 data, our multidimensional estimators revealed a complex picture of the modulation of neural responses by spatial and non-spatial inputs in the hippocampus. First we confirmed linear track results concerning the speed and oscillatory modulation of spatial tuning. Furthermore, we revealed additional insights into the interaction between the representation of space and these non-spatial dimensions, which go beyond the capabilities of traditional methods. Most notably we found 1) a mostly orthogonal representation of speed and position, 2) that place field stability cannot be easily explained in terms of cell excitability or spatial selectivity, although 3) it is environment specific. Lastly, while we showed 2D place field maps to be direction-invariant in an open field exploration task, more interesting directional dependencies may be revealed in other 2D tasks, where the direction of motion is behaviorally more relevant (e.g. cheeseboard). Importantly, there is nothing hippocampus-specific in the methodology. Hence fast GP inference using Kronecker methods, combined with expressive kernels, may provide a general-purpose tool for characterizing neural responses across brain regions.
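The close-to-linear costs discussed above (cf. Table 1 below) come from combining the Kronecker structure with per-dimension eigendecompositions. A minimal 2D sketch for the Gaussian-noise case follows; the Poisson likelihood used in this work would wrap such solves inside Laplace/Newton iterations, and the kernels and grid sizes here are illustrative.

```python
import numpy as np

def se_kernel(x, ell):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def kron_solve(Ks, y, noise_var):
    """Solve (K1 kron K2 + noise_var * I) alpha = y without the full matrix."""
    eigs, vecs = zip(*(np.linalg.eigh(K) for K in Ks))
    n1, n2 = (K.shape[0] for K in Ks)
    Y = y.reshape(n1, n2)
    Z = vecs[0].T @ Y @ vecs[1]                          # rotate to eigenbasis
    D = eigs[0][:, None] * eigs[1][None, :] + noise_var  # joint eigenvalues
    return (vecs[0] @ (Z / D) @ vecs[1].T).ravel()       # scale, rotate back

x1, x2 = np.linspace(0, 1, 40), np.linspace(0, 1, 50)
Ks = [se_kernel(x1, 0.2), se_kernel(x2, 0.1)]
y = np.random.default_rng(2).normal(size=40 * 50)
alpha = kron_solve(Ks, y, noise_var=0.1)

# Check against the dense solve on this small example.
K = np.kron(Ks[0], Ks[1])
assert np.allclose(alpha, np.linalg.solve(K + 0.1 * np.eye(2000), y))
```

Only the small per-dimension factors are ever decomposed, which is what yields the sub-quadratic cost in Table 1.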
Table 1: Summary comparison of different estimators.

Algorithm | Kernel function | Computing cost | Memory cost | Data size
Rad et al. 2010 [3] | sparse banded inverse covariance | O(N) | O(N) | 10^5
Park et al. 2014 [4] | SE, any in principle | O(N^3) | O(N^2) | < 10^3
Savin & Tkacik | SE and SM, works for any tensor-product | O(d N^{(d+1)/d}) | O(d N^{2/d}) | 10^5

Acknowledgments

We thank Jozsef Csicsvari for kindly sharing the CA1 data. This work was supported by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. 291734.

References

[1] Pillow, J.W. Likelihood-based approaches to modeling the neural code. In Bayesian brain: probabilistic approaches to neural coding, 1-21 (2006).
[2] Pillow, J.W. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995-999 (2008).
[3] Rad, K.R. & Paninski, L. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods. Network 21, 142-168 (2010).
[4] Park, M., Weller, J.P., Horwitz, G.D. & Pillow, J.W. Bayesian active learning of neural firing rate maps with transformed Gaussian process priors. Neural Computation 26, 1519-1541 (2014).
[5] Macke, J.H., Gerwinn, S., White, L.E., Kaschube, M. & Bethge, M. Gaussian process methods for estimating cortical maps. NeuroImage 56, 570-581 (2011).
[6] Frank, L.M., Eden, U.T., Solo, V., Wilson, M.A. & Brown, E.N. Contrasting patterns of receptive field plasticity in the hippocampus and the entorhinal cortex: an adaptive filtering approach. Journal of Neuroscience 22, 3817-3830 (2002).
[7] Saatci, Y. Scalable inference for structured Gaussian process models. PhD thesis, Cambridge University, UK (2012).
[8] Flaxman, A., Wilson, A., Neill, D., Nickisch, H. & Smola, A. Fast Kronecker inference in Gaussian processes with non-Gaussian likelihoods. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 607-616 (2015).
[9] Rasmussen, C.E. & Williams, C.K.I. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning) (The MIT Press, 2005).
[10] Wilson, A. & Adams, R. Gaussian process kernels for pattern discovery and extrapolation. arXiv.org (2013).
[11] Hensman, J., Matthews, A.G. & Filippone, M. MCMC for variationally sparse Gaussian processes. In Advances in Neural Information Processing Systems (MIT Press, 2015).
[12] Dupret, D., O'Neill, J., Pleydell-Bouverie, B. & Csicsvari, J. The reorganization and reactivation of hippocampal maps predict spatial memory performance. Nature Neuroscience 13, 995-1002 (2010).
[13] Moser, E.I., Kropff, E. & Moser, M.B. Place cells, grid cells, and the brain's spatial representation system. Annual Review of Neuroscience 31, 69-89 (2008).
[14] Moser, E.I. et al. Grid cells and cortical representation. Nature Reviews Neuroscience 15, 466-481 (2014).
[15] Brun, V.H. et al. Progressive increase in grid scale from dorsal to ventral medial entorhinal cortex. Hippocampus 18, 1200-1212 (2008).
[16] Tkacik, G. et al. The simplest maximum entropy model for collective behavior in a neural network. Journal of Statistical Mechanics: Theory and Experiment 2013, P03011 (2013).
[17] Tkacik, G. et al. Searching for collective behavior in a large network of sensory neurons. PLoS Computational Biology 10, e1003408 (2014).
[18] Fiser, J., Lengyel, M., Savin, C., Orban, G. & Berkes, P. How (not) to assess the importance of correlations for the matching of spontaneous and evoked activity. arXiv (2013).
[19] Okun, M. et al. Diverse coupling of neurons to populations in sensory cortex. Nature (2015).
[20] McNaughton, B.L., Barnes, C.A. & O'Keefe, J. The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Experimental Brain Research 52, 41-49 (1983).
[21] Grosmark, A.D. & Buzsaki, G. Diversity in neural firing dynamics supports both rigid and learned hippocampal sequences. Science, 1-5 (2016).
[22] Huxter, J.R., Senior, T.J., Allen, K. & Csicsvari, J. Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus. Nature Neuroscience 11, 587-594 (2008).
A Simple Practical Accelerated Method for Finite Sums

Aaron Defazio
Ambiata, Sydney Australia

Abstract

We describe a novel optimization method for finite sums (such as empirical risk minimization problems) building on the recently introduced SAGA method. Our method achieves an accelerated convergence rate on strongly convex smooth problems. Our method has only one parameter (a step size), and is radically simpler than other accelerated methods for finite sums. Additionally it can be applied when the terms are non-smooth, yielding a method applicable in many areas where operator splitting methods would traditionally be applied.

Introduction

A large body of recent developments in optimization have focused on minimization of convex finite sums of the form:
$$f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x),$$
a very general class of problems including the empirical risk minimization (ERM) framework as a special case. Any function h can be written in this form by setting $f_1(x) = h(x)$ and $f_i = 0$ for $i \neq 1$, however when each $f_i$ is sufficiently regular in a way that can be made precise, it is possible to optimize such sums more efficiently than by treating them as black box functions.

In most cases recently developed methods such as SAG [Schmidt et al., 2013] can find an ε-minimum faster than either stochastic gradient descent or accelerated black-box approaches, both in theory and in practice. We call this class of methods fast incremental gradient methods (FIG).

FIG methods are randomized methods similar to SGD, however unlike SGD they are able to achieve linear convergence rates under Lipschitz-smooth and strong convexity conditions [Mairal, 2014, Defazio et al., 2014b, Johnson and Zhang, 2013, Konecny and Richtarik, 2013]. The linear rate in the first wave of FIG methods directly depended on the condition number L/μ of the problem, whereas recently several methods have been developed that depend on the square-root of the condition number [Lan and Zhou, 2015, Lin et al., 2015, Shalev-Shwartz and Zhang, 2013c, Nitanda, 2014], at least when n is not too large. Analogous to the black-box case, these methods are known as accelerated methods.

In this work we develop another accelerated method, which is conceptually simpler and requires less tuning than existing accelerated methods. The method we give is a primal approach, however it makes use of a proximal operator oracle for each $f_i$ instead of a gradient oracle, unlike other primal approaches. The proximal operator is also used by dual methods such as some variants of SDCA [Shalev-Shwartz and Zhang, 2013a].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Algorithm 1
Pick some starting point $x^0$ and step size $\gamma$. Initialize each $g_i^0 = f_i'(x^0)$, where $f_i'(x^0)$ is any gradient/subgradient at $x^0$. Then at step k + 1:
1. Pick index j from 1 to n uniformly at random.
2. Update x:
$$z_j^k = x^k + \gamma\left[g_j^k - \frac{1}{n}\sum_{i=1}^{n} g_i^k\right], \qquad x^{k+1} = \mathrm{prox}^\gamma_j\left(z_j^k\right).$$
3. Update the gradient table: Set $g_j^{k+1} = \frac{1}{\gamma}\left(z_j^k - x^{k+1}\right)$, and leave the rest of the entries unchanged ($g_i^{k+1} = g_i^k$ for $i \neq j$).

1 Algorithm

Our algorithm's main step makes use of the proximal operator for a randomly chosen $f_i$. For convenience, we define:
$$\mathrm{prox}^\gamma_i(x) = \mathrm{argmin}_y\left\{\gamma f_i(y) + \frac{1}{2}\left\|x - y\right\|^2\right\}.$$
This proximal operator can be computed efficiently or in closed form in many cases, see Section 4 for details. Like SAGA, we also maintain a table of gradients $g_i$, one for each function $f_i$. We denote the state of $g_i$ at the end of step k by $g_i^k$.
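A minimal NumPy sketch of Algorithm 1, distinct from the author's Cython implementation. The prox oracle is passed in per component; for illustration we use squared-loss terms f_i(x) = ½(aᵢᵀx − bᵢ)², whose prox has a closed form via the Sherman-Morrison identity (a choice of ours for this sketch). The data, step size and iteration count are arbitrary.

```python
import numpy as np

def point_saga(prox, n, dim, gamma, iters, rng):
    """Point-SAGA (Algorithm 1). prox(j, z, gamma) returns prox of gamma*f_j at z."""
    x = np.zeros(dim)
    g = np.zeros((n, dim))        # gradient table g_i (here initialized to 0)
    g_mean = g.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        z = x + gamma * (g[j] - g_mean)
        x_new = prox(j, z, gamma)
        g_new = (z - x_new) / gamma       # (sub)gradient of f_j at x_new
        g_mean += (g_new - g[j]) / n      # keep the cached mean up to date
        g[j], x = g_new, x_new
    return x

# Illustrative problem: f_i(x) = 0.5 * (a_i @ x - b_i)^2.
rng = np.random.default_rng(0)
n, dim = 100, 10
A, b = rng.normal(size=(n, dim)), rng.normal(size=n)

def prox(j, z, gamma):
    # prox of gamma * 0.5 * (a @ x - b)^2 at z, via Sherman-Morrison.
    a = A[j]
    return z - gamma * a * (a @ z - b[j]) / (1.0 + gamma * (a @ a))

x = point_saga(prox, n, dim, gamma=0.05, iters=20000, rng=rng)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))   # should be small
```

Note that only the running mean of the gradient table is touched at each step, matching the caching strategy described next.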
The iterate (our guess at the solution) at the end of step k is denoted $x^k$. The starting iterate $x^0$ may be chosen arbitrarily. The full algorithm is given as Algorithm 1. The sum of gradients $\frac{1}{n}\sum_{i=1}^n g_i^k$ can be cached and updated efficiently at each step, and in most cases instead of storing a full vector for each $g_i$, only a single real value needs to be stored. This is the case for linear regression or binary classification with logistic loss or hinge loss, in precisely the same way as for standard SAGA. A discussion of further implementation details is given in Section 4.

With step size
$$\gamma = \frac{\sqrt{(n-1)^2 + 4n\frac{L}{\mu}}}{2Ln} - \frac{1 - \frac{1}{n}}{2L},$$
the expected convergence rate in terms of squared distance to the solution is given by:
$$E\left\|x^k - x^*\right\|^2 \le \left(1 - \frac{\mu\gamma}{1+\mu\gamma}\right)^k \frac{\mu + L}{\mu}\left\|x^0 - x^*\right\|^2,$$
when each $f_i : \mathbb{R}^d \to \mathbb{R}$ is L-smooth and μ-strongly convex. See Nesterov [1998] for definitions of these conditions. Using big-O notation, the number of steps required to reduce the distance to the solution by a factor ε is:
$$k = O\left(\left(\sqrt{\frac{nL}{\mu}} + n\right)\log\frac{1}{\epsilon}\right),$$
as ε → 0. This rate matches the lower bound known for this problem [Lan and Zhou, 2015] under the gradient oracle. We conjecture that this rate is optimal under the proximal operator oracle as well. Unlike other accelerated approaches though, we have only a single tunable parameter (the step size γ), and the algorithm doesn't need knowledge of L or μ except for their appearance in the step size.

Compared to the O((L/μ + n) log(1/ε)) rate for SAGA and other non-accelerated FIG methods, accelerated FIG methods are significantly faster when n is small compared to L/μ, however for n ≥ L/μ the performance is essentially the same. All known FIG methods hit a kind of wall at n ≈ L/μ, where they decrease the error at each step by no more than 1 − 1/n. Indeed, when n ≥ L/μ the problem is so well conditioned as to be easy for any FIG method to solve efficiently. This is sometimes called the big data setting [Defazio et al., 2014b].

Our convergence rate can also be compared to that of optimal first-order black box methods, which have rates of the form $k = O\left(\sqrt{L/\mu}\,\log(1/\epsilon)\right)$ per epoch equivalent. We are able to achieve a $\sqrt{n}$ speedup on a per-epoch basis, for n not too large. Of course, all of the mentioned rates are significantly better than the O((L/μ) log(1/ε)) rate of gradient descent.

For non-smooth but strongly convex problems, we prove a 1/ε-type rate under a standard iterate averaging scheme. This rate does not require the use of decreasing step sizes, so our algorithm requires less tuning than other primal approaches on non-smooth problems.

2 Relation to other approaches

Our method is most closely related to the SAGA method. To make the relation clear, we may write our method's main step as:
$$x^{k+1} = x^k - \gamma\left[f_j'(x^{k+1}) - g_j^k + \frac{1}{n}\sum_{i=1}^n g_i^k\right],$$
whereas SAGA has a step of the form:
$$x^{k+1} = x^k - \gamma\left[f_j'(x^k) - g_j^k + \frac{1}{n}\sum_{i=1}^n g_i^k\right].$$
The difference is the point at which the gradient of $f_j$ is evaluated. The proximal operator has the effect of evaluating the gradient at $x^{k+1}$ instead of $x^k$. While a small difference on the surface, this change has profound effects. It allows the method to be applied directly to non-smooth problems using fixed step sizes, a property not shared by SAGA or other primal FIG methods. Additionally, it allows for much larger step sizes to be used, which is why the method is able to achieve an accelerated rate.
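The quantities above are straightforward to compute; a small helper (our own sketch) that returns the theoretical step size γ and per-step contraction factor given L, μ and n:

```python
import math

def point_saga_constants(L, mu, n):
    """Step size gamma and contraction factor (1 - kappa) from the rate above."""
    gamma = math.sqrt((n - 1) ** 2 + 4 * n * L / mu) / (2 * L * n) \
            - (1 - 1.0 / n) / (2 * L)
    kappa = mu * gamma / (1 + mu * gamma)
    return gamma, 1 - kappa

# Example: a moderately ill-conditioned problem (values are arbitrary).
gamma, rho = point_saga_constants(L=1.0, mu=1e-3, n=1000)
steps = math.log(1e-6) / math.log(rho)   # steps to shrink the squared distance
print(gamma, rho, steps)                 # by 1e-6, up to the (mu+L)/mu constant
```

Plugging in a few values makes the "wall" at n ≈ L/μ easy to see: for n much larger than L/μ the contraction factor approaches 1 − 1/n regardless of acceleration.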
It is also illustrative to look at how the methods behave at n = 1. SAGA degenerates into regular gradient descent, whereas our method becomes the proximal-point method [Rockafellar, 1976]: $x^{k+1} = \mathrm{prox}^\gamma_f(x^k)$. The proximal point method has quite remarkable properties. For strongly convex problems, it converges for any γ > 0 at a linear rate. The downside is the inherent difficulty of evaluating the proximal operator. For the n = 2 case, if each term is an indicator function for a convex set, our algorithm matches Dykstra's projection algorithm if we take γ = 2 and use cyclic instead of random steps.

Accelerated incremental gradient methods Several acceleration schemes have been recently developed as extensions of non-accelerated FIG methods. The earliest approach developed was the ASDCA algorithm [Shalev-Shwartz and Zhang, 2013b,c]. The general approach of applying the proximal-point method as the outer loop of a double-loop scheme has been dubbed the Catalyst algorithm by Lin et al. [2015]. It can be applied to accelerate any FIG method. Recently a very interesting primal-dual approach has been proposed by Lan and Zhou [2015]. All of the prior accelerated methods are significantly more complex than the approach we propose, and have more complex proofs.

3 Theory

3.1 Proximal operator bounds

In this section we rehash some simple bounds from proximal operator theory that we will use in this work. Define the short-hand $p^\gamma_f(x) = \mathrm{prox}^\gamma_f(x)$, and let $g^\gamma_f(x) = \frac{1}{\gamma}\left(x - p^\gamma_f(x)\right)$, so that $p^\gamma_f(x) = x - \gamma g^\gamma_f(x)$. Note that $g^\gamma_f(x)$ is a subgradient of f at the point $p^\gamma_f(x)$. This relation is known as the optimality condition of the proximal operator. Note that proofs for the following two propositions are in the supplementary material.
In addition to the notation used in the description of the algorithm, we also fix a set of subgradients gj? , one for each of fj at x? , chosen Pn such that j=1 gj? = 0. We also define vj = x? + ?gj? . Note that at the solution x? , we want to apply a proximal step for component j of the form:  x? = prox?j x? + ?gj? = prox?j (vj ) . 4 Lemma 4. (Technical lemma needed by main proof) Under Algorithm 1, taking the expectation over the random choice of j, conditioning on xk and each gik , allows us to bound the following inner product at step k: * " # " # + n n  1X k 1X k k ? k ? k ? E ? gj ? g ? ?gj , x ? x + ? gj ? g ? ?gj n i=1 i n i=1 i n ? ?2 1 X g k ? gi? 2 . n i=1 i The proof is in the supplementary material. 3.3 Main result Theorem 5. (single step Lyapunov descent) We define the Lyapunov function T k of our algorithm (Point-SAGA) at step k as: n c X gik ? gi? 2 + xk ? x? 2 , Tk = n i=1 q (n?1)2 +4n L ? 1? 1 ? 2Ln , the expectation of T k+1 , over the for c = 1/?L. Then using step size ? = 2Ln k k random choice of j, conditioning on x and each gi , is:   ?? , E T k+1 ? (1 ? ?) T k for ? = 1 + ?? when each fi : Rd ? R is L-smooth and ?-strongly convex and 0 < ? < L. This is the same Lyapunov function as used by Hofmann et al. [2015]. Proof. Term 1 of T k+1 is straight-forward to simplify:   n n c X 1 c X k+1 ? 2 gik ? gi? 2 + c E g k+1 ? gj? 2 . E gi ? gi = 1 ? j n i=1 n n i=1 n For term 2 of T k+1 we start by applying cocoerciveness (Theorem 1): 2 (1 + ??)E xk+1 ? x? 2 (1 + ??)E prox?j (zjk ) ? prox?j (vj ) ? E prox?j (zjk ) ? prox?j (vj ), zjk ? vj = E xk+1 ? x? , zjk ? vj . = Now we add and subtract xk : = E xk+1 ? xk + xk ? x? , zjk ? vj = E xk ? x? , zjk ? vj + E xk+1 ? xk , zjk ? vj 2 = xk ? x? + E xk+1 ? xk , zjk ? vj , where we have pulled out the quadratic term by using E[zjk ? vj ] = xk ? x? (we can take the expectation since the left hand side of the inner product doesn?t depend on j). We now expand E xk+1 ? xk , zjk ? vj further: E xk+1 ? xk , zjk ? vj = E xk+1 ? ?gj? + ?gj? ? xk , zjk ? vj * " # n 1X k k+1 k k g ? ?gj? + ?gj? ? xk , = E x ? ?gj + ? gj ? n i=1 i " # + n  1X k k ? k ? x ? x + ? gj ? g ? ?gj . (3) n i=1 i 5 We further split the left side of the inner product to give two separate inner products: " + # # * " n n  1X k 1X k ? k ? k ? k = E ? gj ? g ? ?gj , x ? x + ? gj ? g ? ?gj n i=1 i n i=1 i * " # + n  1X k k+1 ? k ? k ? + E ?gj ? ?gj , x ? x + ? gj ? g ? ?gj . n i=1 i (4) The first inner product in Equation 4 is the quantity we bounded in Lemma 4 by 2 Pn ? 2 n1 i=1 gik ? gi? . The second inner product in Equation 4, can be simplified using Theorem 3 (note the right side of the inner product is equal to zjk ? vj ):   2 k+1 1 ? k 2 E gjk+1 ? gj? . ??E gj ? gj , zj ? vj ? ?? 1 + L? k+1 2 Combing these gives the following bound on (1 + ??)E x ? x? :   n X k k+1 2 k 1 21 ? 2 ? 2 ? 2 2 +? ? x ?x ?x (1+??)E x E gjk+1 ? gj? . gi ? gi ?? 1 + n i=1 L? ?? 1 Define ? = 1+?? . Now we multiply the above inequality through by ? = 1 ? ?, where ? = 1+?? and combine with the rest of the Lyapunov function, giving: n    c 1 X gik ? gi? 2 E T k+1 ? T k + ?? 2 ? n n i c 2 2 ??  ? ?? 2 ? + E gjk+1 ? gj? ? ?E xk ? x? . n L We want an ? convergence rate, so we pull out the required terms: n    c 1 X gik ? gi? 2 E T k+1 ? ?T k + ?? 2 + ?c ? n n i c 2 ??  + ? ?? 2 ? E gjk+1 ? gj? . n L q (n?1)2 +4n L ? 1? 1 Now to complete the proof we note that c = 1/?L and ? = ? 2Ln ensure that both 2Ln k+1 k terms inside the round brackets are non-positive, giving ET ? ?T . 
Corollary 6. (Smooth case) Chaining Theorem 5 gives a convergence rate for Point-SAGA at step k under the constants given in Theorem 5 of:
$$E\left\|x^k - x^*\right\|^2 \le (1-\kappa)^k\,\frac{\mu+L}{\mu}\left\|x^0 - x^*\right\|^2,$$
if each $f_i : \mathbb{R}^d \to \mathbb{R}$ is L-smooth and μ-strongly convex.

Theorem 7. (Non-smooth case) Suppose each $f_i : \mathbb{R}^d \to \mathbb{R}$ is μ-strongly convex, $\|g_i^0 - g_i^*\| \le B$ and $\|x^0 - x^*\| \le R$. Then after k iterations of Point-SAGA with step size $\gamma = R/B\sqrt{n}$:
$$E\left\|\bar{x}^k - x^*\right\|^2 \le \frac{2\sqrt{n}\left(1 + \mu\left(R/B\sqrt{n}\right)\right)}{\mu k}\,RB,$$
where $\bar{x}^k = \frac{1}{k}E\sum_{t=1}^k x^t$. The proof of this theorem is included in the supplementary material.

4 Implementation

Care must be taken for efficient implementation, particularly in the sparse gradient case. We discuss the key points below. A fast Cython implementation is available on the author's website incorporating these techniques.

Proximal operators For the most common binary classification and regression methods, implementing the proximal operator is straight-forward. We include details of the computation of the proximal operators for the hinge, square and logistic losses in the supplementary material. The logistic loss does not have a closed form proximal operator, however it may be computed very efficiently in practice using Newton's method on a 1D subproblem. For problems of a non-trivial dimensionality the cost of the dot products in the main step is much greater than the cost of the proximal operator evaluation. We also detail how to handle a quadratic regularizer within each term's prox operator, which has a closed form in terms of the unregularized prox operator.

Initialization Instead of setting $g_i^0 = f_i'(x^0)$ before commencing the algorithm, we recommend using $g_i^0 = 0$ instead. This avoids the cost of an initial pass over the data. In practical effect this is similar to the SDCA initialization of each dual variable to 0.
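As an illustration of the 1D Newton approach just mentioned, here is a sketch of the logistic-loss prox for a linear model. The reduction to a scalar fixed-point equation is our own derivation for this sketch, not necessarily the formulation in the supplementary material; labels are assumed to be in {−1, +1}.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def prox_logistic(z, a, y, gamma, iters=20):
    """prox of gamma * log(1 + exp(-y * a @ w)) at z.

    The minimizer has the form w = z + gamma * sigmoid(-u) * y * a, where the
    scalar u = y * (a @ w) solves u = y*(a @ z) + gamma*||a||^2 * sigmoid(-u);
    we find u with a few Newton steps on that 1D equation.
    """
    u0, s = y * (a @ z), a @ a
    u = u0
    for _ in range(iters):
        g = u - u0 - gamma * s * sigmoid(-u)
        dg = 1.0 + gamma * s * sigmoid(-u) * (1.0 - sigmoid(-u))
        u -= g / dg
    return z + gamma * sigmoid(-u) * y * a

# Sanity check: the first-order optimality condition should hold at the output.
rng = np.random.default_rng(4)
z, a, y, gamma = rng.normal(size=4), rng.normal(size=4), -1.0, 0.5
w = prox_logistic(z, a, y, gamma)
grad = -gamma * y * a * sigmoid(-y * (a @ w)) + (w - z)
print(np.linalg.norm(grad))   # should be ~0
```

Because the scalar equation is strictly monotone with derivative at least 1, a handful of Newton steps suffices, and the per-step cost is dominated by the two dot products, matching the discussion above.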
5 Experiments

We tested our algorithm, which we call Point-SAGA, against SAGA [Defazio et al., 2014a], SDCA [Shalev-Shwartz and Zhang, 2013a], Pegasos/SGD [Shalev-Shwartz et al., 2011] and the catalyst acceleration scheme [Lin et al., 2015]. SDCA was chosen as the inner algorithm for the catalyst scheme as it doesn't require a step-size, making it the most practical of the variants. Catalyst applied to SDCA is essentially the same algorithm as proposed in Shalev-Shwartz and Zhang [2013c]. A single inner epoch was used for each SDCA invocation. Accelerated MISO as well as the primal-dual FIG method [Lan and Zhou, 2015] were excluded as we wanted to test on sparse problems and they are not designed to take advantage of sparsity.

The step-size parameter for each method (κ for catalyst-SDCA) was chosen using a grid search of powers of 2. The step size that gives the lowest error at the final epoch is used for each method. We selected a set of commonly used datasets from the LIBSVM repository [Chang and Lin, 2011]. The pre-scaled versions were used when available. Logistic regression with L2 regularization was applied to each problem. The L2 regularization constant for each problem was set by hand to ensure f was not in the big data regime n ≥ L/μ; as noted above, all the methods perform essentially the same when n ≥ L/μ. The constant used is noted beneath each plot. Open source code to exactly replicate the experimental results is available at https://github.com/adefazio/point-saga.

Algorithm scaling with respect to n The key property that distinguishes accelerated FIG methods from their non-accelerated counterparts is their performance scaling with respect to the dataset size. For large datasets on well-conditioned problems we expect from the theory to see little difference between the methods. To this end, we ran experiments including versions of the datasets subsampled randomly without replacement in 10% and 5% increments, in order to show the scaling with n empirically. The same amount of regularization was used for each subset.

Figure 1 shows the function value sub-optimality for each dataset-subset combination. We see that in general accelerated methods dominate the performance of their non-accelerated counterparts. Both SDCA and SAGA are comparatively much slower on some datasets than others. For example, SDCA is very slow on the 5 and 10% COVTYPE datasets, whereas both SAGA and SDCA are much slower than the accelerated methods on the AUSTRALIAN dataset. These differences reflect known properties of the two methods. SAGA is able to adapt to inherent strong convexity while SDCA can be faster on very well-conditioned problems. There is no clear winner between the two accelerated methods, each gives excellent results on each problem. The Pegasos (stochastic gradient descent) algorithm with its slower than linear rate is a clear loser on each problem, appearing as an almost horizontal line on the log scale of these plots.

Non-smooth problems We also tested the RCV1 dataset on the hinge loss. In general we did not expect an accelerated rate for this problem, and indeed we observe that Point-SAGA is roughly as fast as SDCA across the different dataset sizes.

Figure 1: Experimental results. Function suboptimality versus epoch for each dataset-subset combination: (a) COVTYPE, μ = 2 × 10^-6: 5%, 10%, 100% subsets; (b) AUSTRALIAN, μ = 10^-4: 5%, 10%, 100% subsets; (c) MUSHROOMS, μ = 10^-4: 5%, 10%, 100% subsets; (d) RCV1 with hinge loss, μ = 5 × 10^-5: 5%, 10%, 100% subsets. Methods compared: Point-SAGA, SDCA, Pegasos, SAGA, Catalyst-SDCA. [Axis-tick residue omitted.]

References

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.

Aaron Defazio, Francis Bach, and Simon Lacoste-Julien.
SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014a.

Aaron Defazio, Tiberio Caetano, and Justin Domke. Finito: A faster, permutable incremental gradient method for big data problems. Proceedings of the 31st International Conference on Machine Learning, 2014b.

Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, and Brian McWilliams. Variance reduced stochastic gradient descent with neighbors. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2296-2304. Curran Associates, Inc., 2015.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. NIPS, 2013.

Jakub Konecny and Peter Richtarik. Semi-stochastic gradient descent methods. ArXiv e-prints, December 2013.

G. Lan and Y. Zhou. An optimal randomized incremental gradient method. ArXiv e-prints, July 2015.

Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3366-3374. Curran Associates, Inc., 2015.

Julien Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. Technical report, INRIA Grenoble Rhone-Alpes / LJK Laboratoire Jean Kuntzmann, 2014.

Yu. Nesterov. Introductory Lectures on Convex Programming. Springer, 1998.

Atsushi Nitanda. Stochastic proximal gradient descent with acceleration techniques. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1574-1582. Curran Associates, Inc., 2014.

R. Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877-898, 1976.

Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Technical report, INRIA, 2013.

Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. JMLR, 2013a.

Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 378-385. Curran Associates, Inc., 2013b.

Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Technical report, The Hebrew University, Jerusalem and Rutgers University, NJ, USA, 2013c.

Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.
Active Learning with Oracle Epiphany

Tzu-Kuo Huang*
Uber Advanced Technologies Group
Pittsburgh, PA 15201

Ara Vartanian
University of Wisconsin-Madison
Madison, WI 53706

Saleema Amershi
Microsoft Research
Redmond, WA 98052

Lihong Li
Microsoft Research
Redmond, WA 98052

Xiaojin Zhu
University of Wisconsin-Madison
Madison, WI 53706

Abstract

We present a theoretical analysis of active learning with more realistic interactions with human oracles. Previous empirical studies have shown oracles abstaining on difficult queries until accumulating enough information to make label decisions. We formalize this phenomenon with an "oracle epiphany model" and analyze active learning query complexity under such oracles for both the realizable and the agnostic cases. Our analysis shows that active learning is possible with oracle epiphany, but incurs an additional cost depending on when the epiphany happens. Our results suggest new, principled active learning approaches with realistic oracles.

1 Introduction

There is currently a wide gap between theory and practice of active learning with oracle interaction. Theoretical active learning assumes an omniscient oracle. Given a query x, the oracle simply answers its label y by drawing from the conditional distribution p(y | x). This oracle model is motivated largely by its convenience for analysis. However, there is mounting empirical evidence from psychology and human-computer interaction research that humans behave in far more complex ways. The oracle may abstain on some queries [Donmez and Carbonell, 2008] (note this is distinct from classifier abstention [Zhang and Chaudhuri, 2014, El-Yaniv and Wiener, 2010]), or their answers can be influenced by the identity and order of previous queries [Newell and Ruths, 2016, Sarkar et al., 2016, Kulesza et al., 2014] and by incentives [Shah and Zhou, 2015]. Theoretical active learning has yet to account for such richness in human behaviors, which are critical to designing principled algorithms to effectively learn from human annotators.

This paper takes a step toward bridging this gap. Specifically, we formalize and analyze the phenomenon of "oracle epiphany." Consider active learning from a human oracle to build a webpage classifier on basketball sport vs. others. It is well-known in practice that no matter how simple the task looks, the oracle can encounter difficult queries. The oracle may easily answer webpage queries that are obviously about basketball or obviously not about the sport, until she encounters a webpage on basketball jerseys. Here, the oracle cannot immediately decide how to label ("Does this jersey webpage qualify as a webpage about basketball?"). One solution is to allow the oracle to abstain by answering with a special I-don't-know label [Donmez and Carbonell, 2008]. More interestingly, Kulesza et al. [2014] demonstrated that with proper user interface support, the oracle may temporarily abstain on similar queries but then have an "epiphany": she may suddenly decide how to label all basketball apparel-related webpages. Empirical evidence in [Kulesza et al., 2014] suggests that epiphany may be induced by the accumulative effect of seeing multiple similar queries. If a future basketball-jersey webpage query arrives, the oracle will no longer abstain but will answer with the label she determined during epiphany.

* Part of this work was done while the author was with Microsoft Research.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this way, the oracle improves herself on the subset of the input space that corresponds to basketball apparel-related webpages. Empirical evidence also suggests that oracle abstention, and subsequent epiphany, may happen separately on different subsets of the input space. When building a cooking vs. others text classifier, Kulesza et al. [2014] observed oracle epiphany on a subset of cooking supplies documents, and separately on the subset of culinary service documents; on gardening vs. others, they observed separate oracle epiphany on plant information and on local garden documents; on travel vs. others, they observed separate oracle epiphany on photography, rental cars, and medical tourism documents.

Our contributions are three-fold: (i) We formalize oracle epiphany in Section 2; (ii) We analyze EPICAL, a variant of the CAL algorithm [Cohn et al., 1994], for realizable active learning with oracle epiphany in Section 3; (iii) We analyze Oracular-EPICAL, a variant of the Oracular-CAL algorithm [Hsu, 2010, Huang et al., 2015], for agnostic active learning in Section 4. Our query complexity bounds show that active learning is possible with oracle epiphany, although we may incur a penalty waiting for epiphany to happen. This is verified with simulations in Section 5, which highlight the nuanced dependency between query complexity and the epiphany parameters.

2 Problem Setting

As in standard active learning, we are given a hypothesis class H ⊆ Y^X for some input space X and a binary label set Y := {−1, 1}. There is an unknown distribution ρ over X × Y, from which examples are drawn i.i.d. The marginal distribution over X is ρ_X. Define the expected classification error, or risk, of a classifier h ∈ H to be err(h) := E_{(x,y)∼ρ}[1(h(x) ≠ y)]. As usual, the active learning goal is as follows: given any fixed ε, δ ∈ (0, 1), we seek an active learning algorithm which, with probability at least 1 − δ, returns a hypothesis with classification error at most ε after sending a "small" number of queries to the oracle.

What is unique here is an "oracle epiphany model." The input space consists of two disjoint sets X = K ∪ U. The oracle knows the label for items in K (for "known") but initially does not know the labels in U (for "unknown"). The oracle will abstain if a query comes from U (unless epiphany happens, see below). Furthermore, U is partitioned into K disjoint subsets U = U_1 ∪ U_2 ∪ … ∪ U_K. These correspond to the photography/rental cars/medical tourism subsets in the travel task earlier. The active learner does not know the partitions nor K. When the active learner submits a query x ∈ X to the oracle, the learner will receive one of three outcomes in Y_⊥ := {−1, 1, ⊥}, where ⊥ indicates I-don't-know abstention.

Importantly, we assume that epiphany is modeled as K Markov chains: whenever a unique x ∈ U_k is queried on some unknown region k ∈ {1, …, K} which has not yet experienced epiphany, the oracle has a probability β ∈ [0, 1] of epiphany on that region. If epiphany happens, the oracle then understands how to label everything in U_k. In effect, the state of U_k is flipped from unknown to known. Epiphany is irrevocable: U_k will stay known from now on and the oracle will answer accordingly for all future x therein. Thus the oracle will only answer ⊥ if U_k remains unknown. The requirement for a unique x is to prevent a trivial active learning algorithm which repeatedly queries the same ⊥ item in an attempt to induce oracle epiphany. This requirement does not pose difficulty for analysis if ρ_X is continuous on X, since then all queries will be unique with probability one. Therefore, our oracle epiphany model is parameterized by (β, K, U_1, …, U_K). All our analyses below will be based on this epiphany model. Of course, the model is only an approximation to real human oracle behaviors; in Section 6 we will discuss more sophisticated epiphany models for future work.
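To make the model concrete, the following Python sketch simulates such an oracle; the class and its interface are ours (illustrative, not from the paper), with abstention encoded as None in place of ⊥.

```python
import random

class EpiphanyOracle:
    """Simulates the (beta, K, U_1..U_K) oracle epiphany model.

    `target` is the true labeling function h*, `regions` maps an input x to
    the index of its unknown region U_k (or None if x lies in K), and `beta`
    is the per-query epiphany probability for a region.
    """
    def __init__(self, target, regions, beta, num_regions):
        self.target = target
        self.regions = regions
        self.beta = beta
        self.known = [False] * num_regions  # epiphany state of each U_k
        self.seen = set()                   # only unique queries count

    def query(self, x):
        k = self.regions(x)
        if k is None or self.known[k]:
            return self.target(x)           # x in K, or epiphany already happened
        if x not in self.seen:
            self.seen.add(x)
            if random.random() < self.beta: # epiphany flips U_k to known, irrevocably
                self.known[k] = True
                return self.target(x)
        return None                          # abstention, i.e. the label "⊥"
```

An active learner interacting with this object experiences exactly the K Markov chains described above: each fresh query inside an unknown region is an independent chance to flip that region to known.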
3 The Realizable Case

In this section, we study the realizable active learning case, where we assume there exists some h* ∈ H such that the label of an example x ∈ X is y = h*(x). It follows that err(h*) = 0. Although the realizability assumption is strong, the analysis is insightful on the role of epiphany. We will show that the worst-case query complexity has an additional 1/β dependence. We also discuss nice cases where this 1/β can be avoided, depending on U's interaction with the disagreement region. Furthermore, our analysis focuses on the K = 1 case; that is, the oracle has only one unknown region U = U_1. This case is the simplest but captures the essence of the algorithm we propose in this section. For convenience, we will drop the superscript and write U. In the next section, we will eliminate both assumptions, and present and analyze an algorithm for the agnostic case with an arbitrary K ≥ 1.

We modify the standard CAL algorithm [Cohn et al., 1994] to accommodate oracle epiphany. The modified algorithm, which we call EPICAL for "epiphany CAL," is given in Alg. 1. Like CAL, EPICAL receives a stream of unlabeled items; it maintains a version space; if the unlabeled item falls into the disagreement region of the version space, the oracle is queried. The essential difference to CAL is that if the oracle answers ⊥, no update to the version space happens. The stopping criterion ensures that the true risk of any hypothesis in the version space is at most ε, with high probability.

Algorithm 1 EPICAL
Input: ε, δ, oracle, X, H
Version space V ← H
Disagreement region D ← {x ∈ X | ∃h, h′ ∈ V : h(x) ≠ h′(x)}
for t = 1, 2, 3, … do
  Sample an unlabeled example from the marginal distribution restricted to D: x_t ∼ ρ_{X|D}
  Query the oracle with x_t to get y_t
  if y_t ≠ ⊥ then
    V ← {h ∈ V | h(x_t) = y_t}
    D ← {x ∈ X | ∃h, h′ ∈ V : h(x) ≠ h′(x)}
  end if
  if ρ_X(D) ≤ ε then
    return any h ∈ V
  end if
end for
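For intuition, here is a minimal Python sketch of Alg. 1 specialized to the 1D threshold class used later in Section 5; the interval representation of the version space is specific to thresholds, and the helper names are ours.

```python
import random

def epical_1d(oracle, eps=0.05, max_iters=100000):
    """EPICAL (Alg. 1) for thresholds h_a(x) = +1 if x <= a, else -1, on [0, 1].

    The version space is the interval of feasible thresholds [lo, hi], so the
    disagreement region is exactly (lo, hi) and rho_X(D) = hi - lo under the
    uniform marginal. Returns (threshold, number of queries).
    """
    lo, hi = 0.0, 1.0
    queries = 0
    for _ in range(max_iters):
        if hi - lo <= eps:                 # stopping rule: rho_X(D) <= eps
            return (lo + hi) / 2.0, queries
        x = random.uniform(lo, hi)         # sample from rho_X restricted to D
        y = oracle.query(x)                # may return None, i.e. abstention
        queries += 1
        if y == +1:                        # +1 means the threshold lies above x
            lo = max(lo, x)
        elif y == -1:                      # -1 means the threshold lies below x
            hi = min(hi, x)
        # on abstention (None), the version space is left unchanged
    return (lo + hi) / 2.0, queries
```

Paired with the oracle sketch above (with target h* = h_{1/2} and U = [0.4, 0.6]), this qualitatively reproduces the β-dependence reported in Figure 1(a).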
Our analysis is based on the following observation: before oracle epiphany, and ignoring all queries that result in ⊥, EPICAL behaves exactly the same as CAL on an induced active-learning problem. The induced problem has input space K, but with a projected hypothesis space we detail below. Hence, standard CAL analysis bounds the number of queries needed to find a good hypothesis in the induced problem. Now consider the sequence of probabilities of getting a ⊥ label in each step of EPICAL. If these probabilities tend to be small, EPICAL will terminate with an ε-risk hypothesis without even having to wait for epiphany. If these probabilities tend to be large, we may often hit the unknown region U. But the number of such steps is bounded, because epiphany will happen with high probability.

Formally, we define the induced active-learning problem as follows. The input space is X̃ := K, and the output space is still Y. The sampling distribution is ρ̃_X(x) := ρ_X(x) 1(x ∈ K) / ρ_X(K). The hypothesis space is the projection of H onto X̃: H̃ := {h̃ ∈ Y^X̃ | ∃h ∈ H, ∀x ∈ X̃ : h̃(x) = h(x)}. Clearly, the induced problem is still realizable; let h̃* ∈ H̃ be the projected target hypothesis. Let θ be the disagreement coefficient [Hanneke, 2014] for the original problem without unknown regions. The induced problem potentially has a different disagreement coefficient:

θ̃ := sup_{r>0} r⁻¹ · E_{x∼ρ̃_X} [ 1( ∃h̃ ∈ H̃ s.t. h̃(x) ≠ h̃*(x) and E_{x′∼ρ̃_X} 1(h̃(x′) ≠ h̃*(x′)) ≤ r ) ].

Let m̃ be the number of queries required for the CAL algorithm to find a hypothesis of ε/2 risk with probability 1 − δ/4 in the induced problem. It is known [Hanneke, 2014, Theorem 5.1] that

m̃ ≤ M̃ := O( θ̃ ( dim(H̃) ln θ̃ + ln(4/δ) ) ln(2/ε) ln(2/ε) ),

where dim(·) is the VC dimension. Similarly, let m_CAL be the number of queries required for CAL to find a hypothesis of ε risk with probability 1 − δ/4 in the original problem; we have m_CAL ≤ M_CAL := O( θ ( dim(H) ln θ + ln(4/δ) ) ln(1/ε) ln(1/ε) ). Furthermore, define m_⊥ := |{t | y_t = ⊥}| to be the number of queries in EPICAL for which the oracle returns ⊥. We define U_t to be U for an iteration t before epiphany, and ∅ after that. We define D_t to be the disagreement region D at iteration t. Finally, define the unknown fraction within disagreement as ζ_t := ρ_X(D_t ∩ U_t)/ρ_X(D_t). We are now ready to state the main result of this section.

Theorem 1. Given any ε and δ, EPICAL will, with probability at least 1 − δ, return an ĥ ∈ H with err(ĥ) ≤ ε, after making at most M_CAL + M̃ + (3/β) ln(4/δ) queries.

Remark. The bound above consists of three terms. The first is the standard CAL query complexity bound with an omniscient oracle. The other two are the price we pay when the oracle is imperfect. The second term is the query complexity for finding a low-risk hypothesis in the induced active-learning problem. In situations where ρ_X(U) = ε/2 and β ≪ 1, it is hard to induce epiphany, but it suffices to find a hypothesis from H̃ with ε/2 risk in the induced problem (which implies at most ε risk under the original distribution ρ_X); this indicates M̃ is unavoidable in some cases. The third term is roughly the extra query complexity required to induce epiphany. It is unavoidable in the worst case: when U = X, one has to wait for oracle epiphany to start collecting labeled examples to infer h*; the average number of steps until epiphany is on the order of 1/β. Finally, note that not all three terms contribute simultaneously to the query complexity of EPICAL. As we will see in the analysis and in the experiments, usually one or two of them will dominate, depending on how U interacts with the disagreement region. Summing them up simplifies our exposition, without changing the order of the worst-case bounds.

Our analysis starts with the definition of the following two events. Lemmas 2 and 3 show that they hold with high probability when running EPICAL; the proofs are delegated to Appendix A. Define:

E_⊥ := { m_⊥ ≤ (1/β) ln(4/δ) }   and   E_ζ := { |{t | ζ_t > 1/2}| ≤ (2/β) ln(4/δ) }.

Lemma 2. Pr{E_⊥} ≥ 1 − δ/4.

Lemma 3. Pr{E_ζ} ≥ 1 − δ/4.

Lemma 4. Assume event E_ζ holds. Then the number of queries from K before oracle epiphany, or before EPICAL terminates, whichever happens first, is at most m̃ + (2/β) ln(4/δ).

Proof. (sketch) Denote the quantity by m. Before epiphany, V and D in EPICAL behave in exactly the same way as in CAL on K. It takes m̃ queries to get to ε/2 accuracy in K, by the definition of m̃. If m ≤ m̃, then m < m̃ + (2/β) ln(4/δ) trivially, and we are done. Otherwise, it must be the case that ζ_t > 1/2 for every step after V reaches ε/2 accuracy on K. Suppose not. Then there is a step t where ζ_t ≤ 1/2. Note that V reaching ε/2 accuracy on K implies ρ_X(D_t) − ρ_X(D_t ∩ U_t) ≤ ε/2.
Together with ζ_t = ρ_X(D_t ∩ U_t)/ρ_X(D_t) ≤ 1/2, we have ρ_X(D_t) < ε. But this would have triggered termination of EPICAL at step t, a contradiction. Since we assume E_ζ holds, we have m ≤ m̃ + (2/β) ln(4/δ).

Proof of Theorem 1. We will prove the query complexity bound, assuming (i) events E_⊥ and E_ζ hold; and (ii) M̃ and M_CAL successfully upper bound the corresponding query complexity of standard CAL. By Lemmas 2 and 3 and a union bound, the above holds with probability at least 1 − δ. Suppose epiphany happens before EPICAL terminates. By event E_⊥ and Lemma 4, the total number of queried examples before epiphany is at most m̃ + (3/β) ln(4/δ). After epiphany, the total number of queries is no more than that of running CAL from scratch; this number is at most M_CAL. Therefore, the total query complexity is at most M̃ + M_CAL + (3/β) ln(4/δ). Suppose epiphany does not happen before EPICAL terminates. In this case, the number of queries in the unknown region is at most (1/β) ln(4/δ) (event E_⊥), and the number of queries in the known region is at most m̃ + (2/β) ln(4/δ) (Lemma 4). Thus, the total number of queries is at most M̃ + (3/β) ln(4/δ).

4 The Agnostic Case

In the agnostic setting the best hypothesis, h* := argmin_h err(h), has a nonzero error. We want an active learning algorithm that, for a given accuracy ε > 0, returns a hypothesis h with small regret reg(h, h*) := err(h) − err(h*) ≤ ε while making a small number of queries. Among existing agnostic active learning algorithms we choose to adapt the Oracular-CAL algorithm, first proposed by Hsu [2010] and later improved by Huang et al. [2015]. Oracular-CAL makes no assumption on H or ρ, and can be implemented solely with an empirical risk minimization (ERM) subroutine, which is often well approximated by convex optimization over a surrogate loss in practice. This is a significant advantage over several existing agnostic algorithms, which either explicitly maintain a version space, as done in A² [Balcan et al., 2006], or require a constrained ERM routine [Dasgupta et al., 2007] that may not be well approximated efficiently in practice. IWAL [Beygelzimer et al., 2010] and Active Cover [Huang et al., 2015] are agnostic algorithms that are implementable with an ERM routine, both using importance weights to correct for querying bias. But in the presence of ⊥'s, choosing proper importance weights becomes challenging. Moreover, the improved Oracular-CAL [Huang et al., 2015] we use has stronger guarantees than IWAL and, in fact, the best known worst-case guarantees among efficient, agnostic active learning algorithms. (This improved version of Oracular-CAL defines the version space using a tighter threshold than the one used by Hsu [2010], and has the same worst-case guarantees as Active Cover [Huang et al., 2015].)

Algorithm 2 Oracular-EPICAL
1: Set c₁ := 4 and c₂ := 2√6 + 9. Let ε₀ := 1 and ε_t := (12/t) ln( 32 t |H| ln t / δ ), t ≥ 1.
2: Initialize the labeled data Z₀ ← ∅, the version space V₁ ← H, and the ERM h₁ as any h ∈ H.
3: for t = 1, 2, … do
4:   Observe a new example x_t, where (x_t, y_t) ∼ ρ i.i.d.
5:   if x_t ∈ D_t := {x ∈ X | ∃(h, h′) ∈ V_t² s.t. h(x) ≠ h′(x)} then
6:     Query the oracle with x_t.
7:     Z_t ← Z_{t−1} ∪ {(x_t, y_t)} if the oracle returns y_t; Z_t ← Z_{t−1} if the oracle returns ⊥.
8:     u_t ← 1(oracle returns ⊥).
9:   else
10:    Z_t ← Z_{t−1} ∪ {(x_t, h_t(x_t))}.  // update the labeled data with the current ERM's prediction
11:    u_t ← 0.
12:  end if
13:  err(h, Z_t) := (1/t) Σ_{i=1}^t [ 1(x_i ∈ D_i)(1 − u_i) 1(h(x_i) ≠ y_i) + 1(x_i ∉ D_i) 1(h(x_i) ≠ h_i(x_i)) ].
14:  h_{t+1} ← argmin_{h∈H} err(h, Z_t).
15:  b_t ← (1/t) Σ_{i=1}^t u_i.
16:  Δ_t ← c₁ √( ε_t · err(h_{t+1}, Z_t) ) + c₂ (ε_t + b_t).
17:  V_{t+1} ← {h ∈ H | err(h, Z_t) − err(h_{t+1}, Z_t) ≤ Δ_t}.
18: end for
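To make the bookkeeping of steps 13–17 explicit, the following Python fragment sketches one round over a finite hypothesis list; the brute-force ERM and the data layout are ours, and ε_t follows the reconstruction in step 1 above.

```python
import math

def oracular_epical_round(H, history, t, delta):
    """One round of the Alg. 2 bookkeeping over a finite hypothesis list H.

    `history` holds one tuple (x_i, in_D_i, u_i, label_i) per example seen so
    far: label_i is the oracle's answer when queried and answered, or the
    imputed ERM prediction h_i(x_i) when x_i fell outside the disagreement
    region; u_i = 1 marks an abstention. Returns the new ERM and version space.
    """
    c1, c2 = 4.0, 2.0 * math.sqrt(6.0) + 9.0
    # max(..., 1) guards ln(t) = 0 at t = 1; a numerical convenience only
    eps_t = (12.0 / t) * math.log(32.0 * t * len(H) * max(math.log(t), 1.0) / delta)

    def emp_err(h):
        bad = 0.0
        for (x, in_D, u, label) in history:
            if in_D and u == 0 and h(x) != label:   # queried and answered
                bad += 1.0
            elif not in_D and h(x) != label:        # imputed ERM label
                bad += 1.0
        return bad / t                               # abstentions contribute nothing

    errs = {h: emp_err(h) for h in H}
    h_next = min(errs, key=errs.get)                 # step 14: new ERM
    b_t = sum(u for (_, _, u, _) in history) / t     # step 15: abstention rate
    delta_t = c1 * math.sqrt(eps_t * errs[h_next]) + c2 * (eps_t + b_t)  # step 16
    V_next = [h for h in H if errs[h] - errs[h_next] <= delta_t]         # step 17
    return h_next, V_next
```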
Our proposed algorithm, Oracular-EPICAL, is given in Alg. 2. Note that t here counts unlabeled data, while in Alg. 1 it counts queries. Roughly speaking, Oracular-EPICAL also has an additive factor of O(K/β) compared to Oracular-CAL's query complexity. It keeps a growing set Z of labeled examples. If the unlabeled example x_t falls in the disagreement region, the algorithm queries its label: when the oracle returns a label y_t, the algorithm adds x_t and y_t to Z; when the oracle returns ⊥, no update to Z happens. If x_t is outside the disagreement region, the algorithm adds x_t and the label predicted by the current ERM hypothesis h_t(x_t) to Z. Alg. 2 keeps an indicator u_t, which records whether ⊥ was returned on x_t, and it always updates the ERM and the version space after every new x_t. For simplicity we assume a finite H; this can be extended to H with finite VC dimension.

The critical modification we make here to accommodate oracle abstention is that the threshold Δ_t defining the version space depends additively on the average number of ⊥'s received up to round t. This allows us to show that Oracular-EPICAL retains the favorable bias guarantee of Oracular-CAL: with high probability, all of the imputed labels are consistent with the classifications of h*, so imputation never pushes the algorithm away from h*. Oracular-EPICAL only uses the version space in the disagreement test. With the same technique used by Oracular-CAL, summarized in Appendix B, the algorithm is able to perform the test solely with an ERM routine.

We now state Oracular-EPICAL's general theoretical guarantees, which hold for any oracle model, and then specialize them to the epiphany model in Section 2. We start with a consistency result:

Theorem 5 (Consistency Guarantee). Pick any 0 < δ < 1/e and let Δ*_t := c₁ √( ε_t · err(h*) ) + c₂ (ε_t + b_t). With probability at least 1 − δ, the following holds for all t ≥ 1:

err(h) − err(h*) ≤ 4Δ*_t  for all h ∈ V_{t+1},   (1)
err(h*, Z_t) − err(h_{t+1}, Z_t) ≤ Δ_t.   (2)

All hypotheses in the current version space, including the current ERM, have controlled expected regrets. Compared with Oracular-CAL's consistency guarantee, this is worse by an additive factor of O(b_t), the average number of ⊥'s over t examples. Importantly, h* always remains in the version space, as implied by (2). This guarantees that all predicted labels used by the algorithm are consistent with h*, since the entire version space makes the same prediction. The query complexity bound is:

Theorem 6 (Query Complexity Bound). Let Q_t := Σ_{i=1}^t 1(x_i ∈ D_i) denote the total number of queries Alg. 2 makes after observing t examples. Under the conditions of Theorem 5, with probability at least 1 − δ the following holds: ∀t > 0, Q_t is bounded by

4θ err(h*) t + θ · O( √( t err(h*) ln(t|H|/δ) ) ln² t + ln(t|H|/δ) ln t + t b_t ln t ) + 8 ln(8t² ln t / δ),

where θ denotes the disagreement coefficient [Hanneke, 2014].

Again, this result is worse than Oracular-CAL's query complexity [Huang et al., 2015] by an additive factor. The magnitude of this factor is less trivial than it seems: since the algorithm increases the threshold by b_t, it includes more hypotheses in the version space, which may cause the algorithm to query a lot more.
However, our analysis shows that the number of queries only increases by O(t b_t ln t), i.e., ln t times the total number of ⊥'s received over t examples.

The full proofs of both theorems are in Appendix C. Here we provide the key ingredient. Consider an imaginary dataset Z*_t where all the labels queried by the algorithm but not returned by the oracle are imputed, and define the error on this imputed data:

err(h, Z*_t) := (1/t) Σ_{i=1}^t [ 1(x_i ∈ D_i) 1(h(x_i) ≠ y_i) + 1(x_i ∉ D_i) 1(h(x_i) ≠ h_i(x_i)) ].   (3)

Note that the version space V_t, and therefore the disagreement region D_t, are still defined in terms of err(h, Z_t), not err(h, Z*_t). Also define the empirical regrets between two hypotheses h and h′: reg(h, h′, Z_t) := err(h, Z_t) − err(h′, Z_t), and reg(h, h′, Z*_t) on Z*_t in the same way. The empirical error and regret on Z*_t are not observable, but can be easily bounded by observable quantities:

err(h, Z_t) ≤ err(h, Z*_t) ≤ err(h, Z_t) + b_t,   (4)
|reg(h, h′, Z_t) − reg(h, h′, Z*_t)| ≤ b_t,   (5)

where b_t = (1/t) Σ_{i=1}^t u_i is also observable. Using a martingale analysis resembling Huang et al. [2015]'s for Oracular-CAL, we prove concentration of the empirical regret reg(h, h*, Z*_t) to its expectation. For every h ∈ V_{t+1}, the algorithm controls its empirical regret on Z_t, which bounds reg(h, h*, Z*_t) by the above. This leads to a bound on the expected regret of h. The query complexity analysis follows the standard framework of Hsu [2010] and Huang et al. [2015]. Next, we specialize the guarantees to the oracle epiphany model in Section 2:

Corollary 7. Assume the epiphany model in Section 2. Fix ε > 0, δ > 0. Let d̃ := ln(|H|/(εδ)), K̃ := K ln(K/δ), and ẽ := err(h*). With probability at least 1 − δ, the following holds: the ERM hypothesis h_{t_ε+1} satisfies err(h_{t_ε+1}) − ẽ ≤ ε, where t_ε = O( ( (ẽ/ε)² + 1 ) d̃/ε + K̃/(βε) ), and the total number of queries made up to round t_ε is at most

θ · O( ẽ d̃/ε + ẽ K̃/(βε) + ln( ( (ẽ/ε)² + 1 ) d̃/ε + K̃/(βε) ) · ( ( (ẽ/ε)² + 1 ) d̃ + K̃/β ) ).

The proof is in Appendix D. This corollary reveals how the epiphany parameters K and β affect query complexity. Setting K̃ = 0 recovers the result for a perfect oracle, showing that the (unlabeled) sample complexity t_ε worsens by an additive factor of K̃/(βε) in both the realizable and agnostic settings. For query complexity, in the realizable setting the bound becomes θ · O( ( d̃ + K̃/β ) ln( ( d̃ + K̃/β ) / ε ) ). In the agnostic setting, the leading term in our bound is θ · O( (ẽ/ε)² d̃ + (K̃ ẽ)/(βε) ). In both cases, our bounds are worse by roughly an additive factor of O(K̃/β) than bounds for perfect oracles. As for the effect of U, the above corollary is a worst-case result: it uses an upper bound on t b_t that holds even for U = X. For certain U's the upper bound can be much tighter. For example, if U ∩ D_t = ∅ for all sufficiently large t, then t b_t will be O(1) for all β, with or without epiphany.

5 Experiments

To complement our theoretical results, we present two simulated experiments on active learning with oracle epiphany: learning a 1D threshold classifier and handwritten digit recognition (OCR). Specifically, we will highlight the dependency of the query complexity on the epiphany parameter β and on U.

EPICAL on 1D Threshold Classifiers. Take ρ_X to be the uniform distribution over the interval X = [0, 1]. Our hypothesis space is the set of threshold classifiers H = {h_a : a ∈ [0, 1]} where h_a(x) = 1(x ≤ a). We choose h* = h_{1/2} and set the target classification error at ε = 0.05. We illustrate epiphany with a single unknown region K = 1, U = U_1. However, we contrast two shapes of U: in one set of experiments we set U = [0.4, 0.6], which contains the decision boundary 0.5. In this case, the active learner EPICAL must induce oracle epiphany in order to achieve ε risk. In another set of experiments U = [0.7, 0.9], where we expect the learner to be able to "bypass" the need for epiphany. Intuitively, this latter U could soon be excluded from the disagreement region. For both U, we systematically vary the oracle epiphany parameter β ∈ {2⁻⁶, 2⁻⁵, …, 2⁰}. A small β means epiphany is less likely per query; thus we expect the learner to spend more queries trying to induce epiphany in the case of U = [0.4, 0.6]. In contrast, β may not matter much in the case of U = [0.7, 0.9], since epiphany may not be required. Note that β = 2⁰ = 1 reverts back to the standard active learning oracle, since epiphany always happens immediately. We run each combination of β and U for 10,000 trials.

[Figure 1: EPICAL results on 1D threshold classifiers. Panels: (a) U = [0.4, 0.6], queries vs. β; (b) U = [0.7, 0.9], queries vs. β; (c) U = [0.4, 0.6], excess queries vs. 1/β. Curves compare EPICAL and passive learning.]

The results are shown in Figure 1. As expected, (a) shows a clear dependency on β. This indicates that epiphany is necessary in the case U = [0.4, 0.6] for learning to be successful. In contrast, the dependence on β vanishes in (b) when U is shifted sufficiently away from the target threshold (and thus from later disagreement regions). The oracle need not reach epiphany for learning to happen. Note that (b) does not contradict the EPICAL query complexity analysis, since Theorem 1 is a worst-case bound that must hold true for all U. To further clarify the role of β, note that the EPICAL query complexity bound predicts an additive term of O(1/β) on top of the standard CAL query complexities (i.e., both M̃ and M_CAL). This term represents "excess queries" needed to induce epiphany. In Figure 1(c) we plot this excess against 1/β for U = [0.4, 0.6]. The excess is computed as the number of EPICAL queries minus the average number of queries for β = 1. Indeed, we see a near-linear relationship between excess queries and 1/β. Finally, as a baseline we compare EPICAL to passive learning. In passive learning x₁, x₂, … are chosen randomly according to ρ_X instead of adaptively. Note that passive learning here is also subject to oracle epiphany. That is, the labels y_t are produced by the same oracle epiphany model, and some of them can initially be ⊥. Our passive learner simply maintains a version space; if it encounters ⊥, it does not update the version space. All EPICAL results are better than passive learning.

Oracular-EPICAL on OCR. We consider the binary classification task of 5 vs. other digits on MNIST [LeCun et al., 1998]. This allows us to design the unknown regions {U_k} as certain other digits, making the experiments more interpretable. Furthermore, we can control how confusable the U digits are with "5" to observe the influence on oracle epiphany. Although Alg. 2 is efficiently implementable with an ERM routine, it still requires two calls to a supervised learning algorithm on every new example. To scale it up, we implement an approximate version of Alg. 2 that uses online optimization in place of the ERM. More details are in Appendix E. While being efficient in practice, this online algorithm may not retain Alg. 2's theoretical guarantees.
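The paper leaves the details of this online approximation to its Appendix E. Purely as an illustration of what replacing the ERM with a single online learner could look like (none of these names or choices come from the paper), one online logistic step might be:

```python
import numpy as np

def online_round(w, x, y_or_abstain, in_disagreement, lr=0.1):
    """One illustrative online step replacing the ERM in Alg. 2.

    `w` is a logistic-regression weight vector; abstentions (None) are
    skipped, and points outside the disagreement test are trained on the
    model's own prediction, mimicking the imputation in Alg. 2.
    """
    if in_disagreement:
        if y_or_abstain is None:             # oracle abstained: no update
            return w
        y = y_or_abstain                     # oracle label in {-1, +1}
    else:
        y = 1 if w.dot(x) >= 0 else -1       # impute the current prediction
    margin = y * w.dot(x)
    grad = -y * x / (1.0 + np.exp(margin))   # gradient of log(1 + exp(-margin))
    return w - lr * grad
```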
[Figure 2: Oracular-EPICAL results on OCR. Panels: (a) U = "3" and (b) U = "1"; each plots queries for Oracular-EPICAL and passive learning against β ∈ {0, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1}.]

We use epiphany parameters β ∈ {1, 10⁻¹, 10⁻², 10⁻³, 10⁻⁴, 0}, K = 1, and U is either "3" or "1". By using β = 1 and β = 0, we include the boundary cases where the oracle is perfect or never has an epiphany. The two different U's correspond to two contrasting scenarios: "3" is among the "nearest" digits to "5", as measured by the binary classification error between "5" and every other single digit, while "1" is the farthest. The two U's are about the same size, each covering roughly 10% of the data. More details and experimental results with other choices of U can be found in Appendix E. For each combination of β and U, we perform 100 random trials. In each trial, we run both the online version of Alg. 2 and online passive logistic regression (also subject to oracle epiphany) over a randomly permuted training set of 60,000 examples, and check the error of the online ERM on the 10,000 testing examples every 10 queries, from 200 up to our query budget of 13,000. In each trial we record the smallest number of queries for achieving a test error of 4%. Fig. 2(a) and Fig. 2(b) show the median of this number over the 100 random trials, with error bars being the 25th and 75th quantiles. The effect of β on query complexity is dramatic for the near U = "3" but subdued for the far U = "1". In particular, for U = "3" small β's force active learning to query as many labels as passive learning. The flattening at 13,000 means that no algorithm could achieve a 4% test error within our query budget. For U = "1", active learning is always much better than passive learning, regardless of β. Again, this illustrates that both β and U affect the query complexity. As performance references, passive learning on the entire labeled training data achieves a test error of 2.6%, while predicting the majority class (non-5) has a test error of 8.9%.

6 Discussions

Our analysis reveals a worst-case O(1/β) term in the query complexity due to the wait for epiphany, and we hypothesize Ω(K/β) to be the tight lower bound. This immediately raises the question: can we decouple active learning queries from epiphany induction? What if the learner can quickly induce epiphany by showing the oracle a screenful of unlabeled items at a time, without the oracle labeling them? This possibility is hinted at in empirical studies. For example, Kulesza et al. [2014] observed epiphanies resulting from seeing items. Then there is a tradeoff between two learner actions toward the oracle: asking a query (getting a label or a small contribution toward epiphany), or showing several items (not getting labels but potentially a large contribution toward epiphany). One must formalize the cost and benefit of this tradeoff. Of course, real human behaviors are even richer. Epiphanies may be reversible on certain queries, where the oracle begins to have doubts about her previous labeling. Extending our model under more relaxed assumptions is an interesting open question for future research.

Acknowledgments

This work is supported in part by NSF grants IIS-0953219, IIS-1623605, DGE-1545481, CCF-1423237, and by the University of Wisconsin-Madison Graduate School with funding from the Wisconsin Alumni Research Foundation.

References

Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning.
In Proceedings of the 23rd International Conference on Machine Learning, pages 65–72. ACM, 2006.

Alina Beygelzimer, John Langford, Zhang Tong, and Daniel J. Hsu. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems, pages 199–207, 2010.

David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.

Sanjoy Dasgupta, Claire Monteleoni, and Daniel J. Hsu. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, pages 353–360, 2007.

Pinar Donmez and Jaime G. Carbonell. Proactive learning: cost-sensitive active learning with multiple imperfect oracles. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, pages 619–628. ACM, 2008.

Ran El-Yaniv and Yair Wiener. On the foundations of noise-free selective classification. The Journal of Machine Learning Research, 11:1605–1641, 2010.

Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014.

Daniel J. Hsu. Algorithms for Active Learning. PhD thesis, University of California at San Diego, 2010.

Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, and Robert E. Schapire. Efficient and parsimonious agnostic active learning. In NIPS, pages 2737–2745, 2015.

S. M. Kakade and A. Tewari. On the generalization ability of online strongly convex programming algorithms. In Advances in Neural Information Processing Systems 21, 2009.

Nikos Karampatziakis and John Langford. Online importance weight aware updates. In UAI, pages 392–399, 2011.

Todd Kulesza, Saleema Amershi, Rich Caruana, Danyel Fisher, and Denis Xavier Charles. Structured labeling for facilitating concept evolution in machine learning. In CHI, pages 3075–3084, 2014.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Edward Newell and Derek Ruths. How one microtask affects another. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI, pages 3155–3166, 2016.

Advait Sarkar, Cecily Morrison, Jonas F. Dorn, Rishi Bedi, Saskia Steinheimer, Jacques Boisvert, Jessica Burggraaff, Marcus D'Souza, Peter Kontschieder, Samuel Rota Bulò, et al. Setwise comparison: Consistent, scalable, continuum labels for computer vision. In CHI, 2016.

Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in Neural Information Processing Systems, pages 1–9, 2015.

Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
β-risk: a New Surrogate Risk for Learning from Weakly Labeled Data

Valentina Zantedeschi* Rémi Emonet Marc Sebban
firstname.lastname@univ-st-etienne.fr
Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023, SAINT-ETIENNE, France

* http://vzantedeschi.com/

Abstract

During the past few years, the machine learning community has paid attention to developing new methods for learning from weakly labeled data. This field covers different settings like semi-supervised learning, learning with label proportions, multi-instance learning, noise-tolerant learning, etc. This paper presents a generic framework to deal with these weakly labeled scenarios. We introduce the β-risk as a generalized formulation of the standard empirical risk based on surrogate margin-based loss functions. This risk allows us to express the reliability on the labels and to derive different kinds of learning algorithms. We specifically focus on SVMs and propose a soft-margin β-SVM algorithm which behaves better than the state of the art.

1 Introduction

The growing amount of data available nowadays has allowed us to increase the confidence in the models induced by machine learning methods. On the other hand, it has also caused several issues, especially in supervised classification, regarding the availability of labels and their reliability. Because it may be expensive and tricky to assign a reliable and unique label to each training instance, the data at our disposal for the application at hand are often weakly labeled. Learning from weak supervision has received important attention over the past few years [14, 12]. This research field includes different settings: only a fraction of the labels are known (semi-supervised learning [22]); we can access only the proportions of the classes (learning with label proportions [19] and multi-instance learning [8]); the labels are uncertain or noisy (noise-tolerant learning [1, 18, 16]); different discording labels are given to the same instance by different experts (multi-expert learning [21]); labels are completely unknown (unsupervised learning [11]).

As a consequence, the data provided in all these situations cannot be fully exploited using supervised techniques, at the risk of drastically reducing the performance of the learned models. To address this issue, numerous machine learning methods have been developed to deal with each of the previous specific situations. However, all these weakly labeled learning tasks share common features, mainly relying on the confidence in the labels, opening the door to the development of generic frameworks. Unfortunately, only a few attempts have tried to address several settings with the same approach. The most interesting one has been presented in [14], where the authors propose WellSVM, which is dedicated to dealing with three different weakly labeled learning scenarios: semi-supervised learning, multi-instance learning and clustering. However, WellSVM focuses specifically on Support Vector Machines and it requires one to derive a new optimization problem for each new task. Even though WellSVM constitutes a step towards general models, it stopped in midstream, constraining the learner to use SVMs. This paper aims to bridge this gap by presenting a generic framework for learning from weakly labeled data.
Our approach is based on the derivation of the β-risk, a new surrogate empirical risk defined as a strict generalization of the standard empirical risk relying on surrogate margin-based loss functions. The main interesting property of the β-risk comes from its ability to exploit the information given by the weakly supervised setting, encoded as a β matrix reflecting the supervision on the labels. Moreover, the instance-specific weights β let one integrate into classical methods the side information provided by the setting. This is the peculiarity w.r.t. [18, 16]: in both papers, the proposed losses are defined using class-dependent weights (fixed to 1/2 for the first paper, and dependent on the class noise rate for the latter), while in our approach the weights are provided for each instance, which gives a more flexible formulation. Making use of this β-risk, we design a generic algorithm devoted to addressing the different kinds of aforementioned weakly labeled settings. To allow a comparison with the state of the art, we instantiate it with a learner that takes the form of an SVM algorithm. In this context, we derive a soft-margin β-SVM algorithm and show that it outperforms WellSVM.

The remainder of this paper is organized as follows: in Section 2, we define the empirical surrogate β-risk and show under which conditions it can be used to learn without explicitly accessing the labels; we also show how to instantiate β according to the weakly labeled learning setting at hand; in Section 3, we present our generic iterative algorithm for learning with weakly labeled data; in Section 4, we exploit our new framework to derive a novel formulation of the Support Vector Machine problem, the β-SVM; finally, we report experiments in semi-supervised learning and learning with label noise, conducted on classical datasets from the UCI repository [15], in order to compare our algorithm with the state-of-the-art approaches.

2 From Classical Surrogate Losses and Surrogate Risks to the β-risk

In this section, we first provide reminders about surrogate losses and then exploit the characteristics of the popular loss functions to introduce the empirical surrogate β-risk. The β-risk formulation allows us to tackle the problem of learning with weakly labeled data. We show under which conditions it can be used instead of the standard empirical surrogate risk (defined in a fully supervised context). Those conditions give insight into how to design algorithms that learn from weak supervision. We restrict our study to the context of binary classification.

2.1 Preliminaries

In statistical learning, a common approach for choosing the optimal hypothesis h* from a hypothesis class H is to select the classifier that minimizes the expected risk over the joint space Z = X × Y, where X is the feature space and Y the label space, expressed as

R_ℓ(h) = ∫_{X×Y} ℓ(y h(x)) p(x, y) dx dy

with ℓ : H × Z → R⁺ a margin-based loss function. Since the true distribution of the data p(x, y) is usually unknown, machine learning algorithms typically minimize the empirical version of the risk, computed over a finite set S composed of m instances (x_i, y_i) drawn i.i.d. from a distribution over X × {−1, 1}:

R_ℓ(S, h) = (1/m) Σ_{i=1}^m ℓ(y_i h(x_i)).

The most natural loss function is the so-called 0-1 loss. As this function is not convex, not differentiable and has zero gradient, other loss functions are commonly employed instead.
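As a quick illustration before characterizing these surrogates, here is a small Python sketch (ours, not from the paper) of the 0-1 loss, two of the common surrogates discussed next, and the empirical risk they induce:

```python
import numpy as np

def zero_one(margins):
    """0-1 loss on margins y_i * h(x_i): non-convex, zero gradient a.e."""
    return (margins <= 0).astype(float)

def logistic(margins):
    """Logistic surrogate log(1 + exp(-m)): a smooth convex relaxation."""
    return np.log1p(np.exp(-margins))

def hinge(margins):
    """Hinge surrogate max(0, 1 - m), used by SVMs."""
    return np.maximum(0.0, 1.0 - margins)

def empirical_risk(loss, h, X, y):
    """Empirical surrogate risk R(S, h) = (1/m) sum_i loss(y_i h(x_i))."""
    margins = y * h(X)          # h maps an array of inputs to real scores
    return loss(margins).mean()
```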
These losses, such as the logistic loss (e.g., for the logistic regression [6]), the exponential loss (e.g., for boosting techniques [10]) and the hinge loss (e.g., for the SVM [7]), are convex and smooth relaxations of the 0-1 loss. Theoretical studies on the characteristics and behavior of such surrogate losses can be found in [17, 2, 20]. In particular, [17] showed that each commonly used surrogate loss can be characterized by a permissible function φ (see below) and rewritten as

F_φ(x) = ( φ⋆(−x) − a_φ ) / b_φ

where φ⋆(x) = sup_a ( xa − φ(a) ) is the Legendre conjugate of φ (for more details, see [4]), a_φ = −φ(0) = −φ(1) ≥ 0 and b_φ = −φ(1/2) − a_φ > 0. As presented by the authors of [13] and [17], a permissible function is a function f : [0, 1] → R⁻, symmetric about 1/2, differentiable on ]0, 1[ and strictly convex. For instance, the permissible function φ_log related to the logistic loss F_φ(x) = log(1 + exp(−x)) is:

φ_log(x) = x log(x) + (1 − x) log(1 − x),

with a_φ = 0 and b_φ = log(2). As detailed in [17], considering a surrogate loss F_φ, the empirical surrogate risk of a hypothesis h : X → R w.r.t. S can be expressed as:

R_φ(S, h) = (1/m) Σ_{i=1}^m D_φ( y_i, (∇φ)⁻¹(h(x_i)) ) = (b_φ/m) Σ_{i=1}^m F_φ(y_i h(x_i))

with D_φ the Bregman divergence D_φ(x, y) = φ(x) − φ(y) − (x − y) ∇φ(y). In order to evaluate such a risk R_φ(S, h), it is mandatory to provide the labels y for all the instances. In addition, it is not possible to take into account possible uncertainties on the given labels. Consequently, R_φ is defined in a totally supervised context, where the labels y are known and considered to be true. In order to face the numerous situations where training data may be weakly labeled, we claim that there is a need to fill the gap by defining a new empirical surrogate risk that can deal with such settings. In the following section, we propose a generalization of the empirical surrogate risk, called the empirical surrogate β-risk, which can be employed in the context of weakly labeled data instead of the standard one, under some linear conditions on the margin.

2.2 The Empirical Surrogate β-risk

Before defining the empirical surrogate β-risk for any loss F_φ and hypothesis h ∈ H, let us rewrite the definition of R_φ introducing a new set of variables named β, which can be laid out as a 2×m matrix.

Lemma 2.1. For any S, φ and h, and for any non-negative real coefficients β_i^{−1} and β_i^{+1} defined for each instance x_i ∈ S such that β_i^{−1} + β_i^{+1} = 1, the empirical surrogate risk R_φ(S, h) can be rewritten as R_φ(S, h) = R_φ(S, h, β) where

R_φ(S, h, β) = (b_φ/m) Σ_{i=1}^m Σ_{σ∈{−1,+1}} β_i^σ F_φ(σ h(x_i)) + (1/m) Σ_{i=1}^m β_i^{−y_i} (−y_i h(x_i)).

The coefficient β_i^{+1} (resp. β_i^{−1}) for an instance x_i can be interpreted here as the degree of confidence in (or the probability of) the label +1 (resp. −1) assigned to x_i.

Proof.

R_φ(S, h) = (b_φ/m) Σ_{i=1}^m F_φ(y_i h(x_i))
          = (b_φ/m) Σ_{i=1}^m [ β_i^{y_i} F_φ(y_i h(x_i)) + β_i^{−y_i} F_φ(y_i h(x_i)) ]   (1)
          = (b_φ/m) Σ_{i=1}^m [ β_i^{y_i} F_φ(y_i h(x_i)) + β_i^{−y_i} ( F_φ(−y_i h(x_i)) − y_i h(x_i)/b_φ ) ]   (2)
          = (b_φ/m) Σ_{i=1}^m Σ_{σ∈{−1,+1}} β_i^σ F_φ(σ h(x_i)) + (1/m) Σ_{i=1}^m β_i^{−y_i} (−y_i h(x_i)).   (3)

Eq. (1) holds because β_i^{−1} + β_i^{+1} = 1; Eq. (2) is due to the fact that φ⋆(−x) = φ⋆(x) − x (see the supplementary material) for any permissible function φ, so that F_φ(x) = (φ⋆(−x) − a_φ)/b_φ = (φ⋆(x) − a_φ − x)/b_φ = F_φ(−x) − x/b_φ. From Eq. (3), and considering that the sample S is composed of the finite set of features X and labels Y, we can write that

R_φ(S, h) = R_φ(S, h, β) = R_φ^β(X, h) − (1/m) Σ_{i=1}^m β_i^{−y_i} y_i h(x_i)   (4)

where R_φ^β(X, h) = (b_φ/m) Σ_{i=1}^m Σ_{σ∈{−1,+1}} β_i^σ F_φ(σ h(x_i)) is the empirical surrogate β-risk for a matrix β = [β_1^{+1}, …, β_m^{+1} | β_1^{−1}, …, β_m^{−1}].
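A direct Python transcription of the β-risk in Eq. (4), instantiated with the logistic surrogate (helper names are ours; b_φ = log 2 for φ_log):

```python
import numpy as np

def beta_risk(h_scores, beta, b_phi=np.log(2.0)):
    """Empirical surrogate beta-risk of Eq. (4), logistic instantiation.

    h_scores: array of real predictions h(x_i), shape (m,).
    beta: array of shape (m, 2); beta[i, 0] = beta_i^{-1}, beta[i, 1] = beta_i^{+1},
    with beta[i, 0] + beta[i, 1] = 1 for every i.
    """
    F = lambda margin: np.log1p(np.exp(-margin))   # F_phi for the logistic loss
    per_point = beta[:, 0] * F(-h_scores) + beta[:, 1] * F(h_scores)
    return b_phi * per_point.mean()
```

Setting each row of β to the one-hot vector of the observed label recovers the classical empirical surrogate risk (up to the b_φ factor), which is the sanity check suggested by Lemma 2.1.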
It is worth noticing that R_φ(S, h, β) is expressed as the sum of two terms: the second one takes into account the labels of the data, while the first one, the β-risk, focuses on the loss suffered by h over X without explicitly needing the labels Y. The empirical β-risk is a generalization of the empirical risk: setting β_i^{y_i} = 1 (and thus β_i^{−y_i} = 0) for each instance, the second term vanishes and we retrieve the classical formulation of the empirical risk. Additionally, as developed in Section 2.3, the introduction of β makes it possible to inject some side information about the labels. For this reason, we claim that the β-risk is suited to deal with classification in the context of weakly labeled data.

Let us now focus on the conditions allowing the empirical β-risk (i) to be a surrogate of the 0-1 loss-based empirical risk and (ii) to be sufficient to learn with weak supervision on the labels. From (4), we deduce:

R_φ^β(X, h) = R_φ(S, h, β) + (1/m) Σ_{i=1}^m β_i^{−y_i} y_i h(x_i) ≥ R_{0/1}(S, h) + (1/m) Σ_{i=1}^m β_i^{−y_i} y_i h(x_i)   (5)

where R_{0/1}(S, h) is the empirical risk related to the 0-1 loss, and Eq. (5) holds because b_φ F_φ(x) ≥ F_{0/1}(x) (for any surrogate loss). A way to ensure that the β-risk is both a convex upper bound of the 0-1 loss based risk and a relaxation as tight as the traditional risk (i.e., that we have R_{0/1}(S, h) ≤ R_φ^β(X, h) ≤ R_φ(S, h)) is to force the following constraint: Σ_{i=1}^m β_i^{−y_i} y_i h(x_i) = 0.

Unfortunately, the constraint Σ_{i=1}^m β_i^{−y_i} y_i h(x_i) = 0 still depends on the vector y of labels, which is not always provided and most likely uncertain or inaccurate in a weakly labeled data setting. We will show in Section 3 that this issue can be overcome by means of an iterative 2-step learning procedure that first learns a classifier minimizing the β-risk, possibly violating the constraint, and then learns a new matrix β that fulfills the constraint.

2.3 Instantiating β for Different Weakly Supervised Settings

The β-risk can be used as the basis for handling different learning settings, including weakly labeled learning. This can be achieved by fixing the β values, choosing their initial values or putting a prior on them. We have already seen that fully supervised learning can be obtained by fixing all β values to 1 for the assigned class and to 0 for the opposite class. The current section provides guidance on how β could be instantiated to handle various weakly labeled settings.

In a semi-supervised setting, as detailed in the experimental section, we propose to initialize the β of unlabeled points to 0.5 and then to automatically refine them in an iterative process. Going further, and if we are ready to integrate spatial or topological information in the process, the β values of each unlabeled point could be initialized using a density estimation procedure (e.g., by considering the label proportions of the k nearest labeled neighbors). In the context of multi-expert learning, the experts' votes for each instance i can simply be averaged to produce the β_i values (or their initialization, or a prior). The case of learning with label proportions is especially useful for privacy-preserving data processing: the training points are grouped into bags and, for each bag, the proportions of labels are given. One way of handling such supervision is to initialize, for each bag, all the β with the same value, corresponding to the provided proportion of labels.
Noise-tolerant learning aims at learning in the presence of label noise, where labels are given but can be wrong. For any point that could be noisy, a direct approach is to use lower β values (instead of 1 as in the supervised case) and refine them as in the semi-supervised setting. β can also be initialized using the label proportions of the k nearest labeled examples (as done in the experimental section). The case of Multiple Instance Learning (MIL) is trickier: in a typical MIL setting, instances are grouped in bags and the supervision is given as a single label per bag, which is positive if the bag contains at least one positive instance (negative bags contain only negative instances). A straightforward solution would be to recast the MIL supervision as "learning with label proportions" (e.g., considering exactly one positive instance in each bag). This is not fully satisfying, and a more promising solution would be to consider, within each bag, the set of β^{+1} variables and put a sparsity-inducing prior on them. This approach would be a less constrained version of the relaxation proposed in WellSVM [14] (where it is supposed that there is exactly one positive instance per positive bag) and could be achieved by an ℓ1 penalty or by using a Dirichlet prior (with a low α to promote sparsity).

3 An Iterative Algorithm for Weakly Labeled Learning

As explained in Section 2, a sufficient condition for guaranteeing that the β-risk is a convex upper bound of the 0-1 loss based risk, and that it is no worse than the traditional risk, is to fix Σ_{i=1}^m β_i^{−y_i} y_i h(x_i) = 0. However, this constraint depends on the labels. We overcome this problem by (i) iteratively learning a classifier minimizing the β-risk, most likely violating the constraint, and then (ii) learning a new matrix β that fulfills it. The algorithm is generic. It can be used in different weakly labeled settings and can be instantiated with different losses and regularizations, as we will do in the next section with SVMs. As the process is iterative, let β_t be the estimate of β at iteration t. At each iteration, our algorithm consists of two steps. We first learn a hypothesis h for the following problem P1:

h_{t+1} = P1(X, β_t) = argmin_h  c R_φ^{β_t}(X, h) + N(h),

which boils down to minimizing the N-regularized empirical surrogate β-risk over the training sample X of size m, where N, for instance, can take the form of an L1 or an L2 norm. Then, we find the optimal β of the following problem P2 for the points of X:

β_{t+1} = P2(X, h_{t+1}) = argmin_β  R_φ^β(X, h_{t+1})
  s.t. Σ_{i=1}^m β_i^{−y_i} (−y_i h_{t+1}(x_i)) = 0,
       β_i^{−1} + β_i^{+1} = 1, β_i^{−1} ≥ 0, β_i^{+1} ≥ 0, ∀i = 1..m.

For this step, a vector of labels is required. We choose to re-estimate it at each iteration according to the current value of β: we assign to each instance the most probable label, i.e., the σ corresponding to the largest β_i^σ.
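Schematically, and with placeholder solvers for P1 and P2 (the SVM instantiation follows in Section 4), the loop could be sketched in Python as follows; the stopping test on predictions is one possible stabilization criterion, not the only one.

```python
import numpy as np

def iterative_beta_learning(X, beta, solve_p1, solve_p2, eps=1e-3, max_iters=20):
    """Generic two-step loop of Section 3 (a sketch; solvers are placeholders).

    solve_p1(X, beta) -> hypothesis (a callable on X) minimizing the
        N-regularized beta-risk, i.e. problem P1;
    solve_p2(X, h, y) -> new beta matrix of shape (m, 2) satisfying the
        constraint of problem P2 for the re-estimated labels y.
    """
    h = solve_p1(X, beta)
    for _ in range(max_iters):
        # re-estimate labels as the most probable class under the current beta
        y = np.where(beta[:, 1] >= beta[:, 0], 1, -1)
        beta = solve_p2(X, h, y)
        h_new = solve_p1(X, beta)
        # one possible stabilization criterion: the predictions stopped moving
        if np.max(np.abs(h_new(X) - h(X))) < eps:
            return h_new, beta
        h = h_new
    return h, beta
```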
As an example, in this section we derive a new version of the Support Vector Machine problem, using the empirical surrogate ?-risk , that takes into account the knowledge provided for each training instance (through the matrix ?). The soft-margin ?-SVM optimization problem is a direct generalization of a standard soft-margin SVM and is defined as follows: m X  1 2 arg min k?k2 + c ?i-1 ?i-1 + ?i+1 ?i+1 2 ? i=1 s.t. ?(?T ?(xi ) + b) ? 1 ? ?i? ?i = 1..m, ? ? {?1, 1} ?i? ? 0 ?i = 1..m, ? ? {?1, 1} where ? ? X 0 is the vector defining the margin hyperplane and b its offset, ? : X ? X 0 a mapping function and c ? R a tuned hyper-parameter. In the rest of the paper, we will refer to K : X ?X ? R as the kernel function corresponding to ?, i.e. K(xi , xj ) = ?(xi )?(xj ). The corresponding Lagrangian dual problem is given by (the complete derivation is provided in the supplementary material): max ? ? m m m X X 1X X X X ? ? 0 ?i ??j ? K(xi , xj ) + ?i? 2 i=1 ?? j=1 0 i=1 ?? {-1,+1} ? ? {-1,+1} {-1,+1} s.t. 0 ? ?i? ? c?i? ?i = 1..m, ? ? {?1, 1} m X X ?i? ? = 0 ?i = 1..m, ? ? {?1, 1} i=1 ?? {-1,+1} which is concave w.r.t. ? as for the standard SVM. The ?-SVM formulation differs from the SVM one in two points: first, the number of Lagrangian multipliers is doubled, because we consider both positive and negative labels for each instance; second, the upper-bounds for ? are not the same for all instances but depend on the given matrix ?. Like the coefficient c in the classical formulation of SVM, those upper-bounds play the role of trade-off between under-fitting and over-fitting: the smaller they are, the more robust to outliers the learner is but the less it adapts to the data. It is then logical that the upper-bound for an instance i depends on ?i? because it reflects the reliability on the label ? for that instance: if the label ? is unlikely, its corresponding ?i? will be constrained to be null (and its adversary will have more chance to be selected as a support vector, as ?i? + ?i? ? = 1). Also, those points for which no label is more probable than the other (?i? ? 0.5) will have less importance in the learning process compared to those for which a label is almost certain. In order to fully exploit the advantages of our formulation, c has to be finite and bigger than 0. As a matter of fact, when c ? ? or c ? 0, the constraints become exactly those of the original formulation. 5 Experimental Results In the first part of this section, we present some experimental results obtained by adapting the iterative algorithm presented in Section 3 for semi-supervised learning and combining it with the previously derived ?-SVM . Note that some approaches based on SVMs have been already presented in the literature to address the problem of semi-supervised learning. Among them, TransductiveSVM [5] 6 iteratively learns a separator with the labeled instances, classifies a subset of the unlabeled instances and adds it to the training set. On the other hand, WellSVM [14] combines the classical SVM with a label generation strategy that allows one to learn the optimal separator, even when the training sample is not completely labeled, by convexly relaxing the original Mixed-Integer Programming problem. In [14], WellSVM has been shown to be very effective and better than TransductiveSVM and the state of the art. For this reason, we compare in this section ?-SVM to WellSVM. 
5 Experimental Results

In the first part of this section, we present experimental results obtained by adapting the iterative algorithm presented in Section 3 to semi-supervised learning and combining it with the previously derived β-SVM. Note that some SVM-based approaches have already been presented in the literature to address semi-supervised learning. Among them, TransductiveSVM [5] iteratively learns a separator with the labeled instances, classifies a subset of the unlabeled instances, and adds it to the training set. On the other hand, WellSVM [14] combines the classical SVM with a label-generation strategy that allows one to learn the optimal separator, even when the training sample is not completely labeled, by convexly relaxing the original mixed-integer programming problem. In [14], WellSVM has been shown to be very effective and better than TransductiveSVM and the state of the art. For this reason, we compare β-SVM to WellSVM in this section. In the second subsection, we present preliminary results in the noise-tolerant learning setting, showing how β-SVM behaves when facing data with label noise.

5.1 Iterative β-SVM for semi-supervised learning

We compare the performance of our method to that of WellSVM, which has been shown in [14] to perform on average better than both the state-of-the-art semi-supervised learning methods based on SVMs and the standard SVM. In a semi-supervised context, a set $X_l$ of labeled instances of size $m_l$ and a set $X_u$ of unlabeled instances of size $m_u$ are provided. The matrix β is initialized as follows: $\forall i = 1..m_l$ and $\forall \sigma \in \{-1, 1\}$, $\beta_i^{\sigma,0} = 1$ if $\sigma = y_i$ and 0 otherwise; $\forall i = m_l+1..m_l+m_u$ and $\forall \sigma \in \{-1, 1\}$, $\beta_i^{\sigma,0} = 0.5$. We then learn an optimal separator:
$$h^{t+1} = P_1(X_l \cup X_u, \beta^t) = \arg\min_h\; c_1 R^{\ell}_{\beta^t}(X_l, h) + c_2 R^{\ell}_{\beta^t}(X_u, h) + N(h).$$
Here $c_1$ and $c_2$ are balance constants between the labeled and unlabeled sets: when the number of unlabeled instances becomes greater than the number of labeled instances, we need to reduce the importance of the unlabeled set in the learning procedure, because of the risk that the labeled set is otherwise ignored. We consider the provided labels to be correct, so we keep the corresponding $\beta_l$ fixed during the iterations of the algorithm and estimate $\beta_u$ by optimizing $P_2(X_u, h^{t+1})$. The iterative algorithm with β-SVM is implemented in Python using Cvxopt (for optimizing β-SVM, http://cvxopt.org/) and Cvxpy (http://www.cvxpy.org/) with its Ecos solver [9]. For each dataset, we show in Figure 1 the accuracy of the two methods with an increasing proportion of labeled data. The different approaches are compared on the same kernel, either linear or gaussian, whichever gives the higher overall accuracy. As a matter of fact, the choice of the kernel depends on the geometry of the data, not on the learning method. For each proportion of labeled data, we perform 4-fold cross-validation and show the average accuracy over 10 iterations. Concerning the hyper-parameters of the different methods, we fix $c_2$ of β-SVM to $c_1 \frac{m_l}{m_u}$, $c_1$ of WellSVM to 1 as explained in [14], and all the other hyper-parameters ($c_1$ for β-SVM and $c_2$ for WellSVM) are tuned by cross-validation through grid search. As for the stopping criteria, we fix ε of β-SVM to $10^{-5} + 10^{-3}\|h\|_F$, ε of WellSVM to $10^{-3}$, and the maximal number of iterations to 20 for both methods. When using the gaussian kernel, the σ in $K(x_i, x_j) = \exp(-\|x_i - x_j\|_2^2/\sigma)$ is fixed to the mean distance between instances. Our method performs better than WellSVM, with few exceptions, and is more efficient in terms of CPU time: on the Australian dataset, the biggest dataset in number of features and instances, WellSVM is on average 30 times slower than our algorithm (without particular optimization efforts).

5.2 Preliminary results under label noise

We now briefly tackle another setting of the weakly labeled data field: noise-tolerant learning, the task of learning from data with noisy or uncertain labels. It has been shown in [3] that SVM learning is extremely sensitive to outliers, especially those lying close to the boundary. We study the sensitivity of β-SVM to label noise artificially introduced on the Ionosphere dataset. We consider two initialization strategies for β: the standard one, where $\beta^{y_i} = 1$ and $\beta^{-y_i} = 0$, and the 4-nn one, where $\beta^{\sigma}$ is set to the proportion of neighboring instances with label σ.
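A minimal sketch of the 4-nn initialization is given below. The use of scikit-learn's NearestNeighbors and the function name are illustrative choices we make here, not part of our implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_beta_init(X, y, k=4):
    """Initialize beta from the labels of the k nearest neighbors
    (a sketch of the '4-nn' strategy).

    X : (m, d) feature matrix; y : (m,) labels in {-1, +1}.
    Returns an (m, 2) matrix with columns [beta^{-1}, beta^{+1}].
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)          # idx[:, 0] is the point itself
    beta = np.empty((len(y), 2))
    for i, neigh in enumerate(idx[:, 1:]):
        pos = np.mean(y[neigh] == 1)   # proportion of positive neighbors
        beta[i] = (1.0 - pos, pos)     # rows sum to 1 by construction
    return beta
```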
In Figure 2, we draw the mean accuracy over 4 repetitions w.r.t. an increasing percentage (as a proportion of the smallest dataset) of two kinds of noise: symmetric noise, introduced by swapping the labels of instances belonging to different classes, and asymmetric noise, introduced by gradually changing the labels of the instances of one class. These preliminary results are encouraging and show that locally estimating the conditional class density to initialize the β matrix improves the robustness of our method to label noise.

[Figure 1: Comparison of the mean accuracies of WellSVM and β-SVM versus the percentage of labeled data on different UCI datasets. Panels: (a) Ionosphere, gaussian kernel; (b) Heart-statlog, linear kernel; (c) Liver, linear kernel; (d) Australian, gaussian kernel; (e) Pima, linear kernel; (f) Sonar, linear kernel; (g) Splice, gaussian kernel.]

[Figure 2: Comparison of the mean accuracy versus the percentage of noise of iterative β-SVM with different initializations of β: (a) symmetric noise; (b) asymmetric noise. The standard curve refers to the initialization $\beta^{y_i} = 1$, $\beta^{-y_i} = 0$, and the 4-nn curve to initializing $\beta^{\sigma}$ with the proportion of neighboring instances with label σ.]

6 Conclusion

This paper focuses on the problem of learning from weakly labeled data. We introduced the β-risk, which generalizes the standard empirical risk while allowing the integration of weak supervision. From the expression of the β-risk, we derived a generic algorithm for weakly labeled data and specialized it in an SVM-like context. The resulting β-SVM algorithm has been applied in two different weakly labeled settings, namely semi-supervised learning and learning with label noise, showing the advantages of the approach. The perspectives of this work are numerous and of two main kinds: covering new weakly labeled settings and studying theoretical guarantees. As proposed in Section 2.3, the β-risk can be used in various weakly labeled scenarios. This requires different strategies for the initialization and refinement of β, as well as proper priors for these parameters. Generalizing the proposed β-risk to a multi-class setting is a natural extension, since β is already a matrix of class probabilities. Another broad direction involves deriving robustness and convergence bounds for the algorithms built on the β-risk.

7 Acknowledgments

We thank the reviewers for their valuable remarks. We also thank the ANR projects SOLSTICE (ANR-13-BS02-01) and LIVES (ANR-15-CE230026-03).

References

[1] D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2(4):343-370, 1988.
[2] S. Ben-David, D. Loker, N. Srebro, and K. Sridharan. Minimizing the misclassification error rate using a surrogate convex loss. In Proceedings of the 29th International Conference on Machine Learning (ICML). icml.cc / Omnipress, 2012.
[3] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144-152. ACM, 1992.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] L. Bruzzone, M. Chi, and M. Marconcini. A novel transductive SVM for semisupervised classification of remote-sensing images.
IEEE Transactions on Geoscience and Remote Sensing, 44(11):3363-3373, 2006.
[6] M. Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1-3):253-285, 2002.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[8] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1):31-71, 1997.
[9] A. Domahidi, E. Chu, and S. Boyd. ECOS: An SOCP solver for embedded systems. In European Control Conference (ECC), pages 3071-3076. IEEE, 2013.
[10] Y. Freund, R. E. Schapire, et al. Experiments with a new boosting algorithm. In ICML, volume 96, pages 148-156, 1996.
[11] T. Hastie, R. Tibshirani, and J. Friedman. Unsupervised learning. Springer, 2009.
[12] A. Joulin and F. Bach. A convex relaxation for weakly supervised classifiers. arXiv preprint arXiv:1206.6413, 2012.
[13] M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 459-468. ACM, 1996.
[14] Y.-F. Li, I. W. Tsang, J. T. Kwok, and Z.-H. Zhou. Convex and scalable weakly labeled SVMs. The Journal of Machine Learning Research, 14(1):2151-2188, 2013.
[15] M. Lichman. UCI machine learning repository, 2013.
[16] N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems, pages 1196-1204, 2013.
[17] R. Nock and F. Nielsen. Bregman divergences and surrogates for learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):2048-2059, 2009.
[18] G. Patrini, F. Nielsen, R. Nock, and M. Carioni. Loss factorization, weakly supervised learning and label noise robustness. arXiv preprint arXiv:1602.02450, 2016.
[19] G. Patrini, R. Nock, T. Caetano, and P. Rivera. (Almost) no label no cry. In Advances in Neural Information Processing Systems, pages 190-198, 2014.
[20] L. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri. Are loss functions all the same? Neural Computation, 16(5):1063-1076, 2004.
[21] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614-622. ACM, 2008.
[22] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005.
Double Thompson Sampling for Dueling Bandits

Huasen Wu (University of California, Davis, hswu@ucdavis.edu), Xin Liu (University of California, Davis, xinliu@ucdavis.edu)

Abstract

In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As its name suggests, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison according to two sets of samples independently drawn from the posterior distribution. This simple algorithm applies to general Copeland dueling bandits, including Condorcet dueling bandits as a special case. For general Copeland dueling bandits, we show that D-TS achieves $O(K^2 \log T)$ regret. Moreover, using a back substitution argument, we refine the regret to $O(K \log T + K^2 \log\log T)$ in Condorcet dueling bandits and most practical Copeland dueling bandits. In addition, we propose an enhancement of D-TS, referred to as D-TS+, to reduce the regret in practice by carefully breaking ties. Experiments based on both synthetic and real-world data demonstrate that D-TS and D-TS+ significantly improve the overall performance, in terms of regret and robustness.

1 Introduction

The dueling bandit problem [1] is a variant of the classical multi-armed bandit (MAB) problem, where the feedback comes in the form of pairwise comparison. This model has attracted much attention as it can be applied in many systems such as information retrieval (IR) [2, 3], where user preferences are easier to obtain and typically more stable. Most earlier work [1, 4, 5] focuses on Condorcet dueling bandits, where there exists an arm, referred to as the Condorcet winner, that beats all other arms. Recent work [6, 7] turns to the more general and practical case of a Copeland winner (or winners), which is the arm (or arms) that beats the most other arms. Existing algorithms are mainly generalized from traditional MAB algorithms along two lines: 1) UCB (Upper Confidence Bound)-type algorithms, such as RUCB [4] and CCB [6]; and 2) MED (Minimum Empirical Divergence)-type algorithms, such as RMED [5] and CW-RMED/ECW-RMED [7].

In traditional MAB, an alternative effective solution is Thompson Sampling (TS) [8]. Its principle is to choose the optimal action that maximizes the expected reward according to a randomly drawn belief. TS has been successfully applied in traditional MAB [9, 10, 11, 12] and other online learning problems [13, 14]. In particular, empirical studies in [9] show that TS not only achieves lower regret than other algorithms in practice, but is also more robust as a randomized algorithm. In the wake of the success of TS in these online learning problems, a natural question is whether and how TS can be applied to dueling bandits to further improve the performance.

However, it is challenging to apply the standard TS framework to dueling bandits, because not all comparisons provide information about the system statistics. Specifically, a good learning algorithm for dueling bandits will eventually compare the winner against itself. However, comparing one arm against itself does not provide any statistical information, which is critical in TS for updating the posterior distribution. Thus, TS needs to be adjusted so that 1) comparing the winners against themselves is allowed, but 2) trapping in comparisons of a non-winner arm against itself is avoided.
In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandits, including both Condorcet dueling bandits and general Copeland dueling bandits. As its name suggests, D-TS typically selects both the first and the second candidates according to samples independently drawn from the posterior distribution. D-TS also utilizes the idea of confidence bounds to eliminate the likely non-winner arms, and thus avoids trapping in suboptimal comparisons. Compared to prior studies on dueling bandits, D-TS has both practical and theoretical advantages.

First, the double sampling structure of D-TS better suits the nature of dueling bandits. Launching two independent rounds of sampling provides us the opportunity to select the same arm in both rounds and thus to compare the winners against themselves. This double sampling structure also leads to more extensive utilization of TS (e.g., compared to RCS [3]), and significantly reduces the regret. In addition, this simple framework applies to general Copeland dueling bandits and achieves lower regret than existing algorithms such as CCB [6]. Moreover, as a randomized algorithm, D-TS is more robust in practice.

Second, this double sampling structure enables us to obtain theoretical bounds for the regret of D-TS. As noted in the traditional MAB literature [10, 15], theoretical analysis of TS is usually more difficult than that of UCB-type algorithms. The analysis in dueling bandits is even more challenging, because the selection of arms involves more factors and the two selected arms may be correlated. To address this issue, our D-TS algorithm draws the two sets of samples independently. Because their distributions are fully captured by historic comparison results, when the first candidate is fixed, the comparison between it and all other arms is similar to traditional MAB, and thus we can borrow ideas from traditional MAB. Using the properties of TS and confidence bounds, we show that D-TS achieves $O(K^2 \log T)$ regret for a general K-armed Copeland dueling bandit. More interestingly, the property that the sample distribution only depends on historic comparison results (but not on t) enables us to refine the regret using a back substitution argument, where we show that D-TS achieves $O(K \log T + K^2 \log\log T)$ regret in Condorcet dueling bandits and many practical Copeland dueling bandits.

Based on the analysis, we further refine the tie-breaking criterion in D-TS and propose its enhancement, called D-TS+. D-TS+ achieves the same theoretical regret bound as D-TS, but performs better in practice, especially when there are multiple winners. In summary, the main contributions of this paper are as follows:

- We propose a D-TS algorithm and its enhancement D-TS+ for general Copeland dueling bandits. The double sampling structure suits the nature of dueling bandits and leads to more extensive usage of TS, which significantly reduces the regret.
- We obtain theoretical regret bounds for D-TS and D-TS+. For general Copeland dueling bandits, we show that D-TS and D-TS+ achieve $O(K^2 \log T)$ regret. In Condorcet dueling bandits and most practical Copeland dueling bandits, we further refine the regret bound to $O(K \log T + K^2 \log\log T)$ using a back substitution argument.
- We evaluate the D-TS and D-TS+ algorithms through experiments based on both synthetic and real-world data.
The results show that D-TS and D-TS+ significantly improve the overall performance, in terms of regret and robustness, compared to existing algorithms.

2 Related Work

Early dueling bandit algorithms study finite-horizon settings, using "explore-then-exploit" approaches such as IF [1], BTM [16], and SAVAGE [17]. For infinite-horizon settings, recent work has generalized the traditional MAB algorithms to dueling bandits along two lines. First, RUCB [4] and CCB [6] are generalizations of UCB for Condorcet and general Copeland dueling bandits, respectively. In addition, [18] reduces dueling bandits to traditional MAB, which is then solved by UCB-type algorithms, called MultiSBM and Sparring. Second, [5] and [7] extend the MED algorithm to dueling bandits, where they present the lower bound on the regret and propose the corresponding optimal algorithms, including RMED for Condorcet dueling bandits [5], and CW-RMED and its computationally efficient version ECW-RMED for general Copeland dueling bandits [7]. Different from such existing work, we study algorithms for dueling bandits from the perspective of TS, which typically achieves lower regret and is more robust in practice.

Dating back to 1933, TS [8] is one of the earliest algorithms for the exploration/exploitation tradeoff. Nowadays, it has been applied in many variants of MAB [11, 12, 13] and other more complex problems, e.g., [14], due to its simplicity, good performance, and robustness [9]. Theoretical analysis of TS is much more difficult. Only recently, [10] proposed a logarithmic bound for the standard frequentist expected regret, whose constant factor was further improved in [15]. Moreover, [19, 20] derive bounds for its Bayesian expected regret through information-theoretic analysis. TS has been preliminarily considered for dueling bandits [3, 21]. In particular, recent work [3] proposes a Relative Confidence Sampling (RCS) algorithm that combines TS with RUCB [4] for Condorcet dueling bandits. Under RCS, the first arm is selected by TS while the second arm is selected according to its RUCB. Empirical studies demonstrate the performance improvement of using RCS in practice, but no theoretical bounds on the regret are provided.

3 System Model

We consider a dueling bandit problem with K (K ≥ 2) arms, denoted by $\mathcal{A} = \{1, 2, \ldots, K\}$. At each time-slot t > 0, a pair of arms $(a_t^{(1)}, a_t^{(2)})$ is displayed to a user and a noisy comparison outcome $w_t$ is obtained, where $w_t = 1$ if the user prefers $a_t^{(1)}$ to $a_t^{(2)}$, and $w_t = 2$ otherwise. We assume the user preference is stationary over time, and the distribution of comparison outcomes is characterized by the preference matrix $P = [p_{ij}]_{K \times K}$, where $p_{ij}$ is the probability that the user prefers arm i to arm j, i.e., $p_{ij} = \mathbb{P}\{i \succ j\}$, $i, j = 1, 2, \ldots, K$. We assume that the displaying order does not affect the preference, and hence $p_{ij} + p_{ji} = 1$ and $p_{ii} = 1/2$. We say that arm i beats arm j if $p_{ij} > 1/2$.

We study general Copeland dueling bandits, where the Copeland winner is defined as the arm (or arms) that maximizes the number of other arms it beats [6, 7]. Specifically, the Copeland score is defined as $\sum_{j \ne i} \mathbb{1}(p_{ij} > 1/2)$, and the normalized Copeland score as $\zeta_i = \frac{1}{K-1}\sum_{j \ne i} \mathbb{1}(p_{ij} > 1/2)$, where $\mathbb{1}(\cdot)$ is the indicator function. Let $\zeta^*$ be the highest normalized Copeland score, i.e., $\zeta^* = \max_{1 \le i \le K} \zeta_i$. Then the Copeland winner is defined as the arm (or arms) with the highest normalized Copeland score, i.e., $\mathcal{C}^* = \{i : 1 \le i \le K, \zeta_i = \zeta^*\}$.
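As an illustration of these definitions, the following short Python helper computes the normalized Copeland scores and the winner set from a given preference matrix. It is an expository utility we add here, not part of the algorithms below.

```python
import numpy as np

def copeland_winners(P):
    """Compute normalized Copeland scores zeta and the winner set C*.

    P : (K, K) preference matrix with P[i, j] = p_ij and P[i, i] = 0.5.
    """
    K = P.shape[0]
    beats = (P > 0.5).astype(float)   # 1(p_ij > 1/2)
    np.fill_diagonal(beats, 0.0)      # an arm never beats itself
    zeta = beats.sum(axis=1) / (K - 1)
    winners = np.flatnonzero(zeta == zeta.max())
    return zeta, winners
```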
Note that the Condorcet winner is a special case of a Copeland winner with $\zeta^* = 1$.

A dueling bandit algorithm Γ decides which pair of arms to compare depending on the historic observations. Specifically, define a filtration $\mathcal{H}_{t-1}$ as the history before t, i.e., $\mathcal{H}_{t-1} = \{a_\tau^{(1)}, a_\tau^{(2)}, w_\tau,\; \tau = 1, 2, \ldots, t-1\}$. Then a dueling bandit algorithm Γ is a function that maps $\mathcal{H}_{t-1}$ to $(a_t^{(1)}, a_t^{(2)})$, i.e., $(a_t^{(1)}, a_t^{(2)}) = \Gamma(\mathcal{H}_{t-1})$. The performance of a dueling bandit algorithm Γ is measured by its expected cumulative regret, defined as
$$R_\Gamma(T) = \zeta^* T - \frac{1}{2}\sum_{t=1}^{T} \mathbb{E}\big[\zeta_{a_t^{(1)}} + \zeta_{a_t^{(2)}}\big]. \tag{1}$$
The objective of Γ is then to minimize $R_\Gamma(T)$. As pointed out in [6], the results can be adapted to other regret definitions, because the above definition bounds the number of suboptimal comparisons.

4 Double Thompson Sampling

4.1 D-TS Algorithm

We present the D-TS algorithm for Copeland dueling bandits, as described in Algorithm 1 (the time index t is omitted in the pseudocode for brevity). As its name suggests, the basic idea of D-TS is to select both the first and the second candidates by TS. For each pair (i, j) with i ≠ j, we assume a beta prior distribution for its preference probability $p_{ij}$. These distributions are updated according to the comparison results $B_{ij}(t-1)$ and $B_{ji}(t-1)$, where $B_{ij}(t-1)$ (resp. $B_{ji}(t-1)$) is the number of time-slots when arm i (resp. j) beats arm j (resp. i) before t. D-TS selects the two candidates by sampling from the posterior distributions.

Specifically, at each time-slot t, the D-TS algorithm consists of two phases that select the first and the second candidates, respectively. When choosing the first candidate $a_t^{(1)}$, we first use the RUCB [4] of $p_{ij}$ to eliminate the arms that are unlikely to be the Copeland winner, resulting in a candidate set $\mathcal{C}_t$ (Lines 4 to 6). The algorithm then samples $\theta_{ij}^{(1)}(t)$ from the posterior beta distribution, and the first candidate $a_t^{(1)}$ is chosen by "majority voting", i.e., the arm within $\mathcal{C}_t$ that beats the most arms according to $\theta_{ij}^{(1)}(t)$ is selected (Lines 7 to 11). Ties are broken randomly here for simplicity; this will be refined later in Section 4.3. A similar idea is applied to select the second candidate $a_t^{(2)}$: new samples $\theta_{i a_t^{(1)}}^{(2)}(t)$ are generated, and the arm with the largest $\theta_{i a_t^{(1)}}^{(2)}(t)$ among all arms with $l_{i a_t^{(1)}} \le 1/2$ is selected as the second candidate (Lines 13 to 14).

The double sampling structure of D-TS is designed based on the nature of dueling bandits, i.e., at each time-slot, two arms are needed for comparison. Unlike RCS [3], D-TS selects both candidates using TS. This leads to more extensive utilization of TS and thus achieves much lower regret. Moreover, the two sets of samples are independently distributed, following the same posterior that is determined only by the comparison statistics $B_{ij}(t-1)$ and $B_{ji}(t-1)$. This property enables us to obtain an $O(K^2 \log T)$ regret bound and to further refine it by a back substitution argument, as discussed later.

Algorithm 1 D-TS for Copeland Dueling Bandits
1: Init: $B \leftarrow 0_{K \times K}$; // $B_{ij}$ is the number of time-slots that the user prefers arm i to j.
2: for t = 1 to T do
3:   // Phase 1: Choose the first candidate $a^{(1)}$
4:   $U := [u_{ij}]$, $L := [l_{ij}]$, where $u_{ij} = \frac{B_{ij}}{B_{ij}+B_{ji}} + \sqrt{\frac{\alpha \log t}{B_{ij}+B_{ji}}}$, $l_{ij} = \frac{B_{ij}}{B_{ij}+B_{ji}} - \sqrt{\frac{\alpha \log t}{B_{ij}+B_{ji}}}$ if $i \ne j$, and $u_{ii} = l_{ii} = 1/2$, ∀i; // $\frac{x}{0} := 1$ for any x.
5:   $\hat\zeta_i \leftarrow \frac{1}{K-1}\sum_{j \ne i}\mathbb{1}(u_{ij} > 1/2)$; // Upper bound of the normalized Copeland score.
6:   $\mathcal{C} \leftarrow \{i : \hat\zeta_i = \max_j \hat\zeta_j\}$;
7:   for i, j = 1, ..., K with i < j do
8:     Sample $\theta_{ij}^{(1)} \sim \mathrm{Beta}(B_{ij}+1, B_{ji}+1)$;
9:     $\theta_{ji}^{(1)} \leftarrow 1 - \theta_{ij}^{(1)}$;
10:  end for
11:  $a^{(1)} \leftarrow \arg\max_{i \in \mathcal{C}} \sum_{j \ne i}\mathbb{1}(\theta_{ij}^{(1)} > 1/2)$; // Choosing from C to eliminate likely non-winner arms; ties are broken randomly.
12:  // Phase 2: Choose the second candidate $a^{(2)}$
13:  Sample $\theta_{i a^{(1)}}^{(2)} \sim \mathrm{Beta}(B_{i a^{(1)}}+1, B_{a^{(1)} i}+1)$ for all $i \ne a^{(1)}$, and let $\theta_{a^{(1)} a^{(1)}}^{(2)} = 1/2$;
14:  $a^{(2)} \leftarrow \arg\max_{i:\, l_{i a^{(1)}} \le 1/2} \theta_{i a^{(1)}}^{(2)}$; // Choosing only from uncertain pairs.
15:  // Compare and Update
16:  Compare pair $(a^{(1)}, a^{(2)})$ and observe the result w;
17:  Update B: $B_{a^{(1)} a^{(2)}} \leftarrow B_{a^{(1)} a^{(2)}} + 1$ if w = 1, or $B_{a^{(2)} a^{(1)}} \leftarrow B_{a^{(2)} a^{(1)}} + 1$ if w = 2;
18: end for

We also note that the RUCB-based elimination (Lines 4 to 6) and the RLCB (Relative Lower Confidence Bound)-based elimination (Line 14) are essential in D-TS. Without these eliminations, the algorithm may trap in suboptimal comparisons. Consider one extreme case in Condorcet dueling bandits (a Borda winner may be more appropriate in this special case [22]; we mainly use it to illustrate the dilemma): assume arm 1 is the Condorcet winner with $p_{1j} = 0.501$ for all j > 1, and arm 2 is not the Condorcet winner, but has $p_{2j} = 1$ for all j > 2. Then for larger K (e.g., K > 4), without RUCB-based elimination, the algorithm may trap in $a_t^{(1)} = 2$ for a long time, because arm 2 is likely to receive a higher score than arm 1. This issue is addressed by RUCB-based elimination as follows: when chosen as the first candidate, arm 2 has a great probability of being compared with arm 1; after sufficient comparisons with arm 1, arm 2 will have $u_{21}(t) < 1/2$ with high probability; then arm 2 is likely to be eliminated, because arm 1 has $\hat\zeta_1(t) = 1 > \hat\zeta_2(t)$ with high probability. Similarly, RLCB-based elimination (Line 14, where we restrict attention to the arms with $l_{i a_t^{(1)}}(t) \le 1/2$) is important, especially for non-Condorcet dueling bandits. Specifically, $l_{i a_t^{(1)}}(t) > 1/2$ indicates that arm i beats $a_t^{(1)}$ with high probability. Thus, comparing $a_t^{(1)}$ and arm i brings little information gain and should be eliminated to minimize the regret.
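To make Algorithm 1 concrete, the following Python sketch implements one decision round of D-TS. The function name, the loop-based construction of U and L, and the random-number interface are our own illustrative choices, not the authors' released code.

```python
import numpy as np

def dts_step(B, t, alpha=0.51, rng=np.random.default_rng()):
    """One decision round of D-TS (a compact sketch of Algorithm 1).

    B : (K, K) win-count matrix, B[i, j] = # times arm i beat arm j.
    Returns the pair (a1, a2) of arms to compare at time t (t >= 1).
    """
    K = B.shape[0]
    N = B + B.T
    U = np.full((K, K), 0.5)
    L = np.full((K, K), 0.5)
    for i in range(K):
        for j in range(K):
            if i == j:
                continue
            if N[i, j] == 0:                 # x/0 := 1 convention
                U[i, j], L[i, j] = 1.0, 0.0
            else:
                mean = B[i, j] / N[i, j]
                rad = np.sqrt(alpha * np.log(t) / N[i, j])
                U[i, j], L[i, j] = mean + rad, mean - rad

    # Phase 1: RUCB-based elimination, then Thompson sampling + majority vote.
    zeta_hat = (U > 0.5).sum(axis=1) / (K - 1)   # diagonal is 0.5, never counted
    C = np.flatnonzero(zeta_hat == zeta_hat.max())
    theta1 = np.full((K, K), 0.5)
    for i in range(K):
        for j in range(i + 1, K):
            theta1[i, j] = rng.beta(B[i, j] + 1, B[j, i] + 1)
            theta1[j, i] = 1.0 - theta1[i, j]
    votes = (theta1 > 0.5).sum(axis=1)
    a1 = rng.choice(C[votes[C] == votes[C].max()])   # random tie-breaking

    # Phase 2: fresh samples against a1, restricted to uncertain pairs.
    theta2 = np.array([rng.beta(B[i, a1] + 1, B[a1, i] + 1) if i != a1 else 0.5
                       for i in range(K)])
    cand = np.flatnonzero(L[:, a1] <= 0.5)           # l_{i,a1} <= 1/2, includes a1
    a2 = cand[np.argmax(theta2[cand])]
    return a1, a2
```

After the comparison, the caller increments B[a1, a2] or B[a2, a1] according to the observed outcome w, exactly as in Lines 16-17 of Algorithm 1.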
(2) is conducted over all pairs (i, j) with pij < 1/2. Thus, Proposition 1 states that D-TS achieves O(K 2 log T ) regret in Copeland dueling bandits. To the best of our knowledge, this is the first theoretical bound for TS in dueling bandits. The scaling behavior of this bound with respect to T is order optimal, since a lower bound ?(log T ) has been shown in [7]. The refinement of the scaling behavior with respect to K will be discussed later. Proving Proposition 1 needs to bound the number of comparisons for all pairs (i, j) with i ? / C? (1) ? or j ? / C . When fixing the first candidate as at = i, the selection of the second candidate a(2) t is similar to a traditional K-armed bandit problem with expected utilities pji (j = 1, 2, . . . , K). However, the analysis is more complex here since different arms are eliminated differently depending on the value of pji . We prove Proposition 1 through Lemmas 1 to 3, which bound the number of comparisons for all suboptimal pairs (i, j) under different scenarios, i.e., pji < 1/2, pji > 1/2, and pji = 1/2 (j = i ? / C ? ), respectively. Lemma 1. Under D-TS, for an arbitrary constant  > 0 and one pair (i, j) with pji < 1/2, we have log T 1 (1) E[Nij (T )] ? (1 + ) + O( 2 ). (3) D(pji ||1/2)  Proof. We can prove this lemma by viewing the comparison between the first candidate arm i and its inferiors as a traditional MAB. In fact, it may be even simpler than that in [15] because under D-TS, arm j with pji < 1/2 is competing with arm i with pii = 1/2, which is known and fixed. Then we (1) can bound E[Nij (T )] using the techniques in [15]. Details can be found in Appendix B.1. Lemma 2. Under D-TS with ? > 0.5, for one pair (i, j) with pji > 1/2, we have 4? log T (1) E[Nij (T )] ? + O(1). ?2ji (1) (4) (2) Proof. We note that when at = i, arm j can be selected as at only when its RLCB lji (t) ? 1/2. (1) T Then we can bound E[Nij (T )] by O( 4??log ) similarly to the analysis of traditional UCB algorithms 2 ji [23]. Details can be found in Appendix B.2. Lemma 3. Under D-TS, for any arm i ? / C ? , we have X 1 1 1  E[Nii (T )] ? O(K) + ? 2 + 2 + 4 = O(K). ?ki ?ki D(1/2||pki ) ?ki (5) k:pki >1/2 Before proving Lemma 3, we present an important property for ??? (t) := max1?i?K ??i (t). Recall that ? ? is the maximum normalized Copeland score. Using the concentration property of RUCB (Lemma 6 in Appendix A), the following lemma shows that ??? (t) is indeed a UCB of ? ? .  log t  2? Lemma 4. For any ? > 0.5 and t > 0, P{??? (t) ? ? ? } ? 1 ? K + 1 t? ?+1/2 . log(?+1/2) Return to the proof of Lemma 3. To prove Lemma 3, we consider the cases of ??? (t) < ? ? and ??? (t) ? ? ? . The former case ??? (t) < ? ? can be bounded by Lemma 4. For the latter case, we (2) note that when ??? (t) ? ? ? , the event (a(1) t , at ) = (i, i) occurs only if: a) there exists at least one (2) k ? K with pki > 1/2, such that lki (t) ? 1/2; and b) ?ki (t) ? 1/2 for all k with lki (t) ? 1/2. In (1) (2) (1) (2) this case, we can bound the probability of (at , at ) = (i, i) by that of (at , at ) = (i, k), for k with pki > 1/2 but lki (t) ? 1/2, where the coefficient decays exponentially. Then we can bound E[Nii (T )] by O(1) similar to [15]. Details of proof can be found in Appendix B.4. The conclusion of Proposition 1 then follows by combining Lemmas 1 to 3. 5 4.2.2 Regret Bound Refinement In this section, we refine the regret bound for D-TS and reduce its scaling factor with respect to the number of arms K. We sort the arms for each i ? / C ? 
4.2.2 Regret Bound Refinement

In this section, we refine the regret bound for D-TS and reduce its scaling factor with respect to the number of arms K. For each $i \notin \mathcal{C}^*$, we sort the arms in descending order of $p_{ji}$, and let $(\sigma_i(1), \sigma_i(2), \ldots, \sigma_i(K))$ be a permutation of $(1, 2, \ldots, K)$ such that $p_{\sigma_i(1),i} \ge p_{\sigma_i(2),i} \ge \ldots \ge p_{\sigma_i(K),i}$. In addition, for a Copeland winner $i^* \in \mathcal{C}^*$, let $L_C = \sum_{j=1}^{K}\mathbb{1}(p_{ji^*} > 1/2)$ be the number of arms that beat arm $i^*$. To refine the regret, we introduce an additional no-tie assumption:

Assumption 2: For each arm $i \notin \mathcal{C}^*$, $p_{\sigma_i(L_C+1),i} > p_{\sigma_i(j),i}$ for all $j > L_C + 1$.

We present a refined regret bound for D-TS as follows:

Theorem 1. When applying D-TS with α > 0.5 in a Copeland dueling bandit with a preference matrix $P = [p_{ij}]_{K \times K}$ satisfying Assumptions 1 and 2, its regret is bounded as
$$R_{\text{D-TS}}(T) \le \sum_{i \in \mathcal{C}^*}\sum_{j:\, p_{ji} > 1/2}\frac{4\alpha \log T}{\Delta_{ji}^2} + \sum_{i \notin \mathcal{C}^*}\Big[\sum_{j:\, p_{ji} < 1/2}(1+\epsilon)\frac{\log T}{D(p_{ji}\|1/2)} + \sum_{j=1}^{L_C+1}\frac{4\alpha \log T}{\Delta_{\sigma_i(j),i}^2}\Big]$$
$$\quad + \beta(1+\epsilon)^2 \sum_{i \notin \mathcal{C}^*}\sum_{j=L_C+2}^{K}\frac{\log\log T}{D(p_{\sigma_i(j),i}\|p_{\sigma_i(L_C+1),i})} + O(K^3) + O\Big(\frac{K^2}{\epsilon^2}\Big), \tag{6}$$
where β > 2 and ε > 0 are constants, and $D(\cdot\|\cdot)$ is the KL divergence.

In (6), the first term corresponds to the regret when the first candidate $a_t^{(1)}$ is a winner, and is $O(K|\mathcal{C}^*|\log T)$. The second term corresponds to the comparisons between a non-winner arm and its first $L_C + 1$ superiors, and is bounded by $O(K(L_C+1)\log T)$. The remaining terms correspond to the comparisons between a non-winner arm and the remaining arms, and are bounded by $O(K^2 \log\log T)$. As demonstrated in [6], $L_C$ is relatively small compared to K and can be viewed as a constant. Thus, the total regret is bounded as $R_{\text{D-TS}}(T) = O(K\log T + K^2\log\log T)$. This asymptotic trend is particularly easy to see in Condorcet dueling bandits, where $L_C = 0$.

Comparing Eq. (6) with Eq. (2), the difference lies in the third and fourth terms of (6), which refine the regret of comparing a suboptimal arm with its last $K - L_C - 1$ inferiors to $O(\log\log T)$. Thus, to prove Theorem 1, it suffices to show the following additional lemma:

Lemma 5. Under Assumptions 1 and 2, for any suboptimal arm $i \notin \mathcal{C}^*$ and $j > L_C + 1$, we have
$$\mathbb{E}[N_{i\sigma_i(j)}^{(1)}(T)] \le \beta(1+\epsilon)^2\frac{\log\log T}{D(p_{\sigma_i(j),i}\|p_{\sigma_i(L_C+1),i})} + O(K) + O\Big(\frac{1}{\epsilon^2}\Big), \tag{7}$$
where β > 2 and ε > 0 are constants.

Proof. We prove this lemma using a back substitution argument. The intuition is that when fixing the first candidate as $a_t^{(1)} = i$, the comparison between $a_t^{(1)}$ and the other arms is similar to a traditional MAB with expected utilities $p_{ji}$ (1 ≤ j ≤ K). Let $N_i^{(1)}(T) = \sum_{t=1}^{T}\mathbb{1}(a_t^{(1)} = i)$ be the number of time-slots when this type of MAB is played. Using the fact that the distribution of the samples only depends on the historic comparison results (but not on t), we can show that $\mathbb{E}[N_{i,\sigma_i(j)}^{(1)}(T) \mid N_i^{(1)}(T)] = O(\log N_i^{(1)}(T))$, which holds for any $N_i^{(1)}(T)$. We have shown that $\mathbb{E}[N_i^{(1)}(T)] = O(K\log T)$ for any $i \notin \mathcal{C}^*$ when proving Proposition 1. Then, substituting the bound on $\mathbb{E}[N_i^{(1)}(T)]$ back and using the concavity of the $\log(\cdot)$ function, we have $\mathbb{E}[N_{i,\sigma_i(j)}^{(1)}(T)] = \mathbb{E}\big[\mathbb{E}[N_{i,\sigma_i(j)}^{(1)}(T) \mid N_i^{(1)}(T)]\big] \le O(\log \mathbb{E}[N_i^{(1)}(T)]) = O(\log\log T + \log K)$. Details can be found in Appendix C.1.

4.3 Further Improvement: D-TS+

D-TS is a TS framework for dueling bandits, and its performance can be improved by refining certain components of it. In this section, we propose an enhanced version of D-TS, referred to as D-TS+, that carefully breaks ties to reduce the regret. Note that by randomly breaking ties (Line 11 in Algorithm 1), D-TS tends to explore all potential winners.
This may be desirable in certain applications such as restaurant recommendation, where 6 users may not want to stick to a single winner. However, because of this, the regret of D-TS scales with the number of winners |C ? | as shown in Theorem 1. To further reduce the regret, we can break the ties according to estimated regret. (1) Specifically, with samples ?ij (t), the normalized Copeland score for each arm i can be estiP (1) 1 ? mated as ?i (t) = K?1 j6=i 1(?ij (t) > 1/2). Then the maximum normalized Copeland score is   ??? (t) = maxi ??i (t), and the loss of comparing arm i and arm j is r?ij (t) = ??? (t) ? 1 ??i (t) + ??j (t) . 2 T For pij 6= 1/2, we need about ?( D(plog ) time-slots to distinguish it from 1/2 [5]. Thus, when ij ||1/2) choosing i as the first candidate, the regret of comparing it with all other arms can be estimated by (1) ? (1) (t) = P (1) R r? (t)/D(?ij (t)||1/2). We propose the following D-TS+ algorithm that i j:? (t)6=1/2 ij ij ? (1) (t). breaks the ties to minimize R i D-TS+ : Implement the same operations as D-TS, except for the selection of the first candidate (Line 11 in Algorithm 1) is replaced by the following two steps: A(1) ? {i ? C : ?i = max i?C (1) a X (1) 1(?ij > 1/2)}; j6=i ? (1) ; ? arg min R i i?A(1) D-TS+ only changes the tie-breaking criterion in selecting the first candidate. Thus, the regret bound of D-TS directly applies to D-TS+ : Corollary 1. The regret of D-TS+ , RD-TS+ (T ), satisfies inequality (6) under Assumptions 1 and 2. Corollary 1 provides an upper bound for the regret of D-TS+ . In practice, however, D-TS+ performs better than D-TS in the scenarios with multiple winners, as we can see in Section 5 and Appendix D. Our conjecture is that with this regret-minimization criterion, the D-TS+ algorithm tends to focus on one of the winners (if there is no tie in terms of expected regret), and thus reduces the first term in (6) from O(K|C ? | log T ) to O(K log T ). The proof of this conjecture requires properties for the evolution of the statistics for all arms and the majority voting results based on the Thompson samples, and is complex. This is left as part of our future work. In the above D-TS+ algorithm, we only consider the regret of choosing i as the first candidate. From Theorem 1, we know that comparing other arms with their superiors will also result in ?(log T ) regret. Thus, although the current D-TS+ algorithm performs well in most practical scenarios, one ? (1) (t). may further improve its performance by taking these additional comparisons into account in R i 5 Experiments To evaluate the proposed D-TS and D-TS+ algorithms, we run experiments based on synthetic and real-world data. Here we present the results for experiments based on the Microsoft Learning to Rank (MSLR) dataset [24], which provides the relevance for queries and ranked documents. Based on this dataset, [6] derives a preference matrix for 136 rankers, where each ranker is a function that maps a user?s query to a document ranking and can be viewed as one arm in dueling bandits. We use the two 5-armed submatrices in [6], one for Condorcet dueling bandit and the other for non-Condorcet dueling bandit. More experiments and discussions can be found in Appendix D 2 . We compare D-TS and D-TS+ with the following algorithms: BTM [16], SAVAGE [17], Sparring [18], RUCB [4], RCS [3], CCB [6], SCB [6], RMED1 [5], and ECW-RMED [7]. For BTM, we set the relaxed factor ? = 1.3 as [16]. For algorithms using RUCB and RLCB, including D-TS and D-TS+ , we set the scale factor ? 
5 Experiments

To evaluate the proposed D-TS and D-TS+ algorithms, we run experiments based on synthetic and real-world data. Here we present the results of experiments based on the Microsoft Learning to Rank (MSLR) dataset [24], which provides relevance judgments for queries and ranked documents. Based on this dataset, [6] derives a preference matrix for 136 rankers, where each ranker is a function that maps a user's query to a document ranking and can be viewed as one arm in dueling bandits. We use the two 5-armed submatrices from [6], one for a Condorcet dueling bandit and the other for a non-Condorcet dueling bandit. More experiments and discussions can be found in Appendix D; source code is available at https://github.com/HuasenWu/DuelingBandits.

We compare D-TS and D-TS+ with the following algorithms: BTM [16], SAVAGE [17], Sparring [18], RUCB [4], RCS [3], CCB [6], SCB [6], RMED1 [5], and ECW-RMED [7]. For BTM, we set the relaxed factor γ = 1.3 as in [16]. For algorithms using RUCB and RLCB, including D-TS and D-TS+, we set the scale factor α = 0.51. For RMED1, we use the same settings as [5], and for ECW-RMED, the same settings as [7]. For the "explore-then-exploit" algorithms, BTM and SAVAGE, each point is obtained by resetting the time horizon to the corresponding value. The results are averaged over 500 independent experiments, where in each experiment the arms are randomly shuffled to prevent algorithms from exploiting special structures of the preference matrix.

In Condorcet dueling bandits, our D-TS and D-TS+ algorithms achieve almost the same performance and both perform much better than existing algorithms, as shown in Fig. 1(a). In particular, compared with RCS, we can see that the full utilization of TS in D-TS and D-TS+ significantly reduces the regret.

[Figure 1: Regret in the MSLR dataset: (a) K = 5, Condorcet; (b) K = 5, non-Condorcet. In (b), there are 3 Copeland winners with normalized Copeland score ζ* = 3/4.]

[Figure 2: Standard deviation (STD) of regret for T = 10^6 (normalized by R_ECW-RMED(T)), for the K = 5 Condorcet and non-Condorcet datasets.]

Compared with RMED1 and ECW-RMED, our D-TS and D-TS+ algorithms also perform better. [5] has shown that RMED1 is optimal in Condorcet dueling bandits, not only in the sense of asymptotic order, but also in the coefficients of the regret bound. The simulation results show that D-TS and D-TS+ not only achieve a similar slope to RMED1/ECW-RMED, but also converge faster to the asymptotic regime and thus achieve much lower regret. This inspires us to further refine the regret bounds for D-TS and D-TS+ in the future.

In non-Condorcet dueling bandits, as shown in Fig. 1(b), D-TS and D-TS+ significantly reduce the regret compared to the UCB-type algorithm CCB (e.g., the regret of D-TS+ is less than 10% of that of CCB). Compared with ECW-RMED, D-TS achieves higher regret, mainly because it randomly explores all Copeland winners due to the random tie-breaking rule. With a regret-minimization tie-breaking rule, D-TS+ further reduces the regret and outperforms ECW-RMED on this dataset. Moreover, as randomized algorithms, D-TS and D-TS+ are more robust to the preference probabilities. As shown in Fig. 2, D-TS and D-TS+ have much smaller regret STD than ECW-RMED on the non-Condorcet dataset, where certain preference probabilities (for different arms) are close to 1/2. In particular, the STD of regret for ECW-RMED is almost 200% of its mean value, while it is only 13.16% for D-TS+. In addition, as shown in Appendix D.2.3, D-TS and D-TS+ are also robust to delayed feedback, which is typically batched and provided periodically in practice.

Overall, D-TS and D-TS+ significantly outperform all existing algorithms, with the exception of ECW-RMED. Compared to ECW-RMED, D-TS+ achieves much lower regret in the Condorcet case, lower or comparable regret in the non-Condorcet case, and much more robustness in terms of regret STD and delayed feedback. Thus, the simplicity, good performance, and robustness of D-TS and D-TS+ make them good algorithms in practice.
6 Conclusions and Future Work

In this paper, we study TS algorithms for dueling bandits. We propose a D-TS algorithm and its enhanced version D-TS+ for general Copeland dueling bandits, including Condorcet dueling bandits as a special case. Our study reveals desirable properties of D-TS and D-TS+ from both theoretical and practical perspectives. Theoretically, we show that the regret of D-TS and D-TS+ is bounded by $O(K^2 \log T)$ in general Copeland dueling bandits, and can be refined to $O(K\log T + K^2\log\log T)$ in Condorcet dueling bandits and most practical Copeland dueling bandits. Practically, experimental results demonstrate that these simple algorithms achieve significantly better overall performance than existing algorithms: D-TS and D-TS+ typically achieve much lower regret in practice and are robust to many practical factors, such as the preference matrix and feedback delay.

Although logarithmic regret bounds have been obtained for D-TS and D-TS+, our analysis relies heavily on the properties of RUCB/RLCB, and the regret bounds are likely loose. In fact, we see from experiments that RUCB-based elimination seldom occurs under most practical settings. We will further refine the regret bounds by investigating the properties of TS-based majority voting. Moreover, results from recent work such as [7] may be leveraged to improve TS algorithms. Last, it is also an interesting future direction to study D-TS-type algorithms for dueling bandits with other definitions of winners.

Acknowledgements: This research was supported in part by NSF Grants CCF-1423542, CNS-1457060, and CNS-1547461. The authors would like to thank Prof. R. Srikant (UIUC), Prof. Shipra Agrawal (Columbia University), Masrour Zoghi (University of Amsterdam), and Dr. Junpei Komiyama (University of Tokyo) for their helpful discussions and suggestions.

References

[1] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 78(5):1538-1556, 2012.
[2] Y. Yue and T. Joachims. Interactively optimizing information retrieval systems as a dueling bandits problem. In International Conference on Machine Learning (ICML), pages 1201-1208, 2009.
[3] M. Zoghi, S. A. Whiteson, M. de Rijke, and R. Munos. Relative confidence sampling for efficient on-line ranker evaluation. In ACM International Conference on Web Search and Data Mining, pages 73-82, 2014.
[4] M. Zoghi, S. Whiteson, R. Munos, and M. de Rijke. Relative upper confidence bound for the k-armed dueling bandit problem. In International Conference on Machine Learning (ICML), pages 10-18, 2014.
[5] J. Komiyama, J. Honda, H. Kashima, and H. Nakagawa. Regret lower bound and optimal algorithm in dueling bandit problem. In Proceedings of the Conference on Learning Theory, 2015.
[6] M. Zoghi, Z. S. Karnin, S. Whiteson, and M. de Rijke. Copeland dueling bandits. In Advances in Neural Information Processing Systems, pages 307-315, 2015.
[7] J. Komiyama, J. Honda, and H. Nakagawa. Copeland dueling bandit problem: Regret lower bound, optimal algorithm, and computationally efficient algorithm. In International Conference on Machine Learning (ICML), 2016.
[8] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, pages 285-294, 1933.
[9] O. Chapelle and L. Li. An empirical evaluation of Thompson Sampling. In Advances in Neural Information Processing Systems, pages 2249-2257, 2011.
[10] S. Agrawal and N. Goyal. Analysis of Thompson Sampling for the multi-armed bandit problem. In Conference on Learning Theory (COLT), 2012.
[11] J. Komiyama, J. Honda, and H. Nakagawa. Optimal regret analysis of Thompson Sampling in stochastic multi-armed bandit problem with multiple plays. In International Conference on Machine Learning (ICML), 2015.
[12] Y. Xia, H. Li, T. Qin, N. Yu, and T.-Y. Liu. Thompson sampling for budgeted multi-armed bandits. In International Joint Conference on Artificial Intelligence, 2015.
[13] A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In International Conference on Machine Learning (ICML), pages 100-108, 2014.
[14] A. Gopalan and S. Mannor. Thompson sampling for learning parameterized Markov decision processes. In Proceedings of the Conference on Learning Theory, pages 861-898, 2015.
[15] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson Sampling. In International Conference on Artificial Intelligence and Statistics, pages 99-107, 2013.
[16] Y. Yue and T. Joachims. Beat the mean bandit. In International Conference on Machine Learning (ICML), pages 241-248, 2011.
[17] T. Urvoy, F. Clerot, R. Féraud, and S. Naamane. Generic exploration and k-armed voting bandits. In International Conference on Machine Learning (ICML), pages 91-99, 2013.
[18] N. Ailon, Z. Karnin, and T. Joachims. Reducing dueling bandits to cardinal bandits. In Proceedings of the 31st International Conference on Machine Learning, pages 856-864, 2014.
[19] D. Russo and B. Van Roy. An information-theoretic analysis of Thompson Sampling. arXiv preprint arXiv:1403.5341, 2014.
[20] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. Mathematics of Operations Research, 39(4):1221-1243, 2014.
[21] N. Welsh. Thompson sampling for the dueling bandits problem. In Large-Scale Online Learning and Decision Making (LSOLDM) Workshop, 2012. Available at http://videolectures.net/lsoldm2012_welsh_bandits_problem/.
[22] K. Jamieson, S. Katariya, A. Deshpande, and R. Nowak. Sparse dueling bandits. In Conference on Learning Theory (COLT), 2015.
[23] S. Bubeck. Bandits Games and Clustering Foundations. PhD thesis, Université des Sciences et Technologies de Lille - Lille I, 2010.
[24] Microsoft Research. Microsoft Learning to Rank Datasets. http://research.microsoft.com/enus/projects/mslr/, 2010.