Large Margin Component Analysis

Lorenzo Torresani, Riya, Inc. (lorenzo@riya.com)
Kuang-chih Lee, Riya, Inc. (kclee@riya.com)

Abstract

Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) classification. In problems involving thousands of features, distance learning algorithms cannot be used due to overfitting and high computational complexity. In such cases, previous work has relied on a two-step solution: first apply dimensionality reduction methods to the data, and then learn a metric in the resulting low-dimensional subspace. In this paper we show that better classification performance can be achieved by unifying the objectives of dimensionality reduction and metric learning. We propose a method that solves for the low-dimensional projection of the inputs that minimizes a metric objective aimed at separating points in different classes by a large margin. This projection is defined by a significantly smaller number of parameters than metrics learned in input space, and thus our optimization reduces the risk of overfitting. Theory and results are presented for both a linear and a kernelized version of the algorithm. Overall, we achieve classification rates similar, and in several cases superior, to those of support vector machines.

1 Introduction

The technique of k-nearest neighbor (kNN) classification is one of the most popular classification algorithms. Several reasons account for the widespread use of this method: it is straightforward to implement, it generally leads to good recognition performance thanks to the non-linearity of its decision boundaries, and its complexity is independent of the number of classes. In addition, unlike most alternatives, kNN can be applied even in scenarios where not all categories are given at the time of training, such as, for example, in face verification applications where the subjects to be recognized are not known in advance.

The distance metric defining the neighbors of a query point plays a fundamental role in the accuracy of kNN classification. In most cases Euclidean distance is used as a similarity measure. This choice is logical when it is not possible to study the statistics of the data prior to classification or when it is fair to assume that all features are equally scaled and equally relevant. However, in most cases the data is distributed such that distance analysis along some specific directions of the feature space is more informative than along others. In such cases, and when training data is available in advance, distance metric learning [5, 10, 4, 1, 9] has been shown to yield significant improvement in kNN classification. The key idea of these methods is to apply transformations to the data in order to emphasize the most discriminative directions. Euclidean distance computation in the transformed space is then equivalent to a non-uniform metric analysis in the original input space.

In this paper we are interested in cases where the data to be used for classification is very high-dimensional. An example is classification of imagery data, which often involves input spaces of thousands of dimensions, corresponding to the number of pixels. Metric learning in such high-dimensional spaces cannot be carried out due to overfitting and high computational complexity. In these scenarios, even kNN classification is prohibitively expensive in terms of storage and computational costs.
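To make the equivalence noted above concrete, the following minimal NumPy sketch (illustrative dimensions and variable names, not code from the paper) checks that squared Euclidean distance after a linear transform $L$ matches the quadratic form under the induced metric $M = L^T L$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 2
L = rng.standard_normal((d, D))      # d x D linear transform
x, y = rng.standard_normal(D), rng.standard_normal(D)

M = L.T @ L                          # induced (non-uniform) metric in input space
dist_transformed = np.sum((L @ (x - y)) ** 2)   # ||L(x - y)||^2
dist_metric = (x - y) @ M @ (x - y)             # (x - y)^T M (x - y)
assert np.isclose(dist_transformed, dist_metric)
```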
The traditional solution is to apply dimensionality reduction methods to the data and then learn a suitable metric in the resulting low-dimensional subspace. For example, Principal Component Analysis (PCA) can be used to compute a linear mapping that reduces the data to tractable dimensions. However, dimensionality reduction methods generally optimize objectives unrelated to classification and, as a consequence, might generate representations that are significantly less discriminative than the original data. Thus, metric learning within the subspace might lead to suboptimal similarity measures. In this paper we show that better performance can be achieved by directly solving for a low-dimensional embedding that optimizes a measure of kNN classification performance. Our approach is inspired by the solution proposed by Weinberger et al. [9]. Their technique learns a metric that attempts to shrink distances of neighboring similarly-labeled points and to separate points in different classes by a large margin. Our contribution over previous work is twofold:

1. We describe the Large Margin Component Analysis (LMCA) algorithm, a technique that solves directly for a low-dimensional embedding of the data such that Euclidean distance in this space minimizes the large margin metric objective described in [9]. Our approach solves for only $D \cdot d$ unknowns, where $D$ is the dimensionality of the inputs and $d$ is the dimensionality of the target space. By contrast, the algorithm of Weinberger et al. [9] learns a Mahalanobis distance of the inputs, which requires solving for a $D \times D$ matrix using iterative semidefinite programming methods. This optimization is unfeasible for large values of $D$.

2. We propose a technique that learns Mahalanobis distance metrics in nonlinear feature spaces. Our approach combines the goal of dimensionality reduction with a novel "kernelized" version of the metric learning objective of Weinberger et al. [9]. We describe an algorithm that optimizes this combined objective directly. We demonstrate that, even when data is low-dimensional and dimensionality reduction is not needed, this technique can be used to learn nonlinear metrics leading to significant improvement in kNN classification accuracy over [9].

2 Linear Dimensionality Reduction for Large Margin kNN Classification

In this section we briefly review the algorithm presented in [9] for metric learning in the context of kNN classification. We then describe how this approach can be generalized to compute low-dimensional projections of the inputs via a novel direct optimization. A fundamental characteristic of kNN is that its performance does not depend on linear separability of classes in input space: in order to achieve accurate kNN classification it is sufficient that the majority of the k-nearest points of each test example have the correct label. The work of Weinberger et al. [9] exploits this property by learning a linear transformation of the input space that aims at creating consistently labeled k-nearest neighborhoods, i.e. clusters where each training example and its k-nearest points have the same label and where differently labeled points are distanced by an additional safety margin. Specifically, given $n$ input examples $x_1, \ldots, x_n$ in $\mathbb{R}^D$ and corresponding class labels $y_1, \ldots, y_n$, the technique in [9] learns the $D \times D$ transformation matrix $L$ that optimizes the following objective function:

$$\epsilon(L) = \sum_{ij} \eta_{ij} \|L(x_i - x_j)\|^2 + c \sum_{ijl} \eta_{ij} (1 - y_{il})\, h\!\left(\|L(x_i - x_j)\|^2 - \|L(x_i - x_l)\|^2 + 1\right), \quad (1)$$

where $\eta_{ij} \in \{0, 1\}$ is a binary variable indicating whether example $x_j$ is one of the k-closest points of $x_i$ that share the same label $y_i$, $c$ is a positive constant, $y_{il} \in \{0, 1\}$ is 1 iff $y_i = y_l$, and $h(s) = \max(s, 0)$ is the hinge function. The objective $\epsilon(L)$ consists of two contrasting terms. The first aims at pulling closer together points sharing the same label that were neighbors in the original space. The second term encourages distancing each example $x_i$ from differently labeled points by an amount equal to 1 plus the distance from $x_i$ to any of its k similarly-labeled closest points. This term corresponds to a margin condition similar to that of SVMs and is used to improve generalization. The constant $c$ controls the relative importance of these two competing terms and can be chosen via cross validation. Upon optimization of $\epsilon(L)$, a test example $x_q$ is classified according to the kNN rule applied to its projection $\tilde{x}_q = L x_q$, using Euclidean distance as the metric. Equivalently, such classification can be interpreted as kNN classification in the original input space under the Mahalanobis distance metric induced by the matrix $M = L^T L$.

Although Equation 1 is non-convex in $L$, it can be rewritten as a semidefinite program $\epsilon(M)$ in terms of the metric $M$ [9]. Thus, optimizing the objective in $M$ guarantees convergence to the global minimum, regardless of initialization. When data is very high-dimensional, minimization of $\epsilon(M)$ using semidefinite programming methods is impractical because of slow convergence and overfitting problems. In such cases, [9] proposes applying dimensionality reduction methods, such as PCA, followed by metric learning within the resulting low-dimensional subspace. As outlined above, this procedure leads to suboptimal metric learning.

In this paper we propose an alternative approach that solves jointly for dimensionality reduction and metric learning. The key idea is to choose the transformation $L$ in Equation 1 to be a nonsquare matrix of size $d \times D$, with $d \ll D$. Thus $L$ defines a mapping from the high-dimensional input space to a low-dimensional embedding. Euclidean distance in this low-dimensional embedding is equivalent to Mahalanobis distance in the original input space under the rank-deficient metric $M = L^T L$ ($M$ now has rank at most $d$). Unfortunately, optimization of $\epsilon(M)$ subject to rank constraints on $M$ leads to a minimization problem that is no longer convex [8] and that is awkward to solve. Here we propose an approach for minimizing the objective that differs from the one used in [9]. The idea is to optimize Equation 1 directly with respect to the nonsquare matrix $L$. We argue that minimizing the objective with respect to $L$, rather than with respect to the rank-deficient $D \times D$ matrix $M$, offers several advantages. First, our optimization involves only $d \cdot D$ rather than $D^2$ unknowns, which considerably reduces the risk of overfitting. Second, the optimal rectangular matrix $L$ computed with our method automatically satisfies the rank constraints on $M$ without requiring the solution of difficult constrained minimization problems. Although the objective optimized by our method is also not convex, we experimentally demonstrate that our solution converges consistently to better metrics than those computed via the application of PCA followed by subspace distance learning (see Section 4). We minimize $\epsilon(L)$ using gradient-based optimizers, such as conjugate gradient methods.
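The following sketch shows how the objective of Equation 1 could be evaluated; it is an illustrative NumPy reconstruction (the function name, arguments, and naive loops are our own, not the authors' implementation):

```python
import numpy as np

def lmca_objective(L, X, y, eta, c=1.0):
    """Illustrative evaluation of Equation 1.

    L   : (d, D) projection matrix
    X   : (n, D) inputs
    y   : (n,) integer labels
    eta : (n, n) binary matrix; eta[i, j] = 1 if x_j is a same-label
          target neighbor of x_i
    """
    y = np.asarray(y)
    Z = X @ L.T                                       # project all points: (n, d)
    # squared distances ||L(x_i - x_j)||^2 for all pairs
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    pull = (eta * sq).sum()                           # pull target neighbors close
    push = 0.0
    for i in range(len(y)):
        for j in np.flatnonzero(eta[i]):
            margins = sq[i, j] - sq[i] + 1.0          # over all candidates l
            push += np.maximum(margins[y != y[i]], 0.0).sum()   # hinge h(s)
    return pull + c * push
```

Feeding this function to a generic gradient-based optimizer (finite differences, or the analytic gradient given next) is one simple way to realize the direct minimization over $L$ described above.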
Differentiating $\epsilon(L)$ with respect to the transformation matrix $L$ gives the following gradient for the update rule:

$$\frac{\partial \epsilon(L)}{\partial L} = 2L \sum_{ij} \eta_{ij} (x_i - x_j)(x_i - x_j)^T + 2cL \sum_{ijl} \eta_{ij} (1 - y_{il}) \left[ (x_i - x_j)(x_i - x_j)^T - (x_i - x_l)(x_i - x_l)^T \right] h'\!\left(\|L(x_i - x_j)\|^2 - \|L(x_i - x_l)\|^2 + 1\right) \quad (2)$$

We handle the non-differentiability of $h(s)$ at $s = 0$ by adopting a smooth hinge function as in [8].

3 Nonlinear Feature Extraction for Large Margin kNN Classification

In the previous section we described an algorithm that jointly solves for linear dimensionality reduction and metric learning. We now describe how to "kernelize" this method in order to compute non-linear features of the inputs that optimize our distance learning objective. Our approach learns a low-rank Mahalanobis distance metric in a high-dimensional feature space $F$, related to the inputs by a nonlinear map $\phi : \mathbb{R}^D \to F$. We restrict our analysis to nonlinear maps $\phi$ for which there exist kernel functions $k$ that can be used to compute the feature inner products without carrying out the map, i.e. such that $k(x_i, x_j) = \phi_i^T \phi_j$, where for brevity we denote $\phi_i = \phi(x_i)$. We modify our objective $\epsilon(L)$ by substituting inputs $x_i$ with features $\phi(x_i)$ in Equation 1. $L$ is now a transformation from the space $F$ into a low-dimensional space $\mathbb{R}^d$. We seek the transformation $L$ minimizing the modified objective function $\epsilon(L)$. The gradient in feature space can now be written as:

$$\frac{\partial \epsilon(L)}{\partial L} = 2 \sum_{ij} \eta_{ij} L (\phi_i - \phi_j)(\phi_i - \phi_j)^T + 2c \sum_{ijl} \eta_{ij} (1 - y_{il})\, h'(s_{ijl})\, L \left[ (\phi_i - \phi_j)(\phi_i - \phi_j)^T - (\phi_i - \phi_l)(\phi_i - \phi_l)^T \right] \quad (3)$$

where $s_{ijl} = \|L(\phi_i - \phi_j)\|^2 - \|L(\phi_i - \phi_l)\|^2 + 1$.

Let $\Phi = [\phi_1, \ldots, \phi_n]^T$. We consider parameterizations of $L$ of the form $L = \Omega\Phi$, where $\Omega$ is some matrix allowing us to write $L$ as a linear combination of the feature points. This form of nonlinear map is analogous to that used in kernel-PCA and it allows us to parameterize the transformation $L$ in terms of only $d \cdot n$ parameters, the entries of the matrix $\Omega$. We now introduce the following Lemma, which we will later use to derive an iterative update rule for $L$.

Lemma 3.1. The gradient in feature space can be computed as $\frac{\partial \epsilon(L)}{\partial L} = \Gamma\Phi$, where $\Gamma$ depends on the features $\phi_i$ solely in terms of dot products $(\phi_i^T \phi_j)$.

Proof. Defining $k_i = \Phi\phi_i = [k(x_1, x_i), \ldots, k(x_n, x_i)]^T$, non-linear feature projections can be computed as $L\phi_i = \Omega\Phi\phi_i = \Omega k_i$. From this we derive:

$$\frac{\partial \epsilon(L)}{\partial L} = 2\Omega \sum_{ij} \eta_{ij} (k_i - k_j)(\phi_i - \phi_j)^T + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il})\, h'(s_{ijl}) \left[ (k_i - k_j)(\phi_i - \phi_j)^T - (k_i - k_l)(\phi_i - \phi_l)^T \right]$$
$$= 2\Omega \sum_{ij} \eta_{ij} \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} \right] \Phi + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il})\, h'(s_{ijl}) \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} - E_i^{(k_i - k_l)} + E_l^{(k_i - k_l)} \right] \Phi$$

where $E_i^v = [0, \ldots, v, \ldots, 0]$ is the $n \times n$ matrix having vector $v$ in the $i$-th column and all zeros in the other columns. Setting

$$\Gamma = 2\Omega \sum_{ij} \eta_{ij} \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} \right] + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il})\, h'(s_{ijl}) \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} - E_i^{(k_i - k_l)} + E_l^{(k_i - k_l)} \right] \quad (4)$$

proves the Lemma.

This result allows us to implicitly solve for the transformation without ever computing the features in the high-dimensional space $F$: the key idea is to iteratively update $\Omega$ rather than $L$. For example, using gradient descent as the optimization we derive the update rule:

$$L_{new} = L_{old} - \lambda \left. \frac{\partial \epsilon(L)}{\partial L} \right|_{L = L_{old}} = \left[ \Omega_{old} - \lambda \Gamma_{old} \right] \Phi = \Omega_{new} \Phi \quad (5)$$

where $\lambda$ is the learning rate. We carry out this optimization by iterating the update $\Omega \leftarrow (\Omega - \lambda\Gamma)$ until convergence.
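As an illustration of the Lemma, the sketch below assembles the matrix $\Gamma$ of Equation 4 from the kernel matrix alone, so that the update $\Omega \leftarrow \Omega - \lambda\Gamma$ never touches the feature space $F$. It is a naive reconstruction with our own names and $O(n^3)$ loops, and it uses the plain hinge subgradient in place of the smooth hinge of [8]:

```python
import numpy as np

def klmca_gamma(Omega, K, y, eta, c=1.0):
    """Illustrative assembly of Gamma (Equation 4).

    Omega : (d, n) coefficient matrix, L = Omega @ Phi
    K     : (n, n) kernel matrix; K[:, i] is k_i
    """
    y = np.asarray(y)
    n = K.shape[1]
    Z = Omega @ K                                      # projections L*phi_i as columns
    sq = ((Z[:, :, None] - Z[:, None, :]) ** 2).sum(0) # ||L(phi_i - phi_j)||^2
    G = np.zeros((n, n))                               # sum of the E-matrices
    for i in range(n):
        for j in np.flatnonzero(eta[i]):
            v = K[:, i] - K[:, j]
            G[:, i] += v                               # E_i^{(k_i - k_j)}
            G[:, j] -= v                               # -E_j^{(k_i - k_j)}
            for l in np.flatnonzero(y != y[i]):        # impostors (1 - y_il = 1)
                s = sq[i, j] - sq[i, l] + 1.0
                if s > 0:                              # h'(s) = 1 for the plain hinge
                    w = K[:, i] - K[:, l]
                    G[:, i] += c * (v - w)             # E_i terms
                    G[:, j] -= c * v                   # -E_j term
                    G[:, l] += c * w                   # +E_l term
    return 2.0 * Omega @ G                             # Gamma, shape (d, n)
```

One iteration is then `Omega -= lr * klmca_gamma(Omega, K, y, eta, c)`, matching the update in Equation 5.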
For classification, we project points onto the learned low-dimensional space by exploiting the kernel trick: $L\phi_q = \Omega k_q$.

4 Experimental results

We compared our methods to the metric learning algorithm of Weinberger et al. [9], which we will refer to as LMNN (Large Margin Nearest Neighbor). We use KLMCA (kernel-LMCA) to denote the nonlinear version of our algorithm. In all of the experiments reported here, LMCA was initialized using PCA, while KLMCA used the transformation computed by kernel-PCA as the initial guess. The objectives of LMCA and KLMCA were optimized using the steepest descent algorithm. We experimented with more sophisticated minimization techniques, including the conjugate gradient method and the Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm [6], but no substantial improvement in performance or speed of convergence was achieved. The KLMCA algorithm was implemented using a Gaussian RBF kernel. The number of nearest neighbors, the weight $c$ in Equation 1, and the variance of the RBF kernel were all automatically tuned using cross-validation.

The first part of our experimental evaluation focuses on classification results on three high-dimensional datasets: Isolet, AT&T Faces, and StarPlus fMRI.

[Figure 1: Classification error rates on the high-dimensional datasets Isolet, AT&T Faces and StarPlus fMRI for different projection dimensions, comparing PCA + LMNN, LMCA + kNN, and KLMCA + kNN. (a) Training error. (b) Testing error.]

- Isolet (available at http://www.ics.uci.edu/~mlearn/MLRepository.html) is a dataset of speech features from the UC Irvine repository, consisting of 6238 training examples and 1559 testing examples with 617 attributes. There are 26 classes corresponding to the spoken letters to be recognized.
- The AT&T Faces database (available at http://www.cl.cam.ac.uk/Research/DTG/attarchive/facedatabase.html) contains 10 grayscale face images of each of 40 distinct subjects. The images were taken at different times, with varying illumination, facial expressions, and poses. As in [9], we downsampled the original 112 x 92 images to size 38 x 31, corresponding to 1178 input dimensions.
- The StarPlus fMRI dataset (available at http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-81/www/) contains fMRI sequences acquired in the context of a cognitive experiment. In these trials the subject is shown for a few seconds either a picture or a sentence describing a picture. The goal is to recognize the viewing activity of the subject from the fMRI images. We reduce the size of the data by considering only voxels corresponding to relevant areas of the brain cortex and by averaging the activity in each voxel over the period of the stimulus. This yields data of size 1715 for subject "04847," to which our analysis was restricted. A total of 80 trials are available for this subject.

Except for Isolet, for which a separate testing set is specified, we computed all of the experimental results by averaging over 100 runs of random splitting of the examples into training and testing sets.
For the fMRI experiment we used at each iteration 70% of the data for training and 30% for testing. For AT&T Faces, training sets were selected by sampling 7 images at random for each person. The remaining 3 images of each individual were used for testing. Unlike LMCA and KLMCA, which directly solve for low-dimensional embeddings of the input data, LMNN cannot be run on datasets of dimensionalities such as those considered here and must be trained on lower-dimensional representations of the inputs. As in [9], we applied the LMNN algorithm to linear projections of the data computed using PCA.

Figure 1 summarizes the training and testing performances of kNN classification using the metrics learned by the three algorithms for different subspace dimensions. LMCA and KLMCA give considerably better classification accuracy than LMNN on all datasets, with the kernelized version of our algorithm always outperforming the linear version. The difference in accuracy between our algorithms and LMNN is particularly dramatic when a small number of projection dimensions is used. In such cases, LMNN is unable to find good metrics in the low-dimensional subspace computed by PCA. By contrast, LMCA and KLMCA solve for the low-dimensional subspace that optimizes the classification-related objective of Equation 1, and therefore achieve good performance even when projecting to very low dimensions. In our experiments we found that all three classification algorithms (LMNN, LMCA+kNN, and KLMCA+kNN) performed considerably better than kNN using the Euclidean metric in the PCA and KPCA subspaces. For example, using $d = 10$ on the AT&T dataset, kNN gives a 10.9% testing error rate when used on the PCA features, and a 9.7% testing error rate when applied to the nonlinear features computed by KPCA.

While LMNN is applied to features in a low-dimensional space, LMCA and KLMCA learn a low-rank metric directly from the high-dimensional inputs. Consequently the computational complexity of our algorithms is higher than that of LMNN. However, we have found that LMCA and KLMCA converge to a minimum quite rapidly, typically within 20 iterations, and thus the complexity of these algorithms has not been a limiting factor even when applied to very high-dimensional datasets. As a reference, using $d = 10$ and $k = 3$ on the AT&T dataset, LMNN learns a metric in about 5 seconds, while LMCA and KLMCA converge to a minimum in 21 and 24 seconds, respectively.

It is instructive to look at the preimages of LMCA data embeddings. Figure 2 shows comparative reconstructions of images obtained from PCA and LMCA features by inverting their linear mappings. The PCA and LMCA subspaces in this experiment were computed from cropped face images of size 50 x 50 pixels, taken from a set of consumer photographs.

[Figure 2: Image reconstruction from PCA and LMCA features. (a) Input images. (b) Reconstructions using PCA (left) and LMCA (right). (c) Absolute difference between original images and reconstructions from features for PCA (left) and LMCA (right). Red denotes large differences, blue indicates similar gray values. LMCA learns invariance to effects that are irrelevant for classification: non-uniform illumination, facial expressions, and glasses (the training data contains images with and without glasses for the same individuals).]
The dataset contains 2459 face images corresponding to 152 distinct individuals. A total of $d = 125$ components were used. The subjects shown in Figure 2 were not included in the training set. For a given target dimensionality, PCA has the property of computing the linear transformation minimizing the reconstruction error under the L2 norm. Unsurprisingly, the PCA face reconstructions are extremely faithful reproductions of the original images. However, PCA also accurately reconstructs visual effects, such as lighting variations and changes in facial expressions, that are unimportant for the task of face verification and that might potentially hamper recognition. By contrast, LMCA seeks a subspace where neighboring examples belong to the same class and differently labeled points are separated by a large margin. As a result, LMCA does not encode effects that are found to be insignificant for classification or that vary largely among examples of the same class. For the case of face verification, LMCA de-emphasizes changes in illumination, presence or absence of glasses, and smiling expressions (Figure 2).

When the input data does not require dimensionality reduction, LMNN and LMCA solve the same optimization problem, but LMNN should be preferred over LMCA in light of its guarantees of convergence to the global minimum of the objective. However, even in such cases, KLMCA can be used in lieu of LMNN in order to extract nonlinear features from the inputs. We have evaluated this use of KLMCA on the following low-dimensional datasets from the UCI repository: Bal, Wine, Iris, and Ionosphere. All of these datasets, except Ionosphere, have been previously used in [9] to assess the performance of LMNN. The dimensionality of the data in these sets ranges from 4 to 34.

[Figure 3: kNN classification accuracy on the low-dimensional datasets Bal, Wine, Iris, and Ionosphere. (a) Training error. (b) Testing error. Algorithms are kNN using Euclidean distance, LMNN [9], kNN in the nonlinear feature space computed by our KLMCA algorithm, and multiclass SVM.]

In order to compare LMNN with KLMCA under identical conditions, KLMCA was restricted to compute a number of features equal to the input dimensionality, although in our experience using additional nonlinear features often results in better classification performance. Figure 3 summarizes the results of this comparison. Again, we averaged the errors over 100 runs with different 70/30 splits of the data for training and testing. On all datasets except Wine, for which the mapping to the high-dimensional space seems to hurt performance (note also the high error rate of SVM), KLMCA gives better classification accuracy than LMNN. Note also that the error rates of KLMCA are consistently lower than those reported in [9] for SVM under identical training and testing conditions.

5 Relationship to other methods

Our method is most similar to the work of Weinberger et al. [9].
Our approach is different in focus as it specifically addresses the problem of kNN classification of very high-dimensional data. The novelty of our method lies in an optimization that solves for data reduction and metric learning simultaneously. Additionally, while [9] is limited to learning a global linear transformation of the inputs, we describe a kernelized version of our method that extracts non-linear features of the inputs. We demonstrate that this representation leads to significant improvements in kNN classification on both high-dimensional and low-dimensional data.

Our approach bears similarities with Linear Discriminant Analysis (LDA) [2], as both techniques solve for a low-rank Mahalanobis distance metric. However, LDA relies on the assumption that the class distributions are Gaussian and have identical covariance. These conditions are almost always violated in practice. Like our method, the Neighborhood Component Analysis (NCA) algorithm by Goldberger et al. [4] learns a low-dimensional embedding of the data for kNN classification using a direct gradient-based approach. NCA and our method differ in the definition of the objective function. Moreover, unlike our method, NCA provides purely linear embeddings of the data. A contrastive loss function analogous to the one used in this paper is adopted in [1] for training a similarity metric. A siamese architecture consisting of identical convolutional networks is used to parameterize and train the metric. In our work the metric is parameterized by arbitrary nonlinear maps for which kernel functions exist. Recent work by Globerson and Roweis [3] also proposes a technique for learning low-rank Mahalanobis metrics. Their method includes an extension for computing low-dimensional non-linear features using the kernel trick. However, this approach computes dimensionality reductions through a two-step solution which involves first solving for a possibly full-rank metric and then estimating the low-rank approximation via spectral decomposition. Besides being suboptimal, this approach is impractical for classification problems with high-dimensional data, as it requires solving for a number of unknowns that is quadratic in the number of input dimensions. Furthermore, the metric is trained with the aim of collapsing all examples in the same class to a single point. This task is difficult to achieve and not strictly necessary for good kNN classification performance. The Support Vector Decomposition Machine (SVDM) [7] is also similar in spirit to our approach. SVDM optimizes an objective that is a combination of dimensionality reduction and classification. Specifically, a linear mapping from input to feature space and a linear classifier applied to feature space are trained simultaneously. As in our work, results in their paper demonstrate that this joint optimization yields better accuracy than that achieved by learning a low-dimensional representation and a classifier separately. Unlike our method, which can be applied without any modification to classification problems with more than two classes, SVDM is formulated for binary classification only.

6 Discussion

We have presented a novel algorithm that simultaneously optimizes the objectives of dimensionality reduction and metric learning. Our algorithm seeks, among all possible low-dimensional projections, the one that best satisfies a large margin metric objective.
Our approach contrasts with techniques that are unable to learn metrics in high dimensions and that must rely on dimensionality reduction methods first being applied to the data. Although our optimization is not convex, we have experimentally demonstrated that the metrics learned by our solution are consistently superior to those computed by globally-optimal methods forced to search in a low-dimensional subspace. The nonlinear version of our technique requires us to compute the kernel distance of a query point to all training examples. Future research will focus on rendering this algorithm "sparse". In addition, we will investigate methods to further reduce overfitting when learning dimensionality reduction from very high dimensions.

Acknowledgments

We are grateful to Drago Anguelov and Burak Gokturk for discussion. We thank Aaron Hertzmann and the anonymous reviewers for their comments.

References

[1] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[2] R. A. Fisher. The use of multiple measurements in taxonomic problems. Ann. Eugenics, 7:179-188, 1936.
[3] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.
[4] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, 2005.
[5] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 18:607-616, 1996.
[6] A. Mordecai. Nonlinear Programming: Analysis and Methods. Dover Publishing, 2003.
[7] F. Pereira and G. Gordon. The support vector decomposition machine. In Proceedings of the International Conference on Machine Learning (ICML), 2006.
[8] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
[9] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, 2006.
[10] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, 2002.
Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization

David Wipf¹, Rey Ramírez², Jason Palmer¹,², Scott Makeig², & Bhaskar Rao¹
¹ Signal Processing and Intelligent Systems Lab
² Swartz Center for Computational Neuroscience
University of California, San Diego 92093
{dwipf,japalmer,brao}@ucsd.edu, {rey,scott}@sccn.ucsd.edu
* This work was supported by NSF grants DGE-0333451 and IIS-0613595.

Abstract

The ill-posed nature of the MEG/EEG source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian methods are useful in this capacity because they allow these assumptions to be explicitly quantified. Recently, a number of empirical Bayesian approaches have been proposed that attempt a form of model selection by using the data to guide the search for an appropriate prior. While seemingly quite different in many respects, we apply a unifying framework based on automatic relevance determination (ARD) that elucidates various attributes of these methods and suggests directions for improvement. We also derive theoretical properties of this methodology related to convergence, local minima, and localization bias, and explore connections with established algorithms.

1 Introduction

Magnetoencephalography (MEG) and electroencephalography (EEG) use an array of sensors to take EM field measurements from on or near the scalp surface with excellent temporal resolution. In both cases, the observed field is generated by the same synchronous, compact current sources located within the brain. Because the mapping from source activity configuration to sensor measurement is many to one, accurately determining the spatial locations of these unknown sources is extremely difficult. The relevant localization problem can be posed as follows. The measured EM signal is $B \in \mathbb{R}^{d_b \times n}$, where $d_b$ equals the number of sensors and $n$ is the number of time points at which measurements are made. The unknown sources $S \in \mathbb{R}^{d_s \times n}$ are the (discretized) current values at $d_s$ candidate locations distributed throughout the cortical surface. These candidate locations are obtained by segmenting a structural MR scan of a human subject and tesselating the gray matter surface with a set of vertices. $B$ and $S$ are related by the generative model

$$B = LS + E, \quad (1)$$

where $L$ is the so-called lead-field matrix, the $i$-th column of which represents the signal vector that would be observed at the scalp given a unit current source at the $i$-th vertex with a fixed orientation (flexible orientations can be incorporated by including three columns per location, one for each directional component). Multiple methods based on the physical properties of the brain and Maxwell's equations are available for this computation. Finally, $E$ is a noise term with columns drawn independently from $\mathcal{N}(0, \Sigma_\epsilon)$.

To obtain reasonable spatial resolution, the number of candidate source locations will necessarily be much larger than the number of sensors ($d_s \gg d_b$). The salient inverse problem then becomes the ill-posed estimation of these activity or source regions, which are reflected by the nonzero rows of the source estimate matrix $\hat{S}$. Because the inverse model is underdetermined, all efforts at source reconstruction are heavily dependent on prior assumptions, which in a Bayesian framework are embedded in the distribution $p(S)$.
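A toy simulation of the generative model (1) can make the setup concrete. The sketch below uses illustrative sizes, and a random matrix stands in for a physically computed lead-field:

```python
import numpy as np

rng = np.random.default_rng(0)
d_b, d_s, n = 32, 500, 10                    # sensors, candidate sources, time points

L = rng.standard_normal((d_b, d_s))          # stand-in lead-field matrix
S = np.zeros((d_s, n))
active = rng.choice(d_s, size=3, replace=False)
S[active] = rng.standard_normal((3, n))      # a few active sources (nonzero rows)
Sigma_eps = 0.01 * np.eye(d_b)               # assumed-known noise covariance
E = rng.multivariate_normal(np.zeros(d_b), Sigma_eps, size=n).T
B = L @ S + E                                # generative model B = LS + E
```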
Such a prior is often considered to be fixed and known, as in the case of minimum $\ell_2$-norm approaches, minimum current estimation (MCE) [6, 18], FOCUSS [2, 5], and sLORETA [10]. Alternatively, a number of empirical Bayesian approaches have been proposed that attempt a form of model selection by using the data to guide the search for an appropriate prior. Examples include variational Bayesian methods [14, 15], hierarchical covariance component models [4, 8, 11], and automatic relevance determination (ARD) [7, 9, 12, 13, 17]. While seemingly quite different in some respects, we present a generalized framework that encompasses many of these methods and points to connections between algorithms. We also analyze several theoretical properties of this framework related to computational/convergence issues, local minima, and localization bias. Overall, we envision that by providing a unifying perspective on these approaches, neuroelectromagnetic imaging practitioners will be better able to assess their relative strengths with respect to a particular application. This process also points to several promising directions for future research.

2 A Generalized Bayesian Framework for Source Localization

In this section, we present a general-purpose Bayesian framework for source localization. In doing so, we focus on the common ground between many of the methods discussed above. While derived using different assumptions and methodology, they can be related via the notion of automatic relevance determination [9] and evidence maximization [7]. To begin, we invoke the noise model from (1), which fully defines the assumed likelihood $p(B|S)$. While the unknown noise covariance can also be parameterized and estimated from the data, for simplicity we assume that $\Sigma_\epsilon$ is known and fixed. Next we adopt the following source prior for $S$:

$$p(S; \Sigma_s) = \mathcal{N}(0, \Sigma_s), \qquad \Sigma_s = \sum_{i=1}^{d_\gamma} \gamma_i C_i, \quad (2)$$

where the distribution is understood to apply independently to each column of $S$. Here $\gamma = [\gamma_1, \ldots, \gamma_{d_\gamma}]^T$ is a vector of $d_\gamma$ nonnegative hyperparameters that control the relative contribution of each covariance basis matrix $C_i$, all of which we assume are fixed and known. The unknown hyperparameters can be estimated from the data by first integrating out the unknown sources $S$, giving

$$p(B; \Sigma_b) = \int p(B|S)\, p(S; \Sigma_s)\, dS = \mathcal{N}(0, \Sigma_b), \quad (3)$$

where $\Sigma_b = \Sigma_\epsilon + L \Sigma_s L^T$. A hyperprior $p(\gamma)$ can also be included if desired. This expression is then maximized with respect to the unknown hyperparameters, a process referred to as type-II maximum likelihood or evidence maximization [7, 9], or restricted maximum likelihood [4]. Thus the optimization problem shifts from finding the maximum a posteriori sources given a fixed prior to finding the optimal hyperparameters of a parameterized prior. Once these estimates are obtained (computational issues will be discussed in Section 2.1), a tractable posterior distribution $p(S|B; \hat{\Sigma}_s)$ exists in closed form, where $\hat{\Sigma}_s = \sum_i \hat{\gamma}_i C_i$. To the extent that the "learned" prior $p(S; \hat{\Sigma}_s)$ is realistic, this posterior quantifies regions of significant current density, and point estimates for the unknown sources can be obtained by evaluating the posterior mean

$$\hat{S} = E\left[S \mid B; \hat{\Sigma}_s\right] = \hat{\Sigma}_s L^T \left( \Sigma_\epsilon + L \hat{\Sigma}_s L^T \right)^{-1} B. \quad (4)$$

The specific choice of the $C_i$'s is crucial and can be used to reflect any assumptions about the possible distribution of current sources. It is this selection, rather than the adoption of a covariance component model per se, that primarily differentiates the many different empirical Bayesian approaches and points to novel algorithms for future study.
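As a minimal illustration, the posterior mean of Equation 4 under a learned covariance-component prior might be computed as follows (our own function and argument names, not code from the paper):

```python
import numpy as np

def posterior_mean(B, L, gammas, Cs, Sigma_eps):
    """Sketch of Equation 4: posterior mean of the sources under the
    learned prior Sigma_s = sum_i gamma_i * C_i (illustrative names)."""
    Sigma_s = sum(g * C for g, C in zip(gammas, Cs))
    Sigma_b = Sigma_eps + L @ Sigma_s @ L.T
    return Sigma_s @ L.T @ np.linalg.solve(Sigma_b, B)   # S_hat, (d_s, n)
```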
The optimization strategy adopted for computing $\gamma$, as well as the particular choice of hyperprior $p(\gamma)$, if any, can also be distinguishing factors. In the simplest case, use of the single component $\Sigma_s = \gamma_1 C_1 = \gamma_1 I$ leads to a regularized minimum-$\ell_2$-norm solution. More interesting covariance component terms have been used to effect spatial smoothness, depth bias compensation, and candidate locations of likely activity [8, 11]. With regard to the latter, it has been suggested that prior information about a source location can be codified by including a $C_i$ term with all zeros except a patch of 1's along the diagonal, signifying a location of probable source activity, perhaps based on fMRI data [11]. An associated hyperparameter $\gamma_i$ is then estimated to determine the appropriate contribution of this component to the overall prior covariance. The limitation of this approach is that we generally do not know, a priori, the regions where activity is occurring with both high spatial and temporal resolution. Therefore, we cannot reliably know how to choose an appropriate location-prior term in many situations.

The empirical Bayesian solution to this dilemma, which amounts to a form of model selection, is to try out many different (or even all possible) combinations of location priors, and determine which one has the highest Bayesian evidence, i.e., maximizes $p(B; \Sigma_b)$ [7]. For example, if we assume the underlying currents are formed from a collection of dipolar point sources located at each vertex of the lead-field grid, then we may choose $\Sigma_s = \sum_{i=1}^{d_s} \gamma_i e_i e_i^T$, where each $e_i$ is a standard indexing vector of zeros with a "1" for the $i$-th element (and so $C_i = e_i e_i^T$ encodes a prior preference for a single dipolar source at location $i$).¹ This specification for the prior involves the counterintuitive addition of an unknown hyperparameter for every candidate source location which, on casual analysis, may seem prone to severe overfitting (in contrast to [11], which uses only one or two fixed location priors). However, the process of marginalization, or the integrating out of the unknown sources $S$, provides an extremely powerful regularizing effect, driving most of the unknown $\gamma_i$ to zero during the evidence maximization stage (more on this in Section 3). This ameliorates the overfitting problem and effectively reduces the space of possible active source locations by choosing a small relevant subset of location priors that optimizes the Bayesian evidence (hence ARD). With this "learned" prior in place, a once ill-posed inverse problem is no longer untenable, with the posterior mean providing a good estimate of source activity. Such a procedure has been empirically successful in the context of neural networks [9], kernel machines [17], and multiple dipole fitting for MEG [12], a significant benefit to the latter being that the optimal number of dipoles need not be known a priori.

In contrast, to model sources with some spatial extent, we can choose $C_i = \psi_i \psi_i^T$, where each $\psi_i$ represents, for example, a $d_s \times 1$ geodesic neural basis vector that specifies an a priori weight location and activity extent [13]. In this scenario, the number of hyperparameters satisfies $d_\gamma = v d_s$, where $v$ is the number of scales we wish to examine in a multi-resolution decomposition, and can be quite large ($d_\gamma \approx 10^6$).
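The two component families just described are easy to instantiate. The sketch below builds point-source components $C_i = e_i e_i^T$ and crude box-car "patch" components as a stand-in for the geodesic basis vectors $\psi_i$; this is illustrative only, since real geodesic bases depend on distances over the cortical mesh:

```python
import numpy as np

d_s = 50   # kept small: storing d_s dense (d_s x d_s) components is O(d_s^3) memory

# Point-source components: C_i = e_i e_i^T, one hyperparameter per vertex.
point_components = [np.outer(e, e) for e in np.eye(d_s)]

# Extended-source components: C_i = psi_i psi_i^T for a spatial basis vector
# psi_i; here a box-car patch around index i crudely mimics a geodesic basis.
def patch_basis(i, radius=2):
    psi = np.zeros(d_s)
    psi[max(0, i - radius):min(d_s, i + radius + 1)] = 1.0
    return psi

patch_components = [np.outer(p, p) for p in (patch_basis(i) for i in range(d_s))]
```

In practice one would keep only the low-rank factors ($e_i$ or $\psi_i$) rather than the dense products, which is exactly what the pseudo lead-field decomposition of Section 2.1 exploits.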
As mentioned above, the ARD framework tests many priors corresponding to many hypotheses or beliefs regarding the locations and scales of the nonzero current activity within the brain, ultimately choosing the one with the highest evidence. The net result of this formulation is a source prior composed of a mixture of Gaussian kernels of varying scales. The number of mixture components, or the number of nonzero $\gamma_i$'s, is learned from the data and is naturally forced to be small (sparse). In general, the methodology is quite flexible and other prior specifications can be included as well, such as temporal and spectral constraints. But the essential ingredient of ARD, that marginalization and subsequent evidence maximization lead to a pruning of unsupported hypotheses, remains unchanged.

We turn now to empirical Bayesian procedures that incorporate variational methods. In [15], a plausible hierarchical prior is adopted that, unfortunately, leads to intractable integrations when computing the desired source posterior. This motivates the inclusion of a variational approximation that models the true posterior as a factored distribution over parameters at two levels of the prior hierarchy. While seemingly quite different, drawing on results from [1], we can show that the resulting cost function is exactly equivalent to standard ARD assuming $\Sigma_s$ is parameterized as

$$\Sigma_s = \sum_{i=1}^{d_s} \gamma_i e_i e_i^T + \sum_{j=1}^{d_s} \gamma_{(d_s + j)} \psi_j \psi_j^T, \quad (5)$$

and so $d_\gamma = 2 d_s$. When fMRI data is available, it is incorporated into a particular inverse Gamma hyperprior on $\gamma$, as is also commonly done with ARD methods [1]. Optimization is then performed using simple EM update rules. In summary then, the general methods of [4, 8, 11] and [12, 13, 17], as well as the variational method of [15], are all identical with respect to their ARD-based cost functions; they differ only in which covariance components (and possibly hyperpriors) are used and in how optimization is performed, as will be discussed below.

In contrast, the variational model from [14] introduces an additional hierarchy to the ARD framework to explicitly model temporal correlations between sources which may be spatially separated.² Here it is assumed that $S$ can be decomposed with respect to $d_z$ pre-sources via

$$S = WZ, \qquad p(W; \Sigma_w) = \mathcal{N}(0, \Sigma_w), \qquad p(Z) = \mathcal{N}(0, I), \quad (6)$$

where $Z \in \mathbb{R}^{d_z \times n}$ represents the pre-source matrix and $\Sigma_w$ is analogous to $\Sigma_s$. As stated in [14], direct application of ARD would involve integration over $W$ and $Z$ to find the hyperparameters $\gamma$ that maximize $p(B; \Sigma_b)$. While such a procedure is not analytically tractable, it remains insightful to explore the characteristics of this method were we able to perform the necessary computation. This allows us to relate the full model of [14] to standard ARD. Interestingly, it can be shown that the first and second order statistics of the full prior (6) and the standard ARD prior (2) are equivalent (up to a constant factor), although higher-order moments will be different.

¹ Here we assume dipoles with orientations constrained to be orthogonal to the cortical surface; however, the method is easily extended to handle unconstrained dipoles.
² Although standard ARD does not explicitly model correlated sources that are spatially separated, it still works well in this situation (see Section 3) and can reflect such correlations via the inferred posterior mean.
However, as the number of pre-sources $d_z$ becomes large, multivariate central-limit-theorem arguments can be used to explicitly show that the distribution of $S$ converges to a Gaussian prior identical to that of ARD. So exact evaluation of the full model, which is espoused as the ideal objective were it feasible, approaches regular ARD when the number of pre-sources grows large. In practice, because the full model is intractable, a variational approximation is adopted similar to that proposed in [15]. In fact, if we assume the appropriate hyperprior on $\gamma$, then this correlated source method is essentially the same as the procedure from [15], but with an additional level in the approximate posterior factorization for handling the decomposition (6). This produces approximate posteriors on $W$ and $Z$, but the result cannot be integrated to form the posterior on $S$. However, the posterior mean of $W$, denoted $\hat{W}$, is used as an estimate of the source correlation matrix (using $\hat{W}\hat{W}^T$) to substantially improve beamforming results that were errantly based on uncorrelated source models. Note however that this procedure implicitly uses the somewhat peculiar criterion of combining the posterior mean of $W$ with the prior on $Z$ to form an estimate of the distribution of $S$.

2.1 Computational Issues

The primary objective of ARD is to maximize the evidence $p(B; \Sigma_b)$ with respect to $\gamma$ or, equivalently, to minimize

$$\mathcal{L}(\gamma) \triangleq -\log p(B; \Sigma_b) \propto n \log |\Sigma_b| + \mathrm{trace}\left[ B^T \Sigma_b^{-1} B \right]. \quad (7)$$

In [4], a restricted maximum likelihood (ReML) approach is proposed for this optimization, which utilizes what amounts to EM-based updates. This method typically requires a nonlinear search for each M-step and does not guarantee that the estimated covariance is positive definite. While shown to be successful in estimating a handful of hyperparameters in [8, 11], this could potentially be problematic when very large numbers of hyperparameters are present. For example, in several toy problems (with $d_\gamma$ large) we have found that a fraction of the hyperparameters obtained can be negative-valued, inconsistent with our initial premise. As such, we present three alternative optimization procedures that extend the methods from [7, 12, 15, 17] to the arbitrary covariance model discussed above and guarantee that $\gamma_i \geq 0$ for all $i$.

Because of the flexibility this allows in constructing $\Sigma_s$, and therefore $\Sigma_b$, some additional notation is required to proceed. A new decomposition of $\Sigma_b$ is defined as

$$\Sigma_b = \Sigma_\epsilon + L \left( \sum_{i=1}^{d_\gamma} \gamma_i C_i \right) L^T = \Sigma_\epsilon + \sum_{i=1}^{d_\gamma} \gamma_i \tilde{L}_i \tilde{L}_i^T, \quad (8)$$

where $\tilde{L}_i \tilde{L}_i^T \triangleq L C_i L^T$ with $r_i \triangleq \mathrm{rank}(\tilde{L}_i \tilde{L}_i^T) \leq d_b$. Also, using commutative properties of the trace operator, $\mathcal{L}(\gamma)$ only depends on the data $B$ through the $d_b \times d_b$ sample correlation matrix $BB^T$. Therefore, to reduce the computational burden, we replace $B$ with a matrix $\tilde{B} \in \mathbb{R}^{d_b \times \mathrm{rank}(B)}$ such that $\tilde{B}\tilde{B}^T = BB^T$. This removes any per-iteration dependency on $n$, which can potentially be large, without altering the actual cost function. By treating the unknown sources as hidden data, an update can be derived for the $(k+1)$-th iteration:

$$\gamma_i^{(k+1)} = \frac{1}{n r_i} \left\| \gamma_i^{(k)} \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{B} \right\|_F^2 + \frac{1}{r_i}\, \mathrm{trace}\left[ \gamma_i^{(k)} I - \gamma_i^{(k)} \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{L}_i\, \gamma_i^{(k)} \right], \quad (9)$$

which reduces to the algorithm from [15] given the appropriate simplifying assumptions on the form of $\Sigma_s$ and some additional algebraic manipulations. It is also equivalent to ReML with a different effective computation for the M-step.
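A direct transcription of the EM update (9) might look as follows; this is an illustrative sketch under assumed data structures, where each pseudo lead-field factor satisfies $\tilde{L}_i \tilde{L}_i^T = L C_i L^T$ and $\tilde{L}_i$ has $r_i$ columns:

```python
import numpy as np

def em_ard_step(gammas, L_tildes, ranks, B_tilde, Sigma_eps, n):
    """One sweep of Equation 9 (illustrative reconstruction).

    L_tildes[i] : (d_b, r_i) factor with L_tilde_i @ L_tilde_i.T = L @ C_i @ L.T
    ranks[i]    : r_i
    B_tilde     : (d_b, rank(B)) with B_tilde @ B_tilde.T = B @ B.T
    """
    Sigma_b = Sigma_eps + sum(g * Lt @ Lt.T for g, Lt in zip(gammas, L_tildes))
    Sb_inv_B = np.linalg.solve(Sigma_b, B_tilde)
    new = []
    for g, Lt, r in zip(gammas, L_tildes, ranks):
        data_term = np.linalg.norm(g * Lt.T @ Sb_inv_B, 'fro') ** 2 / (n * r)
        prior_term = (g / r) * np.trace(
            np.eye(Lt.shape[1]) - g * Lt.T @ np.linalg.solve(Sigma_b, Lt))
        new.append(data_term + prior_term)   # both terms are nonnegative
    return np.array(new)
```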
By casting the update rules in this way and noting that off-diagonal elements of the second term need not be computed, the per-iteration cost is at most $O(d_b^2 \sum_{i=1}^{d_\gamma} r_i) \leq O(d_b^3 d_\gamma)$. This expense can be significantly reduced still further in cases where different pseudo lead-field components, e.g., some $\tilde{L}_i$ and $\tilde{L}_j$, contain one or more columns in common. This situation occurs if we desire to use the geodesic basis functions with flexible orientation constraints, as opposed to the fixed orientations assumed above. In general, the linear dependence on $d_\gamma$ is one of the attractive aspects of this method, effectively allowing for extremely large numbers of hyperparameters and covariance components.

The problem then with (9) is not the per-iteration complexity but the convergence rate, which we have observed to be prohibitively slow in practical situations with high-resolution lead-field matrices and large numbers of hyperparameters. The only reported localization results using this type of EM algorithm are from [15], where a relatively low resolution lead-field matrix is used in conjunction with a simplifying heuristic that constrains some of the hyperparameter values. However, to avoid these types of constraints, which can potentially degrade the quality of source estimates, a faster update rule is needed. To this end, we modified the procedure of [7], which involves taking the gradient of $\mathcal{L}(\gamma)$ with respect to $\gamma$, rearranging terms, and forming the fixed-point update

$$\gamma_i^{(k+1)} = \frac{\gamma_i^{(k)}}{n} \left\| \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{B} \right\|_F^2 \left( \mathrm{trace}\left[ \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{L}_i \right] \right)^{-1}. \quad (10)$$

The complexity of each iteration is the same as before, only now the convergence rate can be orders of magnitude faster. For example, given $d_b = 275$ sensors, $n = 1000$ observation vectors, and using a pseudo lead-field with 120,000 unique columns and an equal number of hyperparameters, complete convergence requires approximately 5-10 minutes of runtime using Matlab code on a PC. The EM update does not converge after 24 hours. Example localization results using (10) demonstrate the ability to recover very complex source configurations with variable spatial extent [13].

Unlike the EM method, one criticism of (10) is that there currently exists no proof that it represents a descent function, although we have never observed it to increase (7) in practice. While we can show that (10) is equivalent to iteratively solving a particular min-max problem in search of a saddle point, provable convergence is still suspect. However, a similar update rule can be derived that is both significantly faster than EM and is proven to produce $\gamma$ vectors such that $\mathcal{L}(\gamma^{(k+1)}) \leq \mathcal{L}(\gamma^{(k)})$ for every iteration $k$. Using a dual-form representation of $\mathcal{L}(\gamma)$ that leads to a more tractable auxiliary cost function, this update is given by

$$\gamma_i^{(k+1)} = \frac{\gamma_i^{(k)}}{\sqrt{n}} \left\| \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{B} \right\|_F \left( \mathrm{trace}\left[ \tilde{L}_i^T \left( \Sigma_b^{(k)} \right)^{-1} \tilde{L}_i \right] \right)^{-1/2}. \quad (11)$$

Details of the derivation can be found in [20]. Finally, the correlated source method from [14] can be incorporated into the general ARD framework as well, using update rules related to the above; however, because all off-diagonal terms are required by this method, the iterations now scale as $(\sum_i r_i)^2$ in the general case. This quadratic dependence can be prohibitive in applications with large numbers of covariance components.
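For comparison, both fixed-point rules can share one sweep routine; the sketch below implements Equation 10 and the provably monotone variant of Equation 11 (a reconstruction under the same assumed data structures as the EM sketch above):

```python
import numpy as np

def fixed_point_ard_step(gammas, L_tildes, B_tilde, Sigma_eps, n, convergent=True):
    """One sweep of Equation 11 (convergent=True, monotone by construction)
    or Equation 10 (convergent=False). Illustrative reconstruction."""
    Sigma_b = Sigma_eps + sum(g * Lt @ Lt.T for g, Lt in zip(gammas, L_tildes))
    Sb_inv_B = np.linalg.solve(Sigma_b, B_tilde)
    new = []
    for g, Lt in zip(gammas, L_tildes):
        num = np.linalg.norm(Lt.T @ Sb_inv_B, 'fro') ** 2 / n   # (1/n)||.||_F^2
        den = np.trace(Lt.T @ np.linalg.solve(Sigma_b, Lt))     # trace term
        new.append(g * np.sqrt(num / den) if convergent else g * num / den)
    return np.array(new)
```

Iterating either sweep from a strictly positive initialization keeps every $\gamma_i \geq 0$ automatically, since the updates are products of nonnegative quantities; this is the nonnegativity guarantee noted above, which ReML lacks.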
2.2 Relationship with Other Bayesian Methods

As a point of comparison, we now describe how ARD can be related to alternative Bayesian-inspired approaches such as the sLORETA paradigm [10] and the iterative FOCUSS source localization algorithm [5]. The connection is most transparent when we substitute the prior covariance $\Sigma_s = \sum_{i=1}^{d_s} \gamma_i e_i e_i^T = \mathrm{diag}[\gamma]$ into (10), giving the modified update

$$\gamma_i^{(k+1)} = \frac{\left( \gamma_i^{(k)} \right)^2}{n R_{ii}^{(k)}} \left\| \ell_i^T \left( \Sigma_\epsilon + L \Gamma^{(k)} L^T \right)^{-1} B \right\|_2^2, \qquad R^{(k)} \triangleq \Gamma^{(k)} L^T \left( \Sigma_\epsilon + L \Gamma^{(k)} L^T \right)^{-1} L, \qquad (12)$$

where $\Gamma \triangleq \mathrm{diag}[\gamma]$, $\ell_i$ is the $i$-th column of $L$, and $R^{(k)}$ is the effective resolution matrix given the hyperparameters at the current iteration. The $j$-th column of $R$ (called a point-spread function) equals the source estimate obtained using (4) when the true source is a unit dipole at location $j$ [16]. Continuing, if we assume that initialization of ARD occurs with $\gamma^{(0)} = \mathbf{1}$ (as is customary), then the hyperparameters produced after a single iteration of ARD are equivalent to computing the sLORETA estimate for standardized current density power [10] (this assumes fixed orientation constraints). In this context, the inclusion of $R$ as a normalization factor helps to compensate for depth bias, which is the propensity for deep current sources within the brain to be underrepresented at the scalp surface [10, 12]. So ARD can be interpreted as a recursive refinement of what amounts to the non-adaptive, linear sLORETA estimate. As a further avenue for comparison, if we assume that $R = I$ for all iterations, then the update (12) is nearly the same as the FOCUSS iterations modified to simultaneously handle multiple observation vectors [2]. The only difference is the factor of $n$ in the denominator in the case of ARD, but this can be offset by an appropriate rescaling of the FOCUSS trade-off parameter (analogous to $\lambda$). Therefore, ARD can be viewed in some sense as taking the recursive FOCUSS update rules and including the sLORETA normalization that, among other things, allows for depth bias compensation.

Thus far, we have focused on similarities in update rules between the ARD formulation (restricted to the case where $\Sigma_s = \Gamma$) and sLORETA and FOCUSS. We now switch gears and examine how the general ARD cost function relates to that of FOCUSS and MCE, which suggests a useful generalization of both approaches. Recall that the evidence maximization procedure upon which ARD is based involves integrating out the unknown sources before optimizing the hyperparameters $\gamma$. However, if some $p(\gamma)$ is assumed for $\gamma$, then we could just as easily do the opposite: namely, we can integrate out the hyperparameters and then maximize over $S$ directly, thus solving the MAP estimation problem

$$\max_S \int p(B|S)\, p(S; \Sigma_s)\, p(\gamma)\, d\gamma \;\equiv\; \min_{\{S \,:\, S = \sum_i A_i \tilde{S}_i\}} \|B - LS\|_{\Sigma_\epsilon^{-1}}^2 + \lambda \sum_{i=1}^{d_\gamma} g\!\left( \|\tilde{S}_i\|_F \right), \qquad (13)$$

where each $A_i$ is derived from the $i$-th covariance component such that $C_i = A_i A_i^T$, and $g(\cdot)$ is a function dependent on $p(\gamma)$. For example, when $p(\gamma)$ is a noninformative Jeffreys prior, then $g(x) = \log x$ and (13) becomes a generalized form of the FOCUSS cost function (and reduces to the exact FOCUSS cost when $A_i = e_i$ for all $i$). Likewise, when an exponential prior is chosen, then $g(x) = x$ and we obtain a generalized version of MCE. In both cases, multiple simultaneous constraints (e.g., flexible dipole orientations, spatial smoothing, etc.) can be naturally handled and, if desired, the noise covariance $\Sigma_\epsilon$ can be seamlessly estimated as well (see [3] for a special case of the latter in the context of kernel regression).
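To make the sLORETA and FOCUSS connections concrete, here is a sketch of the diagonal special case (12), again under our hypothetical naming conventions and an isotropic noise assumption. With the customary initialization $\gamma = \mathbf{1}$, a single call reproduces a sLORETA-style normalized power estimate; replacing R_ii by 1 recovers a FOCUSS-style iteration.

    def diag_update(gammas, L, B, noise_var, n):
        """Update (12) for Sigma_s = diag(gamma): one hyperparameter per column of L."""
        db = L.shape[0]
        Sigma_b = noise_var * np.eye(db) + (L * gammas) @ L.T  # = L @ diag(gammas) @ L.T
        Sb_inv = np.linalg.inv(Sigma_b)
        new_gammas = np.empty_like(gammas)
        for i in range(L.shape[1]):
            li = L[:, i]
            filt = li @ Sb_inv              # l_i^T Sigma_b^{-1}
            R_ii = gammas[i] * (filt @ li)  # i-th diagonal of R = Gamma L^T Sigma_b^{-1} L
            new_gammas[i] = gammas[i] ** 2 * np.sum((filt @ B) ** 2) / (n * R_ii)
        return new_gammas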
This generalization addresses many of the concerns raised in [8] pertaining to existing MAP methods. Additionally, as with ARD, source components that are not sufficiently important in representing the observed data are pruned; however, the undesirable discontinuities in standard FOCUSS or MCE source estimates across time, which previously have required smoothing using heuristic measures [6], do not occur when using (13). This is because sparsity is only encouraged between components, due to the concavity of $g(\cdot)$, but not within components, where the Frobenius norm operator promotes smooth solutions [2]. All of these issues, as well as efficient ARD-like update rules for optimizing (13), are discussed in [20].

3 General Properties of ARD Methods

ARD methods maintain several attributes that make them desirable candidates for source localization. For example, unlike most MAP procedures, the ARD cost function is often invariant to lead-field column normalizations, which only affect the implicit initialization that is used or potentially the selection of the $C_i$'s. In contrast, MCE produces a different globally minimizing solution for every normalization scheme. As such, ARD is considerably more robust to the particular heuristic used for this task and can readily handle deep current sources.

Previously, we have claimed that the ARD process naturally forces excessive/irrelevant hyperparameters to converge to zero, thereby reducing model complexity. While this observation has been verified empirically by ourselves and others in various application settings, there has been relatively little corroborating theoretical evidence, largely because of the difficulty in analyzing the potentially multimodal, non-convex ARD cost function. As such, we provide the following result:

Result 1. Every local minimum of the generalized ARD cost function (7) is achieved at a solution with at most $\mathrm{rank}(B)\, d_b \leq d_b^2$ nonzero hyperparameters.

The proof follows from a result in [19] and the fact that the ARD cost only depends on the rank-$\mathrm{rank}(B)$ matrix $B B^T$. Result 1 comprises a worst-case bound that is only tight in very nuanced situations; in practice, for any reasonable value of $\lambda$, the number of nonzero hyperparameters is typically much smaller than $d_b$. The bound holds for all $\lambda$, including $\lambda = 0$, indicating that some measure of hyperparameter pruning, and therefore covariance component pruning, is built into the ARD framework irrespective of the noise-based regularization. Moreover, the number of nonzero hyperparameters decreases monotonically to zero as $\lambda$ is increased: there is always some $\lambda = \lambda_0$ sufficiently large such that all hyperparameters converge to exactly zero. Therefore, we can be reasonably confident that the pruning mechanism of ARD is not merely an empirical phenomenon. Nor is it dependent on a particular sparse hyperprior, since the ARD cost from (7) implicitly assumes a flat (uniform) hyperprior.

The number of observation vectors $n$ also plays an important role in shaping ARD solutions. Increasing $n$ has two primary benefits: (i) it facilitates convergence to the global minimum (as opposed to getting stuck in a suboptimal extremum) and (ii) it improves the quality of this minimum by mitigating the effects of noise [20]. With perfectly correlated (spatially separated) sources, primarily only the latter benefit is in effect.
For example, with low noise and perfectly correlated sources, the estimation problem reduces to an equivalent problem with $n = 1$, so the local minima profile of the cost function does not improve with increasing $n$. Of course standard ARD can still be very effective in this scenario [13]. In contrast, geometric arguments can be made to show that uncorrelated sources with large $n$ offer the best opportunity for local minima avoidance. However, when strong correlations are present as well as high noise levels, the method of [14] (which explicitly attempts to model correlations) could offer a worthwhile alternative, albeit at a high computational cost.

Further theoretical support for ARD is possible in the context of localization bias, assuming simple source configurations. For example, substantial import has been devoted to quantifying localization bias when estimating a single dipolar source. Recently it has been shown, both empirically [10] and theoretically [16], that sLORETA has zero location bias under this condition at high SNR. Viewed then as an iterative enhancement of sLORETA as described in Section 2.2, the question naturally arises whether ARD methods retain this desirable property. In fact, it can be shown that this is indeed the case in two general situations. We assume that the lead-field matrix $L$ represents a sufficiently dense sampling of the source space such that any active dipole aligns with some lead-field column. Unbiasedness can also be shown in the continuous case for both sLORETA and ARD, but the discrete scenario is more straightforward and of course more relevant to any practical task.

Result 2. Assume that $\Sigma_s$ includes (among others) $d_s$ covariance components of the form $C_i = e_i e_i^T$. Then in the absence of noise (high SNR), ARD has provably zero localization bias when estimating a single dipolar source, regardless of the value of $n$.

If we are willing to tolerate some additional assumptions, then this result can be significantly expanded. For example, multiple dipolar sources can be localized with zero bias if they are perfectly uncorrelated (orthogonal) across time, assuming some mild technical conditions [20]. This result also formalizes the notion, mentioned above, that ARD performs best with uncorrelated sources. Turning to the more realistic scenario where noise is present gives the following:

Result 3. Let $\Sigma_s$ be constructed as above and assume the noise covariance matrix $\Sigma_\epsilon$ is known up to a scale factor. Then given a single dipolar source, in the limit as $n$ becomes large, the ARD cost function is unimodal, and a source estimate with zero localization bias achieves the global minimum.

For most reasonable lead-fields and covariance components, this global minimum will be unique, and so the unbiased solution will be found as in the noiseless case. As for proofs, all the theoretical results pertaining to localization bias in this section follow from local minima properties of ML covariance component estimates. While details have been deferred to [20], the basic idea is that if the outer product $B B^T$ can be expressed as some non-negative linear combination of the available covariance components, then the ARD cost function is unimodal and $\Sigma_b = n^{-1} B B^T$ at any minimizing solution. This $\Sigma_b$ in turn produces unbiased source estimates in a variety of situations. While theoretical results of this kind are admittedly limited, other iterative Bayesian schemes in fact fail to exhibit similar performance.
For example, all of the MAP-based focal algorithms we are aware of, including FOCUSS and MCE methods, provably maintain a localization bias in the general setting, although in particular cases they may not exhibit one. (Also, because of the additional complexity involved, it is still unclear whether the correlated source method of [14] satisfies a similar result.) When we move to more complex source configurations with possible correlations and noise, theoretical results are not available; however, empirical tests provide a useful means of comparison. For example, given a $275 \times 40{,}000$ lead-field matrix constructed from an MR scan and assuming fixed orientation constraints and a spherical head model, ARD using $\Sigma_s = \mathrm{diag}[\gamma]$ and $n = 1$ (equivalent to having perfectly correlated sources) consistently maintains zero empirical localization bias when estimating up to 15-20 dipoles, while sLORETA starts to show a bias with only a few.

4 Discussion

The efficacy of modern empirical Bayesian techniques and variational approximations makes them attractive candidates for source localization. However, it is not always transparent how these methods relate, nor which should be expected to perform best in various situations. By developing a general framework around the notion of ARD, deriving several theoretical properties, and showing connections between algorithms, we hope to bring an insightful perspective to these techniques.

References

[1] C. M. Bishop and M. E. Tipping, "Variational relevance vector machines," Proc. 16th Conf. Uncertainty in Artificial Intelligence, 2000.
[2] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Sig. Proc., vol. 53, no. 7, 2005.
[3] M.A.T. Figueiredo, "Adaptive sparseness using Jeffreys prior," Advances in Neural Information Processing Systems 14, MIT Press, 2002.
[4] K. Friston, W. Penny, C. Phillips, S. Kiebel, G. Hinton, and J. Ashburner, "Classical and Bayesian inference in neuroimaging: Theory," NeuroImage, vol. 16, 2002.
[5] I.F. Gorodnitsky, J.S. George, and B.D. Rao, "Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm," J. Electroencephalography and Clinical Neurophysiology, vol. 95, no. 4, 1995.
[6] M. Huang, A. Dale, T. Song, E. Halgren, D. Harrington, I. Podgorny, J. Canive, S. Lewis, and R. Lee, "Vector-based spatial-temporal minimum l1-norm solution for MEG," NeuroImage, vol. 31, 2006.
[7] D.J.C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, no. 3, 1992.
[8] J. Mattout, C. Phillips, W.D. Penny, M.D. Rugg, and K.J. Friston, "MEG source localization under multiple constraints: An extended Bayesian framework," NeuroImage, vol. 30, 2006.
[9] R.M. Neal, Bayesian Learning for Neural Networks, Springer-Verlag, New York, 1996.
[10] R.D. Pascual-Marqui, "Standardized low resolution brain electromagnetic tomography (sLORETA): Technical details," Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, suppl. D, 2002.
[11] C. Phillips, J. Mattout, M.D. Rugg, P. Maquet, and K.J. Friston, "An empirical Bayesian solution to the source reconstruction problem in EEG," NeuroImage, vol. 24, 2005.
[12] R.R. Ramírez, Neuromagnetic Source Imaging of Spontaneous and Evoked Human Brain Dynamics, PhD thesis, New York University, 2005.
[13] R.R. Ramírez and S. Makeig, "Neuroelectromagnetic source imaging using multiscale geodesic neural bases and sparse Bayesian learning," 12th Conf. Human Brain Mapping, 2006.
[14] M. Sahani and S.S. Nagarajan, "Reconstructing MEG sources with unknown correlations," Advances in Neural Information Processing Systems 16, MIT Press, 2004.
[15] M. Sato, T. Yoshioka, S. Kajihara, K. Toyama, N. Goda, K. Doya, and M. Kawato, "Hierarchical Bayesian estimation for MEG inverse problem," NeuroImage, vol. 23, 2004.
[16] K. Sekihara, M. Sahani, and S.S. Nagarajan, "Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction," NeuroImage, vol. 25, 2005.
[17] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, 2001.
[18] K. Uutela, M. Hämäläinen, and E. Somersalo, "Visualization of magnetoencephalographic data using minimum current estimates," NeuroImage, vol. 10, 1999.
[19] D.P. Wipf and B.D. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Sig. Proc., vol. 52, no. 8, 2004.
[20] D.P. Wipf, R.R. Ramírez, J.A. Palmer, S. Makeig, and B.D. Rao, Automatic Relevance Determination for Source Localization with MEG and EEG Data, Technical Report, University of California, San Diego, 2006.
A Recurrent Neural Network Model of Velocity Storage in the Vestibulo-Ocular Reflex

Thomas J. Anastasio
Department of Otolaryngology, University of Southern California School of Medicine, Los Angeles, CA 90033

Abstract

A three-layered neural network model was used to explore the organization of the vestibulo-ocular reflex (VOR). The dynamic model was trained using recurrent back-propagation to produce compensatory, long duration eye muscle motoneuron outputs in response to short duration vestibular afferent head velocity inputs. The network learned to produce this response prolongation, known as velocity storage, by developing complex, lateral inhibitory interactions among the interneurons. These had the low baseline, long time constant, rectified and skewed responses that are characteristic of real VOR interneurons. The model suggests that all of these features are interrelated and result from lateral inhibition.

1 SIGNAL PROCESSING IN THE VOR

The VOR stabilizes the visual image by producing eye rotations that are nearly equal and opposite to head rotations (Wilson and Melvill Jones 1979). The VOR utilizes head rotational velocity signals, which originate in the semicircular canal receptors of the inner ear, to control contractions of the extraocular muscles. The reflex is coordinated by brainstem interneurons in the vestibular nuclei (VN), which relay signals from canal afferent sensory neurons to eye muscle motoneurons.

The VN interneurons, however, do more than just relay signals. Among other functions, the VN neurons process the canal afferent signals, stretching out their time constants by about four times before transmitting this signal to the motoneurons. This time constant prolongation, which is one of the clearest examples of signal processing in motor neurophysiology, has been termed velocity storage (Raphan et al. 1979). The neural mechanisms underlying velocity storage, however, remain unidentified.

The VOR is bilaterally symmetric (Wilson and Melvill Jones 1979). The semicircular canals operate in push-pull pairs, and the extraocular muscles are arranged in agonist/antagonist pairs. The VN are also arranged bilaterally and interact via inhibitory commissural connections. The commissures are necessary for velocity storage, which is eliminated by cutting the commissures in monkeys (Blair and Gavin 1981).

When the overall VOR fails to compensate for head rotations, the visual image is not stabilized but moves across the retina at a velocity that is equal to the amount of VOR error. This 'retinal slip' signal is transmitted back to the VN, and is known to modify VOR operation (Wilson and Melvill Jones 1979). Thus the VOR can be modeled beautifully as a three-layered neural network, complete with recurrent connections and error signal back-propagation at the VN level. By modeling the VOR as a neural network, insight can be gained into the global organization of this reflex.

Figure 1: Architecture of the Horizontal VOR Neural Network Model. lhc and rhc, left and right horizontal canal afferents; lvn and rvn, left and right VN neurons; lr and mr, lateral and medial rectus motoneurons of the left eye. This and all subsequent figures are redrawn from Anastasio (1991), with permission.

2 ARCHITECTURE OF THE VOR NEURAL NETWORK MODEL

The recurrent neural network model of the horizontal VOR is diagrammed in Fig. 1. The input units represent afferents from the left and right horizontal semicircular canals (lhc and rhc).
These are the canals and afferents that respond to yaw head rotations (as in shaking the head 'no'). The output units represent motoneurons of the lateral and medial rectus muscles of the left eye (lr and mr). These are the motoneurons and muscles that move the eye in the yaw plane. The units in the hidden layer correspond to interneurons in the VN, on both the left and right sides of the brainstem (lvn1, lvn2, rvn1 and rvn2). All units compute the weighted sum of their inputs and then pass this sum through the sigmoidal squashing function.

To represent the VOR relay, input units project to hidden units and hidden units project to output units. Commissural connections are modeled as lateral interconnections between hidden units on opposite sides of the brainstem. The model is constrained to allow only those connections that have been experimentally well described in mammals. For example, canal afferents do not project directly to motoneurons in mammals, and so direct connections from input to output units are not included in the model. Evidence to date suggests that plastic modification of synapses may occur at the VN level but not at the motoneurons. The weights of synapses from hidden to output units are therefore fixed. All fixed hidden-to-output weights have the same absolute value, and are arranged in a reciprocal pattern. Hidden units lvn1 and lvn2 inhibit lr and excite mr; the opposite pattern obtains for rvn1 and rvn2. The connections to the hidden units, from input or contralateral hidden units, were initially randomized and then modified by the continually running, recurrent back-propagation algorithm of Williams and Zipser (1989).

3 TRAINING AND ANALYZING THE VOR NETWORK MODEL

The VOR neural network model was trained to produce compensatory motoneuron responses to two impulse head accelerations, one to the left and the other to the right, presented repeatedly in random order. The preset impulse responses of the canal afferents (input units) decay with a time constant of one network cycle or tick (Fig. 2, A and B, solid). The desired motoneuron (output unit) responses are equal and opposite in amplitude to the afferent responses, producing compensatory eye movements, but decay with a time constant four times longer, reflecting velocity storage (Fig. 2, A and B, dashed). Because of the three-layered architecture of the VOR, a delay of one network cycle is introduced between the input and output responses.

After about 5000 training set presentations, the network learned to match actual and desired output responses quite closely (Fig. 2, C and D, solid and dashed, respectively). The input-to-hidden connections arranged themselves in a reciprocal pattern, each input unit exciting the ipsilateral hidden units and inhibiting the contralateral ones. This arrangement is also observed for the actual VOR (Wilson and Melvill Jones 1979). The hidden-to-hidden (commissural) connections formed overlapping, lateral inhibitory feedback loops. These loops mediate velocity storage in the network. Their removal results in a loss of velocity storage (a decrease in output time constants from four to one tick), and also slightly increases output unit sensitivity (Fig. 2, C and D, dotted). These effects on the VOR are also observed following commissurotomy in monkeys (Blair and Gavin 1981).
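A minimal sketch, in Python, of the model class just described; the unit names follow the text, while the exact weight values, baselines and amplitudes are illustrative placeholders rather than those of the original implementation:

    import numpy as np

    def squash(x):
        """Sigmoidal squashing function applied by every unit."""
        return 1.0 / (1.0 + np.exp(-x))

    # Fixed, reciprocal hidden-to-output weights: lvn1, lvn2 inhibit lr and
    # excite mr; rvn1, rvn2 do the opposite.  All share the same absolute value w.
    w = 1.0
    W_out = np.array([[-w, -w,  w,  w],    # lr  <- (lvn1, lvn2, rvn1, rvn2)
                      [ w,  w, -w, -w]])   # mr

    def vor_cycle(canal, hidden_prev, W_in, W_comm):
        """One network cycle: canal afferents -> VN hidden units -> motoneurons.

        canal       : (2,) lhc/rhc afferent activities
        hidden_prev : (4,) hidden activities from the previous tick (commissural
                      feedback between the two sides of the brainstem)
        W_in        : (4, 2) learned input-to-hidden weights
        W_comm      : (4, 4) learned commissural (hidden-to-hidden) weights
        """
        hidden = squash(W_in @ canal + W_comm @ hidden_prev)
        return hidden, squash(W_out @ hidden)

    # Training signals: canal impulse response decays with a one-tick time
    # constant; the desired motoneuron response is equal and opposite with a
    # four-tick time constant, delayed by one cycle.
    t = np.arange(60)
    baseline = 0.5
    lhc = baseline + 0.3 * np.exp(-t / 1.0)
    lr_target = np.concatenate(([baseline],
                                baseline - 0.3 * np.exp(-t[:-1] / 4.0)))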
Figure 2: Training the VOR Network Model. A and B, input unit responses (solid), desired output unit responses (dashed), and incorrect output responses of initially randomized network (dotted); lhc and lr in A, rhc and mr in B. C and D, desired output responses (dashed), actual output responses of trained network (solid), and output responses following removal of commissural connections (dotted); lr in C, mr in D.

Although all the hidden units project equally strongly to the output units, the inhibitory connections between them, and their response patterns, are different. Hidden units lvn1 and rvn1 have developed strong mutual inhibition. Thus units lvn1 and rvn1 exert net positive feedback on themselves. Their responses appear as low-pass filtered versions of the input unit responses (Fig. 3, A, solid and dashed). In contrast, hidden units lvn2 and rvn2 have almost zero mutual inhibition, and tend to pass the sharply peaked input responses unaltered (Fig. 3, B, solid and dashed). Thus the hidden units appear to form parallel integrated (lvn1 and rvn1) and direct (lvn2 and rvn2) pathways to the outputs. This parallel arrangement for velocity storage was originally suggested by Raphan and coworkers (1979). However, units lvn2 and rvn2 are coupled to units rvn1 and lvn1, respectively, with moderately strong mutual inhibition. This coupling endows units lvn2 and rvn2 with longer overall decay times than they would have by themselves. This arrangement resembles the mechanism of feedback through a neural low-pass filter, suggested by Robinson (1981) to account for velocity storage. Thus, the network model gracefully combines the two mechanisms that have been identified for velocity storage, in what may be a more optimal configuration than either one alone.

Figure 3: Responses of Model VN Interneurons. Networks trained with (A and B) and without (C and D) velocity storage. A and C, rvn1, solid; lvn1, dashed. B and D, rvn2, solid; lvn2, dashed. rhc, dotted, all plots.

Besides having longer time constants, the hidden units also have lower baseline firing rates and higher sensitivities than the input units (Fig. 3, A and B). The lower baseline forces the hidden units to operate closer to the bottom of the squashing function. This in turn causes the hidden units to have asymmetric responses, larger in the excitatory than in the inhibitory directions. Actual VN interneurons also have higher sensitivities, longer time constants, lower baseline firing rates and asymmetric responses as compared to canal afferents (Fuchs and Kimm 1975; Buettner et al. 1978).
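The storage mechanism can be caricatured in isolation. In the deliberately simplified linear sketch below (our own construction, not the trained network), two units receive push-pull input and inhibit one another; since each unit thereby disinhibits itself every other tick, the cross-weight w sets an effective decay rate, and choosing w close to 1 stretches a one-tick input decay toward a roughly four-tick output decay. An optional firing-rate floor mimics the low-baseline cut-off analyzed in the next section: once one unit is clipped, the loop is weakened and storage degrades.

    import numpy as np

    def commissural_loop(n_ticks, w, amp, floor=None):
        """Two units with push-pull input and mutual inhibition of strength w.

        Returns an (n_ticks, 2) array of unit activities.  With floor=None the
        loop is linear and prolongs the input decay; a finite floor rectifies
        the units and limits storage at large input amplitudes.
        """
        x = amp * np.exp(-np.arange(n_ticks) / 1.0)  # one-tick input decay
        u = np.zeros((n_ticks, 2))
        for t in range(1, n_ticks):
            u[t, 0] = x[t] - w * u[t - 1, 1]   # left unit, inhibited by right
            u[t, 1] = -x[t] - w * u[t - 1, 0]  # right unit, inhibited by left
            if floor is not None:
                u[t] = np.maximum(u[t], floor) # low-baseline cut-off
        return u

    stored = commissural_loop(60, w=0.78, amp=0.3)                # prolonged decay
    clipped = commissural_loop(60, w=0.78, amp=0.9, floor=-0.2)   # storage degraded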
For purposes of comparison, the network was retrained to produce a VOR without velocity storage (inputs and desired outputs had the same time constant of one tick). All of the hidden units in this network developed almost zero lateral inhibition. Although they also had higher sensitivities than the input units, their responses otherwise resembled input responses (Fig. 3, C and D). This demonstrates that the long time constant, low baseline and asymmetric responses of the hidden units are all interrelated by commissural inhibition in the network, which may be the case for actual VN interneurons as well.

4 NONLINEAR BEHAVIOR OF THE VOR NETWORK MODEL

Because hidden units have low baseline firing rates, larger inputs can produce inhibitory hidden unit responses that are forced into the low-sensitivity region of the squashing function or even into cut-off. Hidden unit cut-off breaks the feedback loops that subserve velocity storage. This produces nonlinearities in the responses of the hidden and output units. For example, an impulse input at twice the amplitude of the training input produces larger output unit responses (Fig. 4, A, solid), but these decay at a faster rate than expected (Fig. 4, A, dot-dash). Faster decay results because inhibitory hidden unit responses are cutting off at the higher amplitude level (Fig. 4, C, solid). This cut-off disrupts velocity storage, decreasing the integrative properties of the hidden units (Fig. 4, C, solid) and increasing the output unit decay rate.

Nonlinear responses are even more apparent with sinusoidal input. At low input levels, the output responses are also sinusoidal and their phase lag relative to the input is commensurate with their time constant of four ticks (Fig. 4, B, dashed).

Figure 4: Nonlinear Responses of Model VOR Neurons. A and C, responses of lr (A) and rvn1 (C) to impulse inputs at low (dashed), medium (training, dotted) and high (solid) amplitudes. A, expected lr response at high input amplitude with time constant of four ticks (dot-dash). B and D, response of lr (B) and rvn1 (D) to sinusoidal inputs at low (dashed), medium (dotted) and high (solid) amplitudes.

As sinusoidal input amplitude increases, however, output response phase lag decreases, signifying a decrease in time constant (Fig. 4, B, dotted and solid). Also, the output responses skew, such that the excursions from baseline are steeper than the returns. Time constant decrease and skewing with increase in head rotation amplitude are also characteristic of the VOR in monkeys (Paige 1983). Again, these nonlinearities are associated with hidden unit cut-off (Fig. 4, D, dotted and solid), which disrupts velocity storage, decreasing time constant and phase lag. Skewing results as the system time constant is lowered at peak and raised again midrange throughout each cycle of the responses. Actual VN neurons in monkeys exhibit similar cut-off (rectification) and skew (Fuchs and Kimm 1975; Buettner et al. 1978).

5 CONCLUSIONS

The VOR lends itself well to neural network modeling.
The results summarized here, presented in detail elsewhere (Anastasio 1991), illustrate how neural network analysis can be used to study the organization of the VOR, and how its organization determines the response properties of the neurons that subserve this reflex.

Acknowledgments

This work was supported by the Faculty Research and Innovation Fund of the University of Southern California.

References

Anastasio, TJ (1991) Neural network models of velocity storage in the horizontal vestibulo-ocular reflex. Biol Cybern 64: 187-196
Blair SM, Gavin M (1981) Brainstem commissures and control of time constant of vestibular nystagmus. Acta Otolaryngol 91: 1-8
Buettner UW, Buttner U, Henn V (1978) Transfer characteristics of neurons in vestibular nuclei of the alert monkey. J Neurophysiol 41: 1614-1628
Fuchs AF, Kimm J (1975) Unit activity in vestibular nucleus of the alert monkey during horizontal angular acceleration and eye movement. J Neurophysiol 38: 1140-1161
Paige GC (1983) Vestibuloocular reflex and its interaction with visual following mechanisms in the squirrel monkey. 1. Response characteristics in normal animals. J Neurophysiol 49: 134-151
Raphan Th, Matsuo V, Cohen B (1979) Velocity storage in the vestibulo-ocular reflex arc (VOR). Exp Brain Res 35: 229-248
Robinson DA (1981) The use of control systems analysis in the neurophysiology of eye movements. Ann Rev Neurosci 4: 463-503
Williams RJ, Zipser D (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Comp 1: 270-280
Wilson VJ, Melvill Jones G (1979) Mammalian Vestibular Physiology. Plenum Press, New York
Inducing Metric Violations in Human Similarity Judgements

Julian Laub (1), Jakob Macke (2), Klaus-Robert Müller (1,3) and Felix A. Wichmann (2)
(1) Fraunhofer FIRST.IDA, Kekulestr. 7, 12489 Berlin, Germany
(2) Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
(3) University of Potsdam, Department of Computer Science, August-Bebel-Strasse 89, 14482 Potsdam, Germany
{jlaub,klaus}@first.fhg.de, {felix,jakob}@tuebingen.mpg.de

Abstract

Attempting to model human categorization and similarity judgements is both a very interesting but also an exceedingly difficult challenge. Some of the difficulty arises because of conflicting evidence whether human categorization and similarity judgements should or should not be modelled as to operate on a mental representation that is essentially metric. Intuitively, this has a strong appeal as it would allow (dis)similarity to be represented geometrically as distance in some internal space. Here we show how a single stimulus, carefully constructed in a psychophysical experiment, introduces l2 violations in what used to be an internal similarity space that could be adequately modelled as Euclidean. We term this one influential data point a conflictual judgement. We present an algorithm of how to analyse such data and how to identify the crucial point. Thus there may not be a strict dichotomy between either a metric or a non-metric internal space but rather degrees to which potentially large subsets of stimuli are represented metrically, with a small subset causing a global violation of metricity.

1 Introduction

The central aspect of quantitative approaches in psychology is to adequately model human behaviour. In perceptual research, for example, all successful models of visual perception tacitly assume that at least simple visual stimuli are processed, transformed and compared to some internal reference in a metric space. In cognitive psychology many models of human categorisation, too, assume that stimuli 'similar' to each other are grouped together in categories. Within a category similarity is very high whereas between categories similarity is low. This coincides with intuitive notions of categorization which, too, tend to rely on similarity despite serious problems in defining what similarity means or ought to mean [6].

Work on similarity and generalization in psychology has been hugely influenced by the work of Roger Shepard on similarity and categorization [12, 14, 11, 4, 13]. Shepard explicitly assumes that similarity is a distance measure in a metric space, and many perceptual categorization models follow Shepard's general framework [8, 3]. This notion of similarity is frequently linked to a geometric representation where stimuli are points in a space and the similarity is linked to an intuitive metric on this space, e.g. the Euclidean metric. In a well-known and influential series of papers Tversky and colleagues have challenged the idea of a geometric representation of similarity, however [16, 17]. They provided convincing evidence that (intuitive, and certainly Euclidean) geometric representations cannot account for human similarity judgements, at least for the highly cognitive and non-perceptual stimuli they employed in their studies. Within their experimental context pairwise dissimilarity measurements violated metricity, in particular symmetry and the triangle inequality. Technically, violations of Euclideanity translate into non positive semi-definite similarity matrices ('pseudo-Gram'
matrices) [15], a fact which imposes severe constraints on the data analysis procedures. Typical approaches to overcome these problems involve leaving out negative eigenvalues altogether or shifting the spectrum for subsequent (Kernel-)PCA analysis [10, 7]. The shortcomings of such methods are that they assume that the data really are Euclidean and that all violations are only due to noise. Shepard's solution to non-metricity was to find non-linear transformations of the similarity data of the subjects to make them Euclidean, and/or to use non-Euclidean metrics such as the city-block metric (or other Minkowski $p$-norms with $p \neq 2$) [11, 4]. Yet another way in which metric violations may arise in experimental data, whilst retaining the notion that the internal, mental representation is really metric, is to invoke attentional re-weighting of dimensions during similarity judgements and categorisation tasks [1].

Here we develop a position in between the seeming dichotomy of 'metric versus non-metric' internal representations: our alternative and complementary suggestion is that a potentially very small subset of the data, in fact a single observation or data point or stimulus, may induce the non-metricity, or at least a non-Euclidean metric. In a theoretical setting it has been shown that systematic violation of metricity can be due to an interesting subset of the data, i.e. not due to noise [5]. We show how conflictual judgments can introduce metric violations in a situation where the human similarity judgments are based upon smooth geometric features and are otherwise essentially Euclidean. First we present a simple model which explains the occurrence of metric violations in similarity data, with a special focus on human similarity judgments. Thereafter both models are tested with data obtained from psychophysical experiments specifically designed to induce conflictual judgments.

2 Modeling metric violations for single conflictual situations

A dissimilarity function $d$ is called metric if:

$d(x_i, x_j) \geq 0 \;\; \forall x_i, x_j \in X$,
$d(x_i, x_j) = 0$ iff $x_i = x_j$,
$d(x_i, x_j) = d(x_j, x_i) \;\; \forall x_i, x_j \in X$,
$d(x_i, x_k) + d(x_k, x_j) \geq d(x_i, x_j) \;\; \forall x_i, x_j, x_k \in X$.

A dissimilarity matrix $D = (D_{ij})$ will be called metric if there exists a metric $d$ such that $D_{ij} = d(\cdot, \cdot)$. $D = (D_{ij})$ will be called squared Euclidean if the metric derives from $l_2$. It can be shown that $D$ is $l_2$ (Euclidean) iff $C = -\frac{1}{2} Q D Q$ is positive semi-definite, where $Q = I - \frac{1}{n} e e'$ is the projection matrix on the orthogonal complement of $e = (1, 1, \ldots, 1)'$. $C$ is called the Gram matrix. An indefinite $C$ will be called a pseudo-Gram matrix. A non-metric $D$ is, a fortiori, non-$l_2$, and thus its associated $C$ is indefinite. On the other hand, when $C$ is indefinite, we can conclude that $D$ is non-$l_2$, but not necessarily non-metric. Non-metricity of $D$ must be verified by testing the above four requirements.

We now introduce a simple model for conflictual human similarity. Let $\{f_1, f_2, \ldots, f_n\}$ be a basis. A given data point $x_i$ can be decomposed in this basis as $x_i = \sum_{k=1}^n \alpha_k^{(i)} f_k$. The squared $l_2$ distance between $x_i$ and $x_j$ therefore reads: $d_{ij} = \|x_i - x_j\|^2 = \sum_{k=1}^n \left( \alpha_k^{(i)} - \alpha_k^{(j)} \right)^2 \|f_k\|^2$. However, this assumes constant feature-perception, i.e. a constant mental image with respect to different tasks. In the realm of human perception this is not always the case, as illustrated by the following well known ambiguous figure (Fig. 1). We hypothesise that the ambiguous perception of such figures corresponds to some kind of 'perceptual state-switching'.
If the state-switching could be experimentally induced within a single experiment and subject, this may cause metric, or at least Euclidean, violations by this conflictual judgment. A possible way to model such conflictual situations in human similarity judgments is to introduce states $\{\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)}\}$, $\omega^{(l)} \in \mathbb{R}^n$ for $l = 1, 2, \ldots, d$, affecting the features. The similarity judgment between objects then depends on the perceptual state (weight) the subject is in. Assuming that the person is in state $\omega^{(l)}$, the distance becomes: $d_{ij} = \|x_i - x_j\|^2 = \sum_{k=1}^n \omega_k^{(l)} \left( \alpha_k^{(i)} - \alpha_k^{(j)} \right)^2 \|f_k\|^2$. With no further restriction this model yields non-metric distance matrices. $\omega$ may vary between different subjects, reflecting their different focus of attention; thus we will not average the similarity judgments over different subjects but only over different trials of one single subject, assuming that for a given person $\omega$ is constant.

Figure 1: Left: What do you see? A young lady or an old woman? If you were to compare this picture to a large set of images of young ladies or old women, the perceptual state-switch could induce large individual weights on the similarity. Right: Simple data distribution used in the proof of concept illustration in subsection 2.1.

In order to interpret the metric violations, we propose the following simple algorithm, which allows us to specifically visualize the information coded by the negative eigenvalues. It essentially relies upon the embedding of non-metric pairwise data into a pseudo-Euclidean space (see [2, 9] and references therein for details):

non-squared-Euclidean $D$ $\;\longrightarrow\;$ ($C = -\frac{1}{2} Q D Q$) $\;\longrightarrow\;$ $C$ with negative eigenvalues,
$C$ $\;\longrightarrow\;$ (spectral decomposition) $\;\longrightarrow\;$ $V \Lambda V^\top = V |\Lambda|^{1/2} M |\Lambda|^{1/2} V^\top$,
$X_P = |\Lambda_P|^{1/2} V_P^\top$,

where $V$ is the column matrix of eigenvectors, $\Lambda$ the diagonal matrix of the corresponding eigenvalues, and $M$ the block matrix consisting of the blocks $I_{p \times p}$, $-I_{q \times q}$ and $0_{k \times k}$ (with $k = n - p - q$). The columns of $X_P$ contain the vectors $x_i$ in the $p$-dimensional subspace $P$. Retaining only the first two coordinates ($P = \{v_1, v_2\}$) of the obtained vectors corresponds to a projection onto the first two leading eigendirections. Retaining the last two ($P = \{v_n, v_{n-1}\}$) is a projection onto the last two eigendirections: this corresponds to a projection onto directions related to the negative part of $C$, containing the information coded by the $l_2$ violations.

2.1 Proof of concept illustration: single conflicts introduce metric violations

We now illustrate the model for a single conflictual situation. Consider a weight $\omega^{(l)}$ constant for all feature-vectors, taken to be the unit vectors $e_k$ in this example. Then we have $d_{ij} = \omega_{l_{ij}}^2 \sum_{k=1}^n \left( \alpha_k^{(i)} - \alpha_k^{(j)} \right)^2 \|e_k\|^2 = \omega_{l_{ij}}^2 \|x_i - x_j\|_2^2$, where $\|\cdot\|_2$ is the usual unweighted Euclidean norm. For a simple illustration we take 16 points distributed in two Gaussian blobs (Fig. 1, right), with squared Euclidean distance given by $d^2$, to represent the objects to compare. Suppose an experimental subject is to pairwise compare these objects to give a dissimilarity score, and that a conflictual situation arises for the pairs (2, 3), (7, 2) and (6, 5), translating into a strong weighting of these dissimilarities. For the sake of the example, we chose the (largely exaggerated) weights to be 150, 70 and 220 respectively, acting as follows: $d(2,3) = d^2(2,3) \times 150$, $d(7,2) = d^2(7,2) \times 70$, $d(6,5) = d^2(6,5) \times 220$. The corresponding $d$ is non-Euclidean and its associated $C$ is indefinite.
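The checks and the projection above are easy to state in code. The sketch below is ours (the function names are arbitrary): it tests the four metric axioms, forms the Gram matrix $C = -\frac{1}{2} Q D Q$, and projects onto either the leading positive or the trailing negative eigendirections.

    import numpy as np

    def is_metric(D, tol=1e-9):
        """Symmetry, non-negativity, zero diagonal and the triangle inequality."""
        if not np.allclose(D, D.T, atol=tol):
            return False
        if np.any(D < -tol) or np.any(np.abs(np.diag(D)) > tol):
            return False
        # D[i, j] <= D[i, k] + D[k, j] for all triples (i, k, j)
        return bool(np.all(D[:, None, :] <= D[:, :, None] + D[None, :, :] + tol))

    def gram_matrix(D):
        """C = -1/2 Q D Q; D is squared Euclidean iff C is positive semi-definite."""
        n = D.shape[0]
        Q = np.eye(n) - np.ones((n, n)) / n
        return -0.5 * Q @ D @ Q

    def pseudo_euclidean_projection(D, part='positive'):
        """Rows of the result are the points projected onto the two leading
        positive ('positive') or two trailing negative ('negative')
        eigendirections of the (pseudo-)Gram matrix of D."""
        evals, evecs = np.linalg.eigh(gram_matrix(D))  # ascending order
        idx = [-1, -2] if part == 'positive' else [0, 1]
        return np.sqrt(np.abs(evals[idx])) * evecs[:, idx]

    # Toy conflict of Section 2.1 (0-based indices): weight three entries of a
    # 16-point squared Euclidean matrix D2 (construction of D2 not shown), then
    # inspect the spectrum of the resulting pseudo-Gram matrix:
    # for (i, j), wgt in {(1, 2): 150, (6, 1): 70, (5, 4): 220}.items():
    #     D2[i, j] *= wgt; D2[j, i] *= wgt
    # np.linalg.eigvalsh(gram_matrix(D2))  # now shows clearly negative eigenvalues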
The spectrum of $C$ is given in Fig. 2, right, and exhibits a clear negative part. Fig. 2 shows the projection onto the leading positive and leading negative eigendirections of both the unweighted distance matrix (top row) and the weighted distance matrix (bottom row). Both yield the same grouping in the positive part. In the negative eigenspace we obtain a singular distribution for the unweighted case. This is not the case for the weighted dissimilarity: we see that the distribution in the negative part separates the points whose mutual distance has been (strongly) weighted. The information contained in the negative part, reflecting the information coded by metric or $l_2$ violations, codes in this case for the individual weighting of the (dis)similarities.

Figure 2: Proof of concept: unperturbed dissimilarity matrix (no conflict) and weighted dissimilarity matrix (conflict). Single weighting of dissimilarities introduces metric violations and hence $l_2$ violations, which are reflected in negative spectra. The conflictual points are peripheral in the projection onto the negative eigenspace, centered around the bulk of points whose dissimilarities are essentially Euclidean. Note that because of the huge weights, these effects are largely exaggerated in comparison to real world judgments.

3 Experiments

Twenty gray-scale 256 x 256-pixel images of faces were generated from the MPI face database (footnote 1). All faces were normalized to have the same mean and standard deviation of pixel intensities, the same area, and were aligned such that the cross-correlation of each face to a mean face of the database was maximal. Faces were presented at an angle of 15 degrees and were illuminated primarily with ambient light, together with an additional but weak point source at 65 degrees azimuth and 25 degrees eccentricity.

To show the viability of our approach we require a data set with a good representation of the notion of facial similarity, and we need to ensure that the data set encompasses both extremes of (dis-)similarity. In the absence of a formal theory of facial similarity we hand-selected a set of faces we thought may show the hypothesised effect: sixteen of the twenty faces were selected because prior studies had shown them to be consistently and correctly categorised as male or female [18]. Three of the remaining four faces were females that previous subjects found very difficult to categorise and labelled as female or male almost exactly half of the time. The last face was the mean (androgynous) face across the database. Figure 3 shows the twenty faces thus selected.

Prior to the pairwise comparisons all subjects viewed all twenty faces simultaneously, arranged in a 4 x 5 grid on the experimental monitor. The subjects were asked to inspect the entire set of faces to obtain a general notion of the relative similarity of the faces, and they were instructed to use the entire scale in the following rating task. Subjects were allowed to view the stimuli for however long they wanted. Only thereafter did they proceed to the actual similarity rating stage.
Pairwise comparisons of twenty faces require (20 choose 2) = 190 trials; each of our subjects completed four repetitions, resulting in a total of 760 trials per subject. During the rating stage faces were shown in pairs in random order for a total duration of 4 seconds (200 msec fade-in, 3600 msec full-contrast view, 200 msec fade-out). Subjects were allowed to respond as fast as they wished but had to respond within 5 seconds, i.e. 1 second after the faces had disappeared at the very latest. Similarity was rated on a discrete integer scale between 1 (very dissimilar) and 5 (very similar). The final similarity rating per subject was the mean of the four repetitions within a single subject.

¹ The MPI face database is located at http://faces.kyb.tuebingen.mpg.de

Figure 3: Our data set: Faces 1 to 8 are unambiguous males, faces 9 to 16 are unambiguous females. Faces 17 to 19 are ambiguous and have been attributed to either sex in roughly half the cases. Face 20 is a mean face.

All stimuli were presented on a carefully linearised Siemens SMM21106LS gray-scale monitor with 1024 × 768 resolution at a refresh rate of 130 Hz, driven by a Cambridge Research Systems Visage graphics controller using purpose-written software. The mean luminance of the display was 213 cd/m² and presentation of the stimuli did not change the mean luminance of the display. Three subjects with normal or corrected-to-normal vision, naive to the purpose of the experiment, acted as observers; they were paid for their participation.

We will discuss in detail the results obtained with the first subject; the results from the other subjects are summarized. In order to exhibit how a single conflictual judgment can break metricity, we follow a two-fold procedure: we first chose a data set of unambiguous faces whose dissimilarities are Euclidean or essentially Euclidean. Second, we compare this subset of faces to a set containing those very same unambiguous males and females, extended by one additional conflict-generating face (see Figure 4 for a schematic illustration).

Figure 4: The unambiguous females and unambiguous males lead to a pairwise dissimilarity matrix which is essentially Euclidean. The addition of one single conflicting face introduces the l2 violations.

3.1 Subject 1

We chose a subset of faces which has the property that their mutual dissimilarities are essentially Euclidean (Fig. 5). The conflict-generating face is 19 and will be denoted as X. Fig. 5 shows that the set of unambiguous faces is essentially Euclidean: the smallest eigenvalues of the spectrum are almost zero. This is reflected in an almost singular projection in the eigenspace spanned by the eigenvectors associated with the negative eigenvalues. The projection onto the eigenspace spanned by the eigenvectors associated with the positive eigenvalues separates males from females, which corresponds to the unique salient feature in the data set.

Figure 5: Left: Spectrum with only minor l2 violations. Middle: males vs. females. Right: when a metric is (essentially) Euclidean, the points are concentrated on a singularity in the negative eigenspace.

In order to provoke the conflictual situation, we add one single conflicting face, denoted by X.
This face has been attributed in previous studies to either sex in 50% of the cases. This addition causes the spectrum to flip down, hinting at an unambiguous l2 violation; see Fig. 6. Furthermore, it can be verified that the triangle inequality is violated in several instances by the addition of this conflicting judgment, reflecting that the violation is indeed metric in this case.

Figure 6: Left: Spectrum with l2 violations. Middle: males vs. females. Right: The conflicting face X is separated from the bulk of faces corresponding to the Euclidean dissimilarities.

The positive projection remains almost unchanged and again separates male from female faces, with X in between, reflecting its intermediate position between the males and the females. In the negative projection, X separates from the bulk of points which are mutually Euclidean. This corresponds to the effect, albeit not as pronounced, described in the proof-of-concept illustration of subsection 2.1. Thus we see that the introduction of a conflicting face within a coherent set of unambiguous faces is the cause of the metric violation.

3.2 Subjects 2 and 3

The same procedure was applied to the similarity judgments given by Subjects 2 and 3. Since the individual perceptual states are incommensurable between different subjects (the reason why we do not average over subjects but only within a subject), the extracted Euclidean subsets were different for each of them. However, the process which creates the l2 violation is the same. Figures 7 and 8 show this process: a conflicting observation destroys the underlying Euclidean structure in the judgements. For both Subjects 2 and 3, the face X, lying between the unambiguous faces, falls outside the bulk of Euclidean points concentrated around the singularity in the negative projections.

Figure 7: Subject 2: In the upper row, the subset of faces whose dissimilarities are Euclidean. The lower row shows the effect of introducing a conflicting face X and the subsequent weighting.

Figure 8: Subject 3: In the upper row, the subset of faces whose dissimilarities are essentially Euclidean. The lower row shows the effect of introducing a conflicting face X and the subsequent weighting.

Again we find that the introduction of a single conflicting face within a set of unambiguous faces, for which the human similarity judgment is essentially Euclidean, introduces the l2 violations. This strongly corroborates our conflict model and the statement that metric violations in human similarity judgments have a specific meaning: in this case, a conflictual judgment.

4 Conclusion

We presented a simple experiment in which we could show how a single, purposely selected stimulus introduces l2 violations in what appeared to have been an internal Euclidean similarity space of facial attributes.
Importantly, then, there may not be a clear dichotomy in which internal representations of similarity are either metric or not; rather, they may be metric for "easy" stimuli, while "ambiguous" stimuli can cause metric violations, at least l2 violations in our setting. We have clearly shown that these violations are caused by conflictual points in a data set: the addition of one such point caused the spectra of the Gram matrices to "flip down", reflecting the l2 violation. Further research will involve the acquisition of more pairwise similarity judgements in conflicting situations, as well as the refinement of our existing experiments. In particular, we would like to know whether it is possible to create larger, scalable conflicts, i.e. conflicts which lead to a much stronger re-weighting and thus to a clearer separation of the conflicting point from the bulk of Euclidean points.

References

[1] F. Gregory Ashby and W. William Lee. Predicting similarity and categorization from identification. Journal of Experimental Psychology: General, 120(2):150–172, 1991.
[2] L. Goldfarb. A unified approach to pattern recognition. Pattern Recognition, 17:575–582, 1984.
[3] J.K. Kruschke. ALCOVE: an exemplar-based connectionist model of category learning. Psychological Review, 99(1):22–44, 1992.
[4] J.B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.
[5] J. Laub and K.-R. Müller. Feature discovery in non-metric pairwise data. Journal of Machine Learning Research, 5:801–818, 2004.
[6] D.L. Medin, R.L. Goldstone, and D. Gentner. Respects for similarity. Psychological Review, 100(2):254–278, 1993.
[7] S. Mika, B. Schölkopf, A.J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 536–542. MIT Press: Cambridge, MA, 1999.
[8] R.M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115:39–57, 1986.
[9] E. Pękalska, P. Paclík, and R. P. W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2:175–211, 2001.
[10] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[11] R. N. Shepard. The analysis of proximities: Multidimensional scaling with an unknown distance function. Psychometrika, 27(2):125–140, 1962.
[12] R.N. Shepard. Stimulus and response generalization: A stochastic model relating generalization to distance in psychological space. Psychometrika, 22:325–345, 1957.
[13] R.N. Shepard. Toward a universal law of generalization for psychological science. Science, 237(4820):1317–1323, 1987.
[14] Roger N. Shepard, Carl I. Hovland, and Herbert M. Jenkins. Learning and memorization of classifications. Psychological Monographs, 75(13):1–42, 1961.
[15] W. S. Torgerson. Theory and Methods of Scaling. John Wiley and Sons, New York, 1958.
[16] A. Tversky. Features of similarity. Psychological Review, 84(4):327–352, 1977.
[17] A. Tversky and I. Gati. Similarity, separability, and the triangle inequality. Psychological Review, 89(2):123–154, 1982.
[18] F. A. Wichmann, A. B. A. Graf, E. P. Simoncelli, H. H. Bülthoff, and B. Schölkopf. Machine learning applied to perception: decision-images for classification.
In Advances in Neural Information Processing Systems 17, pages 1489–1496. MIT Press, 2005.
2,304
3,091
Game theoretic algorithms for Protein-DNA binding

Luis E. Ortiz, CSAIL - MIT, leortiz@csail.mit.edu
Luis Pérez-Breva, CSAIL - MIT, lpbreva@csail.mit.edu
Tommi Jaakkola, CSAIL - MIT, tommi@csail.mit.edu
Chen-Hsiang Yeang, UCSC, chyeang@soe.ucsc.edu

Abstract

We develop and analyze game-theoretic algorithms for predicting coordinate binding of multiple DNA binding regulators. The allocation of proteins to local neighborhoods and to sites is carried out with resource constraints while explicating competing and coordinate binding relations among proteins with affinity to the site or region. The focus of this paper is on mathematical foundations of the approach. We also briefly demonstrate the approach in the context of the λ-phage switch.

1 Introduction

Transcriptional control relies in part on coordinate operation of DNA binding regulators and their interactions with various co-factors. We believe game theory and economic models provide an appropriate modeling framework for understanding interacting regulatory processes. In particular, the problem of understanding coordinate binding of regulatory proteins has many game theoretic properties. Resource constraints, for example, are critical to understanding who binds where. At low nuclear concentrations, regulatory proteins may occupy only high affinity sites, while filling weaker sites with increasing concentration. Overlapping or close binding sites create explicit competition for the sites, the resolution of which is guided by the available concentrations around the binding sites. Similarly, explicit coordination such as formation of larger protein complexes may be required for binding or, alternatively, binding may be facilitated by the presence of another protein. The key advantage of games as models of binding is that they can provide causally meaningful predictions (binding arrangements) in response to various experimental perturbations or disruptions.

Our approach deviates from an already substantial body of computational methods used for resolving transcriptional regulation (see, e.g., [3, 10]). From a biological perspective our work is closest in spirit to more detailed reaction equation models [5, 1], while narrower in scope. The mathematical approach is nevertheless substantially different.

2 Protein-DNA binding

We decompose the binding problem into transport and local binding. By transport, we refer to the mechanism that transports proteins to the neighborhood of sites to which they have affinity. The biological processes underlying the transport are not well understood, although several hypotheses exist [12, 4]. We abstract the process initially by assuming separate affinities for proteins to explore neighborhoods of specific sites, modulated by whether the sites are available. This abstraction does not address the dynamics of the transport process and therefore does not distinguish (nor stand in contradiction to) underlying mechanisms that may or may not involve diffusion as a major component. We aim to capture the differentiated manner in which proteins may accumulate in the neighborhoods of sites depending on the overall nuclear concentrations and regardless of the time involved. Local binding, on the other hand, captures which proteins bind to each site as a consequence of local accumulations or concentrations around the site or a larger region. In a steady state, the local environment of the site is assumed to be closed and well-mixed.
We therefore model the binding as being governed by chemical equilibria: for a type of protein i around site j,

    {free protein i} + {free site j} ⇌ {bound ij},

where concentrations involving the site should be thought of as time averages or averages across a population of cells, depending on the type of predictions sought. The concentrations of various molecular species around and bound to the sites, as well as the rate at which the sites are occupied, are then governed by the law of mass action at chemical equilibrium:

    [bound ij] / ([free protein i][free site j]) = K_ij,

where i ranges over proteins with affinity to site j and K_ij is a positive equilibrium constant characterizing protein i's ability to bind to site j in the absence of other proteins.

Broadly speaking, the combination of transport and local binding results in an arrangement of proteins along the possible DNA binding sites. This is what we aim to predict with our game-theoretic models, not how such arrangements are reached. The predictions should be viewed as functions of the overall (nuclear) concentrations of proteins, the affinities of proteins to explore neighborhoods of individual sites, as well as the equilibrium constants characterizing the ability of proteins to bind to specific sites when in close proximity. Any perturbation of such parameters leads to a potentially different arrangement that we can predict.

3 Game Theoretic formulation

There are two types of players in our game, proteins and sites. A protein-player refers to a type of protein, not an individual protein, and decides how its nuclear concentration is allocated to the proximity of sites (transport process). The protein-players are assumed non-cooperative and rational. In other words, their allocations are based on the transport affinities and the availability of sites rather than on some negotiation process involving multiple proteins. The non-cooperative nature of the protein allocations does not, however, preclude the formation of protein complexes or binding facilitated by other proteins. Such extensions can be incorporated at the sites.

Each possible binding site is associated with a site-player. Site-players choose the fraction of time (or fraction of cells in a population) a specific type of protein is bound to the site. The site may also remain empty. The strategies of the site-players are guided by local chemical equilibria. Indeed, the site-players are introduced merely to reproduce this physical understanding of the binding process in a game theoretic context. The site-players are non-cooperative and self-interested, always aiming and succeeding at reproducing the local chemical equilibria.

The binding game has no global objective function that serves to guide how the players choose their strategies. The players' choices are instead guided by their own utilities, which depend on the choices of other players. For example, the protein-player allocates its nuclear concentration to the proximity of the sites based on how occupied the sites are, i.e., in a manner that depends on the strategies of the site-players. Similarly, the site-players reproduce the chemical equilibrium at the sites on the basis of the available local protein concentrations, i.e., depending on the choices of the protein-players. The predictions we can make based on the game theoretic formulation are equilibria of the game (not to be confused with the local chemical equilibria at the sites).
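As a quick numerical check of the law of mass action stated above (the numbers are ours and purely illustrative), the bound fraction s of a single protein species at a single site satisfies s/((pf − s)(1 − s)) = K, which is a quadratic in s:

```python
import numpy as np

def bound_fraction(pf, K):
    """Fraction of time one site is bound by a single protein species.

    Solves s = K * (pf - s) * (1 - s), i.e.
    K*s^2 - (K*pf + K + 1)*s + K*pf = 0, taking the physical (smaller) root.
    """
    b = K * pf + K + 1.0
    return (b - np.sqrt(b * b - 4.0 * K * K * pf)) / (2.0 * K)

print(bound_fraction(pf=5.0, K=0.1))   # ~0.32: weak binding, site often free
print(bound_fraction(pf=5.0, K=10.0))  # ~0.98: strong binding, site mostly occupied
```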
At an equilibrium, no reallocation of proteins to sites is required and, conversely, the sites have reproduced the local chemical equilibria based on the current allocations of proteins. While games need not have equilibria in pure strategies (actions available to the players), our game will always have one.

4 The binding game

To specify the game more formally we proceed to define players' strategies, their utilities, and the notion of an equilibrium of the game. To this end, let f^i represent the (nuclear) concentration of protein i. This is the amount of protein available to be allocated to the neighborhoods of sites. The fraction of protein i allocated to site j is specified by p_ij, where Σ_j p_ij = 1. The numerical values of p_ij, where j ranges over the possible sites, define a possible strategy for the i-th protein player. The set of such strategies is denoted by P^i. The choices of which strategies to play are guided by parameters E_ij, the affinity of protein i to explore the neighborhood of site j (we will generally index proteins with i and sites with j). The utility for protein i, defined below, provides a numerical ranking of possible strategy choices and is parameterized by E_ij. Each player aims to maximize its own utility over the set of possible strategy choices.

The strategy for site-player j specifies the fraction of time that each type of protein is actually bound to the site. The strategy is denoted by s_i^j, where i ranges over proteins with affinity to the site. Note that the values of s_i^j are in principle observable from binding assays (cf. [9]). Σ_i s_i^j ≤ 1 since there is only one site and it may remain empty part of the time. The availability of site j is 1 − Σ_i s_i^j, i.e., the fraction of time that nothing is bound. We will also use ρ_j = Σ_i s_i^j to denote how occupied the site is. The utilities of the site players will depend on K_ij, the chemical equilibrium constants characterizing the local binding reaction between protein i and site j.

Utilities. The utility function for protein-player i is formally defined as

    u_i(p_i, s) ≜ Σ_j p_ij E_ij (1 − Σ_{i'} s_{i'}^j) + τ H(p_i),    (1)

where H(p_i) = −Σ_j p_ij log p_ij is the Shannon entropy of the strategy p_i and j ranges over possible sites. The utility of the protein-player essentially states that protein i "prefers" to be around sites that are unbound and for which it has high affinity. The parameter τ ≥ 0 balances how much protein allocations are guided by the differentiated process, characterized by the exploration affinities E_ij, as opposed to allocated uniformly (maximizing the entropy function). Since the overall scaling of the utilities is immaterial, only the ratios E_ij/τ are relevant for guiding the protein-players. Note that since the utility depends on the strategies of site-players through (1 − Σ_{i'} s_{i'}^j), one cannot find the equilibrium strategy for proteins by considering s_i^j to be fixed; the sites will respond to any p_ij chosen by the protein-player.

As discussed earlier, the site-players always reproduce the chemical equilibrium between the site and the protein species allocated to the neighborhood of the site. The utility for site-player j is defined such that the maximizing strategy corresponds to the chemical equilibrium:

    s_i^j / [(p_ij f^i − s_i^j)(1 − Σ_{i'} s_{i'}^j)] = K_ij,    (2)

where s_i^j specifies how much protein i is bound, the first term in the denominator, (p_ij f^i − s_i^j), specifies the amount of free protein i, and the second term, (1 − Σ_{i'} s_{i'}^j), the fraction of time the site is available.
The equilibrium equation holds for all protein species around the site and for the same strategy {s_i^j} of the site-player. The units of each "concentration" in the above equation should be interpreted as numbers of available molecules (e.g., there's only one site). The utility function that reproduces this chemical equilibrium when maximized over possible strategies is given by

    v_j(s^j, p) ≜ −Σ_i [ s_i^j − K_ij (p_ij f^i − s_i^j)(1 − Σ_{i'} s_{i'}^j) ],    (3)

subject to s_i^j ≥ K_ij (p_ij f^i − s_i^j)(1 − Σ_{i'} s_{i'}^j), s_i^j ≤ p_ij f^i, and Σ_{i'} s_{i'}^j ≤ 1. These constraints guarantee that the utility is always non-positive and zero exactly when the chemical equilibrium holds. s_i^j ≤ p_ij f^i ensures that we cannot have more protein bound than is allocated to the proximity of the site. These constraints define the set of strategies available for site-player j, or S^j(p). Note that the available strategies for the site-player depend on the current strategies of the protein-players. The set of strategies S^j(p) is not convex.

4.1 The game and equilibria

The protein-DNA binding game is now fully specified by the set of parameters {E_ij/τ}, {K_ij} and {f^i}, along with the utility functions {u_i} and {v_j} and the allocation constraints {P^i} and {S^j}. We assume that the biological system being modeled reaches a steady state, at least momentarily, preserving the average allocations. In terms of our game theoretic model, this corresponds to what we call an equilibrium of the game. Informally, an equilibrium of a game is a strategy for each player such that no individual has any incentive to unilaterally deviate from their strategy. Formally, if the allocations (p̂, ŝ) are such that for each protein i and each site j,

    p̂_i ∈ arg max_{p_i ∈ P^i} u_i(p_i, ŝ)  and  ŝ^j ∈ arg max_{s^j ∈ S^j(p̂)} v_j(s^j, p̂),    (4)

then we call (p̂, ŝ) an equilibrium of the protein-DNA binding game. Put another way, at an equilibrium, the current strategies of the players must be among the strategies that maximize their utilities assuming the strategies of other players are held fixed.

Does the protein-DNA binding game always have an equilibrium? While we have already stated this in the affirmative, we emphasize that there is no reason a priori to believe that there exists an equilibrium in pure strategies, especially since the sets of possible strategies for the site-players are non-convex (cf. [2]). The existence is guaranteed by the following theorem:

Theorem 1. Every protein-DNA binding game has an equilibrium.

A constructive proof is provided by the algorithm discussed below. The theorem guarantees that at least one equilibrium exists, but there may be more than one. At any such equilibrium of the game, all the protein species around each site are at a chemical equilibrium; that is, if (p̂, ŝ) is an equilibrium of the game, then for all sites j and proteins i, ŝ^j and p̂_ij satisfy (2). Consequently, the site utilities v_j(ŝ^j, p̂) are all zero for the equilibrium strategies.

4.2 Computing equilibria

The equilibria of the binding game represent predicted binding arrangements. Our game has special structure and properties that permit us to find an equilibrium efficiently through a simple iterative algorithm. The algorithm monotonically fills the sites up to the equilibrium levels, starting with all sites empty. We begin by expressing any joint equilibrium strategy of the game as a function of how filled the sites are, and reduce the problem of finding equilibria to finding fixed points of a monotone function.
To this end, let ρ_j = Σ_{i'} s_{i'}^j denote site j's occupancy, the fraction of time it is bound by any protein. The ρ_j's are real numbers in the interval [0, 1]. If we fix ρ = (ρ_1, ..., ρ_m), i.e., the occupancies for all the m sites, then we can readily obtain the maximizing strategies for proteins expressed as a function of site occupancies: p_ij(ρ) ∝ exp(E_ij (1 − ρ_j)/τ), where the maximizing strategies are functions of ρ. Similarly, at the equilibrium, each site-player achieves a local chemical equilibrium as specified in (2). By replacing ρ_j = Σ_{i'} s_{i'}^j and solving for s_i^j in (2), we get

    s_i^j(ρ) = [ K_ij (1 − ρ_j) / (1 + K_ij (1 − ρ_j)) ] p_ij(ρ) f^i.    (5)

So, for example, the fraction of time the site is bound by a specific protein is proportional to the amount of that protein in the neighborhood of the site, modulated by the equilibrium constant. Note that s_i^j(ρ) depends not only on how filled site j is but also on how occupied the other sites are, through p_ij(ρ). The equilibrium condition can now be expressed solely in terms of ρ and reduces to a simple consistency constraint: overall occupancy should equal the fraction of time any protein is bound, or

    ρ_j = Σ_i s_i^j(ρ) = Σ_i [ K_ij (1 − ρ_j) / (1 + K_ij (1 − ρ_j)) ] p_ij(ρ) f^i = G_j(ρ).    (6)

We have therefore reduced the problem of finding equilibria of the game to finding fixed points of the mapping G_j(ρ) = Σ_i s_i^j(ρ). This mapping, written explicitly in (6), has a simple but powerful monotonicity property that forms the basis for our iterative algorithm. Specifically,

Lemma 1. Let ρ_{−j} denote all components ρ_k except ρ_j. Then for each j, G_j(ρ) ≡ G_j(ρ_j, ρ_{−j}) is a strictly decreasing function of ρ_j for any fixed ρ_{−j}.

We omit the proof as it is straightforward. This lemma, together with the fact that G_j(1, ρ_{−j}) = 0, immediately guarantees that there is a unique solution to ρ_j = G_j(ρ_j, ρ_{−j}) for any fixed and valid ρ_{−j}. The solution ρ_j also lies in the interval [0, 1] and can be found efficiently via binary search.

The algorithm. Let ρ(t) denote the site occupancies at the t-th iteration of the algorithm. ρ_j(t) specifies the j-th component of this vector, while ρ_{−j}(t) contains all but the j-th component. The algorithm proceeds as follows:

- Set ρ_j(0) = 0 for all j = 1, ..., m.
- Find each new component ρ_j(t + 1), j = 1, ..., m, on the basis of the corresponding ρ_{−j}(t) such that ρ_j(t + 1) = G_j(ρ_j(t + 1), ρ_{−j}(t)).
- Stop when ρ_j(t + 1) ≈ ρ_j(t) for all j = 1, ..., m.

Note that the inner loop of the algorithm, i.e., finding ρ_j(t + 1) on the basis of ρ_{−j}(t), reduces to a simple binary search as discussed earlier. The algorithm generates a monotonically increasing sequence of ρ's that converges to a fixed point (equilibrium) solution. We also provide a formal convergence analysis of the algorithm. To this end, we begin with the following critical lemma.

Lemma 2. Let ρ¹ and ρ² be two possible assignments to ρ. If for all k ≠ j, ρ¹_k ≥ ρ²_k, then G_j(ρ_j, ρ¹_{−j}) ≥ G_j(ρ_j, ρ²_{−j}) for all ρ_j.

The proof is straightforward and essentially based on the fact that ρ¹_{−j} and ρ²_{−j} appear only in the normalization terms of the protein allocations. We omit further details for brevity. On the basis of this lemma, we can show that the algorithm indeed generates a monotonically increasing sequence of ρ's.

Theorem 2. ρ_j(t + 1) ≥ ρ_j(t) for all j and t.

Proof. By induction. Since ρ_j(0) = 0 and the range of G_j(ρ_j, ρ_{−j}(0)) lies in [0, 1], clearly ρ_j(1) ≥ ρ_j(0) for all j. Assume then that ρ_j(t) ≥ ρ_j(t − 1) for all j.
We extend the induction step by contradiction. Suppose ρ_j(t + 1) < ρ_j(t) for some j. Then

    ρ_j(t + 1) < ρ_j(t) = G_j(ρ_j(t), ρ_{−j}(t − 1)) ≤ G_j(ρ_j(t), ρ_{−j}(t)) < G_j(ρ_j(t + 1), ρ_{−j}(t)) = ρ_j(t + 1),

which is a contradiction. The first "≤" follows from the induction hypothesis and Lemma 2, and the last "<" derives from Lemma 1 and ρ_j(t + 1) < ρ_j(t).

Since ρ_j(t) for any t will always lie in the interval [0, 1], and because of the continuity of G_j(ρ_j, ρ_{−j}) in the two arguments, the algorithm is guaranteed to converge to a fixed point solution. More formally, the Monotone Convergence Theorem for sequences and the continuity of the G_j's imply that

Theorem 3. The algorithm converges to a fixed point ρ* such that ρ*_j = G_j(ρ*_j, ρ*_{−j}) for all j.

4.3 The λ-phage binding game

We use the well-known λ-phage viral infection [11, 1] to illustrate the game theoretic approach. A genetic two-state control switch specifies whether the infection remains dormant (lysogeny) or whether the viral DNA is aggressively replicated (lysis). The components of the λ-switch are 1) two adjacent genes cI and Cro that encode the cI2 and Cro proteins, respectively; 2) the promoter regions P_RM and P_R of these genes; and 3) an operator (O_R) with three binding sites OR1, OR2, and OR3. We focus on lysogeny, in which cI2 dominates over Cro. There are two relevant protein-players, RNA-polymerase and cI2, and three sites, OR1, OR2, and OR3 (arranged close together in this order). Since the presence of cI2 in either OR1 or OR3 blocks the access of RNA-polymerase to the promoter region P_R or P_RM, respectively, we can safely restrict ourselves to operator sites as the site-players. There are three phases of operation depending on the concentration of cI2:

1. cI2 binds to OR1 first and blocks the Cro promoter P_R.
2. Slightly higher concentrations of cI2 lead to binding at OR2, which in turn facilitates RNA-polymerase to initiate transcription at P_RM.
3. At sufficiently high levels cI2 also binds to OR3 and inhibits its own transcription.

[Figure 1: three panels, (a) OR3, (b) OR2, (c) OR1, each plotting the probability of binding for cI2 and RNA-polymerase against the ratio f_cI2/f_RNA-p.] Figure 1: Predicted protein binding to sites OR3, OR2, and OR1 for increasing amounts of cI2. The rightmost figure illustrates a comparison with [1]. The shaded area indicates the range of concentrations of cI2 at which stochastic simulation predicts a decline in transcription from OR1. Our model predicts that cI2 begins to occupy OR1 at the same concentration.

Game parameters. The game requires three sets of parameters: chemical equilibrium constants, affinities, and protein concentrations. To use constants derived from experiment we assign units to these quantities. We define f^i as the total number of proteins i available, and arrange the units of K_ij accordingly:

    f^i ≈ f̃^i V_T N_A,    K̃_ij = e^{−ΔG/RT},    K_ij ≈ K̃_ij / (N_A V_S),    (7)

where V_T and V_S are the volumes of the cell and the site neighborhood, respectively, N_A is the Avogadro number, R is the universal gas constant, T is temperature, f̃^i is the concentration of protein i in the cell, and K̃_ij is the equilibrium constant in units of ℓ/mol. As we show in [6], these definitions are consistent with our previous derivation.
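Before plugging in the λ-phage constants below, the following NumPy sketch (ours) implements the iterative fixed-point algorithm of Section 4.2; the protein counts, affinities, and equilibrium constants used at the bottom are illustrative placeholders, not the tabulated λ-phage values:

```python
import numpy as np

def allocations(E, rho, tau):
    """p_ij(rho) proportional to exp(E_ij * (1 - rho_j) / tau), row-normalized."""
    logits = E * (1.0 - rho)[None, :] / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def G(rho, E, K, f, tau):
    """G_j(rho) from Eq. (6): occupancy of each site implied by rho."""
    p = allocations(E, rho, tau)
    frac = K * (1.0 - rho)[None, :] / (1.0 + K * (1.0 - rho)[None, :])
    return (frac * p * f[:, None]).sum(axis=0)

def solve_equilibrium(E, K, f, tau, sweeps=500, tol=1e-10):
    """Monotone filling: rho(0) = 0, then rho_j(t+1) = G_j(rho_j(t+1), rho_-j(t))."""
    m = E.shape[1]
    rho = np.zeros(m)
    for _ in range(sweeps):
        new = np.empty(m)
        for j in range(m):
            lo, hi = 0.0, 1.0      # G_j is strictly decreasing in rho_j (Lemma 1)
            for _ in range(60):    # binary search for the unique fixed point
                mid = 0.5 * (lo + hi)
                r = rho.copy()
                r[j] = mid
                if G(r, E, K, f, tau)[j] > mid:
                    lo = mid
                else:
                    hi = mid
            new[j] = 0.5 * (lo + hi)
        if np.max(np.abs(new - rho)) < tol:
            return new
        rho = new
    return rho

# Illustrative toy parameters: 2 protein types, 3 sites.
E = np.array([[1.0, 0.1, 0.1],
              [1.0, 0.01, 0.2]])
K = np.array([[0.03, 0.002, 0.002],
              [0.11, 0.0, 0.02]])
f = np.array([100.0, 340.0])
print(solve_equilibrium(E, K, f, tau=0.1))   # equilibrium occupancies rho*
```

By Theorem 2 the outer sweeps, started from all-empty sites, generate a monotonically increasing ρ, so the stopping test converges.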
Note that when game parameters are learned from data, any dependence on the volumes will be implicit. For a typical Escherichia coli (~2 μm length) at room temperature, the Gibbs free energies ΔG tabulated by [11] yield the equilibrium constants shown below; in addition, we set the transport affinities in accordance with the qualitative description in [7, 8]:

    K_ij     cI2      RNA-p        E_ij     cI2     RNA-p
    OR3      .0020    .0212        OR3      .1      .2
    OR2      .0020    0            OR2      .1      .01
    OR1      .0296    .1134        OR1      1       1

Note that the overall scaling of the affinities is immaterial; only their relative values will guide the protein-players. Note also that we have chosen not to incorporate any protein-protein interactions in the affinities. Finally, we set f̃_RNA-p = 30 nM (cf. [11]), around f_RNA-p ≈ 340 copies for a typical E. coli, and varied f_cI2 from 1 to 10,000 copies to study the dynamical behavior of the lysogeny cycle. The results are reported as a function of the ratio f_cI2/f_RNA-p. We set τ = 10⁻⁵.

Simulation Results. The predictions from the game theoretic model exactly mirror the known behavior. Here we summarize the main results and refer the reader to [6] for a thorough analysis. Figure 1 illustrates how the binding at different sites changes as a function of increasing f_cI2. The simulation mirrors the behavior of the lysogeny cycle discussed earlier. Although our model does not capture dynamics, and Figure 1 does not involve time, it is nevertheless useful for assessing quantitative changes and the order of events as a function of increasing f_cI2. Note, for example, that the levels at which cI2 occupies OR1 and OR2 rise much faster than at OR3. While the result is expected, the behavior is attributed to protein-protein interactions, which are not encoded in our model. Similarly, RNA-polymerase occupation at OR3 bumps up as the probability that OR2 is bound by cI2 increases. In [6] we further discuss the implications of the simultaneous occupancy of OR1 and OR2, via simulation of OR1 knockout experiments. Finally, Figure 1(c) shows a comparison with stochastic simulation (v. [1]). Our model predicts that cI2 begins binding OR1 at the same level as [1] predicts a decline in the transcription of Cro. While consistent, we emphasize that the methods differ in their goals; stochastic simulation focuses on the dynamics of transcription, while we study the strategic allocation of proteins as a function of their concentration.

4.4 A structured extension

The game theoretic formulation of the binding problem described previously involves a transport mechanism that is specific to individual sites. In other words, proteins are allocated to the proximity of sites based on parameters E_ij and occupancies ρ_j associated with individual sites. We generalize the game further here by assuming that the transport mechanism has a coarser spatial structure, e.g., specific to promoters (regulatory regions of genes) rather than sites. In this extension the amount of protein allocated to any promoter is shared by the sites it contains. The sharing creates specific challenges for the algorithms for finding the equilibria, and we will address those challenges here. Let R represent the possible promoter regions, each of which may be bound by multiple proteins (at distinct or overlapping sites). Let p_i = {p_ir}_{r∈R} represent an allocation of protein i into these regions in a manner that is not specific to the possible sites within each promoter.
The utility for protein i is given by

    u_i(p_i) = Σ_{r∈R} p_ir E_ir(a_r) + τ H(p_i),

where N(r) is the set of possible binding sites within promoter region r and a_r = Σ_{j∈N(r)} ρ_j is the overall occupancy of the promoter (how many proteins are bound). As before, ρ_j = Σ_{i∈P} s_i^j, where the summation is over proteins. N(r) ∩ N(r') = ∅ whenever r ≠ r' (promoters don't share sites). We assume only that E_ir(a_r) is a decreasing and differentiable function of a_r. The protein utility is based on the assumption that the attraction to the promoter decreases with the number of proteins already bound at the promoter. The maximizing strategy for protein i, given a_r = Σ_{j∈N(r)} ρ_j for all r, is p_ir(a) ∝ exp(E_ir(a_r)/τ), where a = {a_r}_{r∈R}.

Sites j ∈ N(r) within a promoter region r reproduce the following chemical equilibrium:

    s_i^j / [ (f^i p_ir(a) − Σ_{k∈N(r)} s_i^k)(1 − ρ_j) ] = K_ij  for all proteins i ∈ P.

Note the shared protein resource within the promoter. We can find this chemical equilibrium by solving the following fixed point equations:

    ρ_j = Σ_{i∈P} [ K_ij (1 − ρ_j) / (1 + Σ_{k∈N(r)} K_ik (1 − ρ_k)) ] f^i p_ir(a) = G_j^r(ρ, a^{−r}).

The site occupancies ρ are now tied within the promoter as well as influencing the overall allocation of proteins across different promoters through a = {a_r}_{r∈R}. The following theorem provides the basis for solving the coupled fixed point equations:

Theorem 4. Let {ρ̂_1^j} be the fixed point solution of ρ_1^j = G_j^r(ρ_1, a_1^{−r}) and {ρ̂_2^j} the solution to ρ_2^j = G_j^r(ρ_2, a_2^{−r}). If a_1^l ≥ a_2^l for all l ≠ r, then â_1^r ≥ â_2^r.

The proof is not straightforward, but we omit it for brevity (two pages). The result guarantees that if we can solve the fixed point equations within each promoter, then the overall occupancies {a_r}_{r∈R} have the same monotonicity property as in the simpler version of the game where a_r consisted of a single site. In other words, any algorithm that successively solves the fixed point equations within promoters will result in a monotone and therefore convergent filling of the promoters, beginning with all promoters empty.

We will redefine the notation slightly to illustrate the algorithm for finding the solution ρ_j = G_j^r(ρ, a^{−r}) for j ∈ N(r), where a^{−r} is fixed. Specifically, let

    G_j^r(ρ_j, ρ'_{−j}, ρ''_{−j}, a^{−r}) = Σ_{i∈P} [ K_ij (1 − ρ_j) / (1 + K_ij (1 − ρ_j) + Σ_{k≠j} K_ik (1 − ρ'_k)) ] f^i p_ir(ρ_j, ρ''_{−j}, a^{−r}).

In other words, the first argument refers to ρ_j anywhere on the right hand side, the second argument refers to ρ_{−j} in the denominator of the first expression in the sum, and the third argument refers to ρ_{−j} in p_ir(·). The algorithm is now defined as follows: initialize the lower bounds ρ_j(0) = 0 and the upper bounds ρ̄_j(0) = 1 for all j ∈ N(r), then

Iteration t, upper bounds: Find ρ̄*^j = G_j^r(ρ̄*^j, ρ̄_{−j}(t), ρ_{−j}(t), a^{−r}) separately for each j ∈ N(r). Update ρ̄_j(t + 1) = ρ̄*^j, j ∈ N(r).

Iteration t, lower bounds: Find ρ*^j = G_j^r(ρ*^j, ρ_{−j}(t), ρ̄_{−j}(t + 1), a^{−r}) separately for each j ∈ N(r). Update ρ_j(t + 1) = ρ*^j, j ∈ N(r).

The iterative optimization proceeds until¹ ρ̄_j(t) − ρ_j(t) ≤ ε for all j ∈ N(r). The algorithm successively narrows the gap between the upper and lower bounds. Specifically, ρ̄_j(t + 1) ≤ ρ̄_j(t) and ρ_j(t + 1) ≥ ρ_j(t). The fact that these indeed remain upper and lower bounds follows directly from the fact that G_j^r(ρ_j, ρ'_{−j}, ρ''_{−j}, a^{−r}), viewed as a function of the first argument, increases uniformly as we increase the components of the second argument.
Similarly, it uniformly decreases as a function of the third argument.

5 Discussion

We have presented a game theoretic approach to predicting protein arrangements along the DNA. The model is complete with convergent algorithms for finding equilibria on a genome-wide scale. The results from the small-scale application are encouraging. Our model successfully reproduces known behavior of the λ-switch on the basis of molecular-level competition and resource constraints, without the need to assume protein-protein interactions between cI2 dimers or between cI2 and RNA-polymerase. Even in the context of this well-known sub-system, however, few quantitative experimental results are available about binding (see the comparison). Proper validation and use of our model therefore relies on estimating the game parameters from available protein-DNA binding data. This will be addressed in subsequent work.

This work was supported in part by NIH grant GM68762 and by NSF ITR grant 0428715. Luis Pérez-Breva is a "Fundación Rafael del Pino" Fellow.

References

[1] Adam Arkin, John Ross, and Harley H. McAdams. Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics, 149:1633–1648, August 1998.
[2] Kenneth J. Arrow and Gerard Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290, July 1954.
[3] Z. Bar-Joseph, G. Gerber, T. Lee, N. Rinaldi, J. Yoo, F. Robert, B. Gordon, E. Fraenkel, T. Jaakkola, R. Young, and D. Gifford. Computational discovery of gene modules and regulatory networks. Nature Biotechnology, 21(11):1337–1342, 2003.
[4] Otto G. Berg, Robert B. Winter, and Peter H. von Hippel. Diffusion-driven mechanisms of protein translocation on nucleic acids. 1. Models and theory. Biochemistry, 20(24):6929–48, November 1981.
[5] Harley H. McAdams and Adam Arkin. Stochastic mechanisms in gene expression. PNAS, 94(3):814–819, 1997.
[6] Luis Pérez-Breva, Luis Ortiz, Chen-Hsiang Yeang, and Tommi Jaakkola. DNA binding and games. Technical Report MIT-CSAIL-TR-2006-018, Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, March 2006.
[7] Mark Ptashne. A Genetic Switch: Gene Control and Phage λ. Cell Press and Blackwell Scientific Publications, 3rd edition, 1987.
[8] Mark Ptashne and Alexander Gann. Genes and Signals. Cold Spring Harbor Laboratory Press, 1st edition, 2002.
[9] Bing Ren, François Robert, John J. Wyrick, Oscar Aparicio, Ezra G. Jennings, Itamar Simon, Julia Zeitlinger, Jörg Schreiber, Nancy Hannett, Elenita Kanin, Thomas L. Volkert, Christopher J. Wilson, Stephen P. Bell, and Richard A. Young. Genome-wide location and function of DNA-binding proteins. Science, 290(2306), December 2000.
[10] E. Segal, M. Shapira, A. Regev, D. Pe'er, D. Botstein, D. Koller, and N. Friedman. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166–76, 2003.
[11] Madeline A. Shea and Gary K. Ackers. The O_R control system of bacteriophage lambda. A physical-chemical model for gene regulation. Journal of Molecular Biology, 181:211–230, 1985.
[12] Neil P. Stanford, Mark D. Szczelkun, John F. Marko, and Stephen E. Halford. One- and three-dimensional pathways for proteins to reach specific DNA sites. EMBO, 19(23):6546–6557, December 2000.

¹ In the case of multiple equilibria the bounds might converge but leave a finite gap.
The algorithm will identify those cases as the monotone convergence of the bounds can be assessed separately.
2,305
3,092
iLSTD: Eligibility Traces and Convergence Analysis

Alborz Geramifard, Michael Bowling, Martin Zinkevich, Richard S. Sutton
Department of Computing Science, University of Alberta, Edmonton, Alberta
{alborz,bowling,maz,sutton}@cs.ualberta.ca

Abstract

We present new theoretical and empirical results with the iLSTD algorithm for policy evaluation in reinforcement learning with linear function approximation. iLSTD is an incremental method for achieving results similar to LSTD, the data-efficient, least-squares version of temporal difference learning, without incurring the full cost of the LSTD computation. LSTD is O(n²), where n is the number of parameters in the linear function approximator, while iLSTD is O(n). In this paper, we generalize the previous iLSTD algorithm and present three new results: (1) the first convergence proof for an iLSTD algorithm; (2) an extension to incorporate eligibility traces without changing the asymptotic computational complexity; and (3) the first empirical results with an iLSTD algorithm for a problem (mountain car) with feature vectors large enough (n = 10,000) to show substantial computational advantages over LSTD.

1 Introduction

A key part of many reinforcement learning algorithms is a policy evaluation process, in which the value function of a policy is estimated online from data. In this paper, we consider the problem of policy evaluation where the value function estimate is a linear function of state features and is updated after each time step. Temporal difference (TD) learning is a common approach to this problem [Sutton, 1988]. The TD algorithm updates its value-function estimate based on the observed TD error on each time step. The TD update takes only O(n) computation per time step, where n is the number of features. However, because conventional TD methods do not make any later use of the time step's data, they may require a great deal of data to compute an accurate estimate.

More recently, LSTD [Bradtke and Barto, 1996] and its extension LSTD(λ) [Boyan, 2002] were introduced as alternatives. Rather than making updates on each step to improve the estimate, these methods maintain compact summaries of all observed state transitions and rewards and solve for the value function which has zero expected TD error over the observed data. However, although LSTD and LSTD(λ) make more efficient use of the data, they require O(n²) computation per time step, which is often impractical for the large feature sets needed in many applications. Hence, practitioners are often faced with the dilemma of having to choose between excessive computational expense and excessive data expense.

Recently, Geramifard and colleagues [2006] introduced an incremental least-squares TD algorithm, iLSTD, as a compromise between the computational burden of LSTD and the relative data inefficiency of TD. The algorithm focuses on the common situation of large feature sets where only a small number of features are non-zero on any given time step. iLSTD's per-time-step computational complexity in this case is only O(n). In empirical results on a simple problem, iLSTD exhibited a rate of learning similar to that of LSTD. In this paper, we substantially extend the iLSTD algorithm, generalizing it in two key ways. First, we include the use of eligibility traces, defining iLSTD(λ) consistent with the family of TD(λ) and LSTD(λ) algorithms. We show that, under the iLSTD assumptions, the per-time-step computational complexity of this algorithm remains linear in the number of features.
Second, we generalize the feature selection mechanism. We prove that for a general class of selection mechanisms, iLSTD(λ) converges to the same solution as TD(λ) and LSTD(λ), for all 0 ≤ λ ≤ 1.

2 Background
Reinforcement learning is an approach to finding optimal policies in sequential decision making problems with an unknown environment [e.g., see Sutton and Barto, 1998]. We focus on the class of environments known as Markov decision processes (MDPs). An MDP is a tuple, $(S, A, P^a_{ss'}, R^a_{ss'}, \gamma)$, where $S$ is a set of states, $A$ is a set of actions, $P^a_{ss'}$ is the probability of reaching state $s'$ after taking action $a$ in state $s$, $R^a_{ss'}$ is the reward received when that transition occurs, and $\gamma \in [0, 1]$ is a discount rate parameter. A trajectory of experience is a sequence $s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots$, where the agent in $s_1$ takes action $a_1$ and receives reward $r_2$ while transitioning to $s_2$ before taking $a_2$, etc.

Given a policy, one often wants to estimate the policy's state-value function, or expected sum of discounted future rewards:
$$V^\pi(s) = E_\pi\left[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\middle|\, s_0 = s, \pi\right].$$
In particular, we are interested in approximating $V^\pi$ using a linear function approximator. Let $\phi : S \to \mathbb{R}^n$ be some features of the state space. Linear value functions are of the form $V_\theta(s) = \phi(s)^T \theta$, where $\theta \in \mathbb{R}^n$ are the parameters of the value function. In this work we will exclusively consider sparse feature representations: for all states $s$, the number of non-zero features in $\phi(s)$ is no more than $k \ll n$. Sparse feature representations are quite common as a generic approach to handling non-linearity [e.g., Stone et al., 2005].¹

2.1 Temporal Difference Learning
TD(λ) is the traditional approach to policy evaluation [see Sutton and Barto, 1998]. It is based on the computation of a λ-return, $R^\lambda_t(V)$, at each time step:
$$R^\lambda_t(V) = (1 - \lambda) \sum_{k=1}^{\infty} \lambda^{k-1} \left( \gamma^k V(s_{t+k}) + \sum_{i=1}^{k} \gamma^{i-1} r_{t+i} \right).$$
Note that the λ-return is a weighted sum of k-step returns, each of which looks ahead k steps, summing the discounted rewards as well as the estimated value of the resulting state. The λ-return forms the basis of the update to the value function parameters:
$$\theta_{t+1} = \theta_t + \alpha_t \phi(s_t) \left( R^\lambda_t(V_{\theta_t}) - V_{\theta_t}(s_t) \right),$$
where $\alpha_t$ is the learning rate. This "forward view" requires a complete trajectory to compute the λ-return and update the parameters. The "backward view" is a more efficient implementation that depends only on one-step returns and an eligibility trace vector:
$$\theta_{t+1} = \theta_t + \alpha_t u_t(\theta_t), \qquad u_t(\theta) = z_t \left( r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \right), \qquad z_t = \gamma\lambda z_{t-1} + \phi(s_t),$$
where $z_t$ is the eligibility trace and $u_t(\theta)$ is the TD update. Notice that TD(λ) requires only a constant number of vector operations and so is O(n) per time step. In the special case where λ = 0 and the feature representation is sparse, this complexity can be reduced to O(k). In addition, TD(λ) is guaranteed to converge [Tsitsiklis and Van Roy, 1997].

¹ Throughout this paper we will use non-bolded symbols to refer to scalars (e.g., γ and αt), bold-faced lower-case symbols to refer to vectors (e.g., θ and bt), and bold-faced upper-case symbols for matrices (e.g., At).
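As a concrete rendering of the backward view just described, here is a minimal NumPy sketch of one TD(λ) step; the variable names are ours, not the paper's, and dense arrays are used for clarity.

    import numpy as np

    def td_lambda_step(theta, z, phi_s, phi_s_next, r, alpha, gamma, lam):
        """One backward-view TD(lambda) update.

        theta: parameter vector (n,); z: eligibility trace (n,)
        phi_s, phi_s_next: feature vectors of s and s' (n,)
        Returns the updated (theta, z).
        """
        z = gamma * lam * z + phi_s                              # decay and accumulate the trace
        delta = r + gamma * phi_s_next @ theta - phi_s @ theta   # one-step TD error
        theta = theta + alpha * z * delta                        # u_t(theta) = z_t * delta_t
        return theta, z

With dense vectors each step costs a constant number of O(n) vector operations, matching the complexity stated above.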
2.2 Least-Squares TD
Least-squares TD (LSTD) was first introduced by Bradtke and Barto [1996] and later extended with λ-returns by Boyan [2002]. LSTD(λ) can be viewed as immediately solving for the value function parameters which would result in the sum of TD updates over the observed trajectory being zero. Let $\mu_t(\theta)$ be the sum of the TD updates through time t. If we let $\phi_t = \phi(s_t)$, then
$$\mu_t(\theta) = \sum_{i=1}^{t} u_i(\theta) = \sum_{i=1}^{t} z_i\left(r_{i+1} + \gamma V_\theta(s_{i+1}) - V_\theta(s_i)\right) = \sum_{i=1}^{t} z_i\left(r_{i+1} + \gamma\phi_{i+1}^T\theta - \phi_i^T\theta\right) = \underbrace{\sum_{i=1}^{t} z_i r_{i+1}}_{b_t} - \underbrace{\sum_{i=1}^{t} z_i(\phi_i - \gamma\phi_{i+1})^T}_{A_t}\,\theta = b_t - A_t\theta. \qquad (1)$$
Since we want to choose parameters such that the sum of TD updates is zero, we set Equation 1 to zero and solve for the new parameter vector, $\theta_{t+1} = A_t^{-1} b_t$. The online version of LSTD(λ) incorporates each observed reward and state transition into the b vector and the A matrix and then solves for a new θ. Notice that, once b and A are updated, the experience tuple can be forgotten without losing any information. Because A only changes by a small amount on each time step, A⁻¹ can also be maintained incrementally. The computation requirement is O(n²) per time step. Like TD(λ), LSTD(λ) is guaranteed to converge [Boyan, 2002].

2.3 iLSTD
iLSTD was recently introduced to provide a balance between LSTD's data efficiency and TD's time efficiency for λ = 0 when the feature representation is sparse [Geramifard et al., 2006]. The basic idea is to maintain the same A matrix and b vector as LSTD, but to only incrementally solve for θ. The update to θ requires some care, as the sum TD update itself would require O(n²). iLSTD instead updates only single dimensions of θ, each of which requires O(n). By updating m parameters of θ, where m is a parameter that can be varied to trade off data and computational efficiency, iLSTD requires O(mn + k²) per time step, which is linear in n. The result is that iLSTD can scale to much larger feature spaces than LSTD, while still retaining much of its data efficiency. Although the original formulation of iLSTD had no proof of convergence, it was shown in synthetic domains to perform nearly as well as LSTD with dramatically less computation. In the remainder of the paper, we describe a generalization, iLSTD(λ), of the original algorithm to handle λ > 0. By also generalizing the mechanism used to select the feature parameters to update, we additionally prove sufficient conditions for convergence.

3 The New Algorithm with Eligibility Traces
The iLSTD(λ) algorithm is shown in Algorithm 1. The new algorithm is a generalization of the original iLSTD algorithm in two key ways. First, it uses eligibility traces (z) to handle λ > 0. Line 5 updates z, and lines 5-9 incrementally compute the same $A_t$, $b_t$, and $\mu_t$ as described in Equation 1. Second, the dimension selection mechanism has been relaxed. Any feature selection mechanism can be employed in line 11 to select a dimension of the sum TD update vector (μ).² Line 12 will then take a step in that dimension, and line 13 updates the μ vector accordingly. The original iLSTD algorithm can be recovered by simply setting λ to zero and selecting features according to the dimension of μ with maximal magnitude. We now examine iLSTD(λ)'s computational complexity.

² The choice of this mechanism will determine the convergence properties of the algorithm, as discussed in the next section.

Algorithm 1: iLSTD(λ)                                    Complexity
 0  s ← s₀, z ← 0, A ← 0, μ ← 0, t ← 0
 1  Initialize θ arbitrarily
 2  repeat
 3    Take action according to π and observe r, s′
 4    t ← t + 1
 5    z ← γλz + φ(s)                                     O(n)
 6    Δb ← z r                                           O(n)
 7    ΔA ← z (φ(s) − γφ(s′))ᵀ                            O(kn)
 8    A ← A + ΔA                                         O(kn)
 9    μ ← μ + Δb − (ΔA)θ                                 O(kn)
10    for i from 1 to m do
11      j ← choose an index of μ using some feature selection mechanism
12      θⱼ ← θⱼ + αμⱼ                                    O(1)
13      μ ← μ − αμⱼ A eⱼ                                 O(n)
14    end for
15    s ← s′
16  end repeat
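For concreteness, here is a direct NumPy transcription of lines 5-14 of Algorithm 1. Dense arrays are used for readability; a real implementation would exploit the sparsity of φ(s) to obtain the O((m + k)n) bound proved below. The uniform-random selection mechanism shown here (with rng a np.random.default_rng() instance) is the one proven sufficient for convergence in Section 4.

    import numpy as np

    def ilstd_lambda_step(A, mu, theta, z, phi_s, phi_s_next,
                          r, alpha, gamma, lam, m, rng):
        """One iLSTD(lambda) step (Algorithm 1, lines 5-14), dense for clarity.

        A (n,n): running LSTD matrix; mu (n,): sum TD update b_t - A_t theta.
        """
        n = theta.size
        z = gamma * lam * z + phi_s                      # line 5
        d_b = z * r                                      # line 6
        d_A = np.outer(z, phi_s - gamma * phi_s_next)    # line 7
        A += d_A                                         # line 8
        mu += d_b - d_A @ theta                          # line 9
        for _ in range(m):                               # lines 10-14
            j = int(rng.integers(n))                     # line 11: uniform-random choice
            step = alpha * mu[j]
            theta[j] += step                             # line 12
            mu -= step * A[:, j]                         # line 13: A e_j is column j of A
        return A, mu, theta, z

The line-13 update follows from μ = b − Aθ: after stepping θⱼ by αμⱼ, the new sum TD update is μ − αμⱼ A eⱼ.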
Theorem 1 Assume that the feature selection mechanism takes O(n) computation. If there are n features and, for any given state s, φ(s) has at most k non-zero elements, then the iLSTD(λ) algorithm requires O((m + k)n) computation per time step.

Proof  Outside of the inner loop, lines 7-9 are the most computationally expensive steps of iLSTD(λ). Since we assumed that each feature vector has at most k non-zero elements, and the z vector can have up to n non-zero elements, the $z(\phi(s) - \gamma\phi(s'))^T$ matrix (line 7) has at most 2kn non-zero elements. This leads to O(nk) complexity outside the loop. Inside, the complexity remains unchanged from iLSTD, with the most expensive lines being 11 and 13. Because μ and A do not have any specific structure, the inner loop time³ is O(n). Thus, the final bound for the algorithm's per-time-step computational complexity is O((m + k)n). ∎

4 Convergence
We now consider the convergence properties of iLSTD(λ). Our analysis follows that of Bertsekas and Tsitsiklis [1996] very closely to establish that iLSTD(λ) converges to the same solution that TD(λ) does. However, whereas in their analysis they considered $C_t$ and $d_t$ whose expectations converged quickly, we consider $C_t$ and $d_t$ that may converge more slowly, but in value instead of expectation. In order to establish our result, we consider the theoretical model where for all t, $y_t \in \mathbb{R}^n$, $d_t \in \mathbb{R}^n$, $R_t, C_t \in \mathbb{R}^{n \times n}$, $\alpha_t \in \mathbb{R}$, and:
$$y_{t+1} = y_t + \alpha_t R_t (C_t y_t + d_t). \qquad (2)$$
On every round, $C_t$ and $d_t$ are selected first, followed by $R_t$. Define $\mathcal{F}_t$ to be the state of the algorithm on round t before $R_t$ is selected. $C_t$ and $d_t$ are sequences of random variables. In order to prove convergence of $y_t$, we assume that there are $C^*$, $d^*$, $v$, $\epsilon > 0$, and $M$ such that:
A1. $C^*$ is negative definite,
A2. $C_t$ converges to $C^*$ with probability 1,
A3. $d_t$ converges to $d^*$ with probability 1,
A4. $E[R_t \mid \mathcal{F}_t] = I$ and $\|R_t\| \le M$,
A5. $\lim_{T \to \infty} \sum_{t=1}^{T} \alpha_t = \infty$, and
A6. $\alpha_t < v\,t^{-\epsilon}$.

³ Note that $A e_i$ selects the ith column of A and so does not require the usual quadratic time for multiplying a vector by a square matrix.

Theorem 2 Given the above assumptions, $y_t$ converges to $-(C^*)^{-1} d^*$ with probability 1.

The proof of this theorem is included in the additional material and will be made available as a companion technical report. Now we can map iLSTD(λ) onto this mathematical model:
1. $y_t = \theta_t$,
2. $\alpha_t = t\alpha/n$,
3. $C_t = -A_t/t$,
4. $d_t = b_t/t$, and
5. $R_t$ is a matrix with n on the diagonal in position $(k_t, k_t)$ (where $k_t$ is uniform random over the set $\{1, \ldots, n\}$ and i.i.d.) and zeroes everywhere else.
The final assumption defines the simplest possible feature selection mechanism sufficient for convergence, viz., uniform random selection of features.

Theorem 3 If the Markov decision process is finite, iLSTD(λ) with a uniform random feature selection mechanism converges to the same result as TD(λ).

Although this result is for uniform random selection, note that Theorem 2 outlines a broad range of possible mechanisms sufficient for convergence. However, the greedy selection of the original iLSTD algorithm does not meet these conditions, and so has no guarantee of convergence. As we will see in the next section, though, greedy selection performs quite well despite this lack of asymptotic guarantee. In summary, finding a good feature selection mechanism remains an open research question.
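To see the model of Eqn. 2 in action, here is a small simulation under the mapping above, holding $C_t \equiv C^*$ and $d_t \equiv d^*$ fixed and drawing $R_t$ as the scaled random coordinate matrix of item 5. The step-size constants (v = 1, ε = 0.6) are our choices made to satisfy A5-A6; they are not taken from the paper.

    import numpy as np

    def simulate_model(C_star, d_star, T=50000, v=1.0, eps=0.6, seed=0):
        """Simulate y_{t+1} = y_t + alpha_t R_t (C_t y_t + d_t) of Eqn. 2
        with C_t = C*, d_t = d*, and R_t = n * e_k e_k^T for uniform random k
        (so E[R_t | F_t] = I). With C* negative definite, y_t should approach
        -inv(C*) d* as Theorem 2 predicts."""
        rng = np.random.default_rng(seed)
        n = d_star.size
        y = np.zeros(n)
        for t in range(1, T + 1):
            k = int(rng.integers(n))
            g = C_star @ y + d_star          # the full direction C_t y_t + d_t
            y[k] += (v * t ** -eps) * n * g[k]   # R_t keeps one coordinate, scaled by n
        return y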
As a final aside, one can go beyond iLSTD(λ) and consider the case where $R_t = I$, i.e., we take a step in all directions at once on every round. This does not correspond to any feature selection mechanism and in fact requires O(n²) computation. However, we can examine this algorithm's rate of convergence. In particular, we find that it converges linearly fast to LSTD(λ).

Theorem 4 If $C_t$ is negative definite, then for some $\alpha$ dependent upon $C_t$, if $R_t = I$, there exists an $\epsilon \in (0, 1)$ such that for all $y_t$, if $y_{t+1} = y_t + \alpha(C_t y_t + d_t)$, then
$$\left\| y_{t+1} + C_t^{-1} d_t \right\| < \epsilon \left\| y_t + C_t^{-1} d_t \right\|.$$

This may explain why iLSTD(λ)'s performance, despite only updating a single dimension, approaches LSTD(λ) so quickly in the experimental results in the next section.

5 Empirical Results
We now examine the empirical performance of iLSTD(λ). We first consider the simple problem introduced by Boyan [2002] and on which the original iLSTD was evaluated. We then explore the larger mountain car problem with a tile coding function approximator. In both problems, we compare TD(λ), LSTD(λ), and two variants of iLSTD(λ). We evaluate both the random feature selection mechanism ("iLSTD-Random"), which is guaranteed to converge,⁴ as well as the original iLSTD feature selection rule ("iLSTD-Greedy"), which is not. In both cases, the number of dimensions picked per iteration is m = 1. The step size (α) used for both iLSTD(λ) and TD(λ) was of the same form as in Boyan's experiments, with a slightly faster decay rate in order to make it consistent with the proof's assumption:
$$\alpha_t = \alpha_0 \, \frac{N_0 + 1}{N_0 + \text{Episode\#}^{1.1}}.$$
For the TD(λ) and iLSTD(λ) algorithms, the best α₀ and N₀ were selected through experimental search over the sets α₀ ∈ {0.01, 0.1, 1} and N₀ ∈ {100, 10³, 10⁶} for each domain and λ value, which is also consistent with Boyan's original experiments.

⁴ When selecting features randomly we exclude dimensions with zero sum TD update. To be consistent with the assumptions of Theorem 2, we compensate by multiplying the learning rate αt by the fraction of features that are non-zero at time t.

[Figure 1: The two experimental domains: (a) Boyan's chain example, a 13-state chain with a 4-dimensional feature vector attached to each state, and (b) mountain car. Diagrams omitted.]

[Figure 2: Performance of the various algorithms (TD, iLSTD-Random, iLSTD-Greedy, LSTD) in Boyan's chain problem with six λ values (0, 0.5, 0.7, 0.8, 0.9, 1), plotting the RMS error of V(s) over all states on a log scale. Each line represents the averaged error over the last 100 episodes after 100, 200, and 1000 episodes, respectively. Results are averaged over 30 trials.]

5.1 Boyan Chain Problem
The first domain we consider is the Boyan chain problem. Figure 1(a) shows the Markov chain together with the feature vectors corresponding to each state. This is an episodic task where the discount factor γ is one. The chain starts in state 13 and finishes in state 0. For all states s > 2, there exists an equal probability of ending up in (s − 1) and (s − 2). The reward is −3 for all transitions except from state 2 to 1 and state 1 to 0, where the rewards are −2 and 0, respectively.
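For concreteness, here is a sketch of the chain dynamics and the 4-dimensional features as we read them off Figure 1(a): anchor states 13, 9, 5, and 1 carry unit vectors, values interpolate linearly in between, and state 0 maps to the zero vector. The interpolation rule is our reconstruction of the figure.

    import numpy as np

    def boyan_features(s):
        """4-d features for Boyan's 13-state chain (reconstructed from Fig. 1(a)):
        state 13 -> (1,0,0,0), 9 -> (0,1,0,0), 5 -> (0,0,1,0), 1 -> (0,0,0,1),
        with linear interpolation between anchors; state 0 is all zeros."""
        phi = np.zeros(4)
        if s == 0:
            return phi
        anchors = [13, 9, 5, 1]
        for i in range(3):
            hi, lo = anchors[i], anchors[i + 1]
            if lo <= s <= hi:
                w = (s - lo) / (hi - lo)   # 1 at the upper anchor, 0 at the lower
                phi[i], phi[i + 1] = w, 1.0 - w
                break
        return phi

    def boyan_step(s, rng):
        """One transition: from s > 2 move to s-1 or s-2 with equal probability
        (reward -3); state 2 -> 1 gives -2; state 1 -> 0 gives 0."""
        if s > 2:
            return s - int(rng.integers(1, 3)), -3.0
        if s == 2:
            return 1, -2.0
        return 0, 0.0   # s == 1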
Figure 2 shows the comparative results. The horizontal axis corresponds to different λ values, while the vertical axis shows the RMS error on a log scale, averaged over all states uniformly. Note that in this domain, the optimum solution is in the space spanned by the feature vectors: θ* = (−24, −16, −8, 0)ᵀ. Each line shows the averaged error over the last 100 episodes after 100, 200, and 1000 episodes over the same set of observed trajectories, based on 30 trials. As expected, LSTD(λ) requires the least amount of data, obtaining a low average error after only 100 episodes. With only 200 episodes, though, the iLSTD(λ) methods are performing as well as LSTD(λ), and dramatically outperforming TD(λ). Finally, notice that iLSTD-Greedy(λ), despite its lack of asymptotic guarantee, actually performs slightly better than iLSTD-Random(λ) for all values of λ. Although λ did not play a significant role for LSTD(λ), which matches the observation of Boyan [1999], λ > 0 does show an improvement in performance for the iLSTD(λ) methods.

Table 1 shows the total averaged per-step CPU time for each method. For all methods sparse matrix optimizations were utilized, and LSTD used the efficient incremental inverse implementation. Although TD(λ) is the fastest method, the overall difference between the timings in this domain is very small, which is due to the small number of features and a small ratio n/k. In the next domain, we illustrate the effect of a larger and more interesting feature space where this ratio is larger.

Table 1: The averaged CPU time per step of the algorithms used in the Boyan chain and mountain car problems.

    Algorithm    CPU time/step (msec)
                 Boyan's chain      Mountain car
    TD(λ)        0.305 ± 7.0e-4     5.35 ± 3.5e-3
    iLSTD(λ)     0.370 ± 7.0e-4     9.80 ± 2.8e-1
    LSTD(λ)      0.367 ± 7.0e-4     253.42

[Figure 3: Performance of the various methods (TD, iLSTD-Random, iLSTD-Greedy, LSTD) in the mountain car problem with two different λ values (λ = 0 and λ = 0.9), plotting the loss on a log scale against episode number. LSTD was run only every 100 episodes. Results are averaged over 30 trials. Plot data omitted.]

5.2 Mountain Car
Our second test-bed is the mountain car domain [e.g., see Sutton and Barto, 1998]. Illustrated in Figure 1(b), the episodic task for the car is to reach the goal state. Possible actions are accelerate forward, accelerate backward, and coast. The observation is a pair of continuous values: position and velocity. The initial value of the state was −1 for position and 0 for velocity. Further details about the mountain car problem are available online [RL Library, 2006].
As we are focusing on policy evaluation, the policy was fixed for the car to always accelerate in the direction of its current velocity, although the environment is stochastic and the chosen action is replaced with a random one with 10% probability. Tile coding [e.g., see Sutton, 1996] was selected as our linear function approximator. We used ten tilings (k = 10) over the combination of the two parameter sets and hashed the tilings into 10,000 features (n = 10,000). The rest of the settings were identical to those in the Boyan chain domain.

Figure 3 shows the results of the different methods on this problem with two different λ values. The horizontal axis shows the number of episodes, while the vertical axis represents our loss function on a log scale. The loss we used was $\|b_\lambda - A_\lambda \theta\|^2$, where $A_\lambda$ and $b_\lambda$ were computed for each λ from 200,000 episodes of interaction with the environment. With λ = 0, both iLSTD(λ) methods performed considerably better than TD(λ) in terms of data efficiency. The iLSTD(λ) methods even reached a level competitive with LSTD(λ) after 600 episodes. For λ = 0.9, it proved to be difficult to find stable learning rate parameters for iLSTD-Greedy(λ). While some iterations performed competitively with LSTD(λ), others performed extremely poorly with little sign of convergence. Hence, we did not include the performance line in the figure. This fact may suggest that the greedy feature selection mechanism does not converge, or it may simply be more sensitive to the learning rate. Finally, notice that the plotted loss depends upon λ, and so the two graphs cannot be directly compared.

In this environment the ratio n/k is relatively large (10,000/10 = 1,000), which translates into a dramatic improvement of iLSTD(λ) over LSTD, as can be seen in Table 1. Again, sparse matrix optimizations were utilized and LSTD(λ) used the efficient incremental inverse implementation. The computational demands of LSTD(λ) can easily prohibit its application in domains with a large feature space. When the feature representation is sparse, though, iLSTD(λ) can still achieve results competitive with LSTD(λ) using computation more on par with the time-efficient TD(λ).

6 Conclusion
In this paper, we extended the previous iLSTD algorithm by incorporating eligibility traces without increasing the asymptotic per-time-step complexity. This extension resulted in improvements in performance in both the Boyan chain and mountain car domains. We also relaxed the dimension selection mechanism of the algorithm and presented sufficient conditions on the mechanism under which iLSTD(λ) is guaranteed to converge. Our empirical results showed that while LSTD(λ) can be impractical in online learning tasks with a large number of features, iLSTD(λ) still scales well while having performance similar to LSTD. This work opens up a number of interesting directions for future study. Our results have focused on two very simple feature selection mechanisms: random and greedy. Although the greedy mechanism does not meet our sufficient conditions for convergence, it actually performed slightly better on the examined domains than the theoretically guaranteed random selection. It would be interesting to perform a thorough exploration of possible mechanisms to find one that both performs well empirically and satisfies our sufficient conditions for convergence.
In addition, it would be interesting to apply iLSTD(λ) in even more challenging environments where the large number of features has completely prevented the least-squares approach, such as in simulated soccer keepaway [Stone et al., 2005].

References
[Bertsekas and Tsitsiklis, 1996] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[Boyan, 1999] Justin A. Boyan. Least-squares temporal difference learning. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 49-56. Morgan Kaufmann, San Francisco, CA, 1999.
[Boyan, 2002] Justin A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49:233-246, 2002.
[Bradtke and Barto, 1996] S. Bradtke and A. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33-57, 1996.
[Geramifard et al., 2006] Alborz Geramifard, Michael Bowling, and Richard S. Sutton. Incremental least-squares temporal difference learning. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), pages 356-361. AAAI Press, 2006.
[RL Library, 2006] RL Library. The University of Alberta reinforcement learning library. http://rlai.cs.ualberta.ca/RLR/environment.html, 2006.
[Stone et al., 2005] Peter Stone, Richard S. Sutton, and Gregory Kuhlmann. Reinforcement learning for RoboCup soccer keepaway. Adaptive Behavior, 13(3):165-188, 2005.
[Sutton and Barto, 1998] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[Sutton, 1988] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44, 1988.
[Sutton, 1996] Richard S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems 8, pages 1038-1044. The MIT Press, 1996.
[Tsitsiklis and Van Roy, 1997] John N. Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674-690, 1997.
Real-time adaptive information-theoretic optimization of neurophysiology experiments

Jeremy Lewi*, School of Bioengineering, Georgia Institute of Technology, jlewi@gatech.edu
Robert Butera, School of Electrical and Computer Engineering, Georgia Institute of Technology, rbutera@ece.gatech.edu
Liam Paninski†, Department of Statistics, Columbia University, liam@stat.columbia.edu

* http://www.prism.gatech.edu/~gtg120z
† http://www.stat.columbia.edu/~liam

Abstract
Adaptively optimizing experiments can significantly reduce the number of trials needed to characterize neural responses using parametric statistical models. However, the potential for these methods has been limited to date by severe computational challenges: choosing the stimulus which will provide the most information about the (typically high-dimensional) model parameters requires performing a high-dimensional integration and optimization in near-real time. Here we present a fast algorithm for choosing the optimal (most informative) stimulus based on a Fisher approximation of the Shannon information and specialized numerical linear algebra techniques. This algorithm requires only low-rank matrix manipulations and a one-dimensional line search to choose the stimulus and is therefore efficient even for high-dimensional stimulus and parameter spaces; for example, we require just 15 milliseconds on a desktop computer to optimize a 100-dimensional stimulus. Our algorithm therefore makes real-time adaptive experimental design feasible. Simulation results show that model parameters can be estimated much more efficiently using these adaptive techniques than by using random (non-adaptive) stimuli. Finally, we generalize the algorithm to efficiently handle both fast adaptation due to spike-history effects and slow, non-systematic drifts in the model parameters.

Maximizing the efficiency of data collection is important in any experimental setting. In neurophysiology experiments, minimizing the number of trials needed to characterize a neural system is essential for maintaining the viability of a preparation and ensuring robust results. As a result, various approaches have been developed to optimize neurophysiology experiments online in order to choose the "best" stimuli given prior knowledge of the system and the observed history of the cell's responses. The "best" stimulus can be defined in a number of different ways depending on the experimental objectives. One reasonable choice, if we are interested in finding a neuron's "preferred stimulus," is the stimulus which maximizes the firing rate of the neuron [1, 2, 3, 4]. Alternatively, when investigating the coding properties of sensory cells it makes sense to define the optimal stimulus in terms of the mutual information between the stimulus and response [5]. Here we take a system identification approach: we define the optimal stimulus as the one which tells us the most about how a neural system responds to its inputs [6, 7]. We consider neural systems in which the probability $p(r_t \mid \{\vec{x}_t, \vec{x}_{t-1}, \ldots, \vec{x}_{t-t_k}\}, \{r_{t-1}, \ldots, r_{t-t_a}\})$ of the neural response $r_t$, given the current and past stimuli $\{\vec{x}_t, \vec{x}_{t-1}, \ldots, \vec{x}_{t-t_k}\}$ and the observed recent history of the neuron's activity $\{r_{t-1}, \ldots, r_{t-t_a}\}$, can be described by a model $p(r_t \mid \{\vec{x}_t\}, \{r_{t-1}\}, \vec{\theta})$ specified by a finite vector of parameters $\vec{\theta}$. Since we estimate these parameters from experimental trials, we want to choose our stimuli so as to minimize the number of trials needed to robustly estimate $\vec{\theta}$.
Two inconvenient facts make it difficult to realize this goal in a computationally efficient manner: 1) model complexity: we typically need a large number of parameters to accurately model a system's response $p(r_t \mid \{\vec{x}_t\}, \{r_{t-1}\}, \vec{\theta})$; and 2) stimulus complexity: we are typically interested in neural responses to stimuli $\vec{x}_t$ which are themselves very high-dimensional (e.g., spatiotemporal movies if we are dealing with visual neurons). In particular, it is computationally challenging to 1) update our a posteriori beliefs about the model parameters $p(\vec{\theta} \mid \{r_t\}, \{\vec{x}_t\})$ given new stimulus-response data, and 2) find the optimal stimulus quickly enough to be useful in an online experimental context. In this work we present methods for solving these problems using generalized linear models (GLM) for the input-output relationship $p(r_t \mid \{\vec{x}_t\}, \{r_{t-1}\}, \vec{\theta})$ and certain Gaussian approximations of the posterior distribution of the model parameters. Our emphasis is on finding solutions which scale well in high dimensions. We solve problem (1) by using efficient rank-one update methods to update the Gaussian approximation to the posterior, and problem (2) by a reduction to a highly tractable one-dimensional optimization problem. Simulation results show that the resulting algorithm produces a set of stimulus-response pairs which is much more informative than the set produced by random sampling. Moreover, the algorithm is efficient enough that it could feasibly run in real time.

Neural systems are highly adaptive and, more generally, nonstatic. A robust approach to optimal experimental design must be able to cope with changes in $\vec{\theta}$. We emphasize that the model framework analyzed here can account for three key types of changes: stimulus adaptation, spike-rate adaptation, and random non-systematic changes. Adaptation which is completely stimulus-dependent can be accounted for by including enough stimulus history terms in the model $p(r_t \mid \{\vec{x}_t, \ldots, \vec{x}_{t-t_k}\}, \{r_{t-1}, \ldots, r_{t-t_a}\})$. Spike-rate adaptation effects, and more generally spike-history-dependent effects, are accounted for explicitly in the model (1) below. Finally, we consider slow, non-systematic changes which could potentially be due to changes in the health, arousal, or attentive state of the preparation.

Methods
We model a neuron as a point process whose conditional intensity function (instantaneous firing rate) is given as the output of a generalized linear model (GLM) [8, 9]. This model class has been discussed extensively elsewhere; briefly, this class is fairly natural from a physiological point of view [10], with close connections to biophysical models such as the integrate-and-fire cell [9], and has been applied in a wide variety of experimental settings [11, 12, 13, 14]. The model is summarized as:
$$\lambda_t = E(r_t) = f\left(\sum_i \sum_{l=1}^{t_k} k_{i,t-l}\, x_{i,t-l} + \sum_{j=1}^{t_a} a_j\, r_{t-j}\right) \qquad (1)$$
In the above summation, the filter coefficients $k_{i,t-l}$ capture the dependence of the neuron's instantaneous firing rate $\lambda_t$ on the ith component of the vector stimulus at time $t - l$, $\vec{x}_{t-l}$; the model therefore allows for spatiotemporal receptive fields. For convenience, we arrange all the stimulus coefficients in a vector, $\vec{k}$, which allows for a uniform treatment of the spatial and temporal components of the receptive field. The coefficients $a_j$ model the dependence on the observed recent activity r at time $t - j$ (these terms may reflect e.g. refractory effects, burstiness, firing-rate adaptation, etc., depending on the value of the vector $\vec{a}$ [9]).
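As a minimal sketch of Eqn. 1, here is the conditional intensity computation with the exponential link used below; the array layout and the Poisson discretization of the point process are our choices, not the paper's.

    import numpy as np

    def glm_intensity(k, a, stim_history, spike_history):
        """Conditional intensity of Eqn. 1 with exponential link f = exp.

        k: (d, t_k) spatiotemporal stimulus filter; stim_history: (d, t_k)
           with column l holding the stimulus x_{t-l}.
        a: (t_a,) spike-history weights; spike_history: (t_a,) with entry j
           holding r_{t-j}.
        """
        drive = np.sum(k * stim_history) + a @ spike_history
        return np.exp(drive)

    def sample_response(k, a, stim_history, spike_history, dt, rng):
        """Draw one bin's spike count from the point process, using a Poisson
        approximation with rate lambda_t * dt (our discretization choice)."""
        lam = glm_intensity(k, a, stim_history, spike_history)
        return int(rng.poisson(lam * dt))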
For convenience we denote the unknown parameter vector as $\vec{\theta} = \{\vec{k}; \vec{a}\}$. The experimental objective is the estimation of the unknown filter coefficients, $\vec{\theta}$, given knowledge of the stimuli, $\vec{x}_t$, and the resulting responses $r_t$. We chose the nonlinear stage of the GLM, the link function f(), to be the exponential function for simplicity. This choice ensures that the log likelihood of the observed data is a concave function of $\vec{\theta}$ [9].

Representing and updating the posterior. As emphasized above, our first key task is to efficiently update the posterior distribution of $\vec{\theta}$ after t trials, $p(\vec{\theta}_t \mid \vec{x}_t, r_t)$, as new stimulus-response pairs are observed. (We use $\vec{x}_t$ and $r_t$ to abbreviate the sequences $\{\vec{x}_t, \ldots, \vec{x}_0\}$ and $\{r_t, \ldots, r_0\}$.)

[Figure 1: A) Plots of the estimated receptive field for a simulated visual neuron after trials 0, 100, 500, 2500, and 5000, compared with the true receptive field. The neuron's receptive field $\vec{\theta}$ has the Gabor structure shown in the last panel (spike history effects were set to zero for simplicity here, $\vec{a} = 0$). The estimate of $\vec{\theta}$ is taken as the mean of the posterior, $\vec{\mu}_t$. The images compare the accuracy of the estimates using information-maximizing stimuli and random stimuli. B) Plots of the posterior entropies for $\vec{\theta}$ in these two cases; note that the information-maximizing stimuli constrain the posterior of $\vec{\theta}$ much more effectively than do random stimuli. C) A plot of the timing of the three steps (diagonalization, posterior update, 1-d line search) and the total time per iteration as a function of the dimensionality of $\vec{\theta}$. The timing for each step was well fit by a polynomial of degree 2 for the diagonalization, posterior update, and total time, and degree 1 for the line search. The times are an average over many iterations; the error bars for the total time indicate ±1 std.]

To solve this problem, we approximate this posterior as a Gaussian; this approximation may be justified by the fact that the posterior is the product of two smooth, log-concave terms, the GLM likelihood function and the prior (which we assume to be Gaussian, for simplicity). Furthermore, the main theorem of [7] indicates that a Gaussian approximation of the posterior will be asymptotically accurate. We use a Laplace approximation to construct the Gaussian approximation of the posterior, $p(\vec{\theta}_t \mid \vec{x}_t, r_t)$: we set $\vec{\mu}_t$ to the peak of the posterior (i.e., the maximum a posteriori (MAP) estimate of $\vec{\theta}$), and the covariance matrix $C_t$ to the negative inverse of the Hessian of the log posterior at $\vec{\mu}_t$. In general, computing these terms directly requires $O(td^2 + d^3)$ time (where $d = \dim(\vec{\theta})$; the time complexity increases with t because to compute the posterior we must form a product of t likelihood terms, and the $d^3$ term is due to the inverse of the Hessian matrix), which is unfortunately too slow when t or d becomes large. Therefore we further approximate $p(\vec{\theta}_{t-1} \mid \vec{x}_{t-1}, r_{t-1})$ as Gaussian; to see how this simplifies matters, we use Bayes to write out the posterior:
$$\log p(\vec{\theta} \mid \vec{x}_t, r_t) = -\tfrac{1}{2}(\vec{\theta} - \vec{\mu}_{t-1})^T C_{t-1}^{-1} (\vec{\theta} - \vec{\mu}_{t-1}) - \exp\!\left(\{\vec{x}_t; r_{t-1}\}^T \vec{\theta}\right) + r_t \{\vec{x}_t; r_{t-1}\}^T \vec{\theta} + \text{const} \qquad (2)$$
$$\frac{d \log p(\vec{\theta} \mid \vec{x}_t, r_t)}{d\vec{\theta}} = -(\vec{\theta} - \vec{\mu}_{t-1})^T C_{t-1}^{-1} - \exp\!\left(\{\vec{x}_t; r_{t-1}\}^T \vec{\theta}\right)\{\vec{x}_t; r_{t-1}\}^T + r_t \{\vec{x}_t; r_{t-1}\}^T$$
$$\frac{d^2 \log p(\vec{\theta} \mid \vec{x}_t, r_t)}{d\theta_i\, d\theta_j} = -C_{t-1}^{-1} - \exp\!\left(\{\vec{x}_t; r_{t-1}\}^T \vec{\theta}\right)\{\vec{x}_t; r_{t-1}\}\{\vec{x}_t; r_{t-1}\}^T \qquad (3)$$
Now, to update $\vec{\mu}_t$ we only need to find the peak of a one-dimensional function (as opposed to a d-dimensional function); this follows by noting that the likelihood only varies along a single direction, $\{\vec{x}_t; r_{t-1}\}$, as a function of $\vec{\theta}$. At the peak of the posterior, $\vec{\mu}_t$, the gradient is zero, so the first term in the gradient must be parallel to $\{\vec{x}_t; r_{t-1}\}$. Since $C_{t-1}$ is non-singular, $\vec{\mu}_t - \vec{\mu}_{t-1}$ must be parallel to $C_{t-1}\{\vec{x}_t; r_{t-1}\}$. Therefore we just need to solve a one-dimensional problem to determine how much the mean changes in the direction $C_{t-1}\{\vec{x}_t; r_{t-1}\}$; this requires only $O(d^2)$ time. Moreover, from the second derivative term above it is clear that computing $C_t$ requires just a rank-one matrix update of $C_{t-1}$, which can be evaluated in $O(d^2)$ time via the Woodbury matrix lemma. Thus this Gaussian approximation of $p(\vec{\theta}_{t-1} \mid \vec{x}_{t-1}, r_{t-1})$ provides a large gain in efficiency; our simulations (data not shown) showed that, despite this improved efficiency, the loss in accuracy due to this approximation was minimal.
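A sketch of this O(d²) posterior update, assuming the exponential link: a scalar search along the direction C_{t-1}v for the new mean, and the Woodbury rank-one formula for the new covariance. SciPy's scalar minimizer stands in for whichever 1-d solver the authors used.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def posterior_update(mu, C, v, r):
        """One Laplace-posterior update; v = {x_t; r_{t-1}}.

        New mean: mu + s * C v, with s found by a 1-d search on the negative
        log posterior restricted to that line; new covariance via Woodbury.
        """
        Cv = C @ v                                   # O(d^2)
        vCv = float(v @ Cv)
        rho0 = float(v @ mu)

        def neg_log_post(s):
            # -log posterior on theta = mu + s*Cv (up to constants):
            # (1/2) s^2 v^T C v - r*rho + exp(rho), with rho = v^T theta
            rho = rho0 + s * vCv
            return 0.5 * s**2 * vCv - r * rho + np.exp(rho)

        s_opt = minimize_scalar(neg_log_post).x
        mu_new = mu + s_opt * Cv
        # D = d^2 log p(r|rho)/d rho^2 = -exp(rho) for the exponential link
        D = -np.exp(rho0 + s_opt * vCv)
        C_new = C + (D / (1.0 - D * vCv)) * np.outer(Cv, Cv)   # Woodbury rank-one
        return mu_new, C_new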
Deriving the (approximately) optimal stimulus. To simplify the derivation of our maximization strategy, we start by considering models in which the firing rate does not depend on past spiking, so $\vec{\theta} = \{\vec{k}\}$. To choose the optimal stimulus for trial t + 1, we want to maximize the conditional mutual information
$$I(\vec{\theta}; r_{t+1} \mid \vec{x}_{t+1}, \vec{x}_t, r_t) = H(\vec{\theta} \mid \vec{x}_t, r_t) - H(\vec{\theta} \mid \vec{x}_{t+1}, r_{t+1}) \qquad (4)$$
with respect to the stimulus $\vec{x}_{t+1}$. The first term does not depend on $\vec{x}_{t+1}$, so maximizing the information requires minimizing the conditional entropy
$$H(\vec{\theta} \mid \vec{x}_{t+1}, r_{t+1}) = \sum_{r_{t+1}} p(r_{t+1} \mid \vec{x}_{t+1}) \int -p(\vec{\theta} \mid r_{t+1}, \vec{x}_{t+1}) \log p(\vec{\theta} \mid r_{t+1}, \vec{x}_{t+1}) \, d\vec{\theta} = E_{r_{t+1} \mid \vec{x}_{t+1}} \log \det[C_{t+1}] + \text{const}. \qquad (5)$$
We do not average the entropy of $p(\vec{\theta} \mid r_{t+1}, \vec{x}_{t+1})$ over $\vec{x}_{t+1}$ because we are only interested in the conditional entropy for the particular $\vec{x}_{t+1}$ which will be presented next. The equality above is due to our Gaussian approximation of $p(\vec{\theta} \mid \vec{x}_{t+1}, r_{t+1})$. Therefore, we need to minimize $E_{r_{t+1} \mid \vec{x}_{t+1}} \log \det[C_{t+1}]$ with respect to $\vec{x}_{t+1}$. Since we set $C_{t+1}$ to be the negative inverse Hessian of the log-posterior, we have:
$$C_{t+1}^{-1} = C_t^{-1} + J_{\mathrm{obs}}(r_{t+1}, \vec{x}_{t+1}), \qquad (6)$$
where $J_{\mathrm{obs}}$ is the observed Fisher information,
$$J_{\mathrm{obs}}(r_{t+1}, \vec{x}_{t+1}) = -\frac{\partial^2 \log p(r_{t+1} \mid \rho = \vec{x}_{t+1}^T\vec{\theta})}{\partial \rho^2}\; \vec{x}_{t+1}\vec{x}_{t+1}^T. \qquad (7)$$
Here we use the fact that for the GLM, the likelihood depends only on the dot product, $\rho = \vec{x}_{t+1}^T\vec{\theta}$. We can use the Woodbury lemma to evaluate the inverse:
$$C_{t+1} = C_t\left[I + D(r_{t+1}, \rho)\left(1 - D(r_{t+1}, \rho)\,\vec{x}_{t+1}^T C_t \vec{x}_{t+1}\right)^{-1} \vec{x}_{t+1}\vec{x}_{t+1}^T C_t\right], \qquad (8)$$
where $D(r_{t+1}, \rho) = \partial^2 \log p(r_{t+1} \mid \rho)/\partial\rho^2$. Using some basic matrix identities,
$$\log \det[C_{t+1}] = \log \det[C_t] - \log\left(1 - D(r_{t+1}, \rho)\,\vec{x}_{t+1}^T C_t \vec{x}_{t+1}\right) \qquad (9)$$
$$= \log \det[C_t] + D(r_{t+1}, \rho)\,\vec{x}_{t+1}^T C_t \vec{x}_{t+1} + o\!\left(D(r_{t+1}, \rho)\,\vec{x}_{t+1}^T C_t \vec{x}_{t+1}\right). \qquad (10)$$
Ignoring the higher-order terms, we need to minimize $E_{r_{t+1} \mid \vec{x}_{t+1}} D(r_{t+1}, \rho)\,\vec{x}_{t+1}^T C_t \vec{x}_{t+1}$. In our case, with $f(\vec{\theta}_t \cdot \vec{x}_{t+1}) = \exp(\vec{\theta}_t \cdot \vec{x}_{t+1})$, we can use the moment-generating function of the multivariate Gaussian $p(\vec{\theta} \mid \vec{x}_t, r_t)$ to evaluate this expectation.
After some algebra, we find that to maximize $I(\vec{\theta}; r_{t+1} \mid \vec{x}_{t+1}, \vec{x}_t, r_t)$, we need to maximize
$$F(\vec{x}_{t+1}) = \exp(\vec{x}_{t+1}^T \vec{\mu}_t)\, \exp\!\left(\tfrac{1}{2}\vec{x}_{t+1}^T C_t \vec{x}_{t+1}\right)\, \vec{x}_{t+1}^T C_t \vec{x}_{t+1}. \qquad (11)$$

[Figure 2: A comparison of parameter estimates using information-maximizing versus random (i.i.d.) stimuli over 800 trials, for a model neuron whose conditional intensity depends on both the stimulus and the spike history. The images in the top row of A and B show the MAP estimate of $\vec{\theta}$ after each trial as a row in the image; intensity indicates the value of the coefficients. The true value of $\vec{\theta}$ is shown in the second row of images. A) The estimated stimulus coefficients, $\vec{k}$. B) The estimated spike history coefficients, $\vec{a}$. C) The final estimates of the parameters after 800 trials: the dashed black line shows true values, dark gray is the estimate using information-maximizing stimuli, and light gray is the estimate using random stimuli. Using our algorithm improved the estimates of both $\vec{k}$ and $\vec{a}$.]

Computing the optimal stimulus. For the GLM the most informative stimulus is undefined, since increasing the stimulus power $\|\vec{x}_{t+1}\|^2$ increases the informativeness of any putatively "optimal" stimulus. To obtain a well-posed problem, we optimize the stimulus under the usual power constraint $\|\vec{x}_{t+1}\|^2 \le e < \infty$. We maximize Eqn. 11 under this constraint using Lagrange multipliers and an eigendecomposition to reduce our original d-dimensional optimization problem to a one-dimensional problem. Expressing Eqn. 11 in terms of the eigenvectors of $C_t$ yields:
$$F(\vec{x}_{t+1}) = \exp\!\left(\sum_i u_i y_i + \tfrac{1}{2}\sum_i c_i y_i^2\right) \sum_i c_i y_i^2 \qquad (12)$$
$$= g\!\left(\sum_i u_i y_i\right) h\!\left(\sum_i c_i y_i^2\right), \qquad (13)$$
where $u_i$ and $y_i$ represent the projections of $\vec{\mu}_t$ and $\vec{x}_{t+1}$ onto the ith eigenvector and $c_i$ is the corresponding eigenvalue. To simplify notation we also introduce the functions g() and h(), which are monotonically strictly increasing functions implicitly defined by Eqn. 12. We maximize $F(\vec{x}_{t+1})$ by breaking the problem into an inner and an outer problem: we fix the value of $\sum_i u_i y_i$ and maximize h() subject to that constraint. A single line search over all possible values of $\sum_i u_i y_i$ will then find the global maximum of F(.). This approach is summarized by the equation:
$$\max_{\vec{y}:\, \|\vec{y}\|^2 = e} F(\vec{y}) = \max_b \left[ g(b) \cdot \max_{\vec{y}:\, \|\vec{y}\|^2 = e,\; \vec{y}^T\vec{u} = b} h\!\left(\sum_i c_i y_i^2\right) \right].$$
Since h() is increasing, to solve the inner problem we only need to solve:
$$\max_{\vec{y}:\, \|\vec{y}\|^2 = e,\; \vec{y}^T\vec{u} = b} \sum_i c_i y_i^2. \qquad (14)$$
This last expression is a quadratic function with quadratic and linear constraints, and we can solve it using the Lagrange method for constrained optimization.

[Figure 3: Estimating the receptive field when $\vec{\theta}$ is not constant. A) The posterior means $\vec{\mu}_t$ and true $\vec{\theta}_t$ plotted after each trial, for random stimuli, information-maximizing stimuli, and information-maximizing stimuli chosen under the (mistaken) assumption that $\vec{\theta}$ was constant. $\vec{\theta}$ was 100-dimensional, with its components following a Gabor function. To simulate non-systematic changes in the response function, the center of the Gabor function was moved according to a random walk in between trials; we modeled the changes in $\vec{\theta}$ as a random walk with a white covariance matrix, Q, with variance .01. Each row of the images plots $\vec{\theta}$ using intensity to indicate the value of the different components. B) Details of the posterior means $\vec{\mu}_t$ on selected trials. C) Plots of the posterior entropies as a function of trial number; once again, we see that information-maximizing stimuli constrain the posterior of $\vec{\theta}_t$ more effectively.]
C) Plots of the posterior entropies as a function of trial number; once again, we see that information-maximizing stimuli constrain the posterior of ?~t more effectively. equations for the optimal yi as a function of the Lagrange multiplier ?1 . ui e yi (?1 ) = ||~y ||2 2(ci ? ?1 ) (15) Thus to find the global optimum we simply vary ?1 (this is equivalent to performing a search over b), and compute the corresponding ~y (?1 ). For each value of ?1 we compute F (~y (?1 )) and choose the stimulus ~y (?1 ) which maximizes F (). It is possible to show (details omitted) that the maximum of F () must occur on the interval ?1 ? c0 , where c0 is the largest eigenvalue. This restriction on the optimal ?1 makes the implementation of the linesearch significantly faster and more stable. To summarize, updating the posterior and finding the optimal stimulus requires three steps: 1) a rankone matrix update and one-dimensional search to compute ?t and Ct ; 2) an eigendecomposition of Ct ; 3) a one-dimensional search over ?1 ? c0 to compute the optimal stimulus. The most expensive step here is the eigendecomposition of Ct ; in principle this step is O(d3 ), while the other steps, as discussed above, are O(d2 ). Here our Gaussian approximation of p(?~t?1 |~xt?1 , rt?1 ) is once again quite useful: recall that in this setting Ct is just a rank-one modification of Ct?1 , and there exist efficient algorithms for rank-one eigendecomposition updates [15]. While the worst-case running time of this rank-one modification of the eigendecomposition is still O(d3 ), we found the average running time in our case to be O(d2 ) (Fig. 1(c)), due to deflation which reduces the cost of matrix multiplications associated with finding the eigenvectors of repeated eigenvalues. Therefore the total time complexity of our algorithm is empirically O(d2 ) on average. Spike history terms. The preceding derivation ignored the spike-history components of the GLM model; that is, we fixed ~a = 0 in equation (1). Incorporating spike history terms only affects the optimization step of our algorithm; updating the posterior of ?~ = {~k; ~a} proceeds exactly as before. The derivation of the optimization strategy proceeds in a similar fashion and leads to an analogous optimization strategy, albeit with a few slight differences in detail which we omit due to space constraints. The main difference is that instead of maximizing the quadratic expression in Eqn. 14 to find the maximum of h(), we need to maximize a quadratic expression which includes a linear term due to the correlation between the stimulus coefficients, ~k, and the spike history coefficients,~a. The results of our simulations with spike history terms are shown in Fig. 2. ~ In addition to fast changes due to adaptation and spike-history effects, animal preparaDynamic ?. tions often change slowly and nonsystematically over the course of an experiment [16]. We model these effects by letting ?~ experience diffusion: ?~t+1 = ?~t + wt (16) Here wt is a normally distributed random variable with mean zero and known covariance matrix Q. This means that p(?~t+1 |~xt , rt ) is Gaussian with mean ? ~ t and covariance Ct + Q. To update the posterior and choose the optimal stimulus, we use the same procedure as described above1 . Results Our first simulation considered the use of our algorithm for learning the receptive field of a visually sensitive neuron. We took the neuron?s receptive field to be a Gabor function, as a proxy model of a V1 simple cell. 
Results
Our first simulation considered the use of our algorithm for learning the receptive field of a visually sensitive neuron. We took the neuron's receptive field to be a Gabor function, as a proxy model of a V1 simple cell. We generated synthetic responses by sampling Eqn. 1 with $\vec{\theta}$ set to a 25×33 Gabor function. We used this synthetic data to compare how well $\vec{\theta}$ could be estimated using information-maximizing stimuli compared to using random stimuli. The stimuli were 2-d images which were rasterized in order to express $\vec{x}$ as a vector. The plots of the posterior means $\vec{\mu}_t$ in Fig. 1 (recall these are equivalent to the MAP estimate of $\vec{\theta}$) show that the information-maximizing strategy converges an order of magnitude more rapidly to the true $\vec{\theta}$. These results are supported by the conclusion of [7] that the information-maximization strategy is asymptotically never worse than using random stimuli and is in general more efficient. The running time for each step of the algorithm as a function of the dimensionality of $\vec{\theta}$ is plotted in Fig. 1(c). These results were obtained on a machine with a dual-core Intel 2.80GHz Xeon processor running Matlab. The solid lines indicate fitted polynomials of degree 1 for the 1-d line search and degree 2 for the remaining curves; the total running time for each trial scaled as $O(d^2)$, as predicted. When $\vec{\theta}$ had fewer than 200 dimensions, the total running time was roughly 50 ms (and for $\dim(\vec{\theta}) \approx 100$, the runtime was close to 15 ms), well within the range of tolerable latencies for many experiments.

In Fig. 2 we apply our algorithm to characterize the receptive field of a neuron whose response depends on its past spiking. Here, the stimulus coefficients $\vec{k}$ were chosen to follow a sine wave; the spike history coefficients $\vec{a}$ were inhibitory and followed an exponential function. When choosing stimuli we updated the posterior for the full $\vec{\theta} = \{\vec{k}; \vec{a}\}$ simultaneously and maximized the information about both the stimulus coefficients and the spike history coefficients. The information-maximizing strategy outperformed random sampling for estimating both the spike history and stimulus coefficients.

Our final set of results, Fig. 3, considers a neuron whose receptive field drifts non-systematically with time. We take the receptive field to be a Gabor function whose center moves according to a random walk (we have in mind a slow random drift of eye position during a visual experiment). The results demonstrate the feasibility of the information-maximization strategy in the presence of nonstationary response properties $\vec{\theta}$, and emphasize the superiority of adaptive methods in this context.

Conclusion
We have developed an efficient implementation of an algorithm for online optimization of neurophysiology experiments based on an information-theoretic criterion. Reasonable approximations based on a GLM framework allow the algorithm to run in near-real time even for high-dimensional parameter and stimulus spaces, and in the presence of spike-rate adaptation and time-varying neural response properties.
Despite these approximations the algorithm consistently provides significant improvements over random sampling; indeed, the differences in efficiency are large enough that the information-optimization strategy may permit robust system identification in cases where it is simply not otherwise feasible to estimate the neuron's parameters using random stimuli. Thus, in a sense, the proposed stimulus-optimization technique significantly extends the reach and power of classical neurophysiology methods.

Acknowledgments
JL is supported by the Computational Science Graduate Fellowship Program administered by the DOE under contract DE-FG02-97ER25308 and by the NSF IGERT Program in Hybrid Neural Microsystems at Georgia Tech via grant number DGE-0333411. LP is supported by grant EY018003 from the NEI and by a Gatsby Foundation Pilot Grant. We thank P. Latham for helpful conversations.

References
[1] I. Nelken, et al., Hearing Research 72, 237 (1994).
[2] P. Foldiak, Neurocomputing 38-40, 1217 (2001).
[3] K. Zhang, et al., Proceedings (Computational and Systems Neuroscience Meeting, 2004).
[4] R. C. deCharms, et al., Science 280, 1439 (1998).
[5] C. Machens, et al., Neuron 47, 447 (2005).
[6] A. Watson, et al., Perception and Psychophysics 33, 113 (1983).
[7] L. Paninski, Neural Computation 17, 1480 (2005).
[8] P. McCullagh, et al., Generalized Linear Models (Chapman and Hall, London, 1989).
[9] L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
[10] E. Simoncelli, et al., The Cognitive Neurosciences, M. Gazzaniga, ed. (MIT Press, 2004), third edn.
[11] P. Dayan, et al., Theoretical Neuroscience (MIT Press, 2001).
[12] E. Chichilnisky, Network: Computation in Neural Systems 12, 199 (2001).
[13] F. Theunissen, et al., Network: Computation in Neural Systems 12, 289 (2001).
[14] L. Paninski, et al., Journal of Neuroscience 24, 8551 (2004).
[15] M. Gu, et al., SIAM Journal on Matrix Analysis and Applications 15, 1266 (1994).
[16] N. A. Lesica, et al., IEEE Trans. on Neural Systems and Rehabilitation Engineering 13, 194 (2005).
A Scalable Machine Learning Approach to Go
Lin Wu and Pierre Baldi
School of Information and Computer Sciences
University of California, Irvine
Irvine, CA 92697-3435
lwu,pfbaldi@ics.uci.edu
Abstract
Go is an ancient board game that poses unique opportunities and challenges for AI and machine learning. Here we develop a machine learning approach to Go, and related board games, focusing primarily on the problem of learning a good evaluation function in a scalable way. Scalability is essential at multiple levels, from the library of local tactical patterns, to the integration of patterns across the board, to the size of the board itself. The system we propose is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into a recursive neural network, derived from a Bayesian network architecture. The network integrates local information across the board and produces local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end (or at other stages) of the game. Local area targets for training can be derived from datasets of human games. A system trained using only 9 × 9 amateur game data performs surprisingly well on a test set derived from 19 × 19 professional game data. Possible directions for further improvements are briefly discussed.
1 Introduction
Go is an ancient board game, over 3,000 years old [6, 5], that poses unique opportunities and challenges for artificial intelligence and machine learning. The rules of Go are deceptively simple: two opponents alternately place black and white stones on the empty intersections of an odd-sized square board, traditionally of size 19 × 19. The goal of the game, in simple terms, is for each player to capture as much territory as possible across the board by encircling the opponent's stones. This disarming simplicity, however, conceals a formidable combinatorial complexity [2]. On a 19 × 19 board, there are approximately 3^(19×19) = 10^172.24 possible board configurations and, on average, on the order of 200-300 possible moves at each step of the game, preventing any form of semi-exhaustive search. For comparison purposes, the game of chess has a much smaller branching factor, on the order of 35-40 [10, 7]. Today, computer chess programs, built essentially on search techniques and running on a simple PC, can rival or even surpass the best human players. In contrast, and in spite of several decades of significant research efforts and of progress in hardware speed, the best Go programs of today are easily defeated by an average human amateur. Besides the intrinsic challenge of the game, and the non-trivial market created by over 100 million players worldwide, Go raises other important questions for our understanding of natural or artificial intelligence in the distilled setting created by the simple rules of a game, uncluttered by the endless complexities of the "real world". For example, to many observers, current computer solutions to chess appear "brute force", hence "unintelligent". But is this perception correct, or an illusion? Is there something like true intelligence beyond "brute force" and computational power? Where is Go situated in the apparent tug-of-war between intelligence and sheer computational power?
Another fundamental question that is particularly salient in the Go setting is the question of knowledge transfer. Humans learn to play Go on boards of smaller size, typically 9 × 9, and then "transfer" their knowledge to the larger 19 × 19 standard size. How can we develop algorithms that are capable of knowledge transfer? Here we take modest steps towards addressing these challenges by developing a scalable machine learning approach to Go. Clearly, good evaluation functions and search algorithms are essential ingredients of computer board-game systems. Here we focus primarily on the problem of learning a good evaluation function for Go in a scalable way. We do include simple search algorithms in our system, as many other programs do, but this is not the primary focus. By scalability we imply that a main goal is to develop a system more or less automatically, using machine learning approaches, with minimal human intervention and handcrafting. The system ought to be able to transfer information from one board size (e.g. 9 × 9) to another size (e.g. 19 × 19). We take inspiration in three ingredients that seem to be essential to the human Go evaluation process: the understanding of local patterns, the ability to combine patterns, and the ability to relate tactical and strategic goals. Our system is built to learn these three capabilities automatically and attempts to combine the strengths of existing systems while avoiding some of their weaknesses. The system is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into a recursive neural network, derived from a Bayesian network architecture. The network integrates local information across the board and produces local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end (or at other stages) of the game. Local area targets for training can be derived from datasets of human games. The main results we present here are derived on a 19 × 19 board using a player trained using only 9 × 9 game data.
2 Data
Because the approach to be described emphasizes scalability and learning, we are able to train our systems at a given board size and use them to play at different sizes, both larger and smaller. Pure bootstrap approaches to Go, where computer players are initialized randomly and play large numbers of games, such as evolutionary approaches or reinforcement learning, have been tried [11]. We have implemented these approaches and used them for small board sizes 5 × 5 and 7 × 7. However, in our experience, these approaches do not scale up well to larger board sizes. For larger board sizes, better results are obtained using training data derived from records of games played by humans. We used available data at board sizes 9 × 9, 13 × 13, and 19 × 19.
Data for 9 × 9 Boards: This data consists of 3,495 games. We randomly selected 3,166 games (90.6%) for training, and the remaining 328 games (9.4%) for validation. Most of the games in this data set are played by amateurs. A subset of 424 games (12.13%) have at least one player with an olf ranking of 29, corresponding to a very good amateur player.
Data for 13 × 13 Boards: This data consists of 4,175 games. Most of the games, however, are played by rather weak players and therefore cannot be used for training.
For validation purposes, however, we retained a subset of 91 games where both players have an olf ranking greater than or equal to 25, the equivalent of a good amateur player.
Data for 19 × 19 Boards: This high-quality data set consists of 1,835 games played by professional players (at least 1 dan). A subset of 1,131 games (61.6%) are played by 9 dan players (the highest possible ranking). This is the dataset used in [12].
3 System Architecture
3.1 Evaluation Function, Outputs, and Targets
Because Go is a game about territory, it is sensible to have "expected territory" be the evaluation function, and to decompose this expectation as a sum of local probabilities. More specifically, let A_ij(t) denote the ownership of intersection ij on the board at time t during the game. At the end of a game, each intersection can be black, white, or both¹. Black is represented as 1, white as 0, and both as 0.5. The same scheme, with 0.5 for empty intersections, or more complicated schemes, can be used to represent ownership at various intermediate stages of the game. Let O_ij(t) be the output of the learning system at intersection ij at time t in the game. Likewise, let T_ij(t) be the corresponding training target. In the simplest case, we can use T_ij(t) = A_ij(T), where T denotes the end of the game. In this case, the output O_ij(t) can be interpreted as the probability P_ij(t), estimated at time t, of owning the ij intersection at the end of the game. Likewise, Σ_ij O_ij(t) is the estimate, computed at time t, of the total expected area at the end of the game. Propagation of information provided by targets/rewards computed at the end of the game only, however, can be problematic. With a dataset of training examples, this problem can be addressed because intermediary area values A_ij(t) are available for training for any t. In the simulations presented here, we use a simple scheme

T_ij(t) = (1 − w) A_ij(T) + w A_ij(t + k)    (1)

where w ≥ 0 is a parameter that controls the convex combination between the area at the end of the game and the area at some step t + k in the nearer future. w = 0 corresponds to the simple case described above where only the area at the end of the game is used in the target function. Other ways of incorporating target information from intermediary game positions are discussed briefly at the end. To learn the evaluation function and the targets, we propose to use a graphical model (Bayesian network) which in turn leads to a directed acyclic graph recursive neural network (DAG-RNN) architecture.
¹ This is called "seki". Seki is a situation where two live groups share liberties and where neither of them can fill them without dying.
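As a concrete illustration (not the authors' code), the target scheme of Equation 1 can be computed from a recorded game's per-move ownership maps; the data layout assumed here is hypothetical:

import numpy as np

def training_targets(area_maps, t, w=0.25, k=2):
    # Targets of Equation 1: T_ij(t) = (1 - w) A_ij(T) + w A_ij(t + k).
    # area_maps: list of N x N ownership maps A(t), one per game position,
    # with entries 1 (black), 0 (white), 0.5 (shared or undecided).
    A_final = area_maps[-1]                              # A(T), end of game
    A_near = area_maps[min(t + k, len(area_maps) - 1)]   # A(t + k), clipped at T
    return (1.0 - w) * A_final + w * A_near

With w = 0 this reduces to end-of-game targets only; the player reported below uses w = 0.25 and k = 2.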
3.2 DAG-RNN Architectures
The architecture is closely related to an architecture originally proposed for a problem in a completely different area: the prediction of protein contact maps [8, 1]. As a Bayesian network, the architecture can be described in terms of the DAG in Figure 1, where the nodes are arranged in 6 lattice planes reflecting the spatial organization of the Go board. Each plane contains N × N nodes arranged on the vertices of a square lattice. In addition to the input and output planes, there are four hidden planes for the lateral propagation and integration of information across the Go board. Within each hidden plane, the edges of the quadratic lattice are oriented towards one of the four cardinal directions (NE, NW, SE, and SW). Directed edges within a column of this architecture are given in Figure 1b. Thus each intersection ij in an N × N board is associated with six units. These units consist of an input unit I_ij, four hidden units H^NE_ij, H^NW_ij, H^SW_ij, H^SE_ij, and an output unit O_ij. In a DAG-RNN the relationships between the variables are deterministic, rather than probabilistic, and implemented in terms of neural networks with weight sharing. Thus the previous architecture leads to a DAG-RNN architecture consisting of 5 neural networks of the form

O_ij = N_O(I_ij, H^NE_ij, H^NW_ij, H^SW_ij, H^SE_ij)
H^NE_ij = N_NE(I_ij, H^NE_{i−1,j}, H^NE_{i,j−1})
H^NW_ij = N_NW(I_ij, H^NW_{i+1,j}, H^NW_{i,j−1})
H^SW_ij = N_SW(I_ij, H^SW_{i+1,j}, H^SW_{i,j+1})
H^SE_ij = N_SE(I_ij, H^SE_{i−1,j}, H^SE_{i,j+1})    (2)

where, for instance, N_O is a single neural network that is shared across all spatial locations. In addition, since Go is "isotropic", we use a single network shared across the four hidden planes. Go, however, involves strong boundary effects, and therefore we add one neural network N_C for the corners, shared across all four corners, and one neural network N_S for each side position, shared across all four sides. In short, the entire Go DAG-RNN architecture is described by four feedforward NNs (corner, side, lateral, output) that are shared at all corresponding locations. For each one of these feedforward neural networks, we have experimented with several architectures, but we typically use a single hidden layer. The DAG-RNN in the main simulation results uses 16 hidden nodes and 8 output nodes for the lateral propagation networks, and 16 hidden nodes and one output node for the output network. All transfer functions are logistic. The total number of free parameters is close to 6,000. Because the underlying graph is acyclic, these networks can be unfolded in space and training can proceed by simple gradient descent (back-propagation), taking into account relevant symmetries and weight sharing. Networks trained at one board size can be reused at any other board size, providing a simple mechanism for reusing and extending acquired knowledge. For a board of size N × N, the training procedure scales like O(W M N⁴), where W is the number of adjustable weights and M is the number of training games. There are roughly N² board positions in a game and, for each position, N² outputs O_ij to be trained, hence the O(N⁴) scaling. Both game records and the positions within each selected game record are randomly selected during training. Weights are updated essentially online, once every 10 game positions. Training a single player on our 9 × 9 data takes on the order of a week on a current desktop computer, corresponding roughly to 50 training epochs at 3 hours per epoch.
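Because each hidden plane is an acyclic lattice, the recursions of Equation 2 can be evaluated by a simple raster scan. A minimal sketch for the NE plane follows; the callable f_ne stands in for the shared network N_NE and is a placeholder, not the authors' implementation, and the other three planes are analogous with reversed scan directions:

import numpy as np

def propagate_ne(inputs, f_ne, hidden_dim):
    # H_NE[i, j] = N_NE(I[i, j], H_NE[i-1, j], H_NE[i, j-1])  (Equation 2),
    # with zero vectors standing in for off-board hidden states.
    N = inputs.shape[0]
    H = np.zeros((N, N, hidden_dim))
    zero = np.zeros(hidden_dim)
    for i in range(N):
        for j in range(N):
            above = H[i - 1, j] if i > 0 else zero
            left = H[i, j - 1] if j > 0 else zero
            H[i, j] = f_ne(np.concatenate([inputs[i, j], above, left]))
    return H

# The output unit then combines the input with all four hidden planes:
# O[i, j] = N_O(concat(I[i, j], H_NE[i, j], H_NW[i, j], H_SW[i, j], H_SE[i, j]))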
Figure 1: (a) The nodes of a DAG-RNN are regularly arranged in one input plane, one output plane, and four hidden planes. In each plane, nodes are arranged on a square lattice. The hidden planes contain directed edges associated with the square lattices. All the edges of the square lattice in each hidden plane are oriented in the direction of one of the four possible cardinal corners: NE, NW, SW, and SE. Additional directed edges run vertically in each column, from the input plane to each hidden plane and from each hidden plane to the output plane. (b) Connection details within one column of Figure 1a. The input node is connected to four corresponding hidden nodes, one for each hidden plane. The input node and the hidden nodes are connected to the output node. I_ij is the vector of inputs at intersection ij; O_ij is the corresponding output. Connections of each hidden node to its lattice neighbors within the same plane are also shown.
3.3 Inputs
At a given board intersection, the input vector I_ij has multiple components, listed in Table 1. The first three components (stone type, influence, and propensity) are associated with the corresponding intersection and a fixed number of surrounding locations. Influence and propensity are described below in more detail. The remaining features correspond to group properties involving variable numbers of neighboring stones and are self-explanatory for those who are familiar with Go. The group G_ij associated with a given intersection is the maximal set of stones of the same color that are connected to it. Neighboring (or connected) opponent groups of G_ij are groups of the opposite color that are directly connected (adjacent) to G_ij. The idea of using higher-order liberties is from Werf [13]. O1st and O2nd provide the number of true eyes and the number of liberties of the weakest and the second weakest neighboring opponent groups. Weakness here is defined in alphabetical order with respect to the number of eyes first, followed by the number of liberties.

Table 1: Typical input features. The first three features (stone type, influence, and propensity) are properties associated with the corresponding intersection and a fixed number of surrounding locations. The other properties are group properties involving variable numbers of neighboring stones.

Feature     | Description
b,w,e       | the stone type: black, white, or empty
influence   | the influence from the stones of the same color and the opposing color
propensity  | a local statistic computed from 3 × 3 patterns in the training data (Section 3.3)
Neye        | the number of true eyes
N1st        | the number of liberties, i.e., the number of empty intersections connected to a group of stones; we also call these the 1st-order liberties
N2nd        | the number of 2nd-order liberties, defined as the liberties of the 1st-order liberties
N3rd        | the number of 3rd-order liberties, defined as the liberties of the 2nd-order liberties
N4th        | the number of 4th-order liberties, defined as the liberties of the 3rd-order liberties
O1st        | features of the weakest connected opponent group (stone type, number of liberties, number of eyes)
O2nd        | features of the second weakest connected opponent group (stone type, number of liberties, number of eyes)

Influence: We use two types of influence calculation. Both algorithms are based on Chen's method [4]. One is an exact implementation of Chen's method. The other uses a stringent influence propagation rule. In Chen's exact method, any opponent stone can block the propagation of influence. With a stringent influence propagation rule, an opponent stone can block the propagation of influence if and only if it is stronger than the stone emitting the influence. Strength is again defined in alphabetical order with respect to the number of eyes first, followed by the number of liberties.
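To illustrate how the group features of Table 1 can be extracted, here is a minimal flood-fill sketch; the board encoding and the exact rule for higher-order liberties are assumptions for illustration:

from collections import deque

def group_and_liberties(board, i, j):
    # Flood-fill the maximal connected group containing (i, j) and collect
    # its 1st-order liberties (empty intersections adjacent to the group).
    # board: N x N nested lists of 'b', 'w', 'e' (black / white / empty).
    color, N = board[i][j], len(board)
    group, libs, queue = {(i, j)}, set(), deque([(i, j)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:
                if board[nx][ny] == 'e':
                    libs.add((nx, ny))
                elif board[nx][ny] == color and (nx, ny) not in group:
                    group.add((nx, ny))
                    queue.append((nx, ny))
    return group, libs

def next_order_liberties(board, libs):
    # One reading of the higher-order liberties of Table 1: the empty
    # intersections adjacent to the current liberty set (the "liberties of
    # the liberties"). Excluding the lower-order set is an assumption here.
    N, out = len(board), set()
    for x, y in libs:
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N and board[nx][ny] == 'e':
                out.add((nx, ny))
    return out - libs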
Propensity (Automated Learning and Scoring of a Pattern Library): We develop a method to learn local patterns and their value automatically from a database of games. The basic method is illustrated in the case of 3 × 3 patterns, which are used in the simulations. Considering rotation and mirror symmetries, there are 10 unique locations for a 3 × 3 window on a 9 × 9 board (see also [9]). Given any 3 × 3 pattern of stones on the board and a set of games, we then compute nine numbers, one for each intersection. These numbers are local indicators of strength or propensity. The propensity S^w_ij(p) of each intersection ij associated with stone pattern p and a 3 × 3 window w is defined as:

S^w_ij(p) = (NB_ij(p) − NW_ij(p)) / (NB_ij(p) + NW_ij(p) + C)    (3)

where NB_ij(p) is the number of times that pattern p ends with a black stone at intersection ij at the end of the games in the data, and NW_ij(p) is the same for a white stone. Both NB_ij(p) and NW_ij(p) are computed taking into account the location and the symmetries of the corresponding window w. C plays a regularizing role in the case of rare patterns and is set to 1 in the simulations. Thus S^w_ij(p) is an empirical normalized estimate of the local differential propensity towards conquering the corresponding intersection in the local context provided by the corresponding pattern and window. In general, a given intersection ij on the board is covered by several 3 × 3 windows. Thus, for a given intersection ij on a given board, we can compute a value S^w_ij(p) for each different window that contains the intersection. In the following simulations, a single final value S_ij(p) is computed by averaging over the different w's. However, more complex schemes that retain more information can easily be envisioned by, for instance: (1) computing also the standard deviation of the S^w_ij(p) as a function of w; (2) using a weighted average, weighted by the importance of the window w; and (3) using the entire set of S^w_ij(p) values, as w varies around ij, to augment the input vector.
3.4 Move Selection and Search
For a given position, the next move can be selected using one-level search by considering all possible legal moves and computing the estimate at time t of the total expected area E = Σ_ij O_ij(t) at the end of the game, or some intermediate position, or a combination of both, where O_ij(t) are the outputs (predicted probabilities) of the DAG-RNNs. The next move can be chosen by maximizing this evaluation function (1-ply search). Alternatively, Gibbs sampling can be used to choose the next move among all the legal moves with a probability proportional to e^(E/Temp), where Temp is a temperature parameter [3, 11, 12]. We have also experimented with a few other simple search schemes, such as 2-ply search (MinMax).
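A minimal sketch of the move-selection schemes just described; the evaluate callback is a hypothetical hook returning the DAG-RNN estimate E = Σ_ij O_ij(t) for the position reached by a move:

import numpy as np

def select_move(legal_moves, evaluate, temp=1.0, greedy=False, rng=None):
    # 1-ply selection: score each legal move by the total expected area E
    # of the resulting position, then either take the argmax (greedy) or
    # Gibbs-sample with probability proportional to exp(E / temp).
    rng = rng or np.random.default_rng()
    E = np.array([evaluate(move) for move in legal_moves])
    if greedy:
        return legal_moves[int(np.argmax(E))]
    logits = E / temp
    p = np.exp(logits - logits.max())   # shift for numerical stability
    p /= p.sum()
    return legal_moves[int(rng.choice(len(legal_moves), p=p))]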
4 Results
We trained a large number of players using the methods described above. In the absence of training data, we used pure bootstrap approaches (e.g. reinforcement learning) at sizes 5 × 5 and 7 × 7, with results that were encouraging but clearly insufficient. Not surprisingly, when used to play at larger board sizes, the RNNs trained at these small board sizes yield rather weak players. The quality of most 13 × 13 games available to us is too poor for proper training, although a small subset can be used for validation purposes. We do not have any data for sizes N = 11, 15, and 17. And because of the O(N⁴) scaling, training systems directly at 19 × 19 takes many months and is currently in progress. Thus the most interesting results we report are derived by training the RNNs using the 9 × 9 game data, and using them to play at 9 × 9 and, more importantly, at larger board sizes. Several 9 × 9 players achieve comparable top performance. For conciseness, here we report the results obtained with one of them, trained with target parameters w = 0.25 and k = 2 in Equation 1.

Figure 2: (a) Validation error vs. game phase. Phase is defined by the total number of stones on the board. The four curves respectively represent the validation errors of the neural network after 1, 2, 33, and 38 epochs of training. (b) Percentage of moves made by professional human players on boards of size 19 × 19 that are contained in the m top-ranked moves according to the DAG-RNN trained on 9 × 9 amateur data, for various values of m. The baseline associated with the red curve corresponds to a random uniform player.

Figure 2a shows how the validation error changes as training progresses. Validation error here is defined as the relative entropy between the output probabilities produced by the RNN and the target probabilities, computed on the validation data. The validation error decreases quickly during the first epochs. In this case, no substantial decrease in validation error is observed after epoch 30. Note also how the error is smaller towards the end of the game, due both to the reduction in the number of possible moves and to the strong end-of-game training signal. An area, and hence a probability, can be assigned by the DAG-RNN to each move and used to rank moves, as described in Section 3.4. Thus we can compute the average probability of moves played by good human players according to the DAG-RNN or other probabilistic systems such as [12]. In Table 2, we report such probabilities for several systems and at different board sizes. For size 19 × 19, we use the same test set used in [12]. Boltzmann5 and BoltzmannLiberties are their results reported in the pre-published version of their NIPS paper. At this size, the probabilities in the table are computed using the 80th to 83rd moves of each game.

Table 2: Probabilities assigned by different systems to moves played by human players in test data.

Board Size | System              | Log Probability | Probability
9 × 9      | Random player       | -4.13           | 1/62
9 × 9      | RNN (1-ply search)  | -1.86           | 1/7
13 × 13    | Random player       | -4.88           | 1/132
13 × 13    | RNN (1-ply search)  | -2.27           | 1/10
19 × 19    | Random player       | -5.64           | 1/281
19 × 19    | Boltzmann5          | -5.55           | 1/254
19 × 19    | BoltzmannLiberties  | -5.27           | 1/194
19 × 19    | RNN (1-ply search)  | -2.70           | 1/15

For boards of size 19 × 19, a random player that selects moves uniformly at random among legal moves assigns a probability of 1/281 to the moves played by professional players in the data set. BoltzmannLiberties was able to improve this probability to 1/194. Our best DAG-RNNs, trained using amateur data at 9 × 9, are capable of bringing this probability further down to 1/15 (also a considerable improvement over our previous 1/42 performance presented in April 2006 at the Snowbird Learning Conference). A remarkable example where the top-ranked move according to the DAG-RNN coincides with the move actually played in a game between two very highly ranked players is given in Figure 3, illustrating also the underlying probabilistic territory calculations.
Figure 3: Example of an outstanding move based on territory predictions made by the DAG-RNN. For each intersection, the height of the green bar represents the estimated probability that the intersection will be owned by black at the end of the game. The figure on the left shows the predicted probabilities if black passes. The figure on the right shows the predicted probabilities if black makes the move at N12. N12 causes the greatest increase in green area and is the top-ranked move for the DAG-RNN. Indeed, this is the move selected in the game played by Zhou, Heyang (black, 8 dan) and Chang, Hao (white, 9 dan) on 10/22/2000.

Figure 2b provides a kind of ROC curve by displaying the percentage of moves made by professional human players on boards of size 19 × 19 that are contained in the m top-ranked moves according to the DAG-RNN trained on 9 × 9 amateur data, for various values of m, across all phases of the game. For instance, when there are 80 stones on the board, and hence on the order of 300 legal moves available, there is a 50% chance that a move selected by a very highly ranked human player (dan 9) is found among the top 30 choices produced by the DAG-RNN.
5 Conclusion
We have designed a DAG-RNN for the game of Go and demonstrated that it can learn territory predictions fairly well. Systems trained using only a set of 9 × 9 amateur games achieve surprisingly good performance on a 19 × 19 test set that contains 1,835 professionally played games. The methods and results presented clearly point to several possible directions of improvement that are currently under active investigation. These include: (1) obtaining larger data sets and training systems of size greater than 9 × 9; (2) exploiting patterns that are larger than 3 × 3, especially at the beginning of the game when the board is sparsely occupied and matching of large patterns is possible using, for instance, Zobrist hashing techniques [14]; (3) combining different players, such as players trained at different board sizes, or players trained on different phases of the game; and (4) developing better, non-exhaustive but deeper, search methods.
Acknowledgments
The work of PB and LW has been supported by a Laurel Wilkening Faculty Innovation award and awards from NSF, BREP, and Sun Microsystems to PB. We would like to thank Jianlin Chen for developing a web-based Go graphical user interface, Nicol Schraudolph for providing the 9 × 9 and 13 × 13 data, and David Stern for providing the 19 × 19 data.
References
[1] P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. Journal of Machine Learning Research, 4:575–602, 2003.
[2] E. Berlekamp and D. Wolfe. Mathematical Go: Chilling Gets the Last Point. A K Peters, Wellesley, MA, 1994.
[3] B. Brugmann. Monte Carlo Go. 1993. URL: ftp://www.joy.ne.jp/welcome/igs/Go/computer/mcgo.tex.Z.
[4] Zhixing Chen. Semi-empirical quantitative theory of Go, part 1: Estimation of the influence of a wall. ICGA Journal, 25(4):211–218, 2002.
[5] W. S. Cobb. The Book of GO. Sterling Publishing Co., New York, NY, 2002.
[6] K. Iwamoto. GO for Beginners.
Pantheon Books, New York, NY, 1972.
[7] Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie de Bruin. Exploiting graph properties of game trees. In 13th National Conference on Artificial Intelligence (AAAI'96), pages 234–239, 1996.
[8] G. Pollastri and P. Baldi. Prediction of contact maps by GIOHMMs and recurrent neural networks using lateral propagation from all four cardinal corners. Bioinformatics, 18:S62–S70, 2002.
[9] Liva Ralaivola, Lin Wu, and Pierre Baldi. SVM and pattern-enriched common fate graphs for the game of Go. ESANN 2005, pages 485–490, 2005.
[10] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2nd edition, 2002.
[11] N. N. Schraudolph, P. Dayan, and T. J. Sejnowski. Temporal difference learning of position evaluation in the game of Go. In Advances in Neural Information Processing Systems 6, pages 817–824, 1994.
[12] David H. Stern, Thore Graepel, and David J. C. MacKay. Modelling uncertainty in the game of Go. In Advances in Neural Information Processing Systems 17, pages 1353–1360, 2005.
[13] E. Werf, H. Herik, and J. Uiterwijk. Learning to score final positions in the game of Go. In Advances in Computer Games: Many Games, Many Challenges, pages 143–158, 2003.
[14] Albert L. Zobrist. A new hashing method with application for game playing. Technical report 88, University of Wisconsin, April 1970. Reprinted in ICCA Journal, 13(2):69–73, 1990.
Graph-Based Visual Saliency
Jonathan Harel, Christof Koch, Pietro Perona
California Institute of Technology
Pasadena, CA 91125
{harel,koch}@klab.caltech.edu, perona@vision.caltech.edu
Abstract
A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.
1 Introduction
Most vertebrates, including humans, can move their eyes. They use this ability to sample in detail the most relevant features of a scene, while spending only limited processing resources elsewhere. The ability to predict, given an image (or video), where a human might fixate in a fixed-time free-viewing scenario has long been of interest in the vision community. Besides the purely scientific goal of understanding this remarkable behavior of humans, and animals in general, to consistently fixate on "important" information, there is tremendous engineering application, e.g. in compression and recognition [13]. The standard approaches (e.g., [2], [9]) are based on biologically motivated feature selection, followed by center-surround operations which highlight local gradients, and finally a combination step leading to a "master map". Recently, Bruce [5] and others [4] have hypothesized that fundamental quantities such as "self-information" and "surprise" are at the heart of saliency/attention. However, ultimately, Bruce computes a function which is additive in feature maps, with the main contribution materializing as a method of operating on a feature map in such a way as to get an activation, or saliency, map. Itti and Baldi define "surprise" in general, but ultimately compute a saliency map in the classical [2] sense for each of a number of feature channels, then operate on these maps using another function aimed at highlighting local variation. By organizing the topology of these varied approaches, we can compare them more rigorously: i.e., not just end-to-end, but also piecewise, removing some uncertainty about the origin of observed performance differences. Thus, the leading models of visual saliency may be organized into these three stages:
(s1) extraction: extract feature vectors at locations over the image plane
(s2) activation: form an "activation map" (or maps) using the feature vectors
(s3) normalization/combination: normalize the activation map (or maps, followed by a combination of the maps into a single map)
In this light, [5] is a contribution to step (s2), whereas [4] is a contribution to step (s3). In the classic algorithms, step (s1) is done using biologically inspired filters, step (s2) is accomplished by subtracting feature maps at different scales (henceforth, "c-s" for "center" minus "surround"), and step (s3) is accomplished in one of three ways: 1. a normalization scheme based on local maxima [2] ("max-ave"), 2. an iterative scheme based on convolution with a difference-of-gaussians filter ("DoG"), and 3. a nonlinear interactions ("NL") approach which divides local feature values by weighted averages of surrounding values in a way that is modelled to fit psychophysics data [11].
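To fix ideas before the details, the three-stage organization can be written as a small pipeline skeleton; the hook names are placeholders, and any of the activation and normalization schemes discussed below can be plugged in:

import numpy as np

def saliency_pipeline(image, extract, activate, normalize):
    # (s1) extraction: a list of feature maps from the image
    # (s2) activation: one activation map per feature map
    # (s3) normalization, then additive combination into a master map
    feature_maps = extract(image)
    activation_maps = [activate(m) for m in feature_maps]
    normalized_maps = [normalize(a) for a in activation_maps]
    return np.sum(normalized_maps, axis=0)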
We take a different approach, exploiting the computational power, topographical structure, and parallel nature of graph algorithms to achieve natural and efficient saliency computations. We define Markov chains over various image maps, and treat the equilibrium distribution over map locations as activation and saliency values. This idea is not completely new: Brockmann and Geisel [8] suggest that scanpaths might be predicted by properly defined Lévy flights over saliency fields, and more recently Boccignone and Ferraro [7] do the same. Importantly, they assume that a saliency map is already available, and offer an alternative to the winner-takes-all approach of mapping this object to a set of fixation locations. In an unpublished pre-print, L.F. Costa [6] notes similar ideas, however offers only sketchy details on how to apply this to real images, and in fact includes no experiments involving fixations. Here, we take a unified approach to steps (s2) and (s3) of saliency computation, by using dissimilarity and saliency to define edge weights on graphs which are interpreted as Markov chains. Unlike previous authors, we do not attempt to connect features only to those which are somehow similar. We also directly compare our method to others, using power to predict human fixations as a performance metric. The contributions of this paper are as follows: (1) A complete bottom-up saliency model based on graph computations, GBVS, including a framework for "activation" and "normalization/combination". (2) A comparison of GBVS against existing benchmarks on a data set of grayscale images of natural environments (viz., foliage) with the eye-movement fixation data of seven human subjects, from a recent study by Einhäuser et al. [1].
2 The Proposed Method: Graph-Based Saliency (GBVS)
Given an image I, we wish to ultimately highlight a handful of 'significant' locations where the image is 'informative' according to some criterion, e.g. human fixation. As previously explained, this process is conditioned on first computing feature maps (s1), e.g. by linear filtering followed by some elementary nonlinearity [15]. The "activation" (s2) and "normalization and combination" (s3) steps follow, as described below.
2.1 Forming an Activation Map (s2)
Suppose we are given a feature map¹ M : [n]² → ℝ. Our goal is to compute an activation map A : [n]² → ℝ, such that, intuitively, locations (i, j) ∈ [n]² where I, or as a proxy, M(i, j), is somehow unusual in its neighborhood will correspond to high values of activation A.
2.1.1 Existing Schemes
Of course "unusual" does not constrain us sufficiently, and so one can choose among several operating definitions. "Improbable" would lead one to the formulation of Bruce [5], where a histogram of M(i, j) values is computed in some region around (i, j), subsequently normalized and treated as a probability distribution, so that A(i, j) = −log(p(i, j)) is clearly defined, with p(i, j) = Pr{M(i, j) | neighborhood}. Another approach compares local "center" distributions to broader "surround" distributions and calls the Kullback-Leibler tension between the two "surprise" [4].
¹ In the context of a mathematical formulation, let [n] ≜ {1, 2, ..., n}. Also, the maps M, and later A, are presented as square (n × n) only for expository simplicity. Nothing in this paper will depend critically on the square assumption, and, in practice, rectangular maps are used instead.
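For concreteness, a rough sketch of the self-information activation of Section 2.1.1; the neighborhood radius and bin count are illustrative choices, not values from [5]:

import numpy as np

def self_info_activation(M, radius=8, bins=32):
    # A(i, j) = -log p(i, j), with p estimated from a histogram of feature
    # values in a local window around (i, j).
    n, m = M.shape
    A = np.zeros_like(M, dtype=float)
    edges = np.linspace(M.min(), M.max() + 1e-9, bins + 1)
    for i in range(n):
        for j in range(m):
            patch = M[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            hist, _ = np.histogram(patch, bins=edges)
            p = hist / hist.sum()
            b = min(np.searchsorted(edges, M[i, j], side='right') - 1, bins - 1)
            A[i, j] = -np.log(p[b] + 1e-12)
    return A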
2.1.2 A Markovian Approach
We propose a more organic (see below) approach. Let us define the dissimilarity of M(i, j) and M(p, q) as

d((i, j)||(p, q)) ≜ | log( M(i, j) / M(p, q) ) |.

This is a natural definition of dissimilarity: simply the distance between one and the ratio of two quantities, measured on a logarithmic scale. For some of our experiments, we use |M(i, j) − M(p, q)| instead, and we have found that both work well. Consider now the fully-connected directed graph G_A, obtained by connecting every node of the lattice M, labelled with two indices (i, j) ∈ [n]², with all other n² − 1 nodes. The directed edge from node (i, j) to node (p, q) will be assigned a weight

w₁((i, j), (p, q)) ≜ d((i, j)||(p, q)) · F(i − p, j − q), where F(a, b) ≜ exp( −(a² + b²) / (2σ²) )

and σ is a free parameter of our algorithm². Thus, the weight of the edge from node (i, j) to node (p, q) is proportional to their dissimilarity and to their closeness in the domain of M. Note that the edge in the opposite direction has exactly the same weight. We may now define a Markov chain on G_A by normalizing the weights of the outbound edges of each node to 1, and drawing an equivalence between nodes & states, and edge weights & transition probabilities. The equilibrium distribution of this chain, reflecting the fraction of time a random walker would spend at each node/state if he were to walk forever, would naturally accumulate mass at nodes that have high dissimilarity with their surrounding nodes, since transitions into such subgraphs are likely, and unlikely if nodes have similar M values. The result is an activation measure which is derived from pairwise contrast. We call this approach "organic" because, biologically, individual 'nodes' (neurons) exist in a connected, retinotopically organized network (the visual cortex), and communicate with each other (synaptic firing) in a way which gives rise to emergent behavior, including fast decisions about which areas of a scene require additional processing. Similarly, our approach exposes connected (via F) regions of dissimilarity (via w), in a way which can in principle be computed in a completely parallel fashion. Computations can be carried out independently at each node: in a synchronous environment, at each time step, each node simply sums incoming mass, then passes along measured partitions of this mass to its neighbors according to outbound edge weights. The same simple process happening at all nodes simultaneously gives rise to an equilibrium distribution of mass.
Technical Notes: The equilibrium distribution of this chain exists and is unique because the chain is ergodic, a property which emerges from the fact that our underlying graph G_A is by construction strongly connected. In practice, the equilibrium distribution is computed using repeated multiplication of the Markov matrix with an initially uniform vector. The process yields the principal eigenvector of the matrix. The computational complexity is thus O(n⁴K), where K ≪ n² is some small number of iterations required to reach equilibrium³.
² In our experiments, this parameter was set to approximately one tenth to one fifth of the map width. Results were not very sensitive to perturbations around these values.
³ Our implementation, not optimized for speed, converges on a single map of size 25 × 37 in fractions of a second on a 2.4 GHz Pentium.
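A dense, direct sketch of this activation step, deliberately the naive O(n⁴) construction for clarity; σ and the iteration count are illustrative, and M is assumed nonnegative and non-constant:

import numpy as np

def gbvs_activation(M, sigma=3.0, iters=50):
    # Build the fully connected Markov chain with edge weights
    # w1 = |log(M_i / M_j)| * F(i - p, j - q), row-normalize the outbound
    # weights, and take the equilibrium distribution as the activation map.
    n, m = M.shape
    coords = np.stack(np.meshgrid(np.arange(n), np.arange(m),
                                  indexing='ij'), -1).reshape(-1, 2)
    vals = np.log(M.reshape(-1) + 1e-12)
    d = np.abs(vals[:, None] - vals[None, :])            # dissimilarity
    sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = d * np.exp(-sq / (2.0 * sigma ** 2))             # w1 weights
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)       # transition matrix
    pi = np.full(n * m, 1.0 / (n * m))                   # uniform start
    for _ in range(iters):                               # power iteration
        pi = pi @ P
    return pi.reshape(n, m)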
2.2 "Normalizing" an Activation Map (s3)
The aim of the "normalization" step of the algorithm is much less clear than that of the activation step. It is, however, critical and a rich area of study. Earlier, three separate approaches were mentioned as existing benchmarks, and the recent work of Itti on surprise [4] also enters the saliency computation at this stage of the process (although it can also be applied to s2, as mentioned above). We shall state the goal of this step as: concentrating mass on activation maps. If mass is not concentrated on individual activation maps prior to additive combination, then the resulting master map may be too nearly uniform and hence uninformative. Although this may seem trivial, it is on some level the very soul of any saliency algorithm: concentrating activation into a few key locations. Armed with the mass-concentration definition, we propose another Markovian algorithm, as follows. This time, we begin with an activation map⁴ A : [n]² → ℝ, which we wish to "normalize". We construct a graph G_N with n² nodes labelled with indices from [n]². For each node (i, j) and every node (p, q) (including (i, j)) to which it is connected, we introduce an edge from (i, j) to (p, q) with weight

w₂((i, j), (p, q)) ≜ A(p, q) · F(i − p, j − q).

Again, normalizing the weights of the outbound edges of each node to unity and treating the resulting graph as a Markov chain gives us the opportunity to compute the equilibrium distribution over the nodes⁵. Mass will flow preferentially to those nodes with high activation. It is a mass-concentration algorithm by construction, and also one which is parallelizable, as before, having the same natural advantages. Experimentally, it seems to behave very favorably compared to standard approaches such as "DoG" and "NL".
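The normalization step reuses the same machinery with the weights w₂; a sketch mirroring the previous one, with parameters again illustrative:

import numpy as np

def gbvs_normalize(A, sigma=3.0, iters=50):
    # Same Markov-chain construction as activation, but with edge weights
    # w2 = A(p, q) * F(i - p, j - q), so mass concentrates on nodes with
    # high activation.
    n, m = A.shape
    coords = np.stack(np.meshgrid(np.arange(n), np.arange(m),
                                  indexing='ij'), -1).reshape(-1, 2)
    a = A.reshape(-1)
    sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = a[None, :] * np.exp(-sq / (2.0 * sigma ** 2))    # weight into (p, q)
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
    pi = np.full(n * m, 1.0 / (n * m))
    for _ in range(iters):
        pi = pi @ P
    return pi.reshape(n, m)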
3 Experimental Results
3.1 Preliminaries and paradigm
We perform saliency computations on real images of the natural world, and compare the power of the resulting maps to predict human fixations. The experimental paradigm we pursue is the following: for each of a set of images, we compute a set of feature maps using standard techniques. Then, we process each of these feature maps using some activation algorithm, and then some normalization algorithm, and then simply sum over the feature channels. The resulting master saliency map is scored (using an ROC area metric described below) relative to fixation data collected for the corresponding image, and labelled according to the activation and normalization algorithms used to obtain it. We then pool over a corpus of images, and the resulting set of scored and labelled master saliency maps is analyzed in various ways presented below. Some notes follow:
Algorithm Labels: Hereafter, "graph (i)" and "graph (ii)" refer to the activation algorithm described in Section 2.1.2. The difference is that in graph (i) the parameter σ = 2.5, whereas in graph (ii) σ = 5. "graph (iii)" and "graph (iv)" refer to an iterated repetition of the normalization algorithm described in Section 2.2. The difference is the termination rule associated with the iterative process: for graph (iii), a complicated termination rule is used which looks for a local maximum in the number of matrix multiplications required to achieve a stable equilibrium distribution⁶, and for graph (iv), the termination rule is simply "stop after 4 iterations". The normalization algorithm referred to as "I" corresponds to "Identity", with the most naive normalization rule: it does nothing, leaving activations unchanged prior to subsequent combination. The algorithms "max-ave" and "DoG" were run using the publicly available "saliency toolbox"⁷. The parameters of this were checked against the literature [2] and [3], and were found to be almost identical, with a few slight alterations that actually improved performance relative to the published parameters. The parameters of "NL" were set according to the better of the two sets of parameters provided in [11].
Performance metric: We wish to give a reward quantity to a saliency map, given some target locations, e.g., in the case of natural images, a set of locations at which human observers fixated. For any one threshold saliency value, one can treat the saliency map as a classifier, with all points above threshold labelled as "target" and all points below threshold as "background". For any particular value of the threshold, there is some fraction of the actual target points which are labelled as such (true positive rate), and some fraction of points which were not targets but were labelled as such anyway (false positive rate). Varying over all such thresholds yields an ROC curve [14], and the area beneath it is generally regarded as an indication of the classifying power of the detector. This is the performance metric we use to measure how well a saliency map predicts fixation locations on a given image.
⁴ To be clear, if A is the result of the eigenvector computation described in 2.1, i.e., if the graph-based activation step is concatenated with the graph-based normalization step, we will call the resulting algorithm GBVS. However, A may be computed using other techniques.
⁵ We note that this normalization step of GBVS can be iterated several times to improve performance. In practice, we use 2, 3, or 4 iterations. Performance does not vary significantly in this regime with respect to this choice.
⁶ With the intuition being that competition among competing saliency regions can settle, at which point it is wise to terminate.
⁷ http://www.saliencytoolbox.net
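A minimal sketch of this ROC-area metric, using a discrete threshold sweep rather than an exact computation, and assuming a binary mask that marks at least one fixated and one background location:

import numpy as np

def roc_area(saliency, fixation_mask, thresholds=256):
    # Treat the saliency map as a detector of fixated locations: sweep a
    # threshold, record (false-positive, true-positive) rate pairs, and
    # integrate the resulting ROC curve.
    s = saliency.ravel()
    y = fixation_mask.ravel().astype(bool)
    ts = np.linspace(s.min(), s.max(), thresholds)
    tpr = [(s[y] >= t).mean() for t in ts]
    fpr = [(s[~y] >= t).mean() for t in ts]
    # the sweep traces the curve from (1, 1) down to near (0, 0);
    # np.trapz over the descending fpr gives a negative value
    return abs(np.trapz(tpr, fpr))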
The normalization procedures are all earlier described and named. Figure 2 shows an actual image with the resulting saliency maps from two different (activation, normalization) schemes. (a) Sample Picture With Fixation (b) Graph-Based Saliency Map (c) Traditional Saliency Map ROC area = 0.74 ROC area = 0.57 Figure 2: (a) An image from the data-set with xations indicated using x's. (b) The saliency map formed when using (activation,normalization)= (graph (i),graph (iii)). (c) Saliency map for (activation,normalization)=(c-s,DoG) Finally, we show the performance of this algorithm on the corpus of images. For each image, a mean inter-subject ROC area was computed as follows: for each of the three subjects who viewed an image, the xation points of the remaining two subjects were convolved with a circular, decaying kernel with decay constant matched to the decaying cone density in the retina. This was treated as a saliency map derived directly from human xations, and with the target points being set to the 8 Modi cations were made to change the luminance contrast either up or down in selected circular regions. Both modi ed and unmodi ed stimuli were used in these experiments. Please refer to [1], [12]. xations of the rst subject, an ROC area was computed for a single subject. The mean over the three is termed "inter-subject ROC value" in the following gures. For each range of this quantity, a mean performance metric was computed for various activation and normalization schemes. For any particular scheme, an ROC area was computed using the resulting saliency map together with the xations from all 3 human subjects as target points to detect. The results are shown below. (a) Activation Comparison (b) Normalization Comparison Comparison of Normalization Algorithms Comparison of Activation Algorithms 0.7 0.7 0.65 mean ROC value for algorithm mean ROC value for algorithm 0.65 c-s graph (i) graph (ii) self-info 0.6 0.55 0.5 0.45 0.55 0.6 0.65 0.7 inter-subject RO C value 0.75 0.8 graph (iii) graph (iv) ave-max NL DoG 0.6 0.55 0.5 0.45 0.55 0.6 0.65 0.7 inter-subject RO C value 0.75 0.8 Figure 3: (a) A mean ROC metric is computed for each range of inter-subject ROC values. Each curve represents a different activation scheme, while averaging over individual image numbers and normalization schemes. (b) A mean ROC metric is similarly computed, instead holding the normalization constant while varying the activation scheme. In both Figures 3 and 4, The boundary lines above and below show a rough upper9 and strict lower bounds on performance (based on a human control and chance performance). Figure 3(a) and Figure 3(b) clearly demonstrate the tremendous predictive power of the graph-based algorithms over standard approaches. Figure 4 demonstrates the especially effective performance of combining the best graph-based activation and normalization schemes, contrasted against the standard Itti & Koch approaches, and also the "self-information" approach which includes no mention of a normalization step (hence, set here to "I"). Co m p a ri s on o f A l g ori thm s E n d -to-E n d mean ROC value for algorithm 0 .7 0 .6 5 g ra p h s el f-in fo a ve -m a x NL Do G 0 .6 0 .5 5 0 .5 0 .4 5 0 .5 5 0 .6 0 .6 5 0 .7 i n ter-s u bj e c t R OC va l u e 0 .7 5 0 .8 Figure 4: We compare the predictive power of ve saliency algorithms. The best performer is the method which combines a graph based activation algorithm with a graph based normalization algorithm. 
The combinations of a few possible pairs of activation and normalization schemes are summarized in Table 1, with notes indicating where certain combinations correspond to established benchmarks. Performance is shown as a fraction of the inter-subject ROC area (see footnote 10). Overall, we find a median ROC area of 0.55 for the Itti & Koch saliency algorithms [2] on these images. In [1] the mean is reported as 0.57, which is remarkably close, and plausible if one assumes slightly more sophisticated feature maps (for instance, at more scales).

Table 1: Performance of end-to-end algorithms

  activation   normalization   ROC area (fraction)   published
  graph (ii)   graph (iv)      0.981148
  graph (i)    graph (iv)      0.975313
  graph (ii)   I               0.974592
  graph (ii)   ave-max         0.974578
  graph (ii)   graph (iii)     0.974227
  graph (i)    graph (iii)     0.968414
  self-info    I               0.841054              *Bruce & Tsotsos [5]
  c-s          DoG             0.840968              *Itti & Koch [3]
  c-s          ave-max         0.840725              *Itti, Koch, & Niebur [2]
  c-s          NL              0.831852              *Lee, Itti, Koch, & Braun [10]

Footnote 10: Performance here is measured by the ratio of (ROC area using the given algorithm for fixation detection) to (ROC area using a saliency map formed from the fixations of the other subjects on a single picture).

4 Discussion and Conclusion

Although a novel, simple approach to an old problem is always welcome, we must also seek to answer the scientific question of how it is possible that, given access to the same feature information, GBVS predicts human fixations more reliably than the standard algorithms. We find experimentally that there are at least two reasons for this observed difference. The first observation is that, because nodes are on average closer to a few center nodes than to any particular point along the image periphery, it is an emergent property that GBVS promotes higher saliency values in the center of the image plane. We hypothesize that this "center bias" is favorable with respect to predicting fixations, due to human experience both with photographs, which are typically taken with a central subject, and with everyday life, in which head motion often results in gazing straight ahead. Notably, the images of foliage used in the present study had no central subject. One can quantify the GBVS-induced center bias by activating, then normalizing, a uniform image using our algorithms (a sketch of this computation is given below). However, if we introduce this center bias into the output of the standard algorithms' master maps (via pointwise multiplication), we find that the standard algorithms predict fixations better, but still worse than GBVS. In some cases (e.g., "DoG"), introducing this center bias explains only 20% of the performance gap to GBVS; in the best case (viz., "ave-max"), it explains 90% of the difference. We conjecture that the other reason for the performance difference stems from the robustness of our algorithm with respect to differences in the sizes of salient regions. Experimentally, we find that the "c-s" algorithm has trouble activating salient regions distant from object borders, even if one varies over many choices of scale differences and combinations thereof. Since most of the standard algorithms have "c-s" as a first step, they are weakened ab initio. Similarly, the "self-info" algorithm suffers the same weakness, even if one varies over the neighborhood-size parameter. On the other hand, GBVS robustly highlights salient regions, even far away from object borders. We note here that what GBVS, as described above, lacks is any notion of a multiresolution representation of map data.
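A minimal sketch of the center-bias measurement mentioned above; the fully connected lattice graph and the Gaussian distance penalty F are stand-in assumptions for the exact edge weights of section 2.1, since on a uniform image only the distance term contributes.

import numpy as np

def center_bias(n=25, m=37, sigma=5.0, iters=500):
    # Build a fully connected graph over an n x m uniform map, weight
    # edges by an (assumed) Gaussian distance penalty F, normalize
    # out-edges into a Markov chain, and power-iterate to its
    # equilibrium distribution; the resulting mass peaks centrally.
    ys, xs = np.mgrid[0:n, 0:m]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))       # distance penalty only
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transitions
    v = np.full(n * m, 1.0 / (n * m))
    for _ in range(iters):                     # power iteration
        v = v @ P
    return v.reshape(n, m)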
Because multiresolution representations are so basic, one may extend both the graph-based activation and normalization steps to a multiresolution version as follows. We begin with, instead of a single map A : [n]^2 → ℝ, a collection of maps {A_i}, with each A_i : [n_i]^2 → ℝ representing the same underlying information but at different resolutions. Proceeding as we did before, we instantiate a node for every point on every map, again introducing edges between every pair of nodes, with weights computed as before, with one caveat: the distance penalty function F(a, b) accepts two arguments, each of which is a distance between two nodes along a particular dimension. In order to compute F in this case, one must define a distance over points taken from different underlying domains. The authors suggest a definition whereby: (1) each point in each map is assigned a set of locations, (2) this set corresponds to the spatial support of that point in the highest-resolution map, and (3) the distance between two sets of locations is given as the mean of the set of pairwise distances. The equilibrium distribution can then be computed as before. We find that this extension (say, GBVS Multiresolution, or GBVSM) improves performance with little added computation; a sketch of the suggested distance appears below.
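A sketch of steps (1)-(3), under the simplifying assumptions that the full resolution is an integer multiple of each map's resolution and that the distance between individual locations is Euclidean.

import numpy as np

def multires_distance(p, res_p, q, res_q, full_res):
    # Each node is assigned the set of full-resolution locations forming
    # its spatial support; the distance between two nodes is the mean
    # over all pairwise distances between the two location sets.
    def support(pt, res):
        fy, fx = full_res[0] // res[0], full_res[1] // res[1]
        return [(r, c)
                for r in range(pt[0] * fy, (pt[0] + 1) * fy)
                for c in range(pt[1] * fx, (pt[1] + 1) * fx)]
    A, B = support(p, res_p), support(q, res_q)
    return float(np.mean([np.hypot(a[0] - b[0], a[1] - b[1])
                          for a in A for b in B]))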
Braun "Attention activates winner-take-all competition among visual features", Nature Neuroscience, 1999 [11] L. Itti, J. Braun, D.K. Lee, & C. Koch "Attention Modulation of Human Pattern Discrimination Psychophysics Reproduced by a Quantitative Model", NIPS*1998 [12] W. Einh?user & P. K?nig, "Does luminance-contrast contribute to saliency map for overt visual attention?", Eur. J. Neurosci. 2003 [13] U. Rutishauser, D. Walther, C. Koch, & P. Perona "Is bottom-up attention useful for object recognition?", CVPR 2004 [14] B.W. Tatler, R.J. Baddeley, & I.D. Gilchrist "Visual correlates of xation selection: Effects of scale and time." Vision Research 2005 [15] J. Malik & P. Perona "Preattentive texture discrimination with early vision mechanisms" Journal of the Optical Society of America A 1990
Temporal dynamics of information content carried by neurons in the primary visual cortex

Danko Nikolić* (Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt (Main), Germany; danko@mpih-frankfurt.mpg.de)
Stefan Haeusler* (Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria; haeusler@igi.tugraz.at)
Wolf Singer (Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt (Main), Germany; singer@mpih-frankfurt.mpg.de)
Wolfgang Maass (Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria; maass@igi.tugraz.at)

Abstract
We use multi-electrode recordings from cat primary visual cortex and investigate whether a simple linear classifier can extract information about the presented stimuli. We find that information is extractable and that it even lasts for several hundred milliseconds after the stimulus has been removed. In a fast sequence of stimulus presentations, information about both new and old stimuli is present simultaneously, and nonlinear relations between these stimuli can be extracted. These results suggest nonlinear properties of cortical representations. The important implications of these properties for nonlinear brain theory are discussed.

1 Introduction
It has recently been argued that the most fundamental aspects of computations in visual cortex are still unknown [1]. This could be partially because of the narrow and reductionist approaches in the design of experiments, and partially because of the nonlinear properties of cortical neurons that are ignored by current theories [1]. It has also been argued that the recurrent neuronal circuits in the visual cortex are highly complex, and thus that notions such as "feedforward" and "feedback" are inadequate concepts for the analysis of nonlinear dynamical systems [2]. Furthermore, current theories do not take into account the precise timing of neuronal activity and the synchronicity of responses, which should play an important computational role [3]. Alternative computational models from dynamical systems theory [4] argue that fading-memory properties of neural circuits are essential for real-time processing of quickly varying visual stimuli. However, an experimental test of this prediction has been missing. An example of an experimental study that may be seen as a step in this direction is [5], where it was shown that the firing activity of neurons in macaque inferior temporal cortex (IT) contains information about an image that has just been presented, and that this information lasts for several hundred milliseconds. This information was extracted by machine-learning algorithms that classified the patterns of neuronal responses to different images.

* These authors contributed equally to this work.

[Figure 1: stimulus layout, spike rasters over 50 trials, and a peri-stimulus time histogram, all spanning 0-600 ms.]
Figure 1: A: An example of a visual stimulus in relation to the constellation of receptive fields (gray rectangles) from one Michigan probe. B: Spike times recorded from one electrode across 50 stimulus presentations and for two stimulation sequences (ABC and DBC). In this and in all other figures the gray boxes indicate the periods during which the letter stimuli were visible on the screen. C: Peri-stimulus time histogram for the responses shown in B. Mfr: mean firing rate.

The present paper extends the results from [5] in several directions:
• We show that neurons in cat primary visual cortex (area 17), and under anesthesia, also contain information about previously shown images, and that this information lasts even longer.
• We analyze the information content in neuronal activity recorded simultaneously from multiple electrodes.
• We analyze the information about a previously shown stimulus for rapid sequences of images, and how the information about consecutive images in a sequence is superimposed (i.e., we probe the system's memory for images).

2 Methods
2.1 Experiments
In three cats anaesthesia was induced with ketamine and maintained with a mixture of 70% N2O and 30% O2, together with halothane (0.4-0.6%). The cats were paralysed with pancuronium bromide applied intravenously (Pancuronium, Organon, 0.15 mg kg⁻¹ h⁻¹). Multi-unit activity (MUA) was recorded from area 17 using multiple silicon-based 16-channel probes (organized in a 4 × 4 spatial matrix) supplied by the Center for Neural Communication Technology at the University of Michigan (Michigan probes). The inter-contact distances were 200 μm (0.3-0.5 MΩ impedance at 1000 Hz). The probes were inserted approximately perpendicular to the surface of the cortex, allowing us to record simultaneously from neurons at different cortical layers and in different columns. This setup resulted in a cluster of overlapping receptive fields (RFs), all RFs being covered by the stimuli (see Fig. 1A); more details on the recording techniques can be found in [6, 7]. Stimuli were presented binocularly on a 21" computer monitor (HITACHI CM813ET, 100 Hz refresh rate) using the software for visual stimulation ActiveSTIM (www.ActiveSTIM.com). Binocular fusion was achieved by mapping the borders of the respective RFs and then aligning the optical axes with an adjustable prism placed in front of one eye. The stimuli consisted of single white letters with elementary features suitable for area 17, spanning approximately 5° of visual angle. The stimuli were presented on a black background for a brief period of time. Fig. 1A illustrates the spatial relation between the constellation of RFs and the stimulus in one of the experimental setups. In each stimulation condition either a single letter or a sequence of up to three letters was presented. For the presentation of single letters we used the letters A and D, each presented for 100 ms.

[Figure 2: classification performance and mean population firing rate as functions of time over 0-700 ms.]
Figure 2: The ability of a linear classifier to determine which of the letters A or D was previously used as a stimulus. The classification performance is shown as a function of the time passed between the initiation of the experimental trial and the moment at which a sample of neuronal activity was taken for training/testing of the classifier. The classification performance peaks at about 200 ms (reaching almost 100% accuracy) and remains high until at least 700 ms. Dash-dotted line: the mean firing rate across the entire population of investigated neurons. Dotted line: performance at the chance level (50% correct).
Signals were amplified 1000× and, to extract unit activity, filtered between 500 Hz and 3.5 kHz. Digital sampling was performed at 32 kHz, and the waveforms of threshold-detected action potentials were stored for an off-line spike-sorting procedure. Stimulus sequences were made with the letters A, B, C, D, and E, and we compared the responses either across the sequences ABC, DBC, and ADC (cat 1) or across the sequences ABE, CBE, ADE, and CDE (cats 2 and 3). Each member of a sequence was presented for 100 ms, and the blank delay period separating the presentations of letters also lasted 100 ms. Each stimulation condition (single letter or sequence) was presented 50 to 150 times, and the order of presentation was randomized across stimulation conditions. Example raster plots of responses to two different sets of stimuli can be seen in Fig. 1B.

2.2 Data analysis
Typical spike trains prior to the application of the spike-sorting procedure are illustrated in Fig. 1B. All datasets showed high trial-to-trial variability, with an average Fano factor of about 8. Including all the single units that resulted from the spike-sorting procedure led to overly sparse data representations and hence to overfitting. We therefore used only units with mean firing rates ≥ 10 Hz and pooled single units with less frequent firing into multi-unit signals. This resulted in datasets with 66 to 124 simultaneously recorded units for further analysis. The recorded spike times were convolved with an exponential kernel with a decay time constant of τ = 20 ms. A linear classifier was trained to discriminate between pairs of stimuli on the basis of the convolved spike trains at time points t ∈ {0, 10, ..., 700} ms after stimulus onset (using only the vectors of 66 to 124 values of the convolved time series at time t). We refer to this classifier as R_t. A second type of classifier, which we refer to as R_int, was trained to carry out such classification simultaneously for all time points t ∈ {150, 160, ..., 450} ms (see Fig. 7). If not otherwise stated, results for type R_t classifiers are reported. A linear classifier applied to the convolved spike data (i.e., the equivalent of low-pass filtering) can be interpreted as an integrate-and-fire (I&F) neuron with synaptic inputs modeled as Dirac delta functions: the time constant of 20 ms reflects the temporal properties of synaptic receptors and of the membrane, and a classification is obtained due to the firing threshold of the I&F neuron. The classifiers were trained as linear-kernel support vector machines with the regularization parameter C chosen to be 10 in the case of 50 samples per stimulus class and 50 in the case of 150 samples per stimulus class. The classification performance was estimated with 10-fold cross-validation in which we balanced the number of examples between the training and test classes. All reported performance data are for the test class. Error bars in the figures denote the average standard error of the mean for one cross-validation run. (A minimal sketch of this decoding pipeline appears after Section 3.1 below.)

3 Results
3.1 High classification performance
As observed in IT [5], the classification performance in area 17 also peaks at about 200 ms after stimulus onset. A classifier can therefore detect the identity of the stimulus with high reliability. In contrast to [5], information about the stimuli is available much longer in our data and can last up to 700 ms after stimulus onset (Fig. 2).
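As promised in Section 2.2, here is a minimal sketch of the decoding pipeline (our own illustration; the per-trial data layout and the omission of class balancing are assumptions):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def decode_at_time(spike_times, labels, t, tau=20.0, C=10.0):
    # spike_times: one entry per trial, each a list of per-unit arrays of
    # spike times in ms. The causal exponential kernel (tau = 20 ms) is
    # evaluated at time t, giving one feature per unit; a linear-kernel
    # SVM is then scored with 10-fold cross-validation.
    X = np.array([[np.exp(-(t - u[u <= t]) / tau).sum() for u in trial]
                  for trial in spike_times])
    return cross_val_score(SVC(kernel="linear", C=C), X, labels, cv=10).mean()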
3.2 Memory for past stimuli
We also find that even when new stimuli are presented, information about the old stimuli is not erased. Instead, neuronal activity continues to maintain information about the previously presented stimuli. In Fig. 3 we show that classifiers can extract substantial information about the first image well after this image has been removed and while new images are being shown. Thus, the system maintains a memory of previous activations, and this memory lasts at least several hundred milliseconds. Note that the information remains in memory even if neuronal rate responses decrease for a brief period of time and approach a level close to that of spontaneous activity.

3.3 Simultaneous availability of different pieces of information
Simultaneous presence of information about different stimuli is a necessary prerequisite for efficient coding. Fig. 4A shows that classifiers can identify the second letter in a sequence. In Fig. 4B we show the results of an experiment in which both the first and the second letter were varied simultaneously. Two classifiers, each for the identity of the letter at one time slot, both performed very well. During the period from 250 to 300 ms, information about both letters was available. This information can be used to perform a nonlinear XOR classification, i.e., return one if the sequence ADE or CBE was presented, but return zero if both A and B, or neither of them, appeared in the same sequence. In Fig. 4C we show XOR classification based on the information extracted from the two classifiers in Fig. 4B (dashed line). In this case, the nonlinear component of the XOR computation is performed externally by the observer and not by the brain. We compared these results with the performance of a single linear classifier trained to extract XOR information directly from the brain responses (solid line). As this classifier was linear, the nonlinear component of the computation could have been performed only by the brain. The classification performance was in both cases well above the chance level (horizontal dotted line in Fig. 4C). More interestingly, the two performance functions were similar, with the brain slightly outperforming the external computation of XOR in this nonlinear task. Therefore, the brain can also perform nonlinear computations. (A sketch contrasting the two readouts follows the figure caption below.)

[Figure 3: classification performance over 0-700 ms for cats 1-3, shown together with mean firing rates.]
Figure 3: Classifiers were trained to identify the first letter in the sequences ABC vs. DBC in one experiment (cat 1) and in the sequences ABE vs. CBE in the other two experiments (cats 2 and 3). In all cases the performance reached its maximum shortly before or during the presentation of the second letter. In one case (cat 1) information about the first letter remained present even across multiple exchanges of the stimuli, i.e., during the presentation of the third letter. Notation is the same as in Fig. 2.
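A minimal sketch of the two XOR readouts compared in Fig. 4C (assumed helper code, not the authors' implementation; X holds the population features at one time point, and the boolean vectors encode the first-letter A/C and second-letter B/D choices):

import numpy as np
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.svm import SVC

def xor_readouts(X, first_is_A, second_is_B):
    y_xor = np.logical_xor(first_is_A, second_is_B)
    # (i) direct XOR: a single linear classifier decodes XOR, so any
    # nonlinearity must already be present in the neuronal responses.
    direct = cross_val_score(SVC(kernel="linear"), X, y_xor, cv=10).mean()
    # (ii) external XOR: decode each letter bit linearly and combine the
    # two predictions outside the "brain".
    a = cross_val_predict(SVC(kernel="linear"), X, first_is_A, cv=10)
    b = cross_val_predict(SVC(kernel="linear"), X, second_is_B, cv=10)
    external = np.mean(np.logical_xor(a.astype(bool),
                                      b.astype(bool)) == y_xor)
    return direct, external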
3.4 Neural code
It is also important to understand how this information is encoded in neuronal activity. Fig. 5 shows lower bounds on the information content of neuronal firing rates. The ability of the classifiers to distinguish between two stimuli was positively correlated with the difference in the average firing-rate responses to these stimuli. For the three experiments (cats 1 to 3) the Pearson correlation coefficients between these two variables were 0.37, 0.42 and 0.46, respectively (14-21% of explained variance). The correlation coefficients with the absolute rate responses were always larger (0.45, 0.68 and 0.66). In contrast to [5], we also found that, in addition to rate responses, the precise timing relationships between neuronal spiking events carry important information about stimulus identity. To show this, we perturbed the recorded data by jittering the timings of the spikes by various amounts. Only a few milliseconds of jitter were sufficient to decrease the performance of the classifiers significantly (Fig. 6). Therefore, information is also contained in the timing of spikes; timing is thus also a neuronal code. Moreover, like rate, timing also carried information about the past: we could demonstrate that jitter induces a significant drop in classification performance even for time points as far as 200 ms past the stimulus onset (the rightmost panel of Fig. 6). We also investigated the 'synaptic' weights of the classifiers, which enabled us to study the temporal evolution of the code. We asked the following question: do the same pieces of information indicate the identity of a stimulus early and late in the stimulation sequence? Hence, we compared the performance of R_t classifiers, for which the weights were allowed to change along the stimulation sequence, against the performance of R_int classifiers, for which the weights were fixed. The results indicated that the neuronal code was invariant during a single stimulation-response event (e.g., the on-responses to the presentation of a letter) but changed across such events (e.g., the off-response to the same letter, or the on-response to the subsequent letter) (Fig. 7).

[Figure 4: performance curves over 0-700 ms for decoding the first letter (A/C), the second letter (B/D), and the XOR of the two.]
Figure 4: Classification of the second letter in a sequence and of a combination of letters. A: Performance of a classifier trained to identify the second letter in the sequences ABC and ADC. Similarly to the results in Fig. 3, the performance is high during the presentation of the subsequent letter. B: Simultaneously available information about two different letters of a sequence. Two classifiers identified either the first or the second letter of the following four sequences: ABE, CBE, ADE, and CDE. C: The same data as in B, but a linear classifier was trained to compute the XOR function of the 2 bits encoded by the 2 choices A/C and B/D (solid line). The dashed line indicates the performance of a control calculation made by an external computation of the XOR function, based on the information extracted by the classifiers whose performance functions are plotted in B.

Finally, as in [5], an application of nonlinear radial-basis kernels did not produce a significant improvement in the number of correct classifications when compared to linear kernels; this was the case for type R_t classifiers, for which the improvement never exceeded 2% (results not shown).
However, the performance of type R_int classifiers increased considerably (~8%) when they were trained with nonlinear as opposed to linear kernels (time interval t = [150, 450] ms, results not shown).

4 Discussion
In the present study we find that information about preceding visual stimuli is preserved for several hundred ms in neurons of the primary visual cortex of anesthetized cats. These results are consistent with those reported by [5], who investigated neuronal activity in the awake state and in a higher cortical area (IT cortex). We show that information about a previously shown stimulus can last in visual cortex for up to 700 ms, much longer than reported for IT cortex. Hence, we can conclude that it is a general property of cortical networks to contain information about stimuli in a distributed and time-dynamic manner. Thus, a trained classifier is capable of reliably determining, from a short sample of this distributed activity, the identity of previously presented stimuli.

[Figure 5: performance curves for cats 1-3, shown together with mean firing rates and rate differences.]
Figure 5: The relation between the classifier's performance and i) the mean firing rates (dash-dotted lines) and ii) the difference in the mean firing rates between two stimulation conditions (8-fold magnified, dashed lines). The results are for the same data as in Fig. 3.

[Figure 6: three panels showing the drop in classification performance as a function of jitter SD (0-20 ms), during the 1st letter presentation, after the 1st letter presentation, and during the 2nd letter presentation.]
Figure 6: Drop in performance for the classifiers in Fig. 3 due to Gaussian jitter of the spiking times. The drop in performance was computed for three points in time, according to the original peaks in performance in Fig. 3. For cat 1, these peaks were t ∈ {60, 120, 200} ms, and for cat 3, t ∈ {40, 120, 230} ms. The performance drops for these three points in time are shown in the three panels, in order from left to right. SD: standard deviation of jitter. A standard deviation of only a few milliseconds decreased the classification performance significantly.

Furthermore, the system's memory for past stimulation is not necessarily erased by the presentation of a new stimulus; instead, it is possible to extract information about multiple stimuli simultaneously. We show that different pieces of information are superimposed on one another and that they allow the extraction of nonlinear relations between the stimuli, such as the XOR function.
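A minimal sketch of the jitter control used in Section 3.4; the data layout mirrors the decoding sketch above and is likewise an assumption.

import numpy as np

def jitter_spikes(spike_times, sd_ms, rng=None):
    # Displace every spike time by independent Gaussian noise with
    # standard deviation sd_ms before retraining the classifier; a few
    # milliseconds of jitter already degrade performance markedly.
    rng = np.random.default_rng() if rng is None else rng
    return [[u + rng.normal(0.0, sd_ms, size=u.shape) for u in trial]
            for trial in spike_times]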
[Figure 7: performance of the R_t and R_int readouts over 150-450 ms, together with the readout weight vectors.]
Figure 7: Temporal evolution of the weights needed for optimal linear classification. A: Comparison in performance between R_t and R_int classifiers. The R_int classifier was trained on the time interval t = [150, 450] ms and on the data in Fig. 3. The performance drop of the type R_int classifier during the presentation of the third letter indicates that the neural code has changed since the presentation of the second letter. B: Weight vector of the type R_int classifier used in A. C: Weight vectors of the type R_t classifier shown in A for t ∈ {200, 300, 400} ms.

Our results indicate that the neuronal code is not only contained in rate responses, but that precise spike-timing relations matter as well and carry additional, important information about the stimulus. Furthermore, almost all information extracted by state-of-the-art nonlinear classifiers can be extracted using simple linear classification mechanisms. This is in agreement with the results reported in [5] for IT cortex. Hence, similarly to our classifiers, cortical neurons should also be able to read out such information from distributed neuronal activity. These results have important implications for theories of brain function and for understanding the nature of the computations performed by natural neuronal circuits. In agreement with recent criticism [1, 2], the present results are not compatible with computational models that require precise "frame by frame" processing of visual inputs or that focus on comparing each frame with an internally generated reconstruction or prediction; such models require a more precise temporal organization of information about subsequent frames of visual input. Instead, our results support the view recently put forward by theoretical studies [4, 8], in which computations are performed by complex dynamical systems while the results of these computations are read out by simple linear classifiers. These theoretical systems show memory and information-superposition properties similar to those reported here for the cerebral cortex.

References
[1] B. A. Olshausen and D. J. Field. What is the other 85% of V1 doing? In J. L. van Hemmen and T. J. Sejnowski, editors, 23 Problems in Systems Neuroscience, pages 182-211. Oxford Univ. Press (Oxford, UK), 2006.
[2] A. M. Sillito and H. E. Jones. Feedback systems in visual processing. In L. M. Chalupa and J. S. Werner, editors, The Visual Neurosciences, pages 609-624. MIT Press, 2004.
[3] W. Singer. Neuronal synchrony: a versatile code for the definition of relations? Neuron, 24(1):49-65, 111-125, 1999.
[4] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[5] C. P. Hung, G. Kreiman, T. Poggio, and J. J. DiCarlo. Fast readout of object identity from macaque inferior temporal cortex. Science, 310(5749):863-866, 2005.
[6] G. Schneider, M. N. Havenith, and D. Nikolić. Spatio-temporal structure in large neuronal networks detected from cross correlation. Neural Computation, 18(10):2387-2413, 2006.
[7] G. Schneider and D. Nikolić. Detection and assessment of near-zero delays in neuronal spiking activity. J Neurosci Methods, 152(1-2):97-106, 2006.
[8] H. Jaeger and H. Haas. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304:78-80, 2004.
Learning Nonparametric Models for Probabilistic Imitation

David B. Grimes, Daniel R. Rashid, Rajesh P. N. Rao
Department of Computer Science, University of Washington, Seattle, WA 98195
{grimes, rashid8, rao}@cs.washington.edu

Abstract
Learning by imitation represents an important mechanism for rapid acquisition of new behaviors in humans and robots. A critical requirement for learning by imitation is the ability to handle uncertainty arising from the observation process as well as the imitator's own dynamics and interactions with the environment. In this paper, we present a new probabilistic method for inferring imitative actions that takes into account both the observations of the teacher and the imitator's dynamics. Our key contribution is a nonparametric learning method which generalizes to systems with very different dynamics. Rather than relying on a known forward model of the dynamics, our approach learns a nonparametric forward model via exploration. Leveraging advances in approximate inference in graphical models, we show how the learned forward model can be directly used to plan an imitating sequence. We provide experimental results for two systems: a biomechanical model of the human arm and a 25-degrees-of-freedom humanoid robot. We demonstrate that the proposed method can be used to learn appropriate motor inputs to the model arm which imitate the desired movements. A second set of results demonstrates dynamically stable full-body imitation of a human teacher by the humanoid robot.

1 Introduction
A fundamental and versatile mechanism for learning in humans is imitation. Infants as young as 42 minutes of age have been found to imitate facial acts such as tongue protrusion, while older children can perform complicated forms of imitation, ranging from learning to manipulate novel objects in particular ways to imitation based on the inference of goals from unsuccessful demonstrations (see [11] for a review). Robotics researchers have become increasingly interested in learning by imitation (also called "learning by watching" or "learning from demonstration") as an attractive alternative to manually programming robots [5, 8, 19]. However, most of these approaches do not take uncertainty into account. Uncertainty in imitation arises from many sources, including the internal dynamics of the robot, the robot's interactions with its environment, and observations of the teacher. Being able to handle uncertainty is especially critical in robotic imitation because executing actions that have high uncertainty during imitation could lead to potentially disastrous consequences. In this paper, we propose a new technique for imitation that explicitly handles uncertainty using a probabilistic model of actions and their sensory consequences. Rather than relying on a physics-based parametric model of system dynamics as in traditional methods, our approach learns a nonparametric model of the imitator's internal dynamics during a constrained exploration period. The learned model is then used to infer appropriate actions for imitation using probabilistic inference in a dynamic Bayesian network (DBN) with teacher observations as evidence. We demonstrate the viability of the approach using two systems: a biomechanical model of the human arm and a 25-degrees-of-freedom humanoid robot.
[Figure 1: (a) dynamic Bayesian network over actions a_t, states s_t, observations o_t, and constraints c_t; (b) two-link arm model; (c) Hoap-2 humanoid.]
Figure 1: Graphical model and systems for imitation learning. (a) Dynamic Bayesian network for inferring a sequence of imitative actions a_{1:T-1} from a sequence of observations of the teacher o_{1:T}. The model also allows for probabilistic constraint variables c_t on the imitator's states s_t. Nonparametric model learning constructs the model P(s_{t+1}|s_t, a_t) from empirical data. (b) The two-link biomechanical model of the human arm (from [10]) used in experiments on learning reaching movements via imitation. (c) The Fujitsu Hoap-2 humanoid robot used in our experiments on full-body, dynamic imitation.

Our first set of results illustrates how the proposed method can be used to learn appropriate motor commands for producing imitative movements in the model human arm. The second set of results demonstrates dynamically stable full-body imitation of a human teacher by the humanoid robot. Taken together, the results suggest that a probabilistic approach to imitation based on nonparametric model learning could provide a powerful and flexible platform for acquiring new behaviors in complex robotic systems.

2 Imitation via Inference and Constrained Exploration
In this section we present our inference-based approach to selecting a set of actions based on observations of another agent's state during demonstration, together with a set of probabilistic constraints. We present our algorithms within the framework of the graphical model shown in Fig. 1(a). We denote the sequence of continuous action variables a_1, ..., a_t, ..., a_{T-1}. We use the convention that the agent starts in an initial state s_1 and, as a result of executing the actions, visits the set of continuous states s_2, ..., s_t, ..., s_T. Note that an initial action a_0 can be trivially included. In our imitation learning framework the agent observes a sequence of continuous variables o_1, ..., o_t, ..., o_T providing partial information about the state of the teacher during demonstration. The conditional probability density P(o_t|s_t) encodes how likely an observation of the teacher (o_t) is to agree with the agent's state (s_t) while performing the same motion or task. This marks a key difference from the Partially Observable Markov Decision Process (POMDP) framework: here the observations are of the demonstrator (generally with a different embodiment), and we currently assume that the learner can observe its own state. Probabilistic constraints on state variables are included in the graphical model via a set of variables c_t. The corresponding constraint models P(c_t|s_t) encode the likelihood of satisfying the constraint in state s_t. Constraint variables are used in our framework to represent goals such as reaching a desired goal state (c_T = s_G) or passing through a way point (c_t = s_W). The choice of the constraint model is domain dependent. Here we utilize a central Gaussian density P(c_t|s_t) = N(c_t - s_t; 0, Σ_c). The variance parameter for each constraint may be set by hand using domain knowledge, or could be learned using feedback from the environment. Given a set of evidence E ⊆ {o_1, ..., o_T, c_1, ..., c_T}, we desire actions which maximize the likelihood of the evidence. Although space limitations rule out a thorough discussion, to achieve tractable inference we focus here on computing marginal posteriors over each action rather than the maximum a posteriori (MAP) sequence. While in principle any algorithm for computing the marginal posterior distributions of the action variables could be used, we find it convenient here to use Pearl's belief propagation (BP) algorithm [13]. BP was originally restricted to tree-structured graphical models with discrete variables.
Several advances have broadened its applicability to general graph structures [18] and to continuous variables in undirected graph structures [16]. Here we derive belief propagation for the directed case, though we note that the difference is largely a semantic convenience, as any Bayesian network can be represented as a Markov random field or, more generally, a factor graph [9]. Our approach is most similar to Nonparametric Belief Propagation (NBP) [16], with key differences highlighted throughout this section. The result of performing belief propagation is a set of marginal belief distributions B(x) = P(x|E) = π(x)λ(x). This belief distribution is the product of two sets of messages, λ(x) and π(x), which represent the information coming from neighboring parent and child variable nodes, respectively. Beliefs are computed via messages passed along the edges of the graphical model, which are distributions over single variables. The i-th parent of variable x passes to x the distribution π_X(u_i); child j of variable x passes to x the distribution λ_{Y_j}(x). In the discrete (finite-space) case, messages are easily represented by discrete distributions. For arbitrary continuous densities, message representation is in itself a challenge. As we propose a nonparametric, model-free approach to learning system dynamics, it follows that we also want to allow for (approximately) representing the multi-modal, non-Gaussian distributions that arise during inference. As in the NBP approach [16], we adopt a mixture of Gaussian kernels (Eq. 5) to represent arbitrary message and belief distributions. For convenience we treat observed and hidden variables in the graph identically by allowing a node X to send itself the message λ_X(x). This "self message", represented using a Dirac delta distribution about the observed value, is considered in the product of all messages from the m children (denoted Y_j) of X:

    λ(x) = λ_X(x) ∏_{j=1}^{m} λ_{Y_j}(x).    (1)

Messages from parent variables are incorporated by integrating the conditional probability of x over all possible values of the n parents, times the probability of that combination of values as evaluated in the corresponding messages from the parent nodes:

    π(x) = ∫_{u_{1:n}} P(x|u_1, ..., u_n) ∏_{i=1}^{n} π_X(u_i) du_{1:n}.    (2)

Messages are updated according to the following two equations:

    λ_X(u_j) = ∫_x λ(x) ∫_{u_{1:n/j}} P(x|u_1, ..., u_n) ∏_{i≠j} π_X(u_i) du_{1:n/j} dx,    (3)

    π_{Y_j}(x) = π(x) λ_X(x) ∏_{i≠j} λ_{Y_i}(x).    (4)

The main operations in Eqs. 1-4 are integration and multiplication of mixtures of Gaussians. The evaluation of the integrals will be discussed after Gaussian Mixture Regression is introduced in Sec. 3. Although the product of a set of Gaussian mixtures is simply another Gaussian mixture, its complexity (in terms of the number of components in the output mixture) grows exponentially in the number of input mixtures. Thus an approximation is needed to keep inference tractable in the action sequence length T. Rather than use a multiscale sampling method to obtain a set of representative particles from the product as in [7], we first assume that we can compute the exact product density for a given set of input mixtures. We then apply the simple heuristic of keeping a fixed number of mixture components, which through experimentation we found to be highly effective. This heuristic is based on the empirical sparseness of the product mixture components' prior probabilities.
For example, when the backward state message λ_{s_{t+1}}(s_t) has M = 10 components, the message associated with the action a_{t-1} has N = 1 component (based on a unimodal Gaussian prior), and the GMR model has P = 67 components, the conditional product has MNP = 670 components. However, we see experimentally that fewer than ten components have a weight within five orders of magnitude of the maximal weight. Thus we can simply select the top K' = 10 components. This sparsity should not be surprising, as the P model components represent localized data, and only a few of these components tend to overlap with the belief state being propagated. Currently we fix K', although an adaptive mechanism could further speed up inference.

[Figure 2: (a) model-selection likelihood as a function of the number of components K; (b) model parameters at K = 140, K = 67, and K = 1.]
Figure 2: Nonparametric GMR model selection. a) The value of our model-selection criterion rises from the initial model with K = L components to a peak around 67 components, after which it falls off. b) The three series of plots show the current parameters of the model (blue ellipses), laid over the set of regression test points (in green), and the minimum spanning tree (red lines) between neighboring components. Shown here is a projection of the 14-dimensional data (onto the first two principal components of the training data).

We now briefly describe our algorithm for action inference and constrained exploration (see footnote 1). The inputs to the action inference algorithm are the set of evidence E and an instance of the dynamic Bayesian network M = {P_S, P_A, P_F, P_O, P_C}, composed of the prior on the initial state, the prior on actions, the forward model, the imitation observation model, and the probabilistic constraint model, respectively. Inference proceeds by first "inverting" the evidence from the observation and constraint variables, yielding the messages λ_{o_t}(s_t) and λ_{c_t}(s_t). After initialization from the priors P_S and P_A, we perform a forward planning pass, thereby computing the forward state messages π_{s_{t+1}}(s_t). Similarly, a backward planning sweep produces the messages λ_{s_t}(s_{t-1}). The algorithm then combines information from the forward and backward messages (via Eq. 3) to compute belief distributions over actions. We then select the maximum-marginal-belief action â_t from the belief distribution using the mode-finding algorithm described in [4]. Our algorithm for iteratively exploring the state and action spaces while satisfying the constraints placed on the system builds on the inference-based action selection algorithm described above. The inputs are an initial model M_0, a set of evidence E, and the number of iterations to be performed, N. At each iteration we infer a sequence of maximum-marginal actions and execute them. Execution yields a sequence of states, which are used to update the learned forward model (see Section 3). Using the new (ideally more accurate) forward model, we show that we are able to obtain a better imitation of the teacher via the newly inferred actions. The final model and sequence of actions are returned after N constrained exploration trials. For simplicity, we currently assume that the state and action prior distributions and the observation and constraint models are pre-specified. Evidence from our experiments shows that specifying these parts of the model is not overly cumbersome, even in the real-world domains we have studied. The focus of the algorithms presented here is to learn the forward model, which in many real-world domains is extremely complex to derive analytically. Sections 4.1 and 4.2 describe the results of applying our algorithms in the human arm model and humanoid robot domains, respectively.

Footnote 1: For detailed algorithms please refer to the technical report available at http://www.cs.washington.edu/homes/grimes/dil
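To make the pruning heuristic concrete, here is a sketch of the exact pairwise Gaussian product together with top-K' selection (our own illustration, not the authors' code):

import numpy as np

def gaussian_product(m1, c1, m2, c2):
    # Exact product of two Gaussian densities: precisions add, and the
    # scale factor is one Gaussian evaluated at the other's mean with
    # summed covariance, N(m1; m2, c1 + c2).
    cov = np.linalg.inv(np.linalg.inv(c1) + np.linalg.inv(c2))
    mean = cov @ (np.linalg.solve(c1, m1) + np.linalg.solve(c2, m2))
    d = m1.size
    diff = m1 - m2
    scale = np.exp(-0.5 * diff @ np.linalg.solve(c1 + c2, diff)) \
        / np.sqrt((2 * np.pi) ** d * np.linalg.det(c1 + c2))
    return scale, mean, cov

def prune_mixture(weights, means, covs, k_keep=10):
    # Keep only the k_keep components with the largest weights and
    # renormalize: the heuristic justified by the empirical sparsity of
    # the product-component weights.
    idx = np.argsort(weights)[::-1][:k_keep]
    w = weights[idx]
    return w / w.sum(), means[idx], covs[idx]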
The focus of the algorithms presented here is to learn the forward model, which in many real-world domains is extremely complex to derive analytically. Sections 4.1 and 4.2 describe the results of our algorithms applied in the human arm model and humanoid robot domains respectively.

3 Nonparametric Model Learning

In this section we investigate an algorithm for learning a nonparametric forward model via Gaussian Mixture Regression (GMR) [17]. The motivation behind selecting GMR is that it allows for closed-form evaluation of the integrals found in Eqs. 1-4. Thus it allows efficient inference without the need to resort to Monte Carlo (sample-based) approximations.

¹ For detailed algorithms please refer to the technical report available at http://www.cs.washington.edu/homes/grimes/dil

The common Gaussian Mixture Model (GMM) forms the basis of Gaussian Mixture Regression:

p(x | θ) = Σ_k p(k | θ_k) p(x | k, θ_k) = Σ_k w_k N(x; μ_k, Σ_k).    (5)

We now assume that the random variable X is formed via the concatenation of the n random variables X_1, X_2, …, X_n, such that x = [x_1ᵀ x_2ᵀ ⋯ x_nᵀ]ᵀ. The theorem of Gaussian conditioning states that if x ∼ N(μ, Σ), where μ = [(μ_i)] and Σ = [(Σ_ij)], then the variable X_i is normally distributed given X_j:

p(X_i = x_i | X_j = x_j) = N(x_i; μ_i + Σ_ij Σ_jj⁻¹ (x_j − μ_j), Σ_ii − Σ_ij Σ_jj⁻¹ Σ_ji).    (6)

Gaussian mixture regression is derived by applying the result of this theorem to Eq. 5:

p(x_i | x_j, θ) = Σ_k w_kj(x_j) N(x_i; μ_kij(x_j), Σ_kij).    (7)

We use μ_kj to denote the mean of the j-th variable in the k-th component of the mixture model. Likewise, Σ_kij denotes the covariance between the variables x_i and x_j in the k-th component. Instead of a fixed weight and mean for each component we now have a weight function dependent on the conditioning variable x_j:

w_kj(x) = w_k N(x; μ_kj, Σ_kjj) / Σ_{k′} w_{k′} N(x; μ_{k′j}, Σ_{k′jj}).    (8)

Likewise, the mean of the k-th conditioned component of x_i given x_j is a function of x_j:

μ_kij(x) = μ_ki + Σ_kij Σ_kjj⁻¹ (x − μ_kj).    (9)

Belief propagation requires the evaluation of integrals convolving the conditional distribution of one variable x_i, given a GMM distribution φ(·; θ′) of another variable x_j:

∫ p(x_i | x_j, θ) φ(x_j; θ′) dx_j.    (10)

Fortunately, rearranging the terms in the densities reduces the product of the two GMMs to a third GMM, which is then marginalized w.r.t. x_j under the integral operator. We now turn to the problem of learning a GMR model from data. As the learning methodology we wish to adopt is nonparametric, we do not want to select the number of components K a priori. This rules out the common strategy of using the well-known expectation maximization (EM) algorithm for learning a model of the full joint density p(x). Although Bayesian strategies exist for selecting the number of components, as pointed out by [17], a joint density modeling approach rarely yields the best model under a regression loss function. Thus we adopt an algorithm very similar to the Iterative Pairwise Replace Algorithm (IPRA) [15, 17], which simultaneously performs model fitting and selection of the GMR model parameters θ. We assume that a set of state and action histories has been observed during the N trial histories: {[s_1^i, a_1^i, s_2^i, a_2^i, …, a_{T−1}^i, s_T^i]}_{i=1}^N. To learn a GMR forward model we first construct the joint variable space x = [sᵀ aᵀ (s′)ᵀ]ᵀ, where s′ denotes the resulting state when executing action a in state s.
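As a concrete rendering of Eqs. 6-9, the following sketch (ours, not the authors' code) conditions a joint GMM on an observed value of x_j; the index arrays `i` and `j` selecting the blocks of the joint variable are our own convention:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gmr_conditional(weights, means, covs, i, j, xj):
    """Condition a joint GMM on X_j = xj, returning the mixture
    (w_kj(xj), mu_kij(xj), Sigma_kij) of Eqs. 7-9.

    i, j: integer index arrays selecting the blocks of the joint variable x.
    """
    new_w, new_mu, new_cov = [], [], []
    for w, mu, cov in zip(weights, means, covs):
        mu_i, mu_j = mu[i], mu[j]
        S_ii = cov[np.ix_(i, i)]
        S_ij = cov[np.ix_(i, j)]
        S_jj = cov[np.ix_(j, j)]
        gain = S_ij @ np.linalg.inv(S_jj)
        new_mu.append(mu_i + gain @ (xj - mu_j))            # Eq. 9
        new_cov.append(S_ii - gain @ S_ij.T)                # Eq. 6
        new_w.append(w * mvn.pdf(xj, mean=mu_j, cov=S_jj))  # Eq. 8 numerator
    z = sum(new_w)
    return [wk / z for wk in new_w], new_mu, new_cov        # Eq. 8 normalized
```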
The time-invariant dataset is then represented by the matrix X_tr = [x_1, …, x_L]. Model learning and selection first constructs the fully nonparametric representation of the training set, with K = L isotropic mixture components centered on each data point: μ_k = x_k. This parametrization is exact at making predictions at points within the training set, but generalizes extremely poorly. The algorithm proceeds by merging components which are very similar, as determined by a symmetric similarity metric between two mixture components. Following [17] we use the Hellinger distance metric. To perform efficient merging we first compute the minimum spanning tree of all mixture components. Iteratively, the algorithm merges the closest pair in the minimum spanning tree. Merging continues until there is only a single Gaussian component left. Merging the two mixtures requires computation of new local mixture parameters (to fit the data covered by both). Rather than the "method of moments" (MoM) approach of merging components and then later running expectation maximization to fine-tune the selected model, we found that performing local maximum likelihood estimation (MLE) within model selection is more effective at finding an accurate model. In order to effectively perform MLE merges we first randomly partition the training data into two sets: one of "basis" vectors that we compute the minimum spanning tree on, and one of regression data points. In our experiments here we used a random fifth of the data for basis vectors. The goal of our modified IPRA algorithm is to find the model which best describes the regression points. We then define the regression likelihood over the current GMR model parameters θ:

L(θ, X_tr) = Σ_{l=1}^{L} Σ_{i=1}^{n} p(x_i^l | x_{1,…,i−1,i+1,…,n}^l, θ).    (11)

The model of size K which maximizes this criterion is returned for use in our inference procedure. Fig. 2 demonstrates the learning of a forward model for the biomechanical arm model from Section 4.1. We found the regression-based model selection criterion to be effective at generalizing well outside both the basis and regression sets.

4 Results

4.1 Imitating Reaching Movements

Figure 3: Learning to imitate a reaching motion. a) The first row shows the motion of the teacher's hand (shown in black) in Cartesian space, along with the target position. The imitator explores the space via body babbling (second plot, shown in red). From this data a GMR model is learned, and constrained exploration is performed to find an imitative reaching motion (shown every 5 iterations). b) The velocities of the two joints during the imitation learning process. By trial number 20 the imitator's velocities (thin lines) closely match the demonstrator's velocities (the thick, light-colored lines), and meet the zero final velocity constraint. c) The teacher's original muscle torques, followed by the babbling torques, and the torques computed during constrained exploration.

In the first set of experiments we learn reaching movements via imitation in a complex non-linear model of the human arm. The arm simulator we use is a biomechanical arm model [10] consisting of two degrees of freedom (denoted θ)
representing the shoulder and elbow joints. The arm is controlled via two torque inputs (denoted τ) for the two degrees of freedom. The dynamics of the arm are described by the following differential equation:

M(θ) θ̈ + C(θ, θ̇) + B θ̇ = τ,    (12)

where M is the inertial force matrix, C is a vector of centripetal and Coriolis forces, and B is the matrix of force due to friction at the joints. Fig. 3 shows the process of learning to perform a reaching motion via imitation. First we compute the teacher's simulated arm motion using the model-based iLQG algorithm [10], based on start and target positions of the hand. By executing the sequence of computed torque inputs [ā_{1:T−1}] from a specified initial state s_1, we obtain the state history of the demonstrator [s̄_{1:T}]. To simulate the natural partial observability of a human demonstrator and a human learner, we provide our inference algorithm with noisy measurements of the kinematic state only (not the torques). A probabilistic constraint dictates that the final joint velocities be very close to zero.

Figure 4: Humanoid robot dynamic imitation. a) The first row consists of frames from an IK fit to the marker data during observation. The second row shows the result of performing a kinematic imitation in the simulator. The third and fourth rows show the final imitation result obtained by our method of constrained exploration, in the simulator and on the Hoap-2 robot. b) The duration for which the executed imitation was balanced (out of a total of T = 63), shown vs. the trial number. The random exploration trials are shown in red, and the inferred imitative trials are shown in blue. Note that the balanced duration rapidly rises, and by the 15th inferred sequence the robot is able to perform the imitation without falling.

4.2 Dynamic Humanoid Imitation

We applied our algorithms for nonparametric action selection, model learning, and constrained exploration to the problem of full-body dynamic imitation in a humanoid robot. The experiment consisted of a humanoid demonstrator performing motions such as squatting and standing on one leg. Due to space limitations we only briefly describe the experiments; for more details see [6]. First, the demonstrator's kinematics were obtained using a commercially available retroreflective marker-based optical motion capture system based on inverse kinematics (IK). The IK skeletal model of the human was restricted to have the same degrees of freedom as the Fujitsu Hoap-2 humanoid robot. Representing humanoid motion using a full kinematic configuration is problematic (due to the curse of dimensionality). Fortunately, with respect to a wide class of motions (such as walking, kicking, and squatting) the full number of degrees of freedom (25 in the Hoap-2) is highly redundant. For simplicity here we use linear principal components analysis (PCA), as sketched below, but we are investigating the use of non-linear embedding techniques. Using PCA we were able to represent the observed instructor's kinematics in a compact four-dimensional space, thus forming the first four dimensions of the state space. The goal of the experiment is to perform dynamic imitation, i.e., considering the dynamic balance involved in stably imitating the human demonstrator. Dynamic balance is considered using a sensor-based model.
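A minimal sketch of the linear PCA embedding referred to above (our illustration only; the robot-specific preprocessing is not shown, and all names are ours):

```python
import numpy as np

def pca_embed(joint_angles, dim=4):
    """Project full joint-angle configurations onto their top `dim`
    principal components. Rows of joint_angles are time samples.

    Returns (embedded, mean, basis); new samples z embed as
    (z - mean) @ basis.T, and decode approximately as mean + y @ basis.
    """
    mean = joint_angles.mean(axis=0)
    centered = joint_angles - mean
    # SVD of centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:dim]
    return centered @ basis.T, mean, basis
```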
The Hoap-2 robot's sensors provide measurements of the angular rotation g_t (via a gyroscope in the torso) and foot pressure f_t (at eight points on the feet) every 1 millisecond. By computing four differential features of the pressure sensors and extracting the two horizontal gyroscope axes, we form a six-dimensional representation of the dynamic state of the robot. Concatenating the four-dimensional kinematic state and the six-dimensional dynamic state, we form the full ten-dimensional state representation s_t. Robot actions a_t are then simply points in the embedded kinematic space. We bootstrap the forward model (of robot kinematics and dynamics) by first performing random exploration (body babbling) about the instructor's trajectory. Once we have collected sufficient data (around 20 trials) we learn an initial forward model. Subsequently we place a probabilistic constraint on the dynamic configuration of the robot (using a tight, central Gaussian distribution around zero angular velocity and zero pressure differentials). Using this constraint on dynamics we perform constrained exploration until we obtain a stable motion for the Hoap-2 which imitates the human motion. The results we obtained in imitating a difficult one-legged balance motion are shown in Fig. 4.

5 Conclusion

Our results demonstrate that probabilistic inference and learning techniques can be used to successfully acquire new behaviors in complex robotic systems such as a humanoid robot. In particular, we showed how a nonparametric model of forward dynamics can be learned from constrained exploration and used to infer actions for imitating a teacher while simultaneously taking the imitator's dynamics into account. There exists a large body of previous work on robotic imitation learning (see, for example, [2, 5, 14, 19]). Some approaches rely on producing imitative behaviors using nonlinear dynamical systems (e.g., [8]), while others focus on biologically motivated algorithms (e.g., [3]). In the field of reinforcement learning, techniques such as inverse reinforcement learning [12] and apprenticeship learning [1] have been proposed to learn controllers for complex systems based on observing an expert and learning their reward function. However, the role of this type of expert and that of our human demonstrator must be distinguished. In the former case, the teacher is directly controlling the artificial system. In the imitation learning paradigm, one can only observe the teacher controlling their own body. Further, despite kinematic similarities between the human and the humanoid robot, the dynamic properties of the robot and the human are very different. Finally, the fact that our approach is based on inference in graphical models confers two major advantages: (1) we can continue to leverage algorithmic advances in the rapidly developing area of inference in graphical models, and (2) the approach promises generalization to graphical models of more complex systems, such as those with semi-Markov dynamics and hierarchical structure.

References

[1] P. Abbeel and A. Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, 2005.
[2] C. Atkeson and S. Schaal. Robot learning from demonstration. pages 12-20, 1997.
[3] A. Billard and M. Mataric. Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics and Autonomous Systems, (941), 2001.
[4] M. A. Carreira-Perpinan.
Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell., 22(11):1318-1323, 2000.
[5] J. Demiris and G. Hayes. A robot controller using learning by imitation, 1994.
[6] D. B. Grimes, R. Chalodhorn, and R. P. N. Rao. Dynamic imitation in a humanoid robot through nonparametric probabilistic inference. In Proceedings of Robotics: Science and Systems (RSS'06), Cambridge, MA, 2006. MIT Press.
[7] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[8] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Trajectory formation for imitation with nonlinear dynamical systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 752-757, 2001.
[9] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[10] W. Li and E. Todorov. Iterative linear-quadratic regulator design for nonlinear biological movement systems. In Proceedings of the 1st Int. Conf. on Informatics in Control, Automation and Robotics, volume 1, pages 222-229, 2004.
[11] A. N. Meltzoff. Elements of a developmental theory of imitation. pages 19-41, 2002.
[12] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. 17th International Conf. on Machine Learning, pages 663-670, 2000.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[14] S. Schaal, A. Ijspeert, and A. Billard. Computational approaches to motor learning by imitation. 1431:199-218, 2004.
[15] D. Scott and W. Szewczyk. From kernels to mixtures. Technometrics, 43(3):323-335.
[16] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In CVPR (1), pages 605-612, 2003.
[17] H.-G. Sung. Gaussian Mixture Regression and Classification. PhD thesis, Rice University, 2004.
[18] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1-41, 2000.
[19] M. Y. Kuniyoshi and H. Inoue. Learning by watching: Extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation, 10(6):799-822, December 1994.
Multi-Robot Negotiation: Approximating the Set of Subgame Perfect Equilibria in General-Sum Stochastic Games

Geoffrey J. Gordon, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213
Chris Murray, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213

Abstract

In real-world planning problems, we must reason not only about our own goals, but about the goals of other agents with which we may interact. Often these agents' goals are neither completely aligned with our own nor directly opposed to them. Instead there are opportunities for cooperation: by joining forces, the agents can all achieve higher utility than they could separately. But, in order to cooperate, the agents must negotiate a mutually acceptable plan from among the many possible ones, and each agent must trust that the others will follow their parts of the deal. Research in multi-agent planning has often avoided the problem of making sure that all agents have an incentive to follow a proposed joint plan. On the other hand, while game-theoretic algorithms handle incentives correctly, they often don't scale to large planning problems. In this paper we attempt to bridge the gap between these two lines of research: we present an efficient game-theoretic approximate planning algorithm, along with a negotiation protocol which encourages agents to compute and agree on joint plans that are fair and optimal in a sense defined below. We demonstrate our algorithm and protocol on two simple robotic planning problems.¹

1 INTRODUCTION

We model the multi-agent planning problem as a general-sum stochastic game with cheap talk: the agents observe the state of the world, discuss their plans with each other, and then simultaneously select their actions. The state and actions determine a one-step reward for each player and a distribution over the world's next state, and the process repeats. While talking allows the agents to coordinate their actions, it cannot by itself solve the problem of trust: the agents might lie or make false promises. So, we are interested in planning algorithms that find subgame-perfect Nash equilibria. In a subgame-perfect equilibrium, every deviation from the plan is deterred by the threat of a suitable punishment, and every threatened punishment is believable. To find these equilibria, planners must reason about their own and other agents' incentives to deviate: if other agents have incentives to deviate then I can't trust them, while if I have an incentive to deviate, they can't trust me. In a given game there may be many subgame-perfect equilibria with widely differing payoffs: some will be better for some agents, and others will be better for other agents. It is generally not feasible to compute all equilibria [1], and even if it were, there would be no obvious way

¹We gratefully acknowledge help and comments from Ron Parr on this research. This work was supported in part by DARPA contracts HR0011-06-0023 (the CS2P program) and 55-00069 (the RADAR program). All opinions, conclusions, and errors are our own.

to select one to implement. It does not make sense for the agents to select an equilibrium without consulting one another: there is no reason that agent A's part of one joint plan would be compatible with agent B's part of another joint plan. Instead the agents must negotiate, computing and proposing equilibria until they find one which is acceptable to all parties.
This paper describes a planning algorithm and a negotiation protocol which work together to ensure that the agents compute and select a subgame-perfect Nash equilibrium which is both approximately Pareto-optimal (that is, its value to any single agent cannot be improved very much without lowering the value to another agent) and approximately fair (that is, near the so-called Nash bargaining point). Neither the algorithm nor the protocol is guaranteed to work in all games; however, they are guaranteed correct when they are applicable, and applicability is easy to check. In addition, our experiments show that they work well in some realistic situations. Together, these properties of fairness, enforceability, and Pareto optimality form a strong solution concept for a stochastic game. The use of this definition is one characteristic that distinguishes our work from previous research: ours is the first efficient algorithm that we know of to use such a strong solution concept for stochastic games. Our planning algorithm performs dynamic programming on a set-based value function: for P players, at a state s, V ∈ V(s) ⊆ R^P is an estimate of the value the players can achieve. We represent V(s) by sampling points on its convex hull. This representation is conservative, i.e., it guarantees that we find a subset of the true V*(s). Based on the sampled points we can efficiently compute one-step backups by checking which joint actions are enforceable in an equilibrium. Our negotiation protocol is based on a multi-player version of Rubinstein's bargaining game. Players together enumerate a set of equilibria, and then take turns proposing an equilibrium from the set. Until the players agree, the protocol ends with a small probability ε after each step and defaults to a low-payoff equilibrium; the fear of this outcome forces players to make reasonable offers.

2 BACKGROUND

2.1 STOCHASTIC GAMES

A stochastic game represents a multi-agent planning problem in the same way that a Markov Decision Process [2] represents a single-agent planning problem. As in an MDP, transitions in a stochastic game depend on the current state and action. Unlike MDPs, the current (joint) action is a vector of individual actions, one for each player. More formally, a general-sum stochastic game G is a tuple (S, s_start, P, A, T, R, γ). S is a set of states, and s_start ∈ S is the start state. P is the number of players. A = A_1 × A_2 × ⋯ × A_P is the finite set of joint actions. We deal with fully observable stochastic games with perfect monitoring, where all players can observe previous joint actions. T : S × A → P(S) is the transition function, where P(S) is the set of probability distributions over S. R : S × A → R^P is the reward function. We will write R_p(s, a) for the p-th component of R(s, a). γ ∈ [0, 1) is the discount factor. Player p wants to maximize her discounted total value for the observed sequence of states and joint actions s_1, a_1, s_2, a_2, …, namely V_p = Σ_{t=1}^∞ γ^{t−1} R_p(s_t, a_t). A stationary policy for player p is a function π_p : S → P(A_p). A stationary joint policy is a vector of policies π = (π_1, …, π_P), one for each player. A nonstationary policy for player p is a function π_p : (∪_{t=0}^∞ (S × A)^t × S) → P(A_p), which takes a history of states and joint actions and produces a distribution over player p's actions; we can define a nonstationary joint policy analogously. For any nonstationary joint policy, there is a stationary policy that achieves the same value at every state [3]. The value function V_p^π : S →
R gives expected values for player p under joint policy π. The value vector at state s, V^π(s), is the vector with components V_p^π(s). (For a nonstationary policy π we will define V_p^π(s) to be the value if s were the start state, and V_p^π(h) to be the value after observing history h.) A vector V is feasible at state s if there is a π for which V^π(s) = V, and we will say that π achieves V. We will assume public randomization: the agents can sample from a desired joint action distribution in such a way that everyone can verify the outcome. If public randomization is not directly available, there are cryptographic protocols which can simulate it [4]. This assumption means that the set of feasible value vectors is convex, since we can roll a die at the first time step to choose from a set of feasible policies.

2.2 EQUILIBRIA

While optimal policies for MDPs can be determined exactly via various algorithms such as linear programming [2], it isn't clear what it means to find an optimal policy for a general-sum stochastic game. So, rather than trying to determine a unique optimal policy, we will define a set of reasonable policies: the Pareto-dominant subgame-perfect Nash equilibria. A (possibly nonstationary) joint policy π is a Nash equilibrium if, for each individual player, no unilateral deviation from the policy would increase that player's expected value for playing the game. Nash equilibria can contain incredible threats, that is, threats which the agents have no intention of following through on. To remove this possibility, we can define the subgame-perfect Nash equilibria. A policy π is a subgame-perfect Nash equilibrium if it is a Nash equilibrium in every possible subgame: that is, if there is no incentive for any player to deviate after observing any history of joint actions. Finally, consider two policies π and ρ. If V_p^π(s_start) ≥ V_p^ρ(s_start) for all players p, and if V_p^π(s_start) > V_p^ρ(s_start) for at least one p, then we will say that π Pareto dominates ρ. A policy which is not Pareto dominated by any other policy is Pareto optimal.

2.3 RELATED WORK

Littman and Stone [5] give an algorithm for finding Nash equilibria in two-player repeated games. Hansen et al. [6] show how to eliminate very-weakly-dominated strategies in partially observable stochastic games. Doraszelski and Judd [7] show how to compute Markov perfect equilibria in continuous-time stochastic games. The above papers use solution concepts much weaker than Pareto-dominant subgame-perfect equilibrium, and do not address negotiation and coordination. Perhaps the closest work to the current paper is by Brafman and Tennenholtz [8]: they present learning algorithms which, in repeated self-play, find Pareto-dominant (but not subgame-perfect) Nash equilibria in matrix and stochastic games. By contrast, we consider a single play of our game, but allow "cheap talk" beforehand. And, our protocol encourages arbitrary algorithms to agree on Pareto-dominant equilibria, while their result depends strongly on the self-play assumption.

2.3.1 FOLK THEOREMS

In any game, each player can guarantee herself an expected discounted value regardless of what actions the other players take. We call this value the safety value. Suppose that there is a stationary subgame-perfect equilibrium which achieves the safety value for both players; call this the safety equilibrium policy. Suppose that, in a repeated game, some stationary policy π is better for both players than the safety equilibrium policy.
Then we can build a subgame-perfect equilibrium with the same payoff as π: start playing π, and if someone deviates, switch to the safety equilibrium policy. So long as γ is sufficiently large, no rational player will want to deviate. This is the folk theorem for repeated games: any feasible value vector which is strictly better than the safety values corresponds to a subgame-perfect Nash equilibrium [9]. (The proof is slightly more complicated if there is no safety equilibrium policy, but the theorem holds for any repeated game.) There is also a folk theorem for general stochastic games [3]. This theorem, while useful, is not strong enough for our purposes: it only covers discount factors γ which are so close to 1 that the players don't care which state they wind up in after a possible deviation. In most practical stochastic games, discount factors this high are unreasonably patient. When γ is significantly less than 1, the set of equilibrium vectors can change in strange ways as we change γ [10].

Figure 1: Equilibria of a Rubinstein game with γ = 0.8. The shaded area shows feasible value vectors (U_1(x), U_2(x)) for outcomes x. The right-hand circle corresponds to the equilibrium when player 1 moves first, the left-hand circle when player 2 moves first. The Nash point is at 3.

2.3.2 RUBINSTEIN'S GAME

Rubinstein [11] considered a game where two players divide a slice of pie. The first player offers a division x, 1 − x to the second; the second player either accepts the division, or refuses and offers her own division 1 − y, y. The game repeats until some player accepts an offer or until either player gives up. In the latter case neither player gets any pie. Rubinstein showed that if player p's utility for receiving a fraction x at time t is U_p(x, t) = γ^t U_p(x) for a discount factor 0 ≤ γ < 1 and an appropriate time-independent utility function U_p(x) ≥ 0, then rational players will agree on a division near the so-called Nash bargaining point. This is the point which maximizes the product of the utilities that the players gain by cooperating, U_1(x) U_2(1 − x). As γ → 1, the equilibrium will approach the Nash point. See Fig. 1 for an illustration. For three or more players, a similar result holds where agents take turns proposing multi-way divisions of the pie [12]. See the technical report [13] for more detail on the multi-player version of Rubinstein's game and the Nash bargaining point.

3 NEGOTIATION PROTOCOL

The Rubinstein game implicitly assumes that the result of a failure to cooperate is known to all players: nobody gets any pie. The multi-player version of the game assumes in addition that giving one player a share of the pie doesn't force us to give a share to any other player. Neither of these properties holds for general stochastic games. They are, however, easy to check, and often hold or can be made to hold for planning domains of interest. So, we will assume that the players have agreed beforehand on a subgame-perfect equilibrium π^dis, called the disagreement policy, that they will follow in the event of a negotiation failure. In addition, for games with three or more players, we will assume that each player can unilaterally reduce her own utility by any desired amount without affecting other players' utilities. Given these assumptions, our protocol proceeds in two phases (pseudocode is given in the technical report [13]). In the first phase agents compute subgame-perfect equilibria and take turns revealing them.
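Before detailing the turns of the protocol, the Nash bargaining objective from Sec. 2.3.2 is simple enough to sketch in code. This is our illustration, not the authors' implementation; the input is a hypothetical finite sample of candidate feasible excess vectors:

```python
import numpy as np

def nash_bargaining_point(excess_vectors):
    """Return the candidate maximizing the Nash bargaining objective,
    the product of players' excess utilities prod_p u_p.

    excess_vectors: iterable of nonnegative length-P numpy arrays.
    """
    best, best_val = None, -np.inf
    for u in excess_vectors:
        val = np.prod(u)            # Nash product
        if val > best_val:
            best, best_val = u, val
    return best
```

In the protocol below, the natural candidates are the excess vectors X(π) of the revealed equilibria together with their convex combinations.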
On an agent's turn she either reveals an equilibrium or passes; if all agents pass consecutively, the protocol proceeds to the second phase. When an agent states a policy π, the other agents verify that π is a subgame-perfect equilibrium and calculate its payoff vector V^π(s_start); players who state non-equilibrium policies miss their turn. At the end of the first phase, suppose the players have revealed a set Π of policies. Define

X_p(π) = V_p^π(s_start) − V_p^dis(s_start)
Ū = convhull { X(π) | π ∈ Π }
U = { u ≥ 0 | ∃ v ∈ Ū such that u ≤ v }

where V^dis is the value function of π^dis, X_p(π) is the excess of policy π for player p, and U is the set of feasible excess vectors. In the second phase, players take turns proposing points u ∈ U along with policies or mixtures of policies in Π that achieve them. After each proposal, all agents except the proposer decide whether to accept or reject. If everyone accepts, the proposal is implemented: everyone starts executing the agreed equilibrium. Otherwise, the players who accepted are removed from future negotiation and have their utilities fixed at the proposed levels. Fixing player p's utility at u_p means that all future proposals must give p exactly u_p. Invalid proposals cause the proposer to lose her turn. To achieve this, the proposal may require p to voluntarily lower her own utility; this requirement is enforced by the threat that all players will revert to π^dis if p fails to act as required. If at some point we hit the ε chance of having the current round of communication end, all remaining players are assigned their disagreement values. The players execute the last proposed policy π (or π^dis if there has been no valid proposal), and any player p for whom V_p^π(s_start) is greater than her assigned utility u_p voluntarily lowers her utility to the correct level. (Again, failure to do so results in all players reverting to π^dis.) Under the above protocol, players' preferences are the same as in a Rubinstein game with utility set U: because we have assumed that negotiation ends with probability ε after each message, agreeing on u after t additional steps is exactly as good as agreeing on u(1 − ε)^t now. So with ε sufficiently small, the Rubinstein or Krishna-Serrano results show that rational players will agree on a vector u ∈ U which is close to the Nash point argmax_{u ∈ U} ∏_p u_p.

4 COMPUTING EQUILIBRIA

In order to use the protocol of Sec. 3 for bargaining in a stochastic game, the players must be able to compute some subgame-perfect equilibria. Computing equilibria is a hard problem, so we cannot expect real agents to find the entire set of equilibria. Fortunately, each player will want to find the equilibria which are most advantageous to herself to influence the negotiation process in her favor. But equilibria which offer other players reasonably high reward have a higher chance of being accepted in negotiation. So, self-interest will naturally distribute the computational burden among all the players. In this section we describe an efficient dynamic-programming algorithm for computing equilibria. The algorithm takes some low-payoff equilibria as input and (usually) outputs higher-payoff equilibria. It is based on the intuition that we can use low-payoff equilibria as enforcement tools: by threatening to switch to an equilibrium that has low value to player p, we can deter p from deviating from a cooperative policy. In more detail, we will assume that we are given P different equilibria π_1^pun, …, π_P^pun; we will use π_p^pun to punish player p if she deviates.
We can set π_p^pun = π^dis for all p if π^dis is the only equilibrium we know; or, we can use any other equilibrium policies that we happen to have discovered. The algorithm will be most effective when the value of π_p^pun to player p is as low as possible in all states. We will then search for cooperative policies that we can enforce with the given threats π_p^pun. We will first present an algorithm which pretends that we can efficiently take direct sums and convex hulls of arbitrary sets. This algorithm is impractical, but finds all enforceable value vectors. We will then turn it into an approximate algorithm which uses finite data structures to represent the set-valued variables. As we allow more and more storage for each set, the approximate algorithm will approach the exact one; and in any case the result will be a set of equilibria which the agents can execute.

4.1 THE EXACT ALGORITHM

Our algorithm maintains a set of value vectors V(s) for each state s. It initializes V(s) to a set which we know contains the value vectors for all equilibrium policies. It then refines V by dynamic programming: it repeatedly attempts to improve the set of values at each state by backing up all of the joint actions, excluding joint actions from which some agent has an incentive to deviate. In more detail, we will compute V_p^dis(s) ≡ V_p^{π^dis}(s) for all s and p and use the vector V^dis(s) in our initialization. (Recall that we have defined V_p^π(s) for a nonstationary policy π as the value of π if s were the start state.) We also need the values of the punishment policies for their corresponding players, V_p^pun(s) ≡ V_p^{π_p^pun}(s) for all p and s. Given these values, define

Q_p^dev(s, a) = R_p(s, a) + γ Σ_{s′ ∈ S} T(s, a)(s′) V_p^pun(s′)    (1)

to be the value to player p of playing joint action a from state s and then following π_p^pun forever after. From the above Q^dev values we can compute player p's value for deviating from an equilibrium which recommends action a in state s: it is Q_p^dev(s, a′) for the best possible deviation a′_p, since p will get the one-step payoff for a′ but be punished by the rest of the players starting on the following time step. That is,

V_p^dev(s, a) = max_{a′_p ∈ A_p} Q_p^dev(s, a_1 × ⋯ × a′_p × ⋯ × a_P).    (2)

V_p^dev(s, a) is the value we must achieve for player p in state s if we are planning to recommend action a and punish deviations with π_p^pun: if we do not achieve this value, player p would rather deviate and be punished. Our algorithm is shown in Fig. 2.

Initialization:
  for s ∈ S
    V(s) ← { V | V_p^dis(s) ≤ V_p ≤ R_max / (1 − γ) }
  end
Repeat until converged:
  for iteration = 1, 2, …
    for s ∈ S
      (compute the value vector set for each joint action, then throw away unenforceable vectors)
      for a ∈ A
        Q̄(s, a) ← {R(s, a)} + γ Σ_{s′ ∈ S} T(s, a)(s′) V(s′)
        Q(s, a) ← { Q ∈ Q̄(s, a) | Q ≥ V^dev(s, a) }
      end
      (we can now randomize among joint actions)
      V(s) ← convhull ∪_a Q(s, a)
    end
  end

Figure 2: Dynamic programming using exact operations on sets of value vectors

After k iterations, each vector in V(s) corresponds to a k-step policy in which no agent ever has an incentive to deviate. In the (k + 1)-st iteration, the first assignment to Q̄(s, a) computes the value of performing action a followed by any k-step policy. The second assignment throws out the pairs (a, π) for which some agent would want to deviate from a, given that the agents plan to follow π in the future. And the convex hull accounts for the fact that, on reaching state s, we can select an action a and future policy π
at random from the feasible pairs.² Proofs of convergence and correctness of the exact algorithm are in the technical report [13]. Of course, we cannot actually implement the algorithm of Fig. 2, since it requires variables whose values are convex sets of vectors. But, we can approximate V(s) by choosing a finite set of witness vectors W ⊂ R^P and storing V(s, w) = argmax_{v ∈ V(s)} (v · w) for each w ∈ W. V(s) is then approximated by the convex hull of { V(s, w) | w ∈ W }. If W samples the P-dimensional unit hypersphere densely enough, the maximum possible approximation error will be small. (In practice, each agent will probably want to pick W differently, to focus her computation on policies in the portion of the Pareto frontier where her own utility is relatively high.) As |W| increases, the error introduced at each step will go to zero. The approximate algorithm is given in more detail in the technical report [13].

²It is important for this randomization to occur after reaching state s to avoid introducing incentives to deviate, and it is also important for the randomization to be public.

Figure 3: Execution traces for our motion planning example. Left and center: with 2 witness vectors, the agents randomize between two selfish paths. Right: with 4-32 witnesses, the agents find a cooperative path. Steps where either player gets a goal are marked with ⋆.

Figure 4: Supply chain management problem. In the left figure, Player 1 is about to deliver part D to the shop, while player 2 is at the warehouse which sells B. The right figure shows the tradeoff between accuracy and computation time. The solid curve is the Pareto frontier for s_start, as computed using 8 witnesses per state. The dashed and dotted lines were computed using 2 and 4 witnesses, respectively. Dots indicate computed value vectors; ⋆ marks indicate the Nash points.

5 EXPERIMENTS

We tested our value iteration algorithm and negotiation procedure on two robotic planning domains: a joint motion planning problem and a supply-chain management problem. In our motion planning problem (Fig. 3), two players together control a two-wheeled robot, with each player picking the rotational velocity for one wheel. Each player has a list of goal landmarks which she wants to cycle through, but the two players can have different lists of goals. We discretized states based on X, Y, θ and the current goals, and discretized actions into stop, slow (0.45 m/s), and fast (0.9 m/s), for 9 joint actions and about 25,000 states. We discretized time at Δt = 1 s, and set γ = 0.99. For both the disagreement policy and all punishment policies, we used "always stop", since by keeping her wheel stopped either player can prevent the robot from moving. Planning took a few hours of wall clock time on a desktop workstation for 32 witnesses per state. Based on the planner's output, we ran our negotiation protocol to select an equilibrium. Fig. 3 shows the results: with limited computation the players pick two selfish paths and randomize equally between them, while with more computation they find the cooperative path. Our experiments also showed that limiting the computation available to one player allows the unrestricted player to reveal only some of the equilibria she knows about, tilting the outcome of the negotiation in her favor (see the technical report [13] for details).
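The witness-vector approximation suggests a backup of the following shape. This sketch is ours and simplifies the paper's scheme further: instead of forming the full direct sum over successor sets, it selects each successor's stored vector along the same witness direction. All interface conventions (R, T, V_dev as callables; V as a dict of stored vectors) are hypothetical:

```python
import numpy as np

def witness_backup(state, V, witnesses, actions, R, T, V_dev, gamma):
    """One approximate set-valued backup at `state`.

    V: dict mapping each state to a list of P-dim value vectors, one stored
       per witness direction (samples of the convex hull of V(s)).
    R(s, a) -> P-dim reward vector; T(s, a, s2) -> transition probability;
    V_dev(s, a) -> P-dim deviation-value vector (Eq. 2).
    """
    candidates = []
    for a in actions:
        for w in witnesses:
            # Continuation value: per successor, pick the stored vector
            # scoring best along w (a simplification of the direct sum).
            cont = sum(T(state, a, s2) * max(V[s2], key=lambda u: float(u @ w))
                       for s2 in V)
            v = R(state, a) + gamma * cont
            if np.all(v >= V_dev(state, a)):   # discard unenforceable vectors
                candidates.append(v)
    if not candidates:                          # nothing enforceable this sweep
        return V[state]
    # Store one maximizer per witness direction, as in the text above.
    return [max(candidates, key=lambda v: float(v @ w)) for w in witnesses]
```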
For our second experiment we examined a more realistic supply-chain problem. Here each player is a parts supplier competing for the business of an engine manufacturer. The manufacturer doesn't store items and will only pay for parts which can be used immediately. Each player controls a truck which moves parts from warehouses to the assembly shop; she pays for parts when she picks them up, and receives payment on delivery. Each player gets parts from different locations at different prices, and no one player can provide all of the parts the manufacturer needs. Each player's truck can be at six locations along a line: four warehouse locations (each of which provides a different type of part), one empty location, and the assembly shop. Building an engine requires five parts, delivered in the order A, {B, C}, D, E (parts B and C can arrive in either order). After E, the manufacturer needs A again. Players can move left or right along the line at a small cost, or wait for free. They can also buy parts at a warehouse (dropping any previous cargo), or sell their cargo if they are at the shop and the manufacturer wants it. Each player can only carry one part at a time and only one player can make a delivery at a time. Finally, any player can retire and sell her truck; in this case the game ends and all players get the value of their truck plus any cargo. The disagreement policy is for all players to retire at all states. Fig. 4 shows the computed sets V(s_start) for various numbers of witnesses. The more witnesses we use, the more accurately we represent the frontier, and the closer our final policy is to the true Nash point. All of the policies computed are "intelligent" and "cooperative": a human observer would not see obvious ways to improve them, and in fact would say that they look similar despite their differing payoffs. Players coordinate their motions, so that one player will drive out to buy part E while the other delivers part D. They sit idle only in order to delay the purchase of a part which would otherwise be delivered too soon.

6 CONCLUSION

Real-world planning problems involve negotiation among multiple agents with varying goals. To take all agents' incentives into account, the agents should find and agree on Pareto-dominant subgame-perfect Nash equilibria. For this purpose, we presented efficient planning and negotiation algorithms for general-sum stochastic games, and tested them on two robotic planning problems.

References

[1] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. Technical Report CMU-CS-02-135, School of Computer Science, Carnegie Mellon University, 2002.
[2] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Massachusetts, 1995.
[3] Prajit K. Dutta. A folk theorem for stochastic games. Journal of Economic Theory, 66:1-32, 1995.
[4] Yevgeniy Dodis, Shai Halevi, and Tal Rabin. A cryptographic solution to a game theoretic problem. In Lecture Notes in Computer Science, volume 1880, page 112. Springer, Berlin, 2000.
[5] Michael L. Littman and Peter Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In ACM Conference on Electronic Commerce, pages 48-54. ACM, 2003.
[6] E. Hansen, D. Bernstein, and S. Zilberstein. Dynamic programming for partially observable stochastic games. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, pages 709-715, 2004.
[7] Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. NBER Technical Working Paper No.
304, January 2005.
[8] R. Brafman and M. Tennenholtz. Efficient learning equilibrium. Artificial Intelligence, 2004.
[9] D. Fudenberg and E. Maskin. The folk theorem in repeated games with discounting or with incomplete information. Econometrica, 1986.
[10] David Levine. The castle on the hill. Review of Economic Dynamics, 3(2):330-337, 2000.
[11] Ariel Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, 1982.
[12] V. Krishna and R. Serrano. Multilateral bargaining. Review of Economic Studies, 1996.
[13] Chris Murray and Geoffrey J. Gordon. Multi-robot negotiation: approximating the set of subgame perfect equilibria in general-sum stochastic games. Technical Report CMU-ML-06-114, Carnegie Mellon University, 2006.
Gaussian and Wishart Hyperkernels

Risi Kondor, Tony Jebara
Computer Science Department, Columbia University
1214 Amsterdam Avenue, New York, NY 10027, U.S.A.
{risi,jebara}@cs.columbia.edu

Abstract
We propose a new method for constructing hyperkernels and define two promising special cases that can be computed in closed form. These we call the Gaussian and Wishart hyperkernels. The former is especially attractive in that it has an interpretable regularization scheme reminiscent of that of the Gaussian RBF kernel. We discuss how kernel learning can be used not just for improving the performance of classification and regression methods, but also as a stand-alone algorithm for dimensionality reduction and relational or metric learning.

1 Introduction
The performance of kernel methods, such as Support Vector Machines, Gaussian Processes, etc. depends critically on the choice of kernel. Conceptually, the kernel captures our prior knowledge of the data domain. There is a small number of popular kernels expressible in closed form, such as the Gaussian RBF kernel k(x, x') = exp(−‖x − x'‖²/(2σ²)), which boasts attractive and unique properties from an abstract function approximation point of view. In real world problems, however, and especially when the data is heterogeneous or discrete, engineering an appropriate kernel is a major part of the modelling process. It is natural to ask whether instead it might be possible to learn the kernel itself from the data. Recent years have seen the development of several approaches to kernel learning [5][1]. Arguably the most principled method proposed to date is the hyperkernels idea introduced by Ong, Smola and Williamson [8][7][9]. The current paper is a continuation of this work, introducing a new family of hyperkernels with attractive properties.

Most work on kernel learning has focused on finding a kernel which is subsequently to be used in a conventional kernel machine, turning learning into an essentially two-stage process: first learn the kernel, then use it in a conventional algorithm such as an SVM to solve a classification or regression task. Recently there has been increasing interest in using the kernel in its own right to answer relational questions about the dataset. Instead of predicting individual labels, a kernel characterizes which pairs of labels are likely to be the same, or related. Kernel learning can be used to infer the network structure underlying data. A different application is to use the learnt kernel to produce a low dimensional embedding via kernel PCA. In this sense, kernel learning can also be regarded as a dimensionality reduction or metric learning algorithm.

2 Hyperkernels
We begin with a brief review of the kernel and hyperkernel formalism. Let X be the input space, Y the output space, and {(x₁, y₁), (x₂, y₂), ..., (x_m, y_m)} the training data. By kernel we mean a symmetric function k : X × X → ℝ that is positive definite on X. Whenever we refer to a function being positive definite, we assume that it is also symmetric. Positive definiteness guarantees that k induces a Reproducing Kernel Hilbert Space (RKHS) F, which is a vector space of functions spanned by { k_x(·) = k(x, ·) | x ∈ X } and endowed with an inner product satisfying ⟨k_x, k_{x'}⟩ = k(x, x'). Kernel-based learning algorithms find a hypothesis f̂ ∈ F by solving some variant of the Regularized Risk Minimization problem

    f̂ = argmin_{f ∈ F} [ (1/m) Σ_{i=1}^m L(f(x_i), y_i) + (1/2) ‖f‖²_F ],

where L is a loss function of our choice.
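For concreteness, consider the squared loss L(f(x), y) = (f(x) − y)². Writing f̂(x) = Σᵢ αᵢ k(xᵢ, x) (the representer-theorem form recalled below) and setting the gradient of the objective to zero gives α = (K + (m/2) I)⁻¹ y, where K is the Gram matrix Kᵢⱼ = k(xᵢ, xⱼ). The following minimal sketch is our own illustration of this special case, with the Gaussian RBF chosen as the kernel:

```python
import numpy as np

def rbf_gram(X, Z, sigma=1.0):
    """Gram matrix of the Gaussian RBF kernel k(x, x') = exp(-||x-x'||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_rrm(X, y, sigma=1.0):
    """Minimize (1/m) sum_i (f(x_i) - y_i)^2 + 0.5 ||f||_F^2 over the RKHS of k.
    Stationarity gives alpha = (K + (m/2) I)^{-1} y."""
    m = len(y)
    alpha = np.linalg.solve(rbf_gram(X, X, sigma) + (m / 2) * np.eye(m), y)
    return lambda X_new: rbf_gram(X_new, X, sigma) @ alpha
```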
By the Representer Theorem [2], f̂ is expressible in the form f̂(x) = Σ_{i=1}^m αᵢ k(xᵢ, x) for some α₁, α₂, ..., α_m ∈ ℝ.

The idea expounded in [8] is to set up an analogous optimization problem for finding k itself in the RKHS of a hyperkernel K : X̲ × X̲ → ℝ, where X̲ = X². We will sometimes view K as a function of four arguments, K((x₁, x₁'), (x₂, x₂')), and sometimes as a function of two pairs, K(x̲₁, x̲₂), with x̲₁ = (x₁, x₁') and x̲₂ = (x₂, x₂'). To induce an RKHS, K must be positive definite in the latter sense. Additionally, we have to ensure that the solution of our regularized risk minimization problem is itself a kernel. To this end, we require that the functions K_{x₁,x₁'}(x₂, x₂') that we get by fixing the first two arguments of K((x₁, x₁'), (x₂, x₂')) be symmetric positive definite kernels in the remaining two arguments.

Definition 1. Let X be a nonempty set, X̲ = X × X and K : X̲ × X̲ → ℝ with K_{x̲}(·) = K(x̲, ·) = K(·, x̲). Then K is called a hyperkernel on X if and only if
1. K is positive definite on X̲, and
2. for any x̲ ∈ X̲, K_{x̲} is positive definite on X.

Denoting the RKHS of K by 𝒦, potential kernels lie in the cone 𝒦^pd = { k ∈ 𝒦 | k is pos. def. }. Unfortunately, there is no simple way of restricting kernel learning algorithms to 𝒦^pd. Instead, we will restrict ourselves to the positive quadrant 𝒦⁺ = { k ∈ 𝒦 | ⟨k, K_{x̲}⟩ ≥ 0 ∀ x̲ ∈ X̲ }, which is a subcone of 𝒦^pd. The actual learning procedure involved in finding k is very similar to conventional kernel methods, except that now regularized risk minimization is to be performed over all pairs of data points:

    k̂ = argmin_{k ∈ 𝒦*} [ Q(X, Y, k) + (1/2) ‖k‖²_𝒦 ],    (1)

where Q is a quality functional describing how well k fits the training data and 𝒦* = 𝒦⁺. Several candidates for Q are described in [8]. If 𝒦* has the property that for any S ⊂ X̲ the orthogonal projection of any k ∈ 𝒦* to the subspace spanned by { K_{x̲} | x̲ ∈ S } remains in 𝒦*, then k̂ is expressible as

    k̂(x, x') = Σ_{i,j=1}^m α_ij K_{(x_i, x_j)}(x, x') = Σ_{i,j=1}^m α_ij K((x_i, x_j), (x, x'))    (2)

for some real coefficients (α_ij)_{i,j}. In other words, we have a hyper-representer theorem. It is easy to see that for 𝒦* = 𝒦⁺ this condition is satisfied provided that K((x₁, x₁'), (x₂, x₂')) ≥ 0 for all x₁, x₁', x₂, x₂' ∈ X. Thus, in this case to solve (1) it is sufficient to optimize the m² variables (α_ij)_{i,j=1}^m, introducing the additional constraints α_ij ≥ 0 to enforce k̂ ∈ 𝒦⁺.

Finding functions that satisfy Definition 1 and also make sense in terms of regularization theory or practical problem domains is not trivial. Some potential choices are presented in [8]. In this paper we propose some new families of hyperkernels. The key tool we use is the following simple lemma.

Lemma 1. Let {g_z : X → ℝ} be a family of functions indexed by z ∈ Z and let h : Z × Z → ℝ be a kernel. Then

    k(x, x') = ∫∫ g_z(x) h(z, z') g_{z'}(x') dz dz'    (3)

is a kernel on X. Furthermore, if h is pointwise positive (h(z, z') ≥ 0) and { g_z : X × X → ℝ } is a family of pointwise positive kernels, then

    K((x₁, x₁'), (x₂, x₂')) = ∫∫ g_{z₁}(x₁, x₁') h(z₁, z₂) g_{z₂}(x₂, x₂') dz₁ dz₂    (4)

is a hyperkernel on X, and it satisfies K((x₁, x₁'), (x₂, x₂')) ≥ 0 for all x₁, x₁', x₂, x₂' ∈ X.

3 Convolution hyperkernels
One interpretation of a kernel k(x, x') is that it quantifies some notion of similarity between points x and x'. For the Gaussian RBF kernel, and heat kernels in general, this similarity can be regarded as induced by a diffusion process in the ambient space [4].
Just as physical substances diffuse in space, the similarity between x and x' is mediated by intermediate points, in the sense that by virtue of x being similar to some x'' and x'' being similar to x', x and x' themselves become similar to each other. This captures the natural transitivity of similarity. Specifically, the normalized Gaussian kernel on ℝⁿ of variance 2t = σ²,

    k_t(x, x') = (4πt)^{−n/2} e^{−‖x − x'‖²/(4t)},

satisfies the well known convolution property

    k_t(x, x') = ∫ k_{t/2}(x, x'') k_{t/2}(x'', x') dx''.    (5)

Such kernels are by definition homogeneous and isotropic in the ambient space. What we hope for from the hyperkernels formalism is to be able to adapt to the inhomogeneous and anisotropic nature of training data, while retaining the transitivity idea in some form. Hyperkernels achieve this by weighting the integrand of (5) in relation to what is "on the other side" of the hyperkernel. Specifically, we define convolution hyperkernels by setting g_z(x, x') = r(x, z) r(x', z) in (4) for some r : X × X → ℝ. By (3), the resulting hyperkernel always satisfies the conditions of Definition 1.

Definition 2. Given functions r : X × X → ℝ and h : X × X → ℝ where h is positive definite, the convolution hyperkernel induced by r and h is

    K((x₁, x₁'), (x₂, x₂')) = ∫∫ r(x₁, z₁) r(x₁', z₁) h(z₁, z₂) r(x₂, z₂) r(x₂', z₂) dz₁ dz₂.    (6)

A good way to visualize the structure of convolution hyperkernels is to note that (6) is proportional to the likelihood of a chain-structured graphical model whose only requirements are to have the same potential function φ₁ at each of the extremities and a positive definite potential function φ₂ at the core.

3.1 The Gaussian hyperkernel
To make the foregoing more concrete we now investigate the case where r(x, x') and h(z, z') are Gaussians. To simplify the notation we use the shorthand

    ⟨x, x'⟩_{σ²} = (2πσ²)^{−n/2} e^{−‖x − x'‖²/(2σ²)}.

The Gaussian hyperkernel on X = ℝⁿ is then defined as

    K((x₁, x₁'), (x₂, x₂')) = ∫_X ∫_X ⟨x₁, z⟩_{σ²} ⟨z, x₁'⟩_{σ²} ⟨z, z'⟩_{σ_h²} ⟨x₂, z'⟩_{σ²} ⟨z', x₂'⟩_{σ²} dz dz'.    (7)

Fixing x̲ and completing the square we have

    ⟨x₁, z⟩_{σ²} ⟨z, x₁'⟩_{σ²} = (2πσ²)^{−n} exp( −(1/(2σ²)) ( ‖z − x₁‖² + ‖z − x₁'‖² ) ) = ⟨x₁, x₁'⟩_{2σ²} ⟨z, x̄₁⟩_{σ²/2},

where x̄ᵢ = (xᵢ + xᵢ')/2. By the convolution property of Gaussians it follows that

    K((x₁, x₁'), (x₂, x₂')) = ⟨x₁, x₁'⟩_{2σ²} ⟨x₂, x₂'⟩_{2σ²} ∫_X ∫_X ⟨x̄₁, z⟩_{σ²/2} ⟨z, z'⟩_{σ_h²} ⟨z', x̄₂⟩_{σ²/2} dz dz'
                            = ⟨x₁, x₁'⟩_{2σ²} ⟨x₂, x₂'⟩_{2σ²} ⟨x̄₁, x̄₂⟩_{σ² + σ_h²}.    (8)

It is an important property of the Gaussian hyperkernel that it can be evaluated in closed form. A noteworthy special case is when h(x, x') = δ(x, x'), corresponding to σ_h² → 0. At the opposite extreme, in the limit σ_h² → ∞, the hyperkernel decouples into the product of two RBF kernels. Since the hyperkernel expansion (2) is a sum over hyperkernel evaluations with one pair of arguments fixed, it is worth examining what these functions look like:

    K_{x₁,x₁'}(x₂, x₂') ∝ exp( −‖x̄₁ − x̄₂‖²/(2(σ² + σ_h²)) ) exp( −‖x₂ − x₂'‖²/(2σ'²) )    (9)

with σ' = √2 σ. This is really a conventional Gaussian kernel between x₂ and x₂' multiplied by a spatially varying Gaussian intensity factor depending on how close the mean of x₂ and x₂' is to the mean of the training pair. This can be regarded as a localized Gaussian, and the full kernel (2) will be a sum of such terms with positive weights.
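Since (8) is a product of three normalized Gaussians, the Gaussian hyperkernel can be evaluated directly. The sketch below is our own minimal implementation of (8) (the function names and the NumPy choice are ours, not from the paper):

```python
import numpy as np

def gauss(x, y, var):
    """Normalized Gaussian <x, y>_var = (2 pi var)^(-n/2) exp(-||x-y||^2 / (2 var))."""
    n = x.shape[0]
    return (2 * np.pi * var) ** (-n / 2) * np.exp(-np.sum((x - y) ** 2) / (2 * var))

def gaussian_hyperkernel(x1, x1p, x2, x2p, s2, s2_h):
    """Eq. (8): K((x1,x1'),(x2,x2')) =
       <x1,x1'>_{2 s2} <x2,x2'>_{2 s2} <xbar1,xbar2>_{s2 + s2_h}."""
    xbar1, xbar2 = (x1 + x1p) / 2, (x2 + x2p) / 2
    return (gauss(x1, x1p, 2 * s2) * gauss(x2, x2p, 2 * s2)
            * gauss(xbar1, xbar2, s2 + s2_h))
```

Evaluating this for all pairs of training pairs yields the m² × m² matrix needed to optimize the coefficients α_ij in (2).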
As x 2 and x02 move around in X , whichever localized Gaussians are centered close to their mean will dominate the sum. By changing the (?ij ) weights, the kernel learning algorithm can choose k from a highly flexible class of potential kernels. The close relationship of K to the ordinary ? Gaussian RBF 0kernel ? is further borne out by changing coordinates to x ? = (x + x0 ) / 2 and x ? = (x ? x ) / 2, which factorizes the hyperkernel in the form    ? x1 , x ? x1 , x K((? x1 , x ?1 ), (? x2 , x ?2 )) = K(? ?2 )K(? ?2 ) = h? x1 , x ?2 i2(?2 +?2 ) h? x1 , 0i?2 h? x2 , 0i?2 . h ? ? K, ? where K ? Omitting details for brevity, the consequences of this include that K = K ? is the one-dimensional space generis the RKHS of a Gaussian kernel over X , while K ? x) h? ated by h? x, 0i?2 : each k ? K can be written as k(? x, x ?) = k(? x, 0i?2 . Furthermore, the regularization operator ? (defined by hk, k 0 iK = h?k, ?k 0 iL2 [10]) will be Z Z 2 2 2 i?x h? x, 0i?2 ? b(?) e d? 7? h? x, 0i?2 e(? +?h ) ? /2 ? b(?) ei?x d? where ? b(?) is the Fourier transform of b k(b x), establishing the same exponential regularization penalty scheme in the Fourier components of k? that is familiar from the theory of Gaussian RBF kernels. In summary, K behaves in (? x1 , x ?2 ) like a Gaussian kernel with variance 2(? 2 + ?h2 ), but in x ? it just effects a one-dimensional feature mapping. 4 Anisotropic hyperkernels With the hyperkernels so far far we can only learn kernels that are a sum of rotationally invariant terms. Consequently, the learnt kernel will have a locally isotropic character. Yet, rescaling of the axes and anisotropic dilations are one of the most common forms of variation in naturally occurring data that we would hope to accomodate by learning the kernel. 4.1 The Wishart hyperkernel We define the Wishart hyperkernel as Z Z K((x1 , x01 ), (x2 , x02 )) = hx1 , zi? hz, x01 i? hx2 , zi? hz, x02 i? IW(?; C, r) dz d?. ?0 (10) X where 1 hx, x0 i? = 0 > n/2 1/2 e?(x?x ) ??1 (x?x0 )/2 (2?) |?| and IW(?; C, r) is the inverse Wishart distribution r/2 |C | Zr,n | ? | (n+r+1)/2 ,   exp ?tr ??1 C /2 over positive definite matrices (denoted ?  0) [6]. Here r is an integer Qn parameter, C is an rn/2 n(n?1)/4 n ? n positive definite parameter matrix and Zr,n = 2 ? i=1 ?((r+1?i)/2) is a normalizing factor. The Wishart hyperkernel can be seen as the anisotropic analog of (7) in the limit ?h2 ? 0, hz, z 0 i?2 ? ?(z, z 0 ). Hence, by Lemma 1, it is a valid hyperkernel. In h analogy with (8), Z 0 0 K((x1 , x1 ), (x2 , x2 )) = hx1 , x01 i2? hx2 , x02 i2? hx1 , x2 i? IW(?; C, r) d? . (11) ?0 > By using the identity v A v = tr(A(vv > )), hx, x0 i? IW(?; C, r) = |C | (2?)n/2 Z r,n r/2 |?| (n+r+2)/2   exp ?tr ??1 (C +S) /2 = r/2 Zr+1,n |C | IW( ? ; C +S, r+1 ) , n/2 (2?) Zr,n | C + S |(r+1)/2 where S = (x?x0 )(x?x0 )> . Cascading this through each of the terms in the integrand of (11) and noting that the integral of a Wishart density is unity, we conclude that K((x1 , x01 ), (x2 , x02 )) ? |C | r/2 | C + Stot | (r+3)/2 , (12) where Stot = S1 + S2 + S? ; Si = 12 (xi ? x0i )(xi ? x0i )>; and S? = (x1 ? x2 )(x1 ? x2 )> . We can read off that for given k x1 ? x01 k, k x2 ? x02 k, and k x ? x0 k, the hyperkernel will favor quadruples where x1 ? x01 , x2 ? x02 , and x ? x0 are close to parallel to each other and to the largest eigenvector of C. It is not so easy to immediately see the dependence of K on the relative distances between x1 , x01 , x2 and x02 . 
To better expose the qualitative behavior of the Wishart hyperkernel, we fix (x₁, x₁'), assume that C = cI for some c ∈ ℝ and use the identity |cI + vvᵀ| = c^{n−1}(c + ‖v‖²) to write

    K_{x₁,x₁'}(x₂, x₂') ∝ [ Q_c(2S₁, 2S̄) / (c + 4‖x̄₁ − x̄₂‖²)^{1/4} ]^{(r+3)/2} [ Q_c(S₁ + S̄, S₂) / (c + ‖x₂ − x₂'‖²)^{1/4} ]^{r+3},

where Q_c(A, B) is the affinity

    Q_c(A, B) = |cI + 2A|^{1/4} |cI + 2B|^{1/4} / |cI + A + B|^{1/2}.

This latter expression is a natural positive definite similarity metric between positive definite matrices, as we can see from the fact that it is the overlap integral (Bhattacharyya kernel)

    Q_c(A, B) = ∫ [ ⟨x, 0⟩_{(cI+2A)⁻¹} ]^{1/2} [ ⟨x, 0⟩_{(cI+2B)⁻¹} ]^{1/2} dx

between two zero-centered Gaussian distributions with inverse covariances cI + 2A and cI + 2B, respectively [3].

Figure 1: The first two panes show the separation of '3's and '8's in the training and testing sets respectively achieved by the Gaussian hyperkernel (the plots show the data plotted by its first two eigenvectors according to the learned kernel k). The right hand pane shows a similar KernelPCA plot but based on a fixed RBF kernel.

5 Experiments
We conducted preliminary experiments with the hyperkernels in relation learning between pairs of datapoints. The idea here is that the learned kernel k naturally induces a distance metric d(x, x') = √(k(x, x) − 2k(x, x') + k(x', x')), and in this sense kernel learning is equivalent to learning d. Given a labeled dataset, we can learn a kernel which effectively remaps the data in such a way that data points with the same label are close to each other, while those with different labels are far apart.

For classification problems (yᵢ being the class label), a natural choice of quality functional similar to the hinge loss is Q(X, Y, k) = (1/m²) Σ_{i,j=1}^m | 1 − y_ij k(xᵢ, xⱼ) |₊, where |z|₊ = z if z ≥ 0 and |z|₊ = 0 for z < 0, while y_ij = 1 if yᵢ = yⱼ and y_ij = −1 otherwise. The corresponding optimization problem learns k(x, x') = Σ_{i=1}^m Σ_{j=1}^m α_ij K((x, x'), (xᵢ, xⱼ)) + b minimizing

    (1/2) Σ_{i,j} Σ_{i',j'} α_ij α_{i'j'} K((xᵢ, xⱼ), (x_{i'}, x_{j'})) + C Σ_{i,j} ξ_ij

subject to the classification constraints

    y_ij [ Σ_{i',j'} α_{i'j'} K((x_{i'}, x_{j'}), (xᵢ, xⱼ)) + b ] ≥ 1 − ξ_ij,    ξ_ij ≥ 0,    α_ij ≥ 0

for all pairs of i, j ∈ {1, 2, ..., m}. In testing we interpret k(x, x') > 0 to mean that x and x' are of the same class and k(x, x') ≤ 0 to mean that they are of different classes.

As an illustrative example we learned a kernel (and hence, a metric) between a subset of the NIST handwritten digits¹. The training data consisted of 20 '3's and 20 '8's randomly rotated by ±45 degrees to make the problem slightly harder. Figure 1 shows that a kernel learned by the above strategy with a Gaussian hyperkernel with parameters set by cross validation is extremely good at separating the two classes in training as well as testing. In comparison, in a similar plot for a fixed RBF kernel the '3's and '8's are totally intermixed. Interpreting this as an information retrieval problem, we can imagine inflating a ball around each data point in the test set and asking how many other data points in this ball are of the same class. The corresponding area under the curve (AUC) in the original space is just 0.5575, while in the hyperkernel space it is 0.7341.
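One direct way to solve this quadratic program is with an off-the-shelf convex solver. The sketch below uses cvxpy, which is our stand-in choice (the implementation reported later in the paper used MOSEK). Here K4 denotes the m² × m² matrix of hyperkernel evaluations K((xᵢ, xⱼ), (x_{i'}, x_{j'})), y the ±1 pair labels, and setting conic=False drops the α ≥ 0 constraints, giving the 'linear' variant discussed in the next section:

```python
import numpy as np
import cvxpy as cp

def learn_pairwise_kernel(K4, y, C=10.0, conic=True):
    """Solve the hinge-loss hyperkernel QP: minimize
    0.5 * a' K4 a + C * sum(xi)  s.t.  y_ij * ((K4 a)_ij + b) >= 1 - xi_ij.
    Assumes K4 is a symmetric PSD numpy array and y a numpy array of +/-1."""
    n = K4.shape[0]                       # n = m^2 pair variables
    a = cp.Variable(n, nonneg=conic)      # conic: alpha_ij >= 0 keeps k pos. def.
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    K4r = K4 + 1e-8 * np.eye(n)           # tiny ridge for numerical PSD-ness
    obj = cp.Minimize(0.5 * cp.quad_form(a, K4r) + C * cp.sum(xi))
    cons = [cp.multiply(y, K4 @ a + b) >= 1 - xi]
    cp.Problem(obj, cons).solve()
    return a.value, b.value
```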
¹ Provided at http://yann.lecun.com/exdb/mnist/ courtesy of Yann LeCun and Corinna Cortes.

Figure 2: Test area under the curve (AUC) for Olivetti face recognition under varying σ and σ_h (one panel per σ_h ∈ {0σ, 1σ, 2σ, 4σ, 6σ, 10σ}; each panel plots AUC against σ for the SVM, the linear hyperkernel and the conic hyperkernel).

We ran a similar experiment but with multiple classes on the Olivetti faces dataset, which consists of 92 × 112 pixel normalized gray-scale images of 30 individuals in 10 different poses. Here we also experimented with dropping the α_ij ≥ 0 constraints, which breaks the positive definiteness of k, but might still give a reasonable similarity measure. The first case we call "conic hyperkernels", whereas the second are just "linear hyperkernels". Both involve solving a quadratic program over 2m² + 1 variables. Finally, as a baseline, we trained an SVM over pairs of datapoints to predict y_ij, representing (xᵢ, xⱼ) with a concatenated feature vector [xᵢ, xⱼ] and using a Gaussian RBF between these concatenations.

The results on the Olivetti dataset are summarized in Figure 2. We trained the system with m = 20 faces and considered all pairs of the training data-points (i.e. 400 constraints) to find a kernel that predicted the labeling matrix. When speed becomes an issue it often suffices to work with a subsample of the binary entries in the m × m label matrix and thus avoid having m² constraints. Also, we only need to consider half the entries due to symmetry. Using the learned kernel, we then test on 100 unseen faces and predict all their pairwise kernel evaluations, in other words, 10⁴ predicted pair-wise labelings. Test error rates are averaged over 10 folds of the data. For both the baseline Gaussian RBF and the Gaussian hyperkernels we varied the σ parameter from 0.1 to 0.6. For the Gaussian hyperkernel we also varied σ_h from 0 to 10σ. We used a value of C = 10 for all experiments and for all algorithms. The value of C had very little effect on the testing accuracy.

Using a conic hyperkernel combination did best in labeling new faces. The advantage over SVMs is dramatic. The support vector machine can only achieve an AUC of less than 0.75 while the Gaussian hyperkernel methods achieve an AUC of almost 0.9 with only T = 20 training examples. While the difference between the conic and linear hyperkernel methods is harder to see, across all settings of σ and σ_h the conic combination outperformed the linear combination over 92% of the time. The conic hyperkernel combination is also the only method of the three that guarantees a true Mercer kernel as an output, which can then be converted into a valid metric. The average runtime for the three methods was comparable. The SVM took 2.08s ± 0.18s, the linear hyperkernel took 2.75s ± 0.10s and the conic hyperkernel took 7.63s ± 0.50s to train on m = 20 faces with m² constraints.
We implemented quadratic programming using the MOSEK optimization package on a single CPU workstation.

6 Conclusions
The main barrier to hyperkernels becoming more popular is their high computational demands (out of the box algorithms run in O(m⁶) time as opposed to O(m³) in regular learning). In certain metric learning and on-line settings however this need not be forbidding, and is compensated for by the elegance and generality of the framework. The Gaussian and Wishart hyperkernels presented in this paper are in a sense canonical, with intuitively appealing interpretations. In the case of the Gaussian hyperkernel we even have a natural regularization scheme. Preliminary experiments show that these new hyperkernels can capture the inherent structure of some input spaces. We hope that their introduction will give a boost to the whole hyperkernels field.

Acknowledgements
The authors wish to thank Zoubin Ghahramani, Alex Smola and Cheng Soon Ong for discussions related to this work. This work was supported in part by National Science Foundation grants IIS-0347499, CCR-0312690 and IIS-0093302.

References
[1] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola. On kernel-target alignment. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 367-373, Cambridge, MA, 2002. MIT Press.
[2] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. J. Math. Anal. Applic., 33:82-95, 1971.
[3] R. Kondor and T. Jebara. A kernel between sets of vectors. In Machine Learning: Tenth International Conference, ICML 2003, 2003.
[4] R. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Machine Learning: Proceedings of the Nineteenth International Conference (ICML '02), 2002.
[5] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[6] T. P. Minka. Inferring a Gaussian distribution, 2001. Tutorial paper available at http://www.stat.cmu.edu/~minka/papers/learning.html.
[7] C. S. Ong and A. J. Smola. Machine learning using hyperkernels. In Proceedings of the International Conference on Machine Learning, 2003.
[8] Cheng Soon Ong, Alexander J. Smola, and Robert C. Williamson. Hyperkernels. In S. Becker, S. Thrun and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 478-485. MIT Press, Cambridge, MA, 2003.
[9] Cheng Soon Ong, Alexander J. Smola, and Robert C. Williamson. Learning the kernel with hyperkernels. Submitted to the Journal of Machine Learning Research, 2003.
[10] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
PHASE TRANSITIONS IN NEURAL NETWORKS

Joshua Chover
University of Wisconsin, Madison, WI 53706

ABSTRACT
Various simulations of cortical subnetworks have evidenced something like phase transitions with respect to key parameters. We demonstrate that such transitions must indeed exist in analogous infinite array models. For related finite array models classical phase transitions (which describe steady-state behavior) may not exist, but there can be distinct qualitative changes in ("metastable") transient behavior as key system parameters pass through critical values.

INTRODUCTION
Suppose that one stimulates a neural network - actual or simulated - and in some manner records the subsequent firing activity of cells. Suppose further that one repeats the experiment for different values of some parameter (p) of the system, and that one finds a "critical value" (p_c) of the parameter, such that (say) for values p < p_c the activity tends to be much higher than it is for values p > p_c. Then, by analogy with statistical mechanics (where, e.g., p may be temperature, with critical values for boiling and freezing) one can say that the neural network undergoes a "phase transition" at p_c. Intracellular phase transitions, parametrized by membrane potential, are well known. Here we consider intercellular phase transitions. These have been evidenced in several detailed cortical simulations: e.g., of the piriform cortex [1] and of the hippocampus [2]. In the piriform case, the parameter p represented the frequency of high amplitude spontaneous EPSPs received by a typical pyramidal cell; in the hippocampal case, the parameter was the ratio of inhibitory to excitatory cells in the system.

By what mechanisms could approach to, and retreat from, a critical value of some parameter be brought about? An intriguing conjecture is that neuromodulators can play such a role in certain networks, temporarily raising or depressing synaptic efficacies [3]. What possible interesting consequences could approach to criticality have for system performance? Good effects could be these: for a network with plasticity, heightened firing response to a stimulus can mean faster changes in synaptic efficacies, which would bring about faster memory storage. More and longer activity could also mean faster access to memory. A bad effect of near-criticality - depending on other parameters - can be wild, epileptiform activity.

Phase transitions as they might relate to neural networks have been studied by many authors [4]. Here, for clarity, we look at a particular category of network models - abstracted from the piriform cortex setting referred to above - and show the following:
a) For "elementary" reasons, phase transition would have to exist if there were infinitely many cells; and the near-subcritical state involves prolonged cellular firing activity in response to an initial stimulation.
b) Such prolonged firing activity takes place for analogous large finite cellular arrays - as evidenced also by computer simulations.

What we shall be examining is space-time patterns which describe the mid-term transient activity of (Markovian) systems that tend to silence (with high probability) in the long run. (There is no reference to energy functions, nor to long-run stable firing rates - as such rates would be zero in most of our cases.)

In the following models time will proceed in discrete steps. (In the more complicated settings these will be short
in comparison to other time constants, so that the effect of quantization becomes smaller.) The parameter p will be the probability that at any given time a given cell will experience a certain amount of excitatory "spontaneous firing" input: by itself this amount will be insufficient to cause the cell to fire, but in conjunction with sufficiently many excitatory inputs from other cells it can assist in reaching firing threshold. (Other related parameters such as average firing threshold value and average efficacy value give similar results.) In all the models there is a refractory period after a cell fires, during which it cannot fire again; and there may be local (shunt type) inhibition by a firing cell on near neighbors as well as on itself - but there is no long-distance inhibition. We look first at limiting cases where there are infinitely many cells and - classically - phase transition appears in a sharp form.

A "SIMPLE" MODEL
We consider an infinite linear array of similar cells which obey the following rules, pictured in Fig. 1A:
(i) If cell k fires at time n, then it must be silent at time n+1;
(ii) if cell k is silent at time n but both of its neighbors k-1 and k+1 do fire at time n, then cell k fires at time n+1;
(iii) if cell k is silent at time n and just one of its neighbors (k-1 or k+1) fires at time n, then cell k will fire at time n+1 with probability p and not fire with probability 1-p, independently of similar decisions at other cells and at other times.

Fig. 1. "Simple model". A: firing rules; cells are represented horizontally, time proceeds downwards; filled squares denote firing. B: sample development.

Thus, effectively, signal propagation speed here is one cell per unit time, and a cell's firing threshold value is 2 (EPSP units). If we stimulate a cell to fire at time n = 0, will its influence necessarily die out or can it go on forever? (See Fig. 1B.) For an answer we note that in this simple case the firing pattern (if any) at time n must be an alternating stretch of firing/silent cells of some length, call it L_n. Moreover, L_{n+1} = L_n + 2 with probability p^2 (when there are spontaneous firing assists on both ends of the stretch), or L_{n+1} = L_n - 2 with probability (1-p)^2 (when there is no assist at either end of the stretch), or L_{n+1} = L_n with probability 2p(1-p) (when there is an assist at just one end of the stretch). Starting with any finite alternating stretch L_0, the successive values L_n constitute a "random walk" among the nonnegative integers. Intuition and simple analysis [5] lead to the same conclusion: if the probability for L_n to decrease ((1-p)^2) is greater than that for it to increase (p^2) - i.e., if the average step taken by the random walk is negative - then ultimately L_n will reach 0 and the firing response dies out. Contrariwise, if
MORE mMPLEX MODELS For an infinite linear array of cells, as sketched in Fig. 3 . we describe now a much more general (and hopefully more realistic) set of rules: (i') A cell cannot fire, nor receive excitatory inputs. at. time n if it has fired at any time during the preceding ~ Hme units (refraction and feedback inhibition). (11 .) Each cell x has a local "inhibitory neighborhood" consisting of a number (j) of cells to its immediate right. and left.. The given cell x cannot. fire or receive excitatory inputs at Hme n if any other cell y in its inhibi tory neighborhood has fired at. any t .ime between t. and t+m I uni ts preceding n, where to x t . is the t .ime it. would take for a message to travel from at. a speed of VI cells per unit time. (This rule y represents local shunt~type inhibition.) (iii') Each cell x has an "excitatory neighborhood" consisting of a number (e) of cells to the immediate right. and left of its inhibitory neighborhood. If a cell y in that. neighborhood fires at a certain time. that firing causes a unit impulse to travel to cell x at a speed of vE cells per uni t. time. The impulse is received at. x subject to rules (i') and (11'). 196 (iv') All cells share a "firing threshold" value 9 and an "integraUon Ume constant." s (s < 9). In addition each cell. at. each t.ime n and independent ly of other times and other cells. can receive a random amount. X of "spontaneous excitatory input." . n The variable Xn can have a general distribution: however. for simplicity we suppose here that. it. assumes only one of two values: b or O. with probabilities p and 1-p respecUvely. (We suppose that. b <. e. so that. the spontaneous "assist." itself is insufficient. for firing.) The above quant.i ties enter into the following firing rule: a cell will fire at. time n if it. is not. prevented by rules (i') and (ii') and if the total number of inputs from other cells. received during the integration "window" last.ing between t.imes n-s+1 and n inclusive. plus the assist. X , n equals or exceeds the threshold 9. (The propagat.ion speeds vI and VE and the neighborhoods are here given left.-right. syrrmetry merely for ease in exposi tion.) o0 0 0 tl 0 ? [J ? I~h t t '"" ~ Jl tl tl ? [J U ' 0 I 0 0 D ? 11 0 ~Iit' 0 Jl 0 '" I I )l,. Fig. 3. Message travel in complex model: (i')-(iv'). see text. rules Wi 11 such a mode 1 d i sp lay phase trans i t i on a t. some cr i t .i cal value of the spontaneous firing frequency p? The dependence of responses upon the ini t.ial condi tions and upon the various parameters is intricate and wi 11 affect. the answer. We briefly discuss here conditions under which the answer is again yes. (1) For a given configuration of parameters and a given ini Ual stimulation (of a stretch of cont.iguous cells) we compare the development. of the model's firing response first. to that. of an auxil iary "more act.ive" system: Suppose that. L now denotes the n distance at. t.ime n between the left:- and right.- most cells which are either firing or in refractory mode. Because no cell can fire wi thout. influence from others and because such influence travels at. a given speed, there is a maximal amount. (D) whereby L 1 can n+ exceed L. There is also a maximum probability Q(p) - which n 197 depends on the spontaneous firing parameter (whatever n). 'We can compare defined so that. An+l = An+D An+l = An-1 with probability A L n n with probability l-Q(p). more likely to increase than ou t . than p - that. Ln+ 1 ~ Ln with a random walk "A" L. 
In the many cases where n does, the average step size of wi 11 become negat.ive for p A L n Q(p) (viz., n and At each transition, Hence n Q(p) An is is more likely to die tends to zero as p DQ(p)+(-I)(I-Q(b?) below a "cri tical" value p. a Thus, as in the "simple" model above, the probability of ultimate die-out for the A, hence also for the L of the complex model, will be n 1 when 0 ~ p < pa . n (2) There will be a phase transition for the complex model if its probability of die out. - given the same parameters and initial stimulation is in (1) - becomes less than 1 for some p values with p < p < 1. Comparison of the complex process with a simpler a "less act.ive" process is difficul t. in general. However, there are parameter configurat.ions which ul timately can channel all or part. of the firing activity into a (space-t.ime) sublat.t .ice analgous to that. in Fig. 1. Fig. 4 illustrates such a case. For p sufficiently large there is posi tive probabili ty that. the act.ivity will not. die out, just as in the "simple" model. Fig. 4. Activity on a sublattice. (Parameter values: j=2, e=6, MR=2, M1=I, VR=V 1=I, 9=3, s=2, and b=I.) Rectangular areas indicate refract.ionlinhibi tion: diagonal lines, excitatory influence. 198 LARGE FINITE ARRAYS Consider now a large finite array of N cells, again as sketched in Fig. 3 ; and operating according to rules similar to (i')-(iv') above, with suitable modifications near the edges. Appropriately encoded, its activity can be described by a (huge) Markov transit.ion matrix, and - depending on the initial st.imulation - must. tend 5 to one of a set. of steady-state distribut.ions over firing patterns. For example, (a) if N is odd and the rules are those for Fig. I, then extinct.ion is the unique steady state, for any p (1 (since the L form a random n walk with "reflecUng" upper barrier). But, ?(3) if N is even and the cells are arranged in a ring, then, for any P with o < p < 1. both ext.inction and an alternate flip-flop firing pat.tern of period 2 are "traps" for the system - wi th relative long run probabilities determined by the initial state. See the dashed line in Fig. 2A for the extinction probability in the ?(3) case, and in Fig. 2B for the expected time until hitting a trap in the (a) case 1 (P(2) and the {(3) case. What quali tat.ive properties related to phase transi tion and critical p values carryover from the infinite to the finite array case? The (a) example above shows that long term activity may now be the same for all 0 ( p (1 but. that parameter intervals can exist. whose key feature is a particularly large expected t.ime before the system hi ts a trap. (Again. the cri tical region can depend upon the ini tial st.imulation.) Prior to being trapped the system spends its time among many states in a kind of "metastable" equilibrium. (We have some preliminary theoretical results on this conditional equilibrium and on its relation to the infinite array case. See also Ref. 6 concerning time scales for which certain corresponding infinite and finite stochastic automata systems display similar behavior . ) Simulat.ion of models satisfying rules (i' )-( iv') does indeed display large changes in length of firing activity corresponding to parameter changes near a critical value. See Fig. 5 for a typical example: As a function of p, the expected time until the system is trapped (for the given parameters) rises approximately linearly in the interval .05<p( .12, wi th most. runs resul ting in extinction - as is the case in Fig. 5A at. 
time n=115 (for p=.10). But for p > .15 a relatively rigid patterning sets in which leads with high probability to very long runs or to traps other than extinction - as is the case in Fig. 5B (p=.20) where the run is arbitrarily truncated at n=525. (The patterning is highly influenced by the large size of the excitatory neighborhoods.)

Fig. 5. Space-time firing patterns for one configuration of basic parameters. (There are 200 cells; j=2, e=178, M_R=10, M_I=9, v_E=v_I=7, θ=25, s=2, and b=12; 50 are stimulated initially.) A: p=.10. B: p=.20.

CONCLUSION
Mechanisms such as neuromodulators, which can (temporarily) bring spontaneous firing levels - or synaptic efficacies, or average firing thresholds, or other similar parameters - to near-critical values, can thereby induce large amplification of response activity to selected stimuli. The repertoire of such responses is an important aspect of the system's function.

[Acknowledgement: Thanks to C. Bezuidenhout and J. Kane for help with simulations.]

REFERENCES
1. M. Wilson, J. Bower, J. Chover, L. Haberly, 16th Neurosci. Soc. Mtg. Abstr. 370.11 (1986).
2. R. D. Traub, R. Miles, R.K.S. Wong, 16th Neurosci. Soc. Mtg. Abstr. 196.12 (1986).
3. A. Selverston, this conference; also, Model Neural Networks and Behavior, Plenum (1985); E. Marder, S. Hooper, J. Eisen, Synaptic Function, Wiley (1987) p. 305.
4. E.g.: W. Kinzel, Z. Phys. B58, p. 231 (1985); A. Noest, Phys. Rev. Lett. 57(1), p. 90 (1986); R. Durrett (to appear); G. Carpenter, J. Diff. Eqns. 23, p. 335 (1977); G. Ermentrout, S. Cohen, Biol. Cyb. 34, p. 137 (1979); H. Wilson, S. Cowan, Biophys. J. 12 (1972).
5. W. Feller, An Introd. to Prob. Th'y and Appl'ns, Vol. I, Wiley (1968), Ch. 14, 15.
6. T. Cox and A. Graven (to appear).
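As an illustration of the behavior summarized in Figs. 2 and 5 above, the following is a minimal simulation sketch (ours, not from the paper) of the "simple" model of rules (i)-(iii) on a finite line of cells, started from a single firing cell; the mean survival time rises sharply as p crosses the critical value p_c = 1/2:

```python
import random

def simple_model_lifetime(p, n_cells=201, max_steps=500, rng=random):
    """Simulate rules (i)-(iii) on a line of cells (a finite stand-in for the
    infinite array), starting from one firing cell; return the number of steps
    until all activity dies out, capped at max_steps."""
    firing = {n_cells // 2}
    for step in range(1, max_steps + 1):
        nxt = set()
        for k in range(n_cells):
            if k in firing:
                continue                      # rule (i): silent after firing
            left, right = (k - 1) in firing, (k + 1) in firing
            if left and right:
                nxt.add(k)                    # rule (ii): both neighbors fire
            elif (left or right) and rng.random() < p:
                nxt.add(k)                    # rule (iii): one neighbor, prob. p
        firing = nxt
        if not firing:
            return step
    return max_steps

for p in (0.3, 0.45, 0.5, 0.55):
    runs = [simple_model_lifetime(p) for _ in range(20)]
    print(p, sum(runs) / len(runs))           # cf. the sharp rise in Fig. 2B
```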
Closed-Form Inversion of Backpropagation Networks: Theory and Optimization Issues

Michael L. Rossen
HNC, Inc.
5501 Oberlin Drive
San Diego, CA 92121
rossen@amos.ucsd.edu

Abstract
We describe a closed-form technique for mapping the output of a trained backpropagation network into input activity space. The mapping is an inverse mapping in the sense that, when the image of the mapping in input activity space is propagated forward through the normal network dynamics, it reproduces the output used to generate that image. When more than one such inverse mapping exists, our inverse mapping is special in that it has no projection onto the nullspace of the activation flow operator for the entire network. An important by-product of our calculation, when more than one inverse mapping exists, is an orthogonal basis set of a significant portion of the activation flow operator nullspace. This basis set can be used to obtain an alternate inverse mapping that is optimized for a particular real-world application.

1 Overview
This paper describes a closed-form technique for mapping a particular output of a trained backpropagation network into input activity space. The mapping produced by our technique is an inverse mapping in the sense that, when the image in input space of the mapping of an output activity is propagated forward through the normal network dynamics, it reproduces the output used to generate it.¹ When multiple inverse mappings exist, our inverse mapping is unique in that it has no

¹ It is possible that no such inverse mappings exist. This point is addressed in section 4.
The symbol 8 separat.es operators sequent.ially applied to the argument. Since the sub-operators constit.uting A are applied sequentially, the inverse that we calculate, A+ , is equal to a composit.ion of inverses of the individual sub-operators, with the order of the composition reversed from the order in activation flow. The closed-form mapping of a specified output !(O) to input space is then: (2) A+ GL'o) W?o,H) (:> u- 1 8 }V~I,I) 8 u- 1 8 f(o), where u- 1 : Inverse of t.he bipolar logistic sigmoid; W(dest,soUJ'ce) : Pseudo-inverse of W(de$t,source) . 869 870 Rossen Subject to the existence conditions discussed in section 4, !{I) is an inverse mapping of!{o) in that it reproduces f(O) when it is propagated forward through the network: f(O) A0~I). (3) We use singular value decomposit.ion (SVD), a well-known matrix analysis method (e.g., [Lancaster, 1985]), to ca.lculate a particular matrix inverse, the pseudo-inverse W(~ .) (also known as the Moore-Penrose inverse) of each connection weight matrix J" block. In the case of W( ll,I), for example, SVD yields the two unitary matrices, S(ll'!) and V(H,I), and a rectangular matrix V(H'!) , all zero except for the singular values on its diagonal, sllch that S(fl,I)V(fl,!) V(H, I) (4) V(fl,/) V(fl, 1) S(fl ,/) , (5) where V CH ,/) , V("fl,J) : Transposes of SCH,/) , vtJ,J) : Pseudo-inverse V(H,l), respectively; of V(ll,I), which is simply it.s transpose wit.h each non-zero singular value replaced by its inverse. 3 Uniqueness and Optimization Considerations The pseudo-inverse (calculated by SVD or other methods) is one of a class of solutions t.o the inverse of a mat.rix operator that may exist, called generalized inverses. For our purposes, each of these generalized inverses, if they exist, are inverses in the useful sense tha.t when subst.it.ued for W(j,i) in eq. (2), the resultant !{/) will be and inverse mapping image as defined by eq. (3). When a matrix operator W does not have a nullspace, the pseudo-inverse is the only generalized inverse that exists. If W does have a nullspace, the pseudo-inverse is special in that its range cont.ains no projection onto the nullspace of W. It follows that if either of t.he mat.rix operat.ors )/\,'(H,J) or W(O,H) in eq. (1) have a nullspace, then multiple inverse mapping operators WIll exist. However, the inverse mapping operator A+ calculated llsing pseudo-inverses will be the only inverse mapping operator that has no projection in the nullspace of A. The derivation of these propert.ies follow in a straightforward manner from the discussion of generalized inverses in [Lancaster, 1985]. An interesting result of using SVD to obtain the pseudo-inverse is that: SVD provides a direct method for varying ~J) within the space of inverse mapping images ill input space of L J. ? This becomes clear when we note that if 1" = P(W(H,I?) is the rank of W(H,!) , only the first 1" singular values in V(H,J) are non-zero. Thus, only the first r columns of S(H,/) and V(/l,J) participate in the activity flow of the network from input module to hidden module. Closed-Form Inversion of Backpropagation Networks The columns {Y.(II'/)(i)h>r of V(lI,I) span the nllllspace of W(H,I). This nullspace is also the nullspace of A, or at least a significant portion thereof. 2 If ~J) is an inverse mapping image of f(0), then the addition of any vector from the nullspace to ~I) would still be an inverse mapping image of ~O), satisfying eq. (3). If an inverse mapping image ~I) obtained from eq. 
If an inverse mapping image $f_{(I)}$ obtained from eq. (2) is unphysical, or somehow inappropriate for a particular application, it could possibly be optimized by combining it with a vector from the nullspace of A.

4 Existence and Stability Considerations

There are still implementational issues of importance to address:
1. For a given $f_{(O)}$, can eq. (2) produce some mapping image $f_{(I)}$?
2. For a given $f_{(O)}$, will the image $f_{(I)}$ produced by eq. (2) be a true inverse mapping image; i.e., will it satisfy eq. (3)? If not, is it a best approximation in some sense?
3. How stable is an inverse mapping from an $f_{(O)}$ that produces the answer 'yes' to questions 1 and 2; i.e., if $f_{(O)}$ is perturbed to produce a new output point, will this new output point satisfy questions 1 and 2?

In general, eq. (2) will produce an image for any output point generated by the forward dynamics of the network, eq. (1). If $f_{(O)}$ is chosen arbitrarily, however, then whether it is in the domain of A⁺ is purely a function of the network weights. The domain is restricted because the domain of the inverse sigmoid sub-operator is restricted to (−1, +1). Whether an image produced by eq. (2) will be an inverse mapping image, i.e., satisfying eq. (3), is dependent on both the network weights and the network architecture. A strong sufficient condition for guaranteeing this is that the network have a convergent architecture; that is:
• The dimension of input space is greater than or equal to the dimension of output space.
• The rank of $W_{(H,I)}$ is greater than or equal to the rank of $W_{(O,H)}$.

The stability of inverse mappings of a desired output away from such an actual output depends wholly on the weights of the network. The range of singular values of weight matrix block $W_{(O,H)}$ can be used to address this issue. If the range is much more than one order of magnitude, then random perturbations about a given point in output space will often be outside the domain of A⁺. This is because the columns of $S_{(O,H)}$ and $V_{(O,H)}$ associated with small singular values during forward activity flow are associated with proportionately large inverse singular values in the inverse mapping. Thus, if singular value $d_{(O,H),i}$ is small, a random perturbation with a projection on column $s_{(O,H)}(i)$ of $S_{(O,H)}$ will cause a large magnitude swing in the inverse sub-operator $W^{+}_{(O,H)}$, with the result possibly outside the domain of $\sigma^{-1}$.

² Since its first sub-operation is linear, and the sigmoid non-linearity we employ maps zero to zero, the non-linear operator A can still have a nullspace. Subsequent layers of the network might add to this nullspace, however, and the added region may not be a linear subspace.

5 Summary

• We have shown that a closed-form inverse mapping operator of a backpropagation network can be obtained using a composition of pseudo-inverses and inverse sigmoid operators.
• This inverse mapping operator, specified in eq. (2), operating on any point in the network's output space, will obtain an inverse image of that point that satisfies eq. (3), if such an inverse image exists.
• When many inverse images of an output point exist, an extension of the SVD analyses used to obtain the original inverse image can be used to obtain an alternate inverse image optimized to satisfy the problem constraints of a particular application.
• The existence of an inverse image of a particular output point depends on that output point and the network weights.
The dependence on the network can be expressed conveniently in terms of the singular values and the singular value vectors of the network weight matrices.
• Applications for these techniques include explanation of network operation and process control.

References
[Lancaster, 1985] Lancaster, P., & Tismenetsky, M. (1985). The Theory of Matrices. Orlando: Academic.
[Linden & Kinderman, 1989] Linden, A., & Kinderman, J. (1989). Inversion of multilayer nets. Proceedings of the Third Annual International Joint Conference on Neural Networks, Vol. II, 425-430.
[Widrow & Stearns, 1985] Widrow, B., & Stearns, S.D. (1985). Adaptive Signal Processing. Englewood Cliffs: Prentice-Hall.
Prediction on a Graph with a Perceptron

Mark Herbster, Massimiliano Pontil
Department of Computer Science, University College London
Gower Street, London WC1E 6BT, England, UK
{m.herbster, m.pontil}@cs.ucl.ac.uk

Abstract

We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We address both label noise and concept noise. Graph learning is framed as an instance of prediction on a finite set. To treat label noise we show that the hinge loss bounds derived by Gentile [1] for online perceptron learning can be transformed to relative mistake bounds with an optimal leading constant when applied to prediction on a finite set. These bounds depend crucially on the norm of the learned concept. Often the norm of a concept can vary dramatically with only small perturbations in a labeling. We analyze a simple transformation that stabilizes the norm under perturbations. We derive an upper bound that depends only on natural properties of the graph (the graph diameter and the cut size of a partitioning of the graph), which are only indirectly dependent on the size of the graph. The impossibility of such bounds for the graph geodesic nearest neighbors algorithm will be demonstrated.

1 Introduction

We study the problem of robust online learning over a graph. Consider the following game for predicting the labeling of a graph. Nature presents a vertex $v_{i_1}$; the learner predicts the label of the vertex $\hat{y}_1 \in \{-1,1\}$; nature presents a label $y_1$; nature presents a vertex $v_{i_2}$; the learner predicts $\hat{y}_2$; and so forth. The learner's goal is to minimize the total number of mistakes $|\{t : \hat{y}_t \neq y_t\}|$. If nature is adversarial, the learner will always mispredict; but if nature is regular or simple, there is hope that a learner may make only a few mispredictions. Thus, a methodological goal is to give learners whose total mispredictions can be bounded relative to the "complexity" of nature's labeling. In this paper, we consider the cut size as a measure of the complexity of a graph's labeling, where the size of the cut is the number of edges between disagreeing labels. We will give bounds which depend on the cut size and the diameter of the graph and thus do not directly depend on the size of the graph.

The problem of learning a labeling of a graph is a natural problem in the online learning setting, as well as a foundational technique for a variety of semi-supervised learning methods [2, 3, 4, 5, 6]. For example, in the online setting, consider a system which serves advertisements on web pages. The web pages may be identified with the vertices of a graph and the edges as links between pages. The online prediction problem is then that, at a given time t, the system may receive a request to serve an advertisement on a particular web page. For simplicity, we assume that there are two alternatives to be served: either advertisement "A" or advertisement "B". The system then interprets the feedback as the label, and then may use this information in responding to the next request to predict an advertisement for a requested web page.

1.1 Related work

There is a well-developed literature regarding learning on the graph. The early work of Blum and Chawla [2] presented an algorithm which explicitly finds min-cuts of the label set.

Figure 1: Perceptron on set $V_M$.
  Input: $\{(v_{i_t}, y_t)\}_{t=1}^{\ell} \subseteq V_M \times \{-1,1\}$.
  Initialization: $w_1 = 0$; $M_A = \emptyset$.
  for $t = 1, \ldots, \ell$ do
    Predict: receive $v_{i_t}$; $\hat{y}_t = \mathrm{sign}(e_{i_t}^\top w_t)$.
    Update: receive $y_t$;
      if $\hat{y}_t = y_t$ then $w_{t+1} = w_t$
      else $w_{t+1} = w_t + y_t v_{i_t}$; $M_A = M_A \cup \{t\}$.
  end

Figures 2-5 (graph illustrations omitted): Figure 2: Barbell. Figure 3: Barbell with concept noise. Figure 4: Flower. Figure 5: Octopus.
Bounds have been proven previously with smooth loss functions [6, 7] in a batch setting. Kernels on graph labelings were introduced in [3, 5]. This current work builds upon our work in [8]. There it was shown that, given a fixed labeling of a graph, the number of mistakes made by an algorithm similar to the kernel perceptron [9], with a kernel that was the pseudoinverse of the graph Laplacian, could be bounded by the quantity [8, Theorems 3.2, 4.1, and 4.2]

$$4\,\Phi_G(u)\, D_G\, \mathrm{bal}(u). \qquad (1)$$

Here $u \in \{-1,1\}^n$ is a binary vector defining the labeling of the graph, $\Phi_G(u)$ is the cut size¹ defined as $\Phi_G(u) := |\{(i,j) \in E(G) : u_i \neq u_j\}|$, that is, the number of edges between positive and negative labels, $D_G$ is the diameter of the graph, and $\mathrm{bal}(u) := \big(1 - \frac{1}{n}\big|\sum_i u_i\big|\big)^{-2}$ measures the label balance. This bound is interesting in that the mistakes of the algorithm could be bounded in terms of simple properties of a labeled graph. However, there are a variety of shortcomings in this result. First, we observe that the bound above assumed a fixed labeling of the graph. In practice, the online data sequence could contain multiple labels for a single vertex; this is the problem of label noise. Second, for an unbalanced set of labels the bound is vacuous; for example, if $u = (1, 1, \ldots, 1, -1) \in \mathbb{R}^n$ then $\mathrm{bal}(u) = n^2/4$. Third, consider the prototypical easy instance for the algorithm of two dense clusters connected by a few edges, for instance, two m-cliques connected by a single edge (a barbell graph, see Figure 2). If each clique is labeled with a distinct label then we have that $4\Phi_G(u) D_G \mathrm{bal}(u) = 4 \cdot 1 \cdot 3 \cdot 1 = 12$, which is independent of m. Now suppose that, say, the first clique contains one vertex which is labeled as the second clique (see Figure 3). Previously $\Phi_G(u) = 1$, but now $\Phi_G(u) = m$ and the bound is vacuous. This is the problem of concept noise; in this example, a $\Theta(1)$ perturbation of the labeling increases the bound multiplicatively by $\Theta(m)$.

1.2 Overview

A first aim of this paper is to improve upon the bounds in [8]; particularly, to address the three problems of label balance, label noise, and concept noise as discussed above. For this purpose, we apply the well-known kernel perceptron [9] to the problem of online learning on the graph. We discuss the background material for this problem in section 2, where we also show that the bounds of [1] can be specialized to relative mistake bounds when applied to, for example, prediction on the graph. A second important aim of this paper is to interpret the mistake bounds by an explanation in terms of high-level graph properties. Hence, in section 3, we refine a diameter-based bound of [8, Theorem 4.2] to a sharper bound based on the "resistance distance" [10] on a weighted graph, which we then closely match with a lower bound. In section 4, we introduce a kernel which is a simple augmentation of the pseudoinverse of the graph Laplacian and then prove a theorem on the performance of the perceptron with this kernel which solves the three problems above. We conclude in section 5 with a discussion comparing the mistake bounds for prediction on the graph with the halving algorithm [11] and the k-nearest neighbors algorithm.
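Before developing the theory, it may help to see how little machinery Figure 1's perceptron requires. Since $w_t$ is always a sum of coordinates $v_i = M^+ e_i$, prediction only needs the kernel matrix $K = M^+$. The dual-form sketch below is our own rendering, with hypothetical variable names.

```python
import numpy as np

def perceptron_on_set(K, examples):
    """Dual form of the perceptron of Figure 1.

    K        -- n x n kernel matrix K = M^+, so K[i, j] = <v_i, v_j>_M
    examples -- iterable of (vertex index i_t, label y_t in {-1, +1})
    Returns the mistake trials M_A and coefficients alpha with w = sum_i alpha[i] v_i.
    """
    alpha = np.zeros(K.shape[0])           # w_1 = 0
    mistake_trials = []                    # M_A
    for t, (i, y) in enumerate(examples):
        y_hat = np.sign(K[i] @ alpha)      # sign(e_i^T w_t); sign(0) = 0 counts as a mistake
        if y_hat != y:
            alpha[i] += y                  # w_{t+1} = w_t + y_t v_{i_t}
            mistake_trials.append(t)
    return mistake_trials, alpha
```

With K the pseudoinverse of the graph Laplacian introduced in section 2.1, this is the algorithm whose mistakes Theorems 2.1 and 4.2 bound.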
2 Preliminaries

In this section, we describe our setup for Hilbert spaces on finite sets and its specification to the graph case. We then recall a result of Gentile [1] on prediction with the perceptron and discuss a special case in which relative 0-1 loss (mistake) bounds are obtainable.

¹ Later in the paper we extend the definition of cut size to weighted graphs.

2.1 Hilbert spaces of functions defined on a finite set

We denote matrices by capital bold letters and vectors by small bold case letters. So M denotes the n × n matrix $(M_{ij})_{i,j=1}^n$ and w the n-dimensional vector $(w_i)_{i=1}^n$. The identity matrix is denoted by I. We also let 0 and 1 be the n-dimensional vectors whose components all equal zero and one, respectively, and $e_i$ the i-th coordinate vector of $\mathbb{R}^n$. Let $\mathbb{N}$ be the set of natural numbers and $\mathbb{N}_\ell := \{1, \ldots, \ell\}$. We denote a generic Hilbert space by $\mathcal{H}$. We identify $V := \mathbb{N}_n$ as the indices of a set of n objects, e.g. the vertices of a graph. A vector $w \in \mathbb{R}^n$ can alternatively be seen as a function $f : V \to \mathbb{R}$ such that $f(i) = w_i$, $i \in V$. However, for simplicity we will use the notation w to denote both a vector in $\mathbb{R}^n$ and the above function. A symmetric positive semidefinite matrix M induces a semi-inner product on $\mathbb{R}^n$ which is defined as $\langle u, w\rangle_M := u^\top M w$, where $\cdot^\top$ denotes transposition. The reproducing kernel [12] associated with the above semi-inner product is $K = M^+$, where $\cdot^+$ denotes pseudoinverse. We also define the coordinate spanning set

$$V_M := \{v_i := M^+ e_i : i = 1, \ldots, n\} \qquad (2)$$

and let $\mathcal{H}(M) := \mathrm{span}(V_M)$. The restriction of the semi-inner product $\langle\cdot,\cdot\rangle_M$ to $\mathcal{H}(M)$ is an inner product on $\mathcal{H}(M)$. The set $V_M$ acts as "coordinates" for $\mathcal{H}(M)$, that is, if $w \in \mathcal{H}(M)$ we have

$$w_i = e_i^\top M^+ M w = v_i^\top M w = \langle v_i, w\rangle_M, \qquad (3)$$

although the vectors $\{v_1, \ldots, v_n\}$ are not necessarily normalized and are linearly independent only if M is positive definite. We note that equation (3) is simply the reproducing kernel property [12] for kernel $M^+$. When V indexes the vertices of an undirected graph G, a natural norm to use is that induced by the graph Laplacian. We explain this in detail now. Let A be the n × n symmetric weight matrix of the graph such that $A_{ij} \ge 0$, and define the edge set $E(G) := \{(i,j) : 0 < A_{ij},\ i < j\}$. The distance matrix $\Delta$ associated with G is the per-element inverse of the weight matrix, that is, $\Delta_{ij} = \frac{1}{A_{ij}}$ ($\Delta$ may have $+\infty$ as a matrix element). The graph Laplacian G is the n × n matrix defined as $G := D - A$, where $D = \mathrm{diag}(d_1, \ldots, d_n)$ and $d_i$ is the weighted degree of vertex i, $d_i = \sum_{j=1}^n A_{ij}$. The Laplacian is positive semidefinite and induces the semi-norm

$$\|w\|_G^2 := w^\top G w = \sum_{(i,j)\in E(G)} A_{ij}(w_i - w_j)^2. \qquad (4)$$

When the graph is connected, it follows from equation (4) that the null space of G is spanned by the constant vector 1 only. In this paper, we always assume that the graph G is connected. Where it is not ambiguous, we use the notation G to denote both the graph G and the graph Laplacian.

2.2 Online prediction of functions on a finite set with the perceptron

Gentile [1] bounded the performance of the perceptron algorithm on nonseparable data with the linear hinge loss. Here, we apply his result to study the problem of prediction on a finite set with the perceptron (see Figure 1). In this case, the inputs are the coordinates in the set $V_M \subseteq \mathcal{H}(M)$ defined above. We additionally assume that matrix M is positive definite (not just positive semidefinite as in the previous subsection). This assumption, along with the fact that the inputs are coordinates, enables us to upper bound the hinge loss and hence we may give a relative mistake bound in terms of the complete set of base classifiers $\{-1,1\}^n$.
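A small numerical check of this setup may be useful. The sketch below is our own, using an arbitrary 4-vertex path graph: it builds the Laplacian, the coordinates $v_i = G^+ e_i$, and verifies the reproducing property (3) on $\mathcal{H}(G)$ and the cut-size identity (4).

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]              # path graph on 4 vertices
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
G = np.diag(A.sum(axis=1)) - A                # Laplacian G = D - A

V = np.linalg.pinv(G)                         # column i is v_i = G^+ e_i (kernel K = G^+)

w = np.array([1.0, -1.0, 1.0, -1.0])
w0 = w - w.mean()                             # project onto H(G), the complement of 1
print(np.allclose(V.T @ G @ w0, w0))          # reproducing property (3): <v_i, w>_G = w_i
cut = sum(A[i, j] * (w[i] - w[j]) ** 2 for i, j in edges)
print(np.isclose(w @ G @ w, cut))             # semi-norm (4) equals the weighted cut
```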
Theorem 2.1. Let M be a symmetric positive definite matrix. If $\{(v_{i_t}, y_t)\}_{t=1}^{\ell} \subseteq V_M \times \{-1,1\}$ is a sequence of examples, $M_A$ denotes the set of trials in which the perceptron algorithm predicted incorrectly and $X = \max_{t\in M_A} \|v_{i_t}\|_M$, then the cumulative number of mistakes $|M_A|$ of the algorithm is bounded by

$$|M_A| \le 2|M_A \cap M_u| + \frac{\|u\|_M^2 X^2}{2} + \sqrt{2|M_A \cap M_u|\,\|u\|_M^2 X^2 + \frac{\|u\|_M^4 X^4}{4}} \qquad (5)$$

for all $u \in \{-1,1\}^n$, where $M_u = \{t \in \mathbb{N}_\ell : u_{i_t} \neq y_t\}$. In particular, if $|M_u| = 0$ then $|M_A| \le \|u\|_M^2 X^2$.

Proof. This bound follows directly from [1, Theorem 8] with p = 2, γ = 1, and $w_1 = 0$. Since M is assumed to be symmetric positive definite, it follows that $\{-1,1\}^n \subseteq \mathcal{H}(M)$. Thus, the hinge loss $L_{u,t} := \max(0,\, 1 - y_t\langle u, v_{i_t}\rangle_M)$ of any classifier $u \in \{-1,1\}^n$ with any example $(v_{i_t}, y_t)$ is either 0 or 2, since $|\langle u, v_{i_t}\rangle_M| = 1$ by equation (3). This allows us to bound the hinge loss term of [1, Theorem 8] directly with mistakes.

We emphasize that our hypothesis on M does not imply linear separability, since multiple instances of an input vector in the training sequence may have distinct target labels. Moreover, we note that, for deterministic prediction, the constant 2 in the first term of the right-hand side of equation (5) is optimal for an online algorithm, as a mistake may be forced on every trial.

3 Interpretation of the space H(G)

The bound for prediction on a finite set in equation (5) involves two quantities, namely the squared norm of a classifier $u \in \{-1,1\}^n$ and the maximum of the squared norms of the coordinates $V_M$. In the case of prediction on the graph, recall from equation (4) that $\|u\|_G^2 := u^\top G u = \sum_{(i,j)\in E(G)} A_{ij}(u_i - u_j)^2$. Therefore, we may identify this semi-norm with the weighted cut size

$$\Phi_G(u) := \frac{1}{4}\|u\|_G^2 \qquad (6)$$

of the labeling induced when $u \in \{-1,1\}^n$. In particular, with boolean weighted edges ($A_{ij} \in \{0,1\}$) the cut simply counts the number of edges spanning disagreeing labels.

The norm $\|v - w\|_G$ is a metric distance for $v, w \in \mathrm{span}(V_G)$; however, surprisingly, the square of the norm $\|v_p - v_q\|_G^2$, when restricted to graph coordinates $v_p, v_q \in V_G$, is also a metric, known as the resistance distance [10],

$$r_G(p,q) := (e_p - e_q)^\top G^+ (e_p - e_q) = \|v_p - v_q\|_G^2. \qquad (7)$$

It is interesting to note that the resistance distance between vertex p and vertex q is the effective resistance between vertices p and q, where the graph is the circuit and edge (i,j) is a resistor with the resistance $\Delta_{ij} = A_{ij}^{-1}$.

As we shall see, our bounds in section 4 depend on $\|v_p\|_G^2 = \|v_p - 0\|_G^2$. Formally, this is not an effective resistance between vertex p and another vertex "0". The vector 0, informally however, is the center of the graph, as $0 = \sum_{v\in V_G} \frac{v}{|V_G|}$, since 1 is in the null space of G. In the following, we further characterize $\|v_p\|_G^2$. First, we observe qualitatively that the more interconnected the graph, the smaller the term $\|v_p\|_G^2$ (Corollary 3.1). Second, in Theorem 3.2 we quantitatively upper bound $\|v_p\|_G^2$ by the average (over q) of the effective resistance between vertex p and each vertex q in the graph (including q = p), which in turn may be upper bounded by the eccentricity of p. We proceed with the following useful lemma and theorem as a basis for our later results.

Lemma 3.1. Let $x \in \mathcal{H}$. Then $\|x\|^{-2} = \min_{w\in\mathcal{H}} \{\|w\|^2 : \langle w, x\rangle = 1\}$.

The proof is straightforward and we do not elaborate on the details.
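The resistance-distance identity (7) is easy to confirm numerically against the elementary circuit laws. The sketch below is our own; it also checks that the coordinates average to the "center" 0.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def resistance(G_pinv, p, q):
    # r_G(p, q) = (e_p - e_q)^T G^+ (e_p - e_q), eq. (7)
    e = np.zeros(len(G_pinv)); e[p], e[q] = 1.0, -1.0
    return e @ G_pinv @ e

# k parallel unit resistors between two vertices: effective resistance 1/k
for k in (1, 2, 5):
    A = np.array([[0.0, float(k)], [float(k), 0.0]])
    print(np.isclose(resistance(np.linalg.pinv(laplacian(A)), 0, 1), 1.0 / k))

# On a path (a tree) the resistance distance is the geodesic distance (Lemma 3.2 below)
A = np.diag(np.ones(3), 1); A = A + A.T        # unit-weight path on 4 vertices
G_pinv = np.linalg.pinv(laplacian(A))
print(np.isclose(resistance(G_pinv, 0, 3), 3.0))
print(np.allclose(G_pinv.sum(axis=1), 0.0))    # sum_q v_q = 0: the coordinates' mean is 0
```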
Theorem 3.1. If M and M′ are symmetric positive semidefinite matrices with $\mathrm{span}(V_M) = \mathrm{span}(V_{M'})$ and, for every $w \in \mathrm{span}(V_M)$, $\|w\|_{M'} \le \|w\|_M$, then

$$\Big\|\sum_{i=1}^n a_i v_i\Big\|_M^2 \le \Big\|\sum_{i=1}^n a_i v_i'\Big\|_{M'}^2,$$

where $v_i \in V_M$, $v_i' \in V_{M'}$ and $a \in \mathbb{R}^n$.

Proof. Let $x = \sum_{i=1}^n a_i v_i$ and $x' = \sum_{i=1}^n a_i v_i'$. Then

$$\|x'\|_{M'}^{-2} \le \Big\|\frac{x}{\|x\|_M^2}\Big\|_{M'}^2 \le \Big\|\frac{x}{\|x\|_M^2}\Big\|_M^2 = \|x\|_M^{-2},$$

where the first inequality follows since $\big\langle \frac{x}{\|x\|_M^2}, x'\big\rangle_{M'} = 1$, hence $\frac{x}{\|x\|_M^2}$ is a feasible solution to the minimization problem in the right-hand side of Lemma 3.1, while the second one follows immediately from the assumption that $\|w\|_{M'} \le \|w\|_M$.

As a corollary to the above theorem we have the following when M is a graph Laplacian.

Corollary 3.1. Given connected graphs G and G′ with distance matrices $\Delta$ and $\Delta'$ such that $\Delta_{ij} \le \Delta'_{ij}$, then for all $p, q \in V$ we have that $\|v_p\|_G^2 \le \|v_p'\|_{G'}^2$ and $r_G(p,q) \le r_{G'}(p,q)$.

The first inequality in the above corollary demonstrates that $\|v_p\|_G^2$ is nonincreasing in a graph that is strictly more connected. The second inequality is the well-known Rayleigh's monotonicity law, which states that if any resistance in a circuit is decreased, then the effective resistance between any two points cannot increase.

We define the geodesic distance between vertices $p, q \in V$ to be $d_G(p,q) := \min |\mathcal{P}(p,q)|$, where the minimum is taken with respect to all paths $\mathcal{P}(p,q)$ from p to q, with the path length defined as $|\mathcal{P}(p,q)| := \sum_{(i,j)\in E(\mathcal{P}(p,q))} \Delta_{ij}$. The eccentricity $d_G(p)$ of a vertex $p \in V$ is the geodesic distance on the graph between p and the furthest vertex on the graph from p, that is, $d_G(p) = \max_{q\in V} d_G(p,q) \le D_G$, where $D_G$ is the (geodesic) diameter of the graph, $D_G := \max_{p\in V} d_G(p)$. A graph G is connected when $D_G < \infty$. A tree is an n-vertex connected graph with n − 1 edges. The following lemma, a well-known result (see e.g. [10]), establishes that the resistance distance can be equated with the geodesic distance when the graph is a tree.

Lemma 3.2. If the graph T is a tree with graph Laplacian T, then $r_T(p,q) = d_T(p,q)$.

The next theorem provides a quantitative relationship between $\|v_p\|_G^2$ and two measures of the connectivity of vertex p, namely its eccentricity and the mean of the effective resistances between vertex p and each vertex on the graph.

Theorem 3.2. If G is a connected graph then

$$\|v_p\|_G^2 \le \frac{1}{n}\sum_{q=1}^n r_G(p,q) \le d_G(p). \qquad (8)$$

Proof. Recall that $r_G(p,q) = \|v_p - v_q\|_G^2$ (see equation (7)) and use $\sum_{q=1}^n v_q = 0$ to obtain that $\frac{1}{n}\sum_{q=1}^n \|v_p - v_q\|_G^2 = v_p^\top G v_p + \frac{1}{n}\sum_{q=1}^n v_q^\top G v_q$, which implies the left inequality in (8). Next, by Corollary 3.1, if T is the Laplacian of a tree $T \subseteq G$ then $r_G(p,q) \le r_T(p,q)$ for $p, q \in V$. Therefore, from Lemma 3.2 we conclude that $r_G(p,q) \le d_T(p,q)$. Moreover, since $T \subseteq G$ can be any tree, we have that $r_G(p,q) \le \min_T d_T(p,q)$, where the minimum is over all trees $T \subseteq G$. Since the geodesic path from p to q is necessarily contained in some tree $T \subseteq G$, it follows that $\min_T d_T(p,q) = d_G(p,q)$ and, so, $r_G(p,q) \le d_G(p,q)$. Now the theorem follows by maximizing $d_G(p,q)$ over q and the definition of $d_G(p)$.

We identify the resistance diameter of a graph G as $R_G := \max_{p,q\in V} r_G(p,q)$; thus, from the previous theorem, we may also conclude that

$$\max_{p\in V} \|v_p\|_G^2 \le R_G \le D_G. \qquad (9)$$

We complete this section by showing that there exists a family of graphs for which the above inequality is nearly tight. Specifically, we consider the "flower graph"
(see Figure 4) obtained by connecting the first vertex of a chain with p − 1 vertices to the root vertex of an m-ary tree of depth one. We index the vertices of this graph so that vertices 1 to p correspond to "stem vertices" and vertices p + 1 to p + m to "petals". Clearly, this graph has diameter equal to p; hence our upper bound above establishes that $\|v_1\|_G^2 \le p$. We now argue that as m grows this bound is almost tight. From Lemma 3.1 we have that $\|v_1\|_G^{-2} = \min_{w\in\mathcal{H}(G)} \{\|w\|_G^2 : \langle w, v_1\rangle_G = 1\}$. We note that by symmetry, the solution $\hat{w} = (\hat{w}_i : i \in \mathbb{N}_{p+m})$ to the problem above satisfies $\hat{w}_i = z$ if $i \ge p+1$, since $\hat{w}$ must take the same value on the petal vertices. Consequently, it follows that

$$\|v_1\|_G^{-2} = \min\Big\{m(z - w_p)^2 + \sum_{i=1}^{p-1}(w_i - w_{i+1})^2 \;:\; w_1 = 1,\ \sum_{i=1}^{p} w_i + mz = 0\Big\}.$$

We upper bound this minimum by choosing $w_i = \frac{p-i}{p-1}$ for $1 \le i \le p$. Thus, $w_1 = 1$ as is required, $w_p = 0$, and we compute z from the constraint set of the above minimization problem as $z = -\frac{p}{2m}$. A direct computation gives $\|v_1\|_G^{-2} \le \frac{1}{p-1} + \frac{p^2}{4m}$, from which, using a first-order Taylor expansion, it follows that $\|v_1\|_G^2 \ge (p-1) - \frac{(p-1)^2 p^2}{4m}$. Therefore, as $m \to \infty$ the upper bound on $\|v_1\|_G^2$ (equation (8)) for the flower graph is matched by a lower bound with a gap of 1.

4 Prediction on the graph

We define the following symmetric positive definite graph kernel,

$$K_c^b := G^+ + b\,\mathbf{1}\mathbf{1}^\top + cI, \qquad (0 < b,\ 0 \le c), \qquad (10)$$

where $G_c^b = (K_c^b)^{-1}$ is the matrix of the associated Hilbert space $\mathcal{H}(G_c^b)$. In Lemma 4.1 below we prove the needed properties of $\mathcal{H}(G_c^b)$ as a necessary step for the bound in Theorem 4.2. As we shall see, these properties moderate the consequences of label imbalance and concept noise. To prove Lemma 4.1, we use the following theorem, which is a special case of [12, Thm I, §I.6].

Theorem 4.1. If $M_1$ and $M_2$ are n × n symmetric positive semidefinite matrices, and we set $M := (M_1^+ + M_2^+)^+$, then $\|w\|_M^2 = \inf\{\|w_1\|_{M_1}^2 + \|w_2\|_{M_2}^2 : w_i \in \mathcal{H}(M_i),\ w_1 + w_2 = w\}$ for every $w \in \mathcal{H}(M)$.

Next, we define $\rho_u \in [0,1]$ as a measure of the balance of a labeling $u \in \{-1,1\}^n$, $\rho_u := (\frac{1}{n}\sum_{i=1}^n u_i)^2$. Note that for a perfectly balanced labeling $\rho_u = 0$, while $\rho_u = 1$ for a perfectly unbalanced one.

Lemma 4.1. Given a vertex p with associated coordinates $v_p \in V_G$ and $v_p' \in V_{G_c^b}$, we have that

$$\|v_p'\|_{G_c^b}^2 = \|v_p\|_G^2 + b + c. \qquad (11)$$

Moreover, if $u, u' \in \{-1,1\}^n$ and $k := |\{i : u_i' \neq u_i\}|$, we have that

$$\|u'\|_{G_c^b}^2 \le \|u\|_G^2 + \frac{\rho_u}{b} + \frac{4k}{c}. \qquad (12)$$

Proof. To prove equation (11) we recall equation (3) and note that $\|v_p'\|_{G_c^b}^2 = \langle v_p',\, v_p + b\mathbf{1} + ce_p\rangle_{G_c^b} = \langle v_p', v_p\rangle_{G_c^b} + \langle v_p',\, b\mathbf{1} + ce_p\rangle_{G_c^b} = \|v_p\|_G^2 + b + c$.

To prove inequality (12) we proceed in two steps. First, we show that

$$\|u\|_{G_0^b}^2 = \|u\|_G^2 + \frac{\rho_u}{b}. \qquad (13)$$

Indeed, we can uniquely decompose u as the sum of a vector in $\mathcal{H}(G)$ and one in $\mathcal{H}(\frac{\mathbf{1}\mathbf{1}^\top}{n^2 b})$ as $u = (u - \mathbf{1}\frac{1}{n}\sum_{i=1}^n u_i) + \mathbf{1}\frac{1}{n}\sum_{i=1}^n u_i$. Therefore, by Theorem 4.1 we conclude that $\|u\|_{G_0^b}^2 = \|u - \sqrt{\rho_u}\,\mathbf{1}\|_G^2 + \|\sqrt{\rho_u}\,\mathbf{1}\|_{\mathbf{1}\mathbf{1}^\top/(n^2 b)}^2 = \|u\|_G^2 + \frac{\rho_u}{b}$, where $\|u - \sqrt{\rho_u}\,\mathbf{1}\|_G^2 = \|u\|_G^2$ since $\mathbf{1} \in \mathcal{H}^\perp(G)$.

Second, we show, for any symmetric positive definite matrix M, $u, u' \in \{-1,1\}^n$ and $c > 0$, that

$$\|u'\|_{M_c}^2 \le \|u\|_M^2 + \frac{4k}{c}, \qquad (14)$$

where $M_c := (M^{-1} + cI)^{-1}$ and $k := |\{i : u_i' \neq u_i\}|$. To this end, we decompose $u'$ as a sum of two elements of $\mathcal{H}(M)$ and $\mathcal{H}(\frac{1}{c}I)$ as $u' = u + (u' - u)$ and observe that $\|u' - u\|_{\frac{1}{c}I}^2 = \frac{4k}{c}$. By Theorem 4.1 it then follows that $\|u'\|_{M_c}^2 \le \|u\|_M^2 + \|u' - u\|_{\frac{1}{c}I}^2 = \|u\|_M^2 + \frac{4k}{c}$. Now inequality (12) follows by combining equations (13) and (14) with $M = G_0^b$.
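Equations (11) and (13) can be checked directly in a few lines. The following sketch is ours (a 5-cycle is an arbitrary choice); it builds $K_c^b$ from eq. (10) and its metric $G_c^b = (K_c^b)^{-1}$.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

n, b, c = 5, 1.0, 0.5
A = np.roll(np.eye(n), 1, axis=1); A = A + A.T    # cycle graph on n vertices
G = laplacian(A)
G_pinv = np.linalg.pinv(G)

K = G_pinv + b * np.ones((n, n)) + c * np.eye(n)  # kernel K_c^b, eq. (10)
M = np.linalg.inv(K)                              # metric of H(G_c^b)
for p in range(n):
    v = K[:, p]                                   # coordinate v'_p = K e_p
    assert np.isclose(v @ M @ v, G_pinv[p, p] + b + c)    # eq. (11)

u = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
M0 = np.linalg.inv(G_pinv + b * np.ones((n, n)))  # metric of H(G_0^b), c = 0
rho_u = u.mean() ** 2
print(np.isclose(u @ M0 @ u, u @ G @ u + rho_u / b))      # eq. (13)
```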
We can now state our relative mistake bound for online prediction on the graph.

Theorem 4.2. Let G be a connected graph. If $\{(v_{i_t}, y_t)\}_{t=1}^{\ell} \subseteq V_{G_c^b} \times \{-1,1\}$ is a sequence of examples and $M_A$ denotes the set of trials in which the perceptron algorithm predicted incorrectly, then the cumulative number of mistakes $|M_A|$ of the algorithm is bounded by

$$|M_A| \le 2|M_A \cap M_u| + \frac{Z}{2} + \sqrt{2|M_A \cap M_u|\,Z + \frac{Z^2}{4}}, \qquad (15)$$

for all $u, u' \in \{-1,1\}^n$, where $k = |\{i : u_i' \neq u_i\}|$, $\rho_{u'} = (\frac{1}{n}\sum_{i=1}^n u_i')^2$, $M_u = \{t \in \mathbb{N}_\ell : u_{i_t} \neq y_t\}$, and

$$Z = \Big(4\Phi_G(u') + \frac{\rho_{u'}}{b} + \frac{4k}{c}\Big)\big(R_G + b + c\big).$$

In particular, if $b = 1$, $c = 0$, $k = 0$ and $|M_u| = 0$, then

$$|M_A| \le (4\Phi_G(u) + \rho_u)(R_G + 1). \qquad (16)$$

Proof. The proof follows by Theorem 2.1 with $M = G_c^b$, then bounding $\|u\|_{G_c^b}^2$ and $\|v_{i_t}'\|_{G_c^b}^2$ via Lemma 4.1, and then using $\max_{t\in M_A}\|v_{i_t}\|_G^2 \le R_G$ by equation (9).

The upper bound of the theorem is more resilient to label imbalance, concept noise, and label noise than the bound in [8, Theorems 3.2, 4.1, and 4.2] (see equation (1)). For example, given the noisy barbell graph in Figure 3 but with $k \ll n$ noisy vertices, the bound (1) is O(kn), while the bound (15) with b = 1, c = 1, and $|M_u| = 0$ is O(k). A similar argument may be given for label imbalance. In the bound above, for easy interpretability, one may upper bound the resistance diameter $R_G$ by the geodesic diameter $D_G$. However, the resistance diameter makes for a sharper bound in a number of natural situations. For example, now consider (a thick barbell) two m-cliques (one labeled "+1", one "−1") with ℓ edges (ℓ < m) between the cliques. We observe that between any two vertices there are at least ℓ edge-disjoint paths of length no more than five; therefore, the resistance diameter is at most 5/ℓ by the "resistors-in-parallel" rule, while the geodesic diameter is 3. Thus, for "thick barbells", if we use the geodesic diameter we have a mistake bound of 16ℓ (substituting $\rho_u = 0$ and $R_G \le 3$ into (16)), while surprisingly with the resistance diameter the bound (substituting $b = \frac{1}{4n}$, $c = 0$, $|M_u| = 0$, $\rho_u = 0$, and $R_G \le 5/\ell$ into (15)) is independent of ℓ and is 20.

5 Discussion

In this paper, we have provided a bound on the performance of the perceptron on the graph in terms of structural properties of the graph and its labeling which are only indirectly dependent on the number of vertices in the graph; in particular, they depend on the cut size and the diameter. In the following, we compare the perceptron with two other approaches. First, we compare the perceptron with the graph kernel $K_0^1$ to the conceptually simpler k-nearest neighbors algorithm with either the graph geodesic distance or the resistance distance. In particular, we prove the impossibility of bounding the performance of k-nearest neighbors only in terms of the diameter and the cut size. Specifically, we give a parameterized family of graphs for which the number of mistakes of the perceptron is upper bounded by a fixed constant independent of the graph size, while k-nearest neighbors provably incurs mistakes linearly in the graph size. Second, we compare the perceptron with the graph kernel $K_0^1$ with a simple application of the classical halving algorithm [11]. Here, we conclude that the upper bound for the perceptron is better for graphs with a small diameter, while the halving algorithm's upper bound is better for graphs with a large diameter.
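To make the comparison concrete, the following self-contained sketch (our own experiment, not reported in the paper) runs the kernel perceptron with $K_0^1$ on the barbell of Figure 2 and prints its mistake count next to the bound (16).

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

m = 10; n = 2 * m                              # two m-cliques joined by a single edge
A = np.zeros((n, n))
A[:m, :m] = 1.0; A[m:, m:] = 1.0
np.fill_diagonal(A, 0.0)
A[m - 1, m] = A[m, m - 1] = 1.0
u = np.concatenate([np.ones(m), -np.ones(m)])  # one label per clique: cut size 1

G_pinv = np.linalg.pinv(laplacian(A))
K = G_pinv + np.ones((n, n))                   # kernel K_0^1 of eq. (10): b = 1, c = 0

rng = np.random.default_rng(1)
alpha, mistakes = np.zeros(n), 0
for i in rng.permutation(n):                   # one noise-free pass over the vertices
    if np.sign(K[i] @ alpha) != u[i]:          # predict; sign(0) = 0 counts as a mistake
        alpha[i] += u[i]                       # perceptron update of Figure 1
        mistakes += 1

R = max(G_pinv[p, p] + G_pinv[q, q] - 2 * G_pinv[p, q]
        for p in range(n) for q in range(n))   # resistance diameter R_G
rho_u = u.mean() ** 2                          # 0: the labeling is balanced
print(mistakes, "<=", (4 * 1 + rho_u) * (R + 1))   # bound (16) with Phi_G(u) = 1
```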
In the following, for simplicity, we limit our discussion to binary-weighted graphs and noise-free data (see equation (16)), and upper bound the resistance diameter $R_G$ with the geodesic diameter $D_G$ (see equation (9)).

5.1 K-nearest neighbors on the graph

We consider the k-nearest neighbors algorithms on the graph with both the resistance distance (see equation (7)) and the graph geodesic distance. The geodesic distance between two vertices is the length of the shortest path between the two vertices (recall the discussion in section 3). In the following, we use the term distance to refer simultaneously to both distances.

Now, consider the family $O_{\ell,m,p}$ of octopus graphs. An octopus graph (see Figure 5) consists of a "head", which is an ℓ-clique ($C^{(\ell)}$) with vertices denoted by $c_1, \ldots, c_\ell$, and a set of m "tentacles" ($\{T_i\}_{i=1}^m$), where each tentacle is a line graph of length p. The vertices of tentacle i are denoted by $\{t_{i,0}, t_{i,1}, \ldots, t_{i,p}\}$; the $t_{i,0}$ are all identified as one vertex r, which acts as the root of the m tentacles. There is an edge (the body) connecting root r to the vertex $c_1$ on the head. Thus, this graph has diameter $D = \max(p+2,\ 2p)$ and there are $\ell + mp + 1$ vertices in total; an octopus is balanced if $\ell = mp + 1$. Note that the distance of every vertex in the head to every other vertex in the graph is no more than p + 2, and every tentacle "tip" $t_{i,p}$ is at distance 2p from the other tips $t_{j,p}$, $j \neq i$.

We now argue that k-nearest neighbors may incur mistakes linear in the number of tentacles. To this end, choose $p \ge 3$ and suppose we have the following online data sequence: $\{(c_1, +1), (t_{1,p}, -1), (c_2, +1), (t_{2,p}, -1), \ldots, (c_m, +1), (t_{m,p}, -1)\}$. Note that k-nearest neighbors will make a mistake on every instance $(t_{i,p}, -1)$, and so, even assuming that it predicts correctly on $(c_1, +1)$, it will always make m mistakes. We now contrast this result with the performance of the perceptron with the graph kernel $K_0^1$ (see equation (10)). By equation (16), the number of mistakes will be upper bounded by 10p + 5, because there is a cut of size 1 and the diameter is 2p. Thus, for balanced octopi $O_{m,p}$ with $p \ge 3$, as m grows the number of mistakes of the kernel perceptron will be bounded by a fixed constant, whereas distance k-nearest neighbors will incur mistakes linearly in m.

5.2 Halving algorithm

We now compare the performance of our algorithm to the classical halving algorithm [11]. The halving algorithm operates by predicting on each trial as the majority of the classifiers in the concept class which have been consistent over the trial sequence. Hence, the number of mistakes of the halving algorithm is upper bounded by the logarithm of the cardinality of the concept class. Let $K_G^k = \{u \in \{-1,1\}^n : \Phi_G(u) = k\}$ be the set of all classifiers with a cut size equal to k on a fixed graph G. The cardinality of $K_G^k$ is upper bounded by $2\binom{n(n-1)}{k}$, since any classifier (cut) in $K_G^k$ can be uniquely identified by a choice of k edges and 1 bit which determines the sign of the vertices in the same part of the partition (however, we overcount, as not every set of edges determines a classifier). The number of mistakes of the halving algorithm is upper bounded by $O(k \log \frac{n}{k})$. For example, on a line graph with a cut size of 1, the halving algorithm has an upper bound of $\lceil\log n\rceil$, while the upper bound for the number of mistakes of the perceptron as given in equation (16) is 5n + 5.
Although the halving algorithm has a sharper bound on such large-diameter graphs as the line graph, it unfortunately has a logarithmic dependence on n. This contrasts with the bound of the perceptron, which is essentially independent of n. Thus, the bound for the halving algorithm is roughly sharper on graphs with a diameter $\Omega(\log\frac{n}{k})$, while the perceptron bound is roughly sharper on graphs with a diameter $o(\log\frac{n}{k})$. We emphasize that this analysis of upper bounds is quite rough, and sharper bounds for both algorithms could be obtained, for example, by including a term representing the minimal possible cut, that is, the minimum number of edges necessary to disconnect a graph. For the halving algorithm this would enable a better bound on the cardinality of $K_G^k$ (see [13]). For the perceptron, the larger the connectivity of the graph, the weaker the diameter upper bound in Theorem 3.2 (see for example the discussion of "thick barbells" at the end of section 4).

Acknowledgments

We wish to thank the anonymous reviewers for their useful comments. This work was supported by EPSRC Grant GR/T18707/01 and by the IST Programme of the European Community, under the PASCAL Network of Excellence IST-2002-506778.

References
[1] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265-299, 2003.
[2] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML 2002, pages 19-26. Morgan Kaufmann, San Francisco, CA, 2002.
[3] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In ICML 2002, pages 315-322. Morgan Kaufmann, San Francisco, CA, 2002.
[4] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In ICML 2003, pages 912-919, 2003.
[5] A. Smola and R. I. Kondor. Kernels and regularization on graphs. In COLT 2003, pages 144-158, 2003.
[6] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In COLT 2004, pages 624-638, Banff, Alberta, 2004. Springer.
[7] T. Zhang and R. Ando. Analysis of spectral kernel design based semi-supervised learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, NIPS 18, pages 1601-1608. MIT Press, Cambridge, MA, 2006.
[8] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML 2005, pages 305-312, New York, NY, USA, 2005. ACM Press.
[9] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[10] D. Klein and M. Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81-95, 1993.
[11] J. M. Barzdin and R. V. Frievald. On the prediction of general recursive functions. Soviet Math. Doklady, 13:1224-1228, 1972.
[12] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337-404, 1950.
[13] D. Karger and C. Stein. A new approach to the minimum cut problem. JACM, 43(4):601-640, 1996.
Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models

Mark Johnson, Microsoft Research / Brown University, Mark_Johnson@Brown.edu
Thomas L. Griffiths, University of California, Berkeley, Tom_Griffiths@Berkeley.edu
Sharon Goldwater, Stanford University, sgwater@gmail.com

Abstract

This paper introduces adaptor grammars, a class of probabilistic models of language that generalize probabilistic context-free grammars (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with "adaptors" that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian models of language using Dirichlet processes and hierarchical Dirichlet processes can be written as simple grammars. We present a general-purpose inference algorithm for adaptor grammars, making it easy to define and use such models, and illustrate how several existing nonparametric Bayesian models can be expressed within this framework.

1 Introduction

Probabilistic models of language make two kinds of substantive assumptions: assumptions about the structures that underlie language, and assumptions about the probabilistic dependencies in the process by which those structures are generated. Typically, these assumptions are tightly coupled. For example, in probabilistic context-free grammars (PCFGs), structures are built up by applying a sequence of context-free rewrite rules, where each rule in the sequence is selected independently at random. In this paper, we introduce a class of probabilistic models that weaken the independence assumptions made in PCFGs, which we call adaptor grammars. Adaptor grammars insert additional stochastic processes called adaptors into the procedure for generating structures, allowing the expansion of a symbol to depend on the way in which that symbol has been rewritten in the past. Introducing dependencies among the applications of rewrite rules extends the set of distributions over linguistic structures that can be characterized by a simple grammar.

Adaptor grammars provide a simple framework for defining nonparametric Bayesian models of language. With a particular choice of adaptor, based on the Pitman-Yor process [1, 2, 3], simple context-free grammars specify distributions commonly used in nonparametric Bayesian statistics, such as Dirichlet processes [4] and hierarchical Dirichlet processes [5]. As a consequence, many nonparametric Bayesian models that have been used in computational linguistics, such as models of morphology [6] and word segmentation [7], can be expressed as adaptor grammars. We introduce a general-purpose inference algorithm for adaptor grammars, which makes it easy to define nonparametric Bayesian models that generate different linguistic structures and perform inference in those models.

The rest of this paper is structured as follows. Section 2 introduces the key technical ideas we will use. Section 3 defines adaptor grammars, while Section 4 presents some examples. Section 5 describes the Markov chain Monte Carlo algorithm we have developed to sample from the posterior distribution over structures generated by an adaptor grammar. Software implementing this algorithm is available from http://cog.brown.edu/~mj/Software.htm.

2 Background

In this section, we introduce the two technical ideas that are combined in the adaptor grammars discussed here: probabilistic context-free grammars and the Pitman-Yor process.
We adopt a nonstandard formulation of PCFGs in order to emphasize that they are a kind of recursive mixture, and to establish the formal devices we use to specify adaptor grammars.

2.1 Probabilistic context-free grammars

A context-free grammar (CFG) is a quadruple (N, W, R, S) where N is a finite set of nonterminal symbols, W is a finite set of terminal symbols disjoint from N, R is a finite set of productions or rules of the form $A \to \beta$ where $A \in N$ and $\beta \in (N \cup W)^*$ (the Kleene closure of the terminal and nonterminal symbols), and $S \in N$ is a distinguished nonterminal called the start symbol.

A CFG associates with each symbol $A \in N \cup W$ a set $T_A$ of finite, labeled, ordered trees. If A is a terminal symbol then $T_A$ is the singleton set consisting of a unit tree (i.e., containing a single node) labeled A. The sets of trees associated with nonterminals are defined recursively as follows:

$$T_A = \bigcup_{A \to B_1 \ldots B_n \in R_A} \mathrm{Tree}_A(T_{B_1}, \ldots, T_{B_n})$$

where $R_A$ is the subset of productions in R with left-hand side A, and $\mathrm{Tree}_A(T_{B_1}, \ldots, T_{B_n})$ is the set of all trees whose root node is labeled A, that have n immediate subtrees, and where the i-th subtree is a member of $T_{B_i}$. The set of trees generated by the CFG is $T_S$, and the language generated by the CFG is the set $\{\mathrm{Yield}(t) : t \in T_S\}$ of terminal strings or yields of the trees $T_S$.

A probabilistic context-free grammar (PCFG) is a quintuple $(N, W, R, S, \theta)$, where (N, W, R, S) is a CFG and $\theta$ is a vector of non-negative real numbers indexed by productions R such that

$$\sum_{A\to\beta \in R_A} \theta_{A\to\beta} = 1.$$

Informally, $\theta_{A\to\beta}$ is the probability of expanding the nonterminal A using the production $A \to \beta$. $\theta$ is used to define a distribution $G_A$ over the trees $T_A$ for each symbol A. If A is a terminal symbol, then $G_A$ is the distribution that puts all of its mass on the unit tree labeled A. The distributions $G_A$ for nonterminal symbols are defined recursively over $T_A$ as follows:

$$G_A = \sum_{A \to B_1 \ldots B_n \in R_A} \theta_{A \to B_1 \ldots B_n}\, \mathrm{TreeDist}_A(G_{B_1}, \ldots, G_{B_n}) \qquad (1)$$

where $\mathrm{TreeDist}_A(G_1, \ldots, G_n)$ is the distribution over $\mathrm{Tree}_A(T_{B_1}, \ldots, T_{B_n})$ that assigns to the tree with root label A and immediate subtrees $t_1, \ldots, t_n$ the probability

$$\mathrm{TreeDist}_A(G_1, \ldots, G_n)\big(\mathrm{Tree}_A(t_1, \ldots, t_n)\big) = \prod_{i=1}^n G_i(t_i).$$

That is, $\mathrm{TreeDist}_A(G_1, \ldots, G_n)$ is a distribution over trees where the root node is labeled A and each subtree $t_i$ is generated independently from $G_i$; it is this assumption that adaptor grammars relax. The distribution over trees generated by the PCFG is $G_S$, and the probability of a string is the sum of the probabilities of all trees with that string as their yield.

2.2 The Pitman-Yor process

The Pitman-Yor process [1, 2, 3] is a stochastic process that generates partitions of integers. It is most intuitively described using the metaphor of seating customers at a restaurant. Assume we have a numbered sequence of tables, and $z_i$ indicates the number of the table at which the i-th customer is seated. Customers enter the restaurant sequentially. The first customer sits at the first table, $z_1 = 1$, and the (n+1)-st customer chooses a table from the distribution

$$z_{n+1} \mid z_1, \ldots, z_n \;\sim\; \frac{ma + b}{n + b}\,\delta_{m+1} + \sum_{k=1}^{m} \frac{n_k - a}{n + b}\,\delta_k \qquad (2)$$

where m is the number of different indices appearing in the sequence $z = (z_1, \ldots, z_n)$, $n_k$ is the number of times k appears in z, and $\delta_k$ is the Kronecker delta function, i.e., the distribution that puts all of its mass on k. The process is specified by two real-valued parameters, $a \in [0,1]$ and $b \ge 0$.
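Sampling a seating arrangement from eq. (2) takes only a few lines. The sketch below is our own and follows the equation directly, with the first customer seated at table 1.

```python
import random

def pitman_yor_seating(n, a, b, rng=None):
    """Sample table assignments z_1, ..., z_n from the Pitman-Yor process, eq. (2)."""
    assert 0.0 <= a <= 1.0 and b >= 0.0
    rng = rng or random.Random(0)
    z, counts = [], []                        # counts[k] is n_{k+1}
    for _ in range(n):
        if not counts:                        # the first customer sits at table 1
            z.append(1); counts.append(1); continue
        m = len(counts)
        # occupied table k with weight n_k - a; a new table with weight m*a + b
        weights = [n_k - a for n_k in counts] + [m * a + b]
        k = rng.choices(range(m + 1), weights=weights)[0]
        if k == m:
            counts.append(1)
        else:
            counts[k] += 1
        z.append(k + 1)
    return z

print(pitman_yor_seating(15, a=0.5, b=1.0))   # table counts show rich-get-richer growth
```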
The probability of a particular sequence of assignments z, with a corresponding vector of table counts $n = (n_1, \ldots, n_m)$, is

$$P(z) = \mathrm{PY}(n \mid a, b) = \frac{\prod_{k=1}^{m}\Big(\big(a(k-1) + b\big)\prod_{j=1}^{n_k - 1}(j - a)\Big)}{\prod_{i=0}^{n-1}(i + b)}. \qquad (3)$$

From this it is easy to see that the distribution produced by the Pitman-Yor process is exchangeable, with the probability of z being unaffected by permutation of the indices of the $z_i$. Equation 2 instantiates a kind of "rich get richer" dynamics, with customers being more likely to sit at more popular tables. We can use the Pitman-Yor process to define distributions with this character on any desired domain. Assume that every table in our restaurant has a value $x_j$ placed on it, with those values being generated from an exchangeable distribution G, which we will refer to as the generator. Then, we can sample a sequence of variables $y = (y_1, \ldots, y_n)$ by using the Pitman-Yor process to produce z and setting $y_i = x_{z_i}$. Intuitively, this corresponds to customers entering the restaurant and emitting the values of the tables they choose. The distribution defined on y by this process will be exchangeable, and has two interesting special cases that depend on the parameters of the Pitman-Yor process. When a = 1, every customer is assigned to a new table, and the $y_i$ are drawn from G. When a = 0, the distribution on the $y_i$ is that induced by the Dirichlet process [4], a stochastic process that is commonly used in nonparametric Bayesian statistics, with concentration parameter b and base distribution G.

We can also identify another scheme that generates the distribution outlined in the previous paragraph. Let H be a discrete distribution produced by generating a set of atoms x from G and weights on those atoms from the two-parameter Poisson-Dirichlet distribution [2]. We could then generate a sequence of samples y from H. If we integrate over values of H, the distribution on y is the same as that obtained via the Pitman-Yor process [2, 3].

3 Adaptor grammars

In this section, we use the ideas introduced in the previous section to give a formal definition of adaptor grammars. We first state this definition in full generality, allowing any choice of adaptor, and then consider the case where the adaptor is based on the Pitman-Yor process in more detail.

3.1 A general definition of adaptor grammars

Adaptor grammars extend PCFGs by inserting an additional component called an adaptor into the PCFG recursion (Equation 1). An adaptor C is a function from a distribution G to a distribution over distributions with the same support as G. An adaptor grammar is a sextuple $(N, W, R, S, \theta, C)$ where $(N, W, R, S, \theta)$ is a PCFG and the adaptor vector C is a vector of (parameters specifying) adaptors indexed by N. That is, $C_A$ maps a distribution over trees $T_A$ to another distribution over $T_A$, for each $A \in N$. An adaptor grammar associates each symbol with two distributions $G_A$ and $H_A$ over $T_A$. If A is a terminal symbol then $G_A$ and $H_A$ are distributions that put all their mass on the unit tree labeled A, while $G_A$ and $H_A$ for nonterminal symbols are defined as follows:¹

$$G_A = \sum_{A\to B_1\ldots B_n \in R_A} \theta_{A\to B_1\ldots B_n}\, \mathrm{TreeDist}_A(H_{B_1}, \ldots, H_{B_n}) \qquad (4)$$
$$H_A \sim C_A(G_A)$$

The intuition here is that $G_A$ instantiates the PCFG recursion, while the introduction of $H_A$ makes it possible to modify the independence assumptions behind the resulting distribution through the choice of the adaptor, $C_A$. If the adaptor is the identity function, with $H_A = G_A$, the result is just a PCFG. However, other distributions over trees can be defined by choosing other adaptors.
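The adaptor mechanics of eq. (4) can be sketched with a small cache that either replays a stored tree or calls the PCFG generator. This is our own illustrative rendering: the paper's inference integrates over $H_A$ rather than instantiating it, and the toy generator below is a stand-in for a real grammar expansion.

```python
import random

class PYAdaptor:
    """Draw samples whose distribution mimics H_A ~ C_A(G_A) for a Pitman-Yor
    adaptor: reuse a cached subtree with probability proportional to (n_k - a),
    or call the generator G_A with probability proportional to (m*a + b)."""

    def __init__(self, generator, a, b, rng=None):
        self.generator, self.a, self.b = generator, a, b
        self.cache, self.counts = [], []      # x_A and n_A of the adaptor state
        self.rng = rng or random.Random(0)

    def sample(self):
        m = len(self.counts)
        weights = [n_k - self.a for n_k in self.counts] + [m * self.a + self.b]
        if m == 0:
            k = 0                             # first draw always comes from G_A
        else:
            k = self.rng.choices(range(m + 1), weights=weights)[0]
        if k == m or m == 0:
            self.cache.append(self.generator())
            self.counts.append(1)
            return self.cache[-1]
        self.counts[k] += 1
        return self.cache[k]

rng = random.Random(1)
def monkey_word():
    # stand-in for G_Chars: geometric-length strings over a toy alphabet
    s = rng.choice("abc")
    while rng.random() < 0.5:
        s += rng.choice("abc")
    return s

word = PYAdaptor(monkey_word, a=0.0, b=1.0)   # a = 0 gives a Dirichlet process adaptor
print([word.sample() for _ in range(12)])     # repeated items reflect the caching
```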
In practice, we integrate over $H_A$ to define a single distribution on trees for any choice of adaptors C.

¹ This definition allows an adaptor grammar to include self-recursive or mutually recursive CFG productions (e.g., X → X Y or X → Y Z, Y → X W). Such recursion complicates inference, so we restrict ourselves to grammars where the adapted nonterminals are not recursive.

3.2 Pitman-Yor adaptor grammars

The definition given above allows the adaptors to be any appropriate process, but our focus in the remainder of the paper will be on the case where the adaptor is based on the Pitman-Yor process. Pitman-Yor processes can cache, i.e., increase the probability of, frequently occurring trees. The capacity to replace the independent selection of rewrite rules with an exchangeable stochastic process enables adaptor grammars based on the Pitman-Yor process to define probability distributions over trees that cannot be expressed using PCFGs.

A Pitman-Yor adaptor grammar (PYAG) is an adaptor grammar where the adaptors C are based on the Pitman-Yor process. A Pitman-Yor adaptor $C_A(G_A)$ is the distribution obtained by generating a set of atoms from the distribution $G_A$ and weights on those atoms from the two-parameter Poisson-Dirichlet distribution. A PYAG has an adaptor $C_A$ with parameters $a_A$ and $b_A$ for each nonterminal $A \in N$. As noted above, if $a_A = 1$ then the Pitman-Yor process is the identity function, so A is expanded in the standard manner for a PCFG. Each adaptor $C_A$ will also be associated with two vectors, $x_A$ and $n_A$, that are needed to compute the probability distribution over trees. $x_A$ is the sequence of previously generated subtrees with root nodes labeled A. Having been "cached" by the grammar, these now have higher probability than other subtrees. $n_A$ lists the counts associated with the subtrees in $x_A$. The adaptor state can thus be summarized as $C_A = (a_A, b_A, x_A, n_A)$.

A Pitman-Yor adaptor grammar analysis $u = (t, \iota)$ is a pair consisting of a parse tree $t \in T_S$ together with an index function $\iota(\cdot)$. If q is a nonterminal node in t labeled A, then $\iota(q)$ gives the index of the entry in $x_A$ for the subtree t′ of t rooted at q, i.e., such that $x_{A,\iota(q)} = t'$. The sequence of analyses $u = (u_1, \ldots, u_n)$ generated by an adaptor grammar contains sufficient information to compute the adaptor state C(u) after generating u: the elements of $x_A$ are the distinctly indexed subtrees of u with root label A, and their frequencies $n_A$ can be found by performing a top-down traversal of each analysis in turn, only visiting the children of a node q when the subanalysis rooted at q is encountered for the first time (i.e., when it is added to $x_A$).

4 Examples of Pitman-Yor adaptor grammars

Pitman-Yor adaptor grammars provide a framework in which it is easy to define compositional nonparametric Bayesian models. The use of adaptors based on the Pitman-Yor process allows us to specify grammars that correspond to Dirichlet processes [4] and hierarchical Dirichlet processes [5]. Once expressed in this framework, a general-purpose inference algorithm can be used to calculate the posterior distribution over analyses produced by a model. In this section, we illustrate how existing nonparametric Bayesian models used for word segmentation [7] and morphological analysis [6] can be expressed as adaptor grammars, and describe the results of applying our inference algorithm in these models. We postpone the presentation of the algorithm itself until Section 5.
4.1 Dirichlet processes and word segmentation

Adaptor grammars can be used to define Dirichlet processes with discrete base distributions. It is straightforward to write down an adaptor grammar that defines a Dirichlet process over all strings:

Word → Chars
Chars → Char Chars
Chars → Char        (5)

The productions expanding Char to all possible characters are omitted to save space. The start symbol for this grammar is Word. The parameters a_Char and a_Chars are set to 1, so the adaptors for Char and Chars are the identity function and H_Chars = G_Chars is the distribution over words produced by sampling each character independently (i.e., a "monkeys at typewriters" model). Finally, a_Word is set to 0, so the adaptor for Word is a Dirichlet process with concentration parameter b_Word.

This grammar generates all possible strings of characters and assigns them simple right-branching structures of no particular interest, but the Word adaptor changes their distribution to one that reflects the frequencies of previously generated words. Initially, the Word adaptor is empty (i.e., x_Word is empty), so the first word s_1 generated by the grammar is distributed according to G_Chars. However, the second word can be generated in two ways: either it is retrieved from the adaptor's cache (and hence is s_1) with probability 1/(1 + b_Word), or else with probability b_Word/(1 + b_Word) it is a new word generated by G_Chars. After n words have been emitted, Word puts mass n/(n + b_Word) on those words and reserves mass b_Word/(n + b_Word) for new words (i.e., generated by Chars).

We can extend this grammar to a simple unigram word segmentation model by adding the following productions, changing the start label to Words and setting a_Words = 1.

Words → Word
Words → Word Words

This grammar generates sequences of Word subtrees, so it implicitly segments strings of terminals into a sequence of words, and in fact implements the word segmentation model of [7]. We applied the grammar above with the algorithm described in Section 5 to a corpus of unsegmented child-directed speech [8]. The input strings are sequences of phonemes such as WAtIzIt. A typical parse might consist of Words dominating three Word subtrees, each in turn dominating the phoneme sequences Wat, Iz and It respectively. Using the sampling procedure described in Section 5 with b_Word = 30, we obtained a segmentation which identified words in unsegmented input with 0.64 precision, 0.51 recall, and 0.56 f-score, which is consistent with the results presented for the unigram model of [7] on the same data.

4.2 Hierarchical Dirichlet processes and morphological analysis

An adaptor grammar with more than one adapted nonterminal can implement a hierarchical Dirichlet process. A hierarchical Dirichlet process that uses the Word process as a generator can be defined by adding the production Word1 → Word to (5) and making Word1 the start symbol. Informally, Word1 generates words either from its own cache x_Word1 or from the Word distribution. Word itself generates words either from x_Word or from the "monkeys at typewriters" model Chars.

A slightly more elaborate grammar can implement the morphological analysis described in [6]. Words are analysed into stem and suffix substrings; e.g., the word jumping is analysed as a stem jump and a suffix ing. As [6] notes, one of the difficulties in constructing a probabilistic account of such suffixation is that the relative frequencies of suffixes vary dramatically depending on the stem.
That paper used a Pitman-Yor process to effectively dampen this frequency variation, and the adaptor grammar described here does exactly the same thing. The productions of the adaptor grammar are as follows, where Chars is "monkeys at typewriters" once again:

Word → Stem Suffix
Word → Stem
Stem → Chars
Suffix → Chars

We now give an informal description of how samples might be generated by this grammar (a code sketch follows below). The nonterminals Word, Stem and Suffix are associated with Pitman-Yor adaptors. Stems and suffixes that occur in many words are associated with highly probable cache entries, and so have much higher probability than under the Chars PCFG subgrammar. Figure 1 depicts a possible state of the adaptors in this adaptor grammar after generating the three words walking, jumping and walked. Such a state could be generated as follows. Before any strings are generated, all of the adaptors are empty. To generate the first word we must sample from H_Word, as there are no entries in the Word adaptor. Sampling from H_Word requires sampling from G_Stem and perhaps also G_Suffix, and eventually from the Chars distributions. Supposing that these return walk and ing as the Stem and Suffix strings respectively, the adaptor entries after generating the first word walking consist of the first entries for Word, Stem and Suffix.

In order to generate another Word we first decide whether to select an existing word from the adaptor, or whether to generate the word using G_Word. Suppose we choose the latter. Then we must sample from H_Stem and perhaps also from H_Suffix. Suppose we choose to generate the new stem jump from G_Stem (resulting in the second entry in the Stem adaptor) but choose to reuse the existing Suffix adaptor entry, resulting in the word jumping. The third word walked is generated in a similar fashion: this time the stem is the first entry in the Stem adaptor, but the suffix ed is generated from G_Suffix and becomes the second entry in the Suffix adaptor.

[Figure 1: A depiction of a possible state of the Pitman-Yor adaptors in the adaptor grammar of Section 4.2 after generating walking, jumping and walked. The Word adaptor caches walking, jumping and walked; the Stem adaptor caches walk and jump; the Suffix adaptor caches ing and ed.]

The model described in [6] is more complex than the one just described because it uses a hidden "morphological class" variable that determines which stem-suffix pair is selected. The morphological class variable is intended to capture morphological variation; e.g., the present continuous form skipping is formed by suffixing ping instead of the ing form used in walking and jumping. This can be expressed using an adaptor grammar with productions that instantiate the following schema:

Word → Wordc
Wordc → Stemc Suffixc
Wordc → Stemc
Stemc → Chars
Suffixc → Chars

Here c ranges over the hidden morphological classes, and the productions expanding Chars and Char are as before. We set the adaptor parameter a_Word = 1 for the start nonterminal symbol Word, so we adapt the Wordc, Stemc and Suffixc nonterminals for each hidden class c. Following [6], we used this grammar with six hidden classes c to segment 170,015 orthographic verb tokens from the Penn Wall Street Journal corpus, and set a = 0 and b = 500 for the adapted nonterminals.
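The walkthrough above composes naturally with the adaptor sketch from Section 3.2: the generator for Word is itself built out of adapted Stem and Suffix draws. The following sketch reuses `PYAdaptor` and `generate` from earlier; `p_split`, standing in for the probability of the production Word → Stem Suffix, is an assumed placeholder value, not a number from the paper.

```python
def generate_morph_word(word_ad, stem_ad, suffix_ad, sample_chars,
                        p_split=0.5):
    """Generate one word under the Section 4.2 grammar: consult the
    Word adaptor first; on a new-table draw, build the word from the
    (themselves adapted) Stem and Suffix distributions."""
    def g_word():
        # G_Word: compose a stem with an optional suffix.
        stem = generate(stem_ad, sample_chars)
        if random.random() < p_split:            # Word -> Stem Suffix
            return stem + generate(suffix_ad, sample_chars)
        return stem                               # Word -> Stem
    return generate(word_ad, g_word)
```

Repeated calls reproduce the rich-get-richer behavior of the walkthrough: once walk is cached in the Stem adaptor, walked becomes cheap to generate even before it has ever been seen as a whole word.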
Although we trained on all verbs in the corpus, we evaluated the segmentation produced by the inference procedure described below on just the verbs whose infinitival stems were a prefix of the verb itself (i.e., we evaluated skipping but ignored wrote, since its stem write is not a prefix). Of the 116,129 tokens we evaluated, 70% were correctly segmented, and of the 7,170 verb types, 66% were correctly segmented. Many of the errors were in fact linguistically plausible: e.g., eased was analysed as a stem eas followed by a suffix ed, permitting the grammar to also generate easing as eas plus ing.

5 Bayesian inference for Pitman-Yor adaptor grammars

The results presented in the previous section were obtained by using a Markov chain Monte Carlo (MCMC) algorithm to sample from the posterior distribution over PYAG analyses u = (u_1, ..., u_n) given strings s = (s_1, ..., s_n), where s_i ∈ W* and u_i is the analysis of s_i. We assume we are given a CFG (N, W, R, S), vectors of Pitman-Yor adaptor parameters a and b, and a Dirichlet prior with hyperparameters α over production probabilities θ, i.e.:

$$P(\theta \mid \alpha) = \prod_{A \in N} \frac{1}{B(\alpha_A)} \prod_{A \to \beta \in R_A} \theta_{A \to \beta}^{\alpha_{A \to \beta} - 1} \quad \text{where} \quad B(\alpha_A) = \frac{\prod_{A \to \beta \in R_A} \Gamma(\alpha_{A \to \beta})}{\Gamma\!\left(\sum_{A \to \beta \in R_A} \alpha_{A \to \beta}\right)}$$

with Γ(x) being the generalized factorial function, and α_A the subsequence of α indexed by R_A (i.e., corresponding to productions that expand A). The joint probability of u under this PYAG, integrating over the distributions H_A generated from the two-parameter Poisson-Dirichlet distribution associated with each adaptor, is

$$P(\mathbf{u} \mid \alpha, a, b) = \prod_{A \in N} \frac{B(\alpha_A + f_A(x_A))}{B(\alpha_A)}\, \mathrm{PY}(n_A(\mathbf{u}) \mid a, b) \qquad (6)$$

where f_{A→β}(x_A) is the number of times the root node of a tree in x_A is expanded by production A → β, and f_A(x_A) is the sequence of such counts (indexed by r ∈ R_A). Informally, the first term in (6) is the probability of generating the topmost node in each analysis in adaptor C_A (the rest of the tree is generated by another adaptor), while the second term (from Equation 3) is the probability of generating a Pitman-Yor adaptor with counts n_A.

The posterior distribution over analyses u given strings s is obtained by normalizing P(u | α, a, b) over all analyses u that have s as their yield. Unfortunately, computing this distribution is intractable. Instead, we draw samples from this distribution using a component-wise Metropolis-Hastings sampler, proposing changes to the analysis u_i for each string s_i in turn. The proposal distribution is constructed to approximate the conditional distribution over u_i given s_i and the analyses of all other strings u_{−i}, P(u_i | s_i, u_{−i}). Since there does not seem to be an efficient (dynamic programming) algorithm for directly sampling from P(u_i | s_i, u_{−i}),² we construct a PCFG G′(u_{−i}) on the fly whose parse trees can be transformed into PYAG analyses, and use this as our proposal distribution.

5.1 The PCFG approximation G′(u_{−i})

A PYAG can be viewed as a special kind of PCFG which adapts its production probabilities depending on its history. The PCFG approximation G′(u_{−i}) = (N, W, R′, S, θ′) is a static snapshot of the adaptor grammar given the sentences s_{−i} (i.e., all of the sentences in s except s_i). Given an adaptor grammar H = (N, W, R, S, C), let:

$$R' = R \cup \bigcup_{A \in N} \{A \to \mathrm{YIELD}(x) : x \in x_A\}$$

$$\theta'_{A \to \beta} = \left(\frac{m_A a_A + b_A}{n_A + b_A}\right) \left(\frac{f_{A \to \beta}(x_A) + \alpha_{A \to \beta}}{m_A + \sum_{A \to \beta' \in R_A} \alpha_{A \to \beta'}}\right) + \sum_{k:\, \mathrm{YIELD}(x_{Ak}) = \beta} \frac{n_{Ak} - a_A}{n_A + b_A}$$

where YIELD(x) is the terminal string or yield of the tree x and m_A is the length of x_A.
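As a concreteness check on Equation 3 (used in Equation 6) and on the snapshot probabilities θ′ just defined, here are direct transcriptions in Python. These assume 0 ≤ a < 1, build on the `PYAdaptor` sketch above, and treat n_A as the scalar total count Σ_k n_{Ak}.

```python
import math

def log_py(counts, a, b):
    """log PY(n | a, b) of Equation 3 for table counts n = (n_1..n_m),
    assuming 0 <= a < 1."""
    m, n = len(counts), sum(counts)
    lp = sum(math.log(a * (k - 1) + b) for k in range(1, m + 1))
    lp += sum(math.log(j - a) for nk in counts for j in range(1, nk))
    return lp - sum(math.log(i + b) for i in range(n))

def theta_prime(ad, f_beta, alpha_beta, alpha_sum, cached_counts):
    """theta'_{A->beta} for the snapshot PCFG G'(u_{-i}), Section 5.1.
    ad: PYAdaptor holding (a_A, b_A, x_A, n_A); f_beta: root expansions
    of cached trees by A->beta; alpha_beta/alpha_sum: Dirichlet pseudo-
    counts; cached_counts: the n_{Ak} whose cached yield equals beta
    (empty when beta is an original production's right-hand side)."""
    m, n = len(ad.subtrees), ad.n()
    base = ((m * ad.a + ad.b) / (n + ad.b)) \
         * ((f_beta + alpha_beta) / (m + alpha_sum))
    return base + sum((nk - ad.a) / (n + ad.b) for nk in cached_counts)
```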
R′ contains all of the productions R, together with productions representing the adaptor entries x_A for each A ∈ N. These additional productions rewrite directly to strings of terminal symbols, and their probability is the probability of the adaptor C_A generating the corresponding value x_{Ak}. The two terms to the left of the summation specify the probability of selecting a production from the original productions R. The first term is the probability of adaptor C_A generating a new value, and the second term is the MAP estimate of the production's probability, estimated from the root expansions of the trees x_A. It is straightforward to map parses of a string s produced by G′ to corresponding adaptor analyses for the adaptor grammar H (it is possible for a single production of R′ to correspond to several adaptor entries, so this mapping may be non-deterministic). This means that we can use the PCFG G′ with an efficient PCFG sampling procedure [9] to generate possible adaptor grammar analyses for u_i.

5.2 A Metropolis-Hastings algorithm

The previous section described how to sample adaptor analyses u for a string s from a PCFG approximation G′ to an adaptor grammar H. We use this as our proposal distribution in a Metropolis-Hastings algorithm.² If u_i is the current analysis of s_i and u′_i ≠ u_i is a proposal analysis sampled from P(U_i | s_i, G′(u_{−i})), we accept the proposal u′_i with probability A(u_i, u′_i), where:

$$A(u_i, u'_i) = \min\left\{1,\; \frac{P(\mathbf{u}' \mid \alpha, a, b)}{P(\mathbf{u} \mid \alpha, a, b)} \cdot \frac{P(u_i \mid s_i, G'(\mathbf{u}_{-i}))}{P(u'_i \mid s_i, G'(\mathbf{u}_{-i}))}\right\}$$

where u′ is the same as u except that u′_i replaces u_i. Except when the number of training strings s is very small, we find that only a tiny fraction (less than 1%) of proposals are rejected, presumably because the probability of an adaptor analysis does not change significantly within a single string.

²The independence assumptions of PCFGs play an important role in making dynamic programming possible. In PYAGs, the probability of a subtree adapts dynamically depending on the other subtrees in u, including those in u_i.

Our inference procedure is as follows. Given a set of training strings s we choose an initial set of analyses for them at random. At each iteration we pick a string s_i from s at random, and sample a parse for it from the PCFG approximation G′(u_{−i}), updating u when the Metropolis-Hastings procedure accepts the proposed analysis. At convergence the u produced by this procedure are samples from the posterior distribution over analyses given s, and samples from the posterior distribution over adaptor states C(u) and production probabilities θ can be computed from them.

6 Conclusion

The strong independence assumptions of probabilistic context-free grammars tightly couple compositional structure with the probabilistic generative process that produces that structure. Adaptor grammars relax that coupling by inserting an additional stochastic component into the generative process. Pitman-Yor adaptor grammars use adaptors based on the Pitman-Yor process. This choice makes it possible to express Dirichlet process and hierarchical Dirichlet process models over discrete domains as simple context-free grammars. We have proposed a general-purpose inference algorithm for adaptor grammars, which can be used to sample from the posterior distribution over analyses produced by any adaptor grammar.
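In log space the acceptance test is a one-liner. This hedged sketch assumes the four log-probabilities have already been computed, for instance with `log_py` above for the prior terms and inside-chart scores under G′ for the proposal terms.

```python
import math
import random

def mh_accept(log_p_current, log_p_proposed, log_q_current, log_q_proposed):
    """Metropolis-Hastings test for a proposed analysis u'_i.
    log_p_*: log P(u | alpha, a, b) with the current vs. proposed u_i;
    log_q_*: log proposal probabilities of u_i and u'_i under G'(u_{-i})."""
    log_A = min(0.0, (log_p_proposed - log_p_current)
                     + (log_q_current - log_q_proposed))
    return math.log(random.random()) < log_A
```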
While our focus here has been on demonstrating that this algorithm can be used to produce equivalent results to existing nonparametric Bayesian models used for word segmentation and morphological analysis, the great promise of this framework lies in its simplification of specifying and using such models, providing a basic toolbox that will facilitate the construction of more sophisticated models.

Acknowledgments

This work was performed while all authors were at the Cognitive and Linguistic Sciences Department at Brown University and supported by the following grants: NIH R01-MH60922 and R01-DC000314, NSF 9870676, 0631518 and 0631667, the DARPA CALO project and DARPA GALE contract HR0011-06-2-0001.

References

[1] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145-158, 1995.
[2] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855-900, 1997.
[3] H. Ishwaran and L. F. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211-1235, 2003.
[4] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209-230, 1973.
[5] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, to appear.
[6] S. Goldwater, T. L. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems 18, 2006.
[7] S. Goldwater, T. L. Griffiths, and M. Johnson. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, 2006.
[8] M. Brent. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71-105, 1999.
[9] J. Goodman. Parsing inside-out. PhD thesis, Harvard University, 1998. Available from http://research.microsoft.com/~joshuago/.
Uncertainty, phase and oscillatory hippocampal recall

Máté Lengyel and Peter Dayan
Gatsby Computational Neuroscience Unit, University College London
17 Queen Square, London WC1N 3AR, United Kingdom
{lmate,dayan}@gatsby.ucl.ac.uk

Abstract

Many neural areas, notably the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware retrieval system.

1 Introduction

In a network such as hippocampal area CA3 that shows prominent oscillations during memory retrieval and other functions [1], there are apparently three, somewhat separate, ways in which neurons might represent information within a single cycle: they must choose how many spikes to fire; what the mean phase of those spikes is; and how concentrated those spikes are about that mean. Most groups working on the theory of spiking oscillatory networks have considered only the second of these; this is true, for instance, of Hopfield's work on olfactory representations [2] and Yoshioka's [3] and Lengyel & Dayan's work [4] on analog associative memories in CA3. Since neurons really do fire more or less than one spike per cycle, and furthermore in a way that can be informationally rich [5, 6], this poses a key question as to what the other dimensions convey.

The number of spikes per cycle is an obvious analog of a conventional firing rate. Recent sophisticated models of firing rates of single neurons and neural populations treat them as representing uncertainty about the quantities coded, partly driven by the strong psychophysical and computational evidence that uncertainty plays a key role in many aspects of neural processing [7, 8, 9]. Single neurons can convey the certainty of a binary proposition by firing more or less strongly [10, 11]; a whole population can use firing rates to convey uncertainty about a collectively-coded analog quantity [12]. However, if neurons can fire multiple spikes per cycle, then the degree to which the spikes are concentrated about a mean phase is an additional channel for representing information. Concentration is not merely an abstract quantity; rather, we can expect that the effect of the neuron on its postsynaptic partners will be strongly influenced by the burstiness of the spikes, an effect apparent, for instance, in the complex time-courses of short-term synaptic dynamics. Here, we suggest that concentration codes for the uncertainty about phase: highly concentrated spiking represents high certainty about the mean phase in the cycle.
One might wonder whether uncertainty is actually important for the cases of oscillatory processing that have been identified. One key computation for spiking oscillatory networks is memory retrieval [3, 4]. Although it is not often viewed this way, memory retrieval is a genuinely probabilistic task [13, 14], with the complete answer to a retrieval query not being a single memory pattern, but rather a distribution over memory patterns. This is because at the time of the query the memory device only has access to incomplete information regarding the memory trace that needs to be recalled. Most importantly, the way memory traces are stored in the synaptic weight matrix implies a lossy data-compression algorithm, and therefore the original patterns cannot be decompressed at retrieval with absolute certainty.

In this paper, we first describe how oscillatory structures can use all three activity characteristics at their disposal to represent two pieces of information and two forms of uncertainty (Section 2). We then suggest that this representational scheme is appropriate as a model of uncertainty-aware probabilistic recall in CA3. We derive the recurrent neural network dynamics that manipulate these firing characteristics such that by the end of the retrieval process neurons represent a good approximation of the posterior distribution over memory patterns given the information in the recall cue and in the synaptic weights between neurons (Section 3). We show in numerical simulations that the derived dynamics lead to competent memory retrieval, supplemented by uncertainty signals that are predictive of retrieval errors (Section 4).

2 Representation

Single cell. The heart of our proposal is a suggestion for how to interpret the activity of a single neuron in a single oscillatory cycle (such as a theta-cycle in the hippocampus) as representing a probability distribution. This is a significant extension of standard work on single-neuron representations of probability [12]. We consider a distribution over two random variables: z ∈ {0, 1}, a Bernoulli variable (for the case of memory, representing the participation of the neuron in the memory pattern), and x ∈ [0, T), where T is the period of the underlying oscillation, a real-valued phase variable (representing an analog quantity associated with that neuron if it participates in that pattern). This distribution is based on three quantities associated with the neuron's activity (figure 1A):

- r, the number of spikes in a cycle;
- φ, the circular mean phase of those spikes, under the assumption that there is at least one spike;
- c, the concentration of the spikes (the mean resultant length of their phases [15]), which measures how tightly clustered they are about φ.

In keeping with conventional single-neuron models, we treat r, via a (monotonically increasing) probabilistic activation function 0 ≤ λ(r) ≤ 1, as describing the probability that z = 1 (figure 1B), so the distribution is q(z; r) = λ(r)^z (1 − λ(r))^{1−z}. We treat the implied distribution over the true phase x as being conditional on z. If z = 0, then the phase is undefined. However, if z = 1, then the distribution over x is a mixture of q_u(x), a uniform distribution on [0, T), and a narrow, quasi-delta distribution q_δ(x; φ) (of width ε ≪ T) around the mean firing phase φ of the spikes. The mixing proportion in this case is determined by a (monotonically increasing) function 0 ≤ γ(c) ≤ 1 of the concentration of the spikes. In total:

$$q(x, z; \phi, c, r) = \bigl[\lambda(r)\,[\gamma(c)\, q_\delta(x; \phi) + (1 - \gamma(c))\, q_u(x)]\bigr]^{z}\,(1 - \lambda(r))^{1-z} \qquad (1)$$
as shown in figure 1C. The marginal confidence in φ being correct is thus ρ(c, r) = γ(c) · λ(r), which we call the "burst strength". We can rewrite equation 1 in a more convenient form:

$$q(x, z; \phi, c, r) = \bigl[\rho(c, r)\, q_\delta(x; \phi) + (\lambda(r) - \rho(c, r))\, q_u(x)\bigr]^{z}\,(1 - \lambda(r))^{1-z} \qquad (2)$$

[Figure 1: Representing uncertainty. A) A neuron's firing times during a period [0, T) are described by three parameters: r, the number of spikes; φ, the mean phase of those spikes; and c, the phase concentration. B) The firing rate r determines the probability λ(r) that a Bernoulli variable associated with the unit takes the value z = 1. C) If z = 1, then φ and c jointly define a distribution over phase which is a mixture (weighted by γ(c)) of a distribution peaked at φ and a uniform distribution.]

Population. In the case of a population of neurons, the complexity of representing a full joint distribution P[x, z] over random variables x = {x_i}, z = {z_i} associated with each neuron i grows exponentially with the number of neurons N. The natural alternative is to consider an approximation in which neurons make independent contributions, with marginals as in equation 2. The joint distribution is then

$$Q(\mathbf{x}, \mathbf{z}; \boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) = \prod_i q(x_i, z_i; \phi_i, c_i, r_i) \qquad (3)$$

whose complexity scales linearly with N.

Dynamics. When the actual distribution P the population has to represent lies outside the class of representable distributions Q in equation 3 with independent marginals, a key computational step is to find activity parameters φ, c, r for the neurons that make Q as close to P as possible. One way to formalize the discrepancy between the two distributions is the KL-divergence

$$F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) = \mathrm{KL}\bigl[Q(\mathbf{x}, \mathbf{z}; \boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) \,\|\, P(\mathbf{x}, \mathbf{z})\bigr] \qquad (4)$$

Minimizing this by gradient descent,

$$\tau \frac{d\phi_i}{dt} = -\frac{\partial}{\partial \phi_i} F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r}), \qquad \tau \frac{dc_i}{dt} = -\frac{\partial}{\partial c_i} F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r}), \qquad \tau \frac{dr_i}{dt} = -\frac{\partial}{\partial r_i} F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) \qquad (5)$$

defines dynamics for the evolution of the parameters. In general, this couples the activities of neurons, defining recurrent interactions within the network. We have thus suggested a general representational framework, in which the specification of a computational task amounts to defining a P distribution which the network should represent as best as possible. Equation 5 then defines the dynamics of the interaction between the neurons that optimizes the network's approximation.
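For concreteness, here is a minimal numerical sketch of the single-neuron density of equation 2. The particular sigmoidal λ(r), clipped γ(c), and box-shaped q_δ are our own placeholder choices; the paper only requires λ and γ to be monotonically increasing and bounded in [0, 1], and q_δ to be narrow.

```python
import numpy as np

def q_density(x, z, phi, c, r, T=1.0, eps=1e-2,
              lam=lambda r: 1.0 / (1.0 + np.exp(-r)),   # assumed lambda(r)
              gam=lambda c: float(np.clip(c, 0.0, 1.0))):  # assumed gamma(c)
    """Density/mass of (x, z) under Equation 2 for one neuron."""
    if z == 0:
        return 1.0 - lam(r)                 # phase undefined when silent
    rho = gam(c) * lam(r)                   # burst strength rho(c, r)
    # q_delta: box of width eps around phi, with circular wrap-around.
    circ_dist = abs((x - phi + T / 2) % T - T / 2)
    q_delta = (circ_dist < eps / 2) / eps
    q_unif = 1.0 / T
    return rho * q_delta + (lam(r) - rho) * q_unif
```

The burst strength ρ = γ(c)λ(r) then measures how much of the neuron's probability mass is committed to the reported phase φ.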
3 CA3 memory

One of the most widely considered tasks that recurrent neural networks need to solve is that of autoassociative memory storage and retrieval. Moreover, hippocampal area CA3, which is thought to play a key role in memory processing, exhibits oscillatory dynamics in which firing phases are known to play an important functional role. It is therefore an ideal testbed for our theory. We characterize the activity in CA3 neurons during recall as representing the probability distribution over memories being recalled. Treating storage from a statistical perspective, we use Bayes rule to define a posterior distribution over the memory pattern implied by a noisy and partial cue. This distribution is represented approximately by the activities φ_i, r_i, c_i of the neurons in the network as in equation 3. Recurrent dynamics among the neurons as in equation 5 find appropriate values of these parameters, and model network interactions during recall in CA3.

Storage. We consider CA3 as storing patterns in which some neurons are quiet (z_i^m = 0, for the ith neuron in the mth pattern) and other neurons are active (z_i^m = 1), their activity then defining a firing phase x_i^m ∈ [0, T), where T is the period of the population oscillation.¹ M such memory traces, each drawn from an (iid) prior distribution

$$P[\mathbf{x}, \mathbf{z}] = \prod_i \left[p_z\, P(x_i)\right]^{z_i} (1 - p_z)^{1 - z_i} \qquad (6)$$

(where p_z is the prior probability of firing in a memory pattern and P(x) is the prior distribution for firing phases) are stored locally and additively in the recurrent synaptic weight matrix W of a network of N neurons, according to learning rule Ω:

$$W_{ij} = \sum_{m=1}^{M} z_i^m z_j^m\, \Omega\!\left(x_i^m, x_j^m\right) \;\text{ for } i \neq j, \quad\text{and}\quad W_{ii} = 0 \qquad (7)$$

We assume that Ω is Toeplitz and periodic in T, and either symmetric or anti-symmetric: Ω(x_1, x_2) = Ω(x_1 − x_2) = Ω(x_1 − x_2 mod T) = ±Ω(x_2 − x_1).

¹Of course, the firing rate is really an integer variable, since it is an actual number of spikes per cycle. For simplicity, in the simulations below, we considered real-valued firing rates; an important next step is to drop this assumption.

Posterior for memory recall. Following [14, 4], we characterize retrieval in terms of the posterior distribution over x, z given three sources of information: a recall cue (x̃, z̃), the synaptic weight matrix, and the prior over the memories. Under some basic independence assumptions, this factorizes into three terms

$$P[\mathbf{x}, \mathbf{z} \mid \tilde{\mathbf{x}}, \tilde{\mathbf{z}}, W] \propto P[\mathbf{x}, \mathbf{z}] \cdot P[\tilde{\mathbf{x}}, \tilde{\mathbf{z}} \mid \mathbf{x}, \mathbf{z}] \cdot P[W \mid \mathbf{x}, \mathbf{z}] \qquad (8)$$

The first term is the prior (equation 6). The second term is the likelihood of receiving a noisy or partial recall cue (x̃, z̃) if the true pattern to be recalled was (x, z):

$$P[\tilde{\mathbf{x}}, \tilde{\mathbf{z}} \mid \mathbf{x}, \mathbf{z}] = \prod_i \left[\bigl(\beta_1 \tilde{P}_1(\tilde{x}_i \mid x_i)\bigr)^{\tilde{z}_i} (1 - \beta_1)^{1 - \tilde{z}_i}\right]^{z_i} \left[\bigl((1 - \beta_0)\, \tilde{P}_0(\tilde{x}_i)\bigr)^{\tilde{z}_i}\, \beta_0^{1 - \tilde{z}_i}\right]^{1 - z_i} \qquad (9)$$

where β_1 = P[z̃ = 1 | z = 1] and β_0 = P[z̃ = 0 | z = 0] are the probabilities of the presence or absence of a spike in the input given the presence or absence of a spike in the memory to be recalled, and P̃_1(x̃ | x) and P̃_0(x̃) are the distributions of the phase of an input spike if there was or was not a spike in the memory to be recalled.

The last term in equation 8 is the likelihood that the weight matrix W arose from M patterns including (x, z). We make the factorized approximation P[W | x, z] ≃ ∏_{i,j≠i} P[W_ij | x_i, z_i, x_j, z_j]^{1/2}. Since the learning rule is additive and memory traces are drawn iid, the likelihood of a synaptic weight is approximately Gaussian for large M, with a quadratic log-likelihood [4]:

$$\log P[W_{ij} \mid x_i, z_i, x_j, z_j] = \frac{z_i z_j}{\sigma_W^2} \left[(W_{ij} - \mu_W)\, \Omega(x_i, x_j) - \frac{\Omega(x_i, x_j)^2}{2}\right] + c \qquad (10)$$

where μ_W and σ_W² are the mean and variance of the distribution of synaptic weights after storing M − 1 random memory traces (μ_W = 0 for antisymmetric Ω).

Dynamics for memory recall. Plugging the posterior from equation 8 into the general dynamics equation 5 yields the neuronal update rules that will be appropriate for uncertainty-aware memory recall, and which we treat as a model of recurrent dynamics in CA3. We give the exact formulae for the dynamics in the supplementary material. They can be shown to couple together the various activity parameters of the neurons in appropriate ways, for instance weighting changes to φ_i for neuron i according to the burst strength of its presynaptic inputs, and increasing the concentration when the log posterior of the firing phase of the neuron, given that it should fire, log P[φ_i | z_i = 1, x̃, z̃, W], is greater than the average of the log posterior.

These dynamics generalize, and thus inherit, some of the characteristics of the purely phase-based network suggested in [4]. This means that they also inherit the match with physiologically-measured phase response curves (PRCs) from in vitro CA3 neurons that were measured to test this suggestion [16]. The key difference here is that we expect the magnitude (though not the shape) of the influence of a presynaptic neuron on the phase of a postsynaptic one to scale with its rate, for high concentration. Preliminary in vitro results show that PRCs recorded in response to burst stimulation are not qualitatively different from PRCs induced by single spikes; however, it remains to be seen if their magnitude scales in the way implied by the dynamics here.
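Before turning to the simulations, here is a minimal sketch of the storage rule of equation 7. It assumes a user-supplied kernel `omega` that, per the Toeplitz assumption, depends only on the (periodic) phase difference; all names are illustrative.

```python
import numpy as np

def store_patterns(X, Z, omega):
    """Store M phase patterns via the additive rule of Equation 7.
    X: (M, N) firing phases x_i^m; Z: (M, N) binary participation z_i^m;
    omega: vectorized kernel Omega(dx), assumed periodic in T."""
    M, N = X.shape
    W = np.zeros((N, N))
    for m in range(M):
        dx = X[m][:, None] - X[m][None, :]     # pairwise phase differences
        W += np.outer(Z[m], Z[m]) * omega(dx)  # only active pairs contribute
    np.fill_diagonal(W, 0.0)                   # enforce W_ii = 0
    return W
```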
should fire, log P[?i |zi = 1, x These dynamics generalize, and thus inherit, some of the characteristics of the purely phase-based network suggested in [4]. This means that they also inherit the match with physiologically-measured phase response curves (PRCs) from in vitro CA3 neurons that were measured to test this suggestion [16]. The key difference here is that we expect the magnitude (though not the shape) of the influence of a presynaptic neuron on the phase of a postsynaptic one to scale with its rate, for high concentration. Preliminary in vitro results show that PRCs recorded in response to burst stimulation are not qualitatively different from PRCs induced by single spikes; however, it remains to be seen if their magnitude scales in the way implied by the dynamics here. z=1 0 !2 0 1 2 3 4 5 0.5 0 0 1 2 0 1 2 3 Time 3 4 1 0.5 0 5 0 1 2 z=0 4 5 0.5 0 0 1 2 3 Time 3 4 5 3 4 5 z=0 1 Firing rate Concentration Phase z=0 5 0 z=1 1 Firing rate Concentration Phase error z=1 2 4 5 1 0.5 0 0 1 2 Time Figure 2: A single retrieval trial in the network. Time evolution of firing phases (left panels), concentrations (middle panels), and rates (right panels) of neurons that should (top row) or should not (bottom row) participate in the memory pattern being retrieved. Note that firing phases in the top row are plotted as a difference from the stored firing phases so that ? = 0 means perfect retrieval. Color code shows precision (blue: low, yellow: high) of the phase of the input to neurons, with red lines showing cells receving incorrect input rate. 4 Simulations Figure 2 shows the course of recall in the full network (with N = 100 neurons, and 10 stored patterns with pz = 0.5). For didactic convenience, we consider the case that the noise in the phase input was varied systematically for different neurons within a recall cue (a fact known to the network, ie incorporated into its dynamics), so that it is possible to see how the differential certainty evolves over the course of the network?s dynamics. The top left panel shows that neurons that should fire in the memory trace (ie for which z = 1) quickly converge on their correct phase, and that this convergence usually takes a longer time for neurons receiving more uncertain input. This is paralleled by the way their firing concentrations change (top middle panel): neurons with reliable input immediately increase their concentrations from the initial ?(c) = 0.5 value to ?(c) = 1, while for those having more unreliable input it takes a longer time to build up confidence about their firing phases (and by the time they become confident their phases are indeed correct). Neurons that should not fire (z = 0) build up their confidence even more slowly, more often remain fairly uncertain or only moderately certain about their firing phases, as expressed by their concentrations (middle bottom panel) ? quite righteously. Finally, since the firing rate input to the network is correct 90%, most neurons that should or should not fire do or do not fire, respectively, with maximal certainty about their rate (top and bottom right panels). Various other metrics are important for providing insight into the operation of the network. In particular, we may expect there to be a relationship between the actual error in the phase of firing of the neurons recalled by the memory, and the firing rates and concentrations (in the form of burst strengths) of the associated neurons themselves. Neurons which are erring should whisper rather than shout. 
Figure 3A shows just this for the network. Here, we have sorted the neurons according to their burst strengths ρ, and plotted histograms of errors in firing phase for each group. The lower the burst strength, the more likely are large errors, at least to an approximation. A similar relationship exists between recalled (analogue) and stored (binary) firing rates, where extreme values of the recalled firing rate indicate that the stored firing rate was 0 or 1 with higher certainty (Figure 3B).

Figure 3C shows the results of a related analysis of experimental data kindly donated by Francesco Battaglia. He recorded neurons in hippocampal area CA1 (not CA3, although we may hope for some similar properties) whilst rats were shuttling on a linear track for food reward. CA1 neurons have place fields (locations in the environment where they respond with spikes) and the phases of these spikes relative to the ongoing theta oscillation in the hippocampus are also known to convey information about location in space [5]. To create the plot, we first selected epochs with high-quality and high-power theta activity in the hippocampus (to ensure that phase is well estimated). We then computed the mean firing phase within the theta cycle, φ, of each neuron as a function of the location of the rat, separately for each visit to the same location. We assumed that the "true" phase x a neuron should recall at a given location is the average of these phases across different visits.

[Figure 3: Uncertainty signals are predictive of the error a cell is making, both in simulation (A, B) and as recorded from behaving animals (C). Burst strength signals overall uncertainty about, and thus predicts error in, mean firing phase (A, C), while graded firing rates signal certainty about whether to fire or not (B). Panels A and C plot frequency against error in firing phase (−π to π), binned by burst strength (in A: 0-0.5, 0.5-1.5, 1.5-2.5, 2.5-3.5, 3.5-4.5 spikes/cycle); panel B plots stored against retrieved firing rate.]

We
Note, however, that although this phase-based network used superficially similar probabilistic principles to the one we have developed here, in fact it did not operate according to uncertainty, since it made the key simplification that all neurons participate in all memories, and that they also fire exactly one spike on every cycle during recall. This restricted the dynamics of that network to perform maximum a posteriori (MAP) inference to find the single recalled pattern of activity that best accommodated the probabilistic constraints of the cue, the prior and the synaptic weights, rather than being able to work in the richer space of probabilistic recall of the dynamics we are suggesting here. Given these roots, we can follow the logic in figure 4 and compare the performance of our memory with these precursors in the cases for which they are designed. For instance, to compare with the rate-based network, we construct memories which include phase information. During recall, we present cues with relatively accurate rates, but relatively inaccurate phases, and evaluate the extent to which the network is perturbed by the presence of the phases (which, of course, it has to store in the single set of synaptic weights). Figure 4A shows exactly this comparison. Here, a relatively small network (N = 100) was used, with memories that are dense (pz = 0.5), and it is therefore a stringent test of the storage capacity. Performance is evaluated by calculating the average error made in recalled firing rates). In the figure, the two blue curves are for the full model (with the phase information in the input being relatively unreliable, its circular concentration parameter distributed uniformly between 0.1 and 10 across cells); the two yellow curves are for a network with only rates (which is similar to that described, but not simulated, by Sommer & Dayan [14]). Exactly the same rate information is provided to all networks, and is 10% inaccurate (a degree known to the dynamics in the form of ?0 and ?1 ). The two flat dashed lines show the performance in the case that there are no recurrent synaptic weights at all. This is an important control, since we are potentially presenting substantial information in the cues themselves. The two solid curves show that the full model tracks the reduced, rate-based, model almost perfectly until the performance totally breaks down. This shows that the phase information, and the existence of phase uncertainty and processing during recall, does not A B 0.45 0.9 0.4 0.8 0.7 0.3 Average error Average error 0.35 0.25 0.2 0.15 rate!coded model w/o learning rate!coded model full model w/o learning full model 0.1 0.05 0 0.6 0.5 0.4 phase!coded model w/o learning phase!coded model full model w/o learning full model 0.3 0.2 0.1 1 10 100 Number of stored patterns 1000 1 10 100 Number of stored patterns 1000 Figure 4: Recall performance compared with a rate-only network (A) and a phase-only network (B). The full model (blue lines) performs just as well as the reduced ?specialist? models (yellow lines) in comparable circumstances (when the information provided to the networks in the dimension they shared is exactly the same). All models (solid lines) outperform the standard control of using the input and the prior alone (dashed lines). corrupt the network?s capacity to recall rates. Given its small size, the network is quite competent as an auto-associator. 
Figure 4B shows a similar comparison between this network and a network that only has to deal with uncertainty in firing phases but not in rates. Again, its performance at recalling phase, given uncertain and noisy phase cues, but good rate-cues, is exactly on a par with the pure, phase-based network. Further, the average errors are only modest, so the capacity of the network for storing analog phases is also impressive. 5 Discussion We have considered an interpretation of the activities of neurons in oscillating structures such as area CA3 of the hippocampus as representing distributions over two underlying quantities, one binary and one analogue. We also showed how this representational capacity can be used to excellent effect in the key, uncertainty-sensitive computation of memory recall, an operation in which CA3 is known to be involved. The resulting network model of CA3 encompasses critical aspects of its physiological properties, notably information-bearing firing rates and phases. Further, since it generalizes earlier theories of purely phase-based memories, this model is also consistent with the measured phase response curves of CA3 neurons, which characterize their actual dynamical interactions. Various aspects of this new theory are amenable to experimental investigation. First, the full dynamics (see the supplementary material) imply that firing rate and firing phase should be coupled together both pre-synpatically, in terms of the influence of timed input spikes, and post-synaptically, in terms of how changes in the activity of a neuron should depend on its own activity. In vitro experiments along the lines of those carried out before [16], in which we have precise experimental control over pre- and post-synaptic activity can be used to test these predictions. Further, making the sort of assumptions that underlie figure 3C, we can use data from awake behaving rats to see if the gross statistics of the changes in the activity of the neurons fit the expectations licensed by the theory. From a computational perspective, we have demonstrated that the network is a highly competent associative memory, correctly recalling both binary and analog information, along with certainty about it, and degrading gracefully in the face of overload. In fact, compared with the representation of other analogue quantities (such as the orientation of a visually preseted bar), analogue memory actually poses a particularly tough problem for the representation of uncertainty. This is because for variables like orientation, a whole population is treated as being devoted to the representation of the distribution of a single scalar value. By contrast, for analogue memory, each neuron has an independent analogue value, and so the dimensionality of the distribution scales with the number of neurons involved. This extra representational power comes from the ability of neurons to distribute their spikes within a cycle to indicate their uncertainty about phase (using the dimension of time in just the same way that distributional population codes [12] used the dimension of neural space). This dimension for representing analogue uncertainty is coupled to that of the firing rate for representing binary uncertainty, since neurons have to fire multiple times in a cycle to have a measurable lack of concentration. However, this coupling is exactly appropriate given the form of the distribution assumed in equation 2, since weakly firing neurons express only weak certainty about phase in any case. 
In fact, it is conceivable that we could combine a different model for the firing rate uncertainty with this model for analogue uncertainty, if, for instance, it is found that neuronal firing rates covary in ways that are not anticipated from equation 2. Finally, the most important direction for future work is understanding the uncertainty-sensitive coupling between multiple oscillating memories, where the oscillations, though dynamically coordinated, need not have the same frequencies. Exactly this seems to characterize the interaction between the hippocampus and the necortex during both consolidation and retrieval [18, 19]. Acknowledgments Funding from the Gatsby Charitable Foundation. We are very grateful to Francesco Battaglia for allowing us to use his data to produce figure 3C, and to him, and Ole Paulsen and Jeehyun Kwag for very helpful discussions. References [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] ? Szaliszny?o K, Erdi P. In The Handbook of Brain Theory and Neural Networks, 533, 2003. Hopfield JJ. Nature 376:33, 1995. Yoshioka M. Physical Review E 65, 2001. Lengyel M, Dayan P. In Advances in Neural Information Processing Systems 17, 769, Cambridge, MA, 2005. MIT Press. O?Keefe J, Recce ML. Hippocampus 3:317, 1993. Huxter J, et al. Nature 425:828, 2003. Ernst M, Banks M. Nature 415:429, 2002. K?ording K, Wolpert D. Nature 427:244, 2004. Gold JI, Shadlen MN. Neuron 36:299, 2002. Hinton G. Neural Comput 1:143, 1990. Peterson C, Anderson J. Complex Systems 1:995, 1987. Pouget A, et al. Annu Rev Neurosci 26:381, 2003. MacKay DJC. In Maximum Entropy and Bayesian Methods, Laramie, 1990, 237, 1991. Sommer FT, Dayan P. IEEE Trans Neural Netw 9:705, 1998. Fisher NI. Statistical analysis of circular data. Cambridge University Press, 1995. Lengyel M, et al. Nat Neurosci 8:1677, 2005. Dayan P, Abbott LF. Theoretical Neuroscience. MIT Press, 2001. Siapas AG, Wilson MA. Neuron 21:1123, 1998. Jones M, Wilson M. PLoS Biol 3:e402, 2005.
A Kernel Subspace Method by Stochastic Realization for Learning Nonlinear Dynamical Systems

Yoshinobu Kawahara* (Dept. of Aeronautics & Astronautics, The University of Tokyo)
Takehisa Yairi, Kazuo Machida (Research Center for Advanced Science and Technology, The University of Tokyo)
Komaba 4-6-1, Meguro-ku, Tokyo, 153-8904 Japan
{kawahara,yairi,machida}@space.rcast.u-tokyo.ac.jp

Abstract

In this paper, we present a subspace method for learning nonlinear dynamical systems based on stochastic realization, in which state vectors are chosen using kernel canonical correlation analysis, and then state-space systems are identified through regression with the state vectors. We construct the theoretical underpinning and derive a concrete algorithm for nonlinear identification. The obtained algorithm needs no iterative optimization procedure and can be implemented on the basis of fast and reliable numerical schemes. The simulation result shows that our algorithm can express dynamics with a high degree of accuracy.

1 Introduction

Learning dynamical systems is an important problem in several fields, including engineering, physical science and social science. The objectives encompass a spectrum ranging from the control of target systems to the analysis of dynamic characterization, and for several decades, system identification for acquiring mathematical models from obtained input-output data has been researched in numerous fields, such as system control.

Dynamical systems are learned by, basically, two different approaches. The first approach is based on the principle of minimizing suitable distance functions between data and chosen model classes. Well-known and widely accepted examples of such functions are likelihood functions [1] and the average squared prediction errors of observed data. For multivariate models, however, this approach is known to have several drawbacks. First, the optimization tends to lead to an ill-conditioned estimation problem because of over-parameterization, i.e., minimal parameterizations (so-called canonical forms) do not exist for multivariate systems. Second, the minimization, except in trivial cases, can only be carried out numerically using iterative algorithms. This often means there is no guarantee of reaching a global minimum, and computational costs are high.

The second approach is the subspace method, which involves geometric operations on subspaces spanned by the column or row vectors of certain block Hankel matrices formed from input-output data [2,3]. It is well known that subspace methods require no a priori choice of identifiable parameterizations and can be implemented by fast and reliable numerical schemes. The subspace method has been actively researched throughout the last few decades, and several algorithms have been proposed, based, for example, on the orthogonal decomposition of input-output data [2,4] and on stochastic realization using canonical correlation analysis [5]. Recently, nonlinear extensions have begun to be discussed for learning systems that cannot be modeled sufficiently with linear expressions. However, the nonlinear algorithms that have been proposed to date include only those in which models with specific nonlinearities are assumed [6] or those which need complicated nonlinear regression [7,8].

* URL: www.space.rcast.u-tokyo.ac.jp/kawahara/index
In this study, we extend the stochastic-realization-based subspace method [5] to the nonlinear regime by developing it in reproducing kernel Hilbert spaces [9], and derive a nonlinear subspace identification algorithm which can be executed by a procedure similar to that in the linear case. The outline of this paper is as follows. Section 2 gives some theoretical materials for the subspace identification of dynamical systems with reproducing kernels. In Section 3, we give some approximations for deriving a practical algorithm, and then describe the algorithm concretely in Section 4. Finally, an empirical result is presented in Section 5, and we give conclusions in Section 6.

Notation. Let $x$, $y$ and $z$ be random vectors; denote the covariance matrix of $x$ and $y$ by $\Sigma_{xy}$ and the conditional covariance matrix of $x$ and $y$ conditioned on $z$ by $\Sigma_{xy|z}$. Let $a$ be a vector in a Hilbert space, and $B$, $C$ Hilbert spaces. Then denote the orthogonal projection of $a$ onto $B$ by $a/B$ and the oblique projection of $a$ onto $B$ along $C$ by $a/_{C}B$. Let $A$ be an $m \times n$ matrix; then $L\{A\} := \{A\lambda \mid \lambda \in \mathbb{R}^n\}$ will be referred to as the column space and $L\{A'\} := \{A'\lambda \mid \lambda \in \mathbb{R}^m\}$ the row space of $A$. $A'$ denotes the transpose of a matrix $A$, and $I_d \in \mathbb{R}^{d \times d}$ is the identity matrix.

2 Rationales

2.1 Problem Description and Some Definitions

Consider two discrete-time wide-sense stationary vector processes $\{u(t), y(t),\ t = 0, \pm 1, \dots\}$ with dimensions $n_u$ and $n_y$, respectively. The first component $u(t)$ models the input signal while the second component $y(t)$ models the output of the unknown stochastic system, which we want to construct from observed input-output data, as a nonlinear state-space system:
$$x(t+1) = g(x(t), u(t)) + v, \qquad y(t) = h(x(t), u(t)) + w, \tag{1}$$
where $x(t) \in \mathbb{R}^n$ is the state vector, and $v$ and $w$ are the system and observation noises. Throughout this paper, we shall assume that the joint process $(u, y)$ is a stationary and purely nondeterministic full-rank process [3,5,10]. It is also assumed that the two processes are zero-mean and have finite joint covariance matrices.

A basic step in solving this realization problem, which is also the core of the subspace identification algorithm presented later, is the construction of a state space of the system. In this paper, we will derive a practical algorithm for this problem based on stochastic realization with reproducing kernel Hilbert spaces.

We denote the joint input-output process $w(t)' = [y(t)', u(t)'] \in \mathbb{R}^{n_w}$ ($n_w = n_u + n_y$) and feature maps $\phi_u : U_t \to F_u \subset \mathbb{R}^{n_{\phi u}}$, $\phi_y : Y_t \to F_y \subset \mathbb{R}^{n_{\phi y}}$ and $\phi_w : W_t \to F_w \subset \mathbb{R}^{n_{\phi w}}$ with the Mercer kernels $k_u$, $k_y$ and $k_w$, where $U_t$, $Y_t$ and $W_t$ are the Hilbert spaces generated by the second-order random variables $u(t)$, $y(t)$ and $w(t)$, and $F_y$, $F_u$ and $F_w$ are the respective feature spaces. Moreover, we define the future output, the future input and the past input-output vectors in the feature spaces as
$$f^{\phi}(t) := [\phi_y(y(t))', \phi_y(y(t+1))', \dots, \phi_y(y(t+l-1))']' \in \mathbb{R}^{l n_{\phi y}},$$
$$u_+^{\phi}(t) := [\phi_u(u(t))', \phi_u(u(t+1))', \dots, \phi_u(u(t+l-1))']' \in \mathbb{R}^{l n_{\phi u}}, \tag{2}$$
$$p^{\phi}(t) := [\phi_w(w(t-1))', \phi_w(w(t-2))', \dots]' \in \mathbb{R}^{\infty},$$
and the Hilbert spaces generated by these random variables as
$$P_t^{\phi} = \operatorname{span}\{\phi(w(\tau)) \mid \tau < t\}, \quad U_t^{\phi+} = \operatorname{span}\{\phi(u(\tau)) \mid \tau \ge t\}, \quad Y_t^{\phi+} = \operatorname{span}\{\phi(y(\tau)) \mid \tau \ge t\}. \tag{3}$$
$U_t^{\phi-}$ and $Y_t^{\phi-}$ are defined similarly. These spaces are assumed to be closed with respect to the root-mean-square norm $\|\xi\| := [E\{\xi^2\}]^{1/2}$, where $E\{\cdot\}$ denotes the expectation value, and thus are thought of as Hilbert subspaces of an ambient Hilbert space $H^{\phi} := U^{\phi} \vee Y^{\phi}$ containing all linear functionals of the joint process in the feature spaces $(\phi_u(u), \phi_y(y))$.
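In practice, all of the feature-space quantities above will be handled through Gram matrices of the observed samples rather than through the (possibly infinite-dimensional) maps $\phi$ themselves. The following sketch shows how such Gram matrices might be built for $u$, $y$ and the joint process $w$ with an RBF kernel; it is an illustration, not code from the paper, and the function name, data shapes and width values are my own assumptions.

```python
import numpy as np

def rbf_gram(Z, sigma):
    """Gram matrix G[i, j] = k(z_i, z_j) for an RBF kernel.

    Rows of Z are the samples z_1, ..., z_m. The width convention
    matches the kernel used in Section 5:
    k(z_i, z_j) = exp(-||z_i - z_j||^2 / (2 sigma)).
    """
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T  # squared distances
    return np.exp(-d2 / (2.0 * sigma))

# Example: samples of the input u(t), output y(t), joint w(t)' = [y', u'].
rng = np.random.default_rng(0)
U = rng.standard_normal((200, 1))   # n_u = 1
Y = rng.standard_normal((200, 1))   # n_y = 1
W = np.hstack([Y, U])               # joint process samples
G_u, G_y, G_w = (rbf_gram(Z, 2.5) for Z in (U, Y, W))  # widths illustrative
```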
2.2 Optimal Predictor in Kernel Feature Space

First, we require the following technical assumptions [3,5].

[Figure 1: The optimal predictor $\hat f^{\phi}(t)$ of the future output in feature space, based on $P_t^{\phi} \vee U_t^{\phi+}$; the diagram shows $f^{\phi}(t)$ decomposed into the oblique components $\Theta p^{\phi}(t)$ and $\Psi u_+^{\phi}(t)$.]

Assumption 1. The input $u$ is "exogenous", i.e., there is no feedback from the output $y$ to the input $u$.

Assumption 2. The input process is "sufficiently rich". More precisely, at each time $t$, the input space $U_t$ has the direct sum decomposition $U_t = U_t^- + U_t^+$ ($U_t^- \cap U_t^+ = \{0\}$).

Note that assumption 2 implies that the input process is purely nondeterministic and admits a spectral density matrix without zeros on the unit circle (i.e., coercive). This is too restrictive in many practical situations, and we can instead assume only a persistently exciting (PE) condition of sufficiently high order and finite dimensionality for an underlying "true" system from the outset. Then we can give the following proposition, which enables us to develop a subspace method in feature space, as in the linear case.

Proposition 1. If assumptions 1 and 2 are satisfied, then similar conditions hold in the feature spaces: (1) there is no feedback from $\phi_y(y)$ to $\phi_u(u)$; (2) $U_t^{\phi}$ has the direct sum decomposition $U_t^{\phi} = U_t^{\phi-} + U_t^{\phi+}$ ($U_t^{\phi-} \cap U_t^{\phi+} = \{0\}$).

Proof. Condition (2) is shown straightforwardly from assumption 2 and the properties of reproducing kernel Hilbert spaces. Since $U_t^+ \perp Y_t^- \mid U_t^-$ (derived from assumption 1) and $Y_t^-/(U_t^+ \vee U_t^-) = Y_t^-/U_t^-$ are equivalent, denoting the orthogonal complement of $U_t$ by $U_t^{\perp}$ we obtain $Y_t^- = U_t^- + U_t^{\perp}$. Now, representing $Y_t^{\phi-}$ using the input space in feature space $U_t^{\phi}$ and the orthogonal complement $U_t^{\phi\perp}$, we can write $Y_t^{\phi-} = U_t^{\phi-} + U_t^{\phi\perp}$, because $U_t^{\phi} = U_t^{\phi-} + U_t^{\phi+}$ from condition (2), $U_t^+ \perp U_t^{\perp}$, and owing to the properties of reproducing kernel Hilbert spaces. Therefore, $U_t^{\phi+} \perp Y_t^{\phi-} \mid U_t^{\phi-}$ can be shown by tracing back.

Using proposition 1, we now obtain the following representation result.

Theorem 1. Under assumptions 1 and 2, the optimal predictor $\hat f^{\phi}(t)$ of the future output vector in feature space $f^{\phi}(t)$, based on $P_t^{\phi} \vee U_t^{\phi+}$, is uniquely given by the sum of the oblique projections
$$\hat f^{\phi}(t) = f^{\phi}(t)/(P_t^{\phi} \vee U_t^{\phi+}) = \Theta\, p^{\phi}(t) + \Psi\, u_+^{\phi}(t), \tag{4}$$
in which $\Theta$ and $\Psi$ satisfy the discrete Wiener-Hopf-type equations
$$\Theta\, \Sigma_{\phi p \phi p|\phi u} = \Sigma_{\phi f \phi p|\phi u}, \qquad \Psi\, \Sigma_{\phi u \phi u|\phi p} = \Sigma_{\phi f \phi u|\phi p}. \tag{5}$$

Proof. From proposition 1, the proof can be carried out as in the linear case (cf. [3,5]).

2.3 Construction of the State Vector

Let $L_f$, $L_p$ be the square root matrices of $\Sigma_{\phi f \phi f|\phi u}$, $\Sigma_{\phi p \phi p|\phi u}$, i.e., $\Sigma_{\phi f \phi f|\phi u} = L_f L_f'$ and $\Sigma_{\phi p \phi p|\phi u} = L_p L_p'$, and assume that the SVD of the normalized conditional covariance is given by
$$L_f^{-1}\, \Sigma_{\phi f \phi p|\phi u}\, (L_p')^{-1} = U S V', \tag{6}$$
where $S \in \mathbb{R}^{l n_{\phi y} \times n_{\phi p}}$ has all entries zero except on the leading diagonal, whose entries $\sigma_i$ satisfy $\sigma_1 \ge \dots \ge \sigma_n > 0$ for $n = \min(l n_{\phi y}, n_{\phi p})$, and $U$, $V$ are square orthogonal. We define the extended observability and controllability matrices
$$O := L_f U S^{1/2}, \qquad C := S^{1/2} V' L_p', \tag{7}$$
where $\operatorname{rank}(O) = \operatorname{rank}(C) = n$. Then, from the SVD of Eq. (6), the block Hankel matrix $\Sigma_{\phi f \phi p|\phi u}$ has the classical rank factorization $\Sigma_{\phi f \phi p|\phi u} = OC$. If a "state vector" is now defined to be the $n$-dimensional vector
$$x(t) = C\, \Sigma_{\phi p \phi p|\phi u}^{-1}\, p^{\phi}(t) = S^{1/2} V' L_p^{-1} p^{\phi}(t), \tag{8}$$
it is readily seen that $x(t)$ is a basis for the stationary oblique predictor space $X_t := Y_t^{\phi+} /_{U_t^{\phi+}} P_t^{\phi}$,
which, on the basis of general geometric principles, can be shown to be a minimal state space for the process $\phi_y(y)$, as in the linear case [3,5]. This is also assured by the fact that the oblique projection of $f^{\phi}(t)$ onto $P_t^{\phi}$ along $U_t^{\phi+}$ can be expressed, using Eqs. (5), (7) and (8), as
$$f^{\phi}(t)/_{U_t^{\phi+}} P_t^{\phi} = \Theta\, p^{\phi}(t) = \Sigma_{\phi f \phi p|\phi u}\, \Sigma_{\phi p \phi p|\phi u}^{-1}\, p^{\phi}(t) = O x(t), \tag{9}$$
with $\operatorname{rank}(O) = n$, and the variance matrix of $x(t)$ is nonsingular. In terms of $x(t)$, the optimal predictor $\hat f^{\phi}(t)$ in Eq. (4) has the form
$$\hat f^{\phi}(t) = O x(t) + \Psi\, u_+^{\phi}(t). \tag{10}$$
It is seen that $x(t)$ is a conditional minimal sufficient statistic carrying exactly all the information contained in $P_t^{\phi}$ that is necessary for estimating the future outputs, given the future inputs. In analogy with the linear case [3,5], the output process in feature space $\phi_y(y(t))$ now admits a minimal stochastic realization with the state vector $x(t)$ of the form
$$x(t+1) = A^{\phi} x(t) + B^{\phi} \phi_u(u(t)) + K^{\phi} e(t),$$
$$\phi_y(y(t)) = C^{\phi} x(t) + D^{\phi} \phi_u(u(t)) + e(t), \tag{11}$$
where $A^{\phi} \in \mathbb{R}^{n \times n}$, $B^{\phi} \in \mathbb{R}^{n \times n_{\phi u}}$, $C^{\phi} \in \mathbb{R}^{n_{\phi y} \times n}$, $D^{\phi} \in \mathbb{R}^{n_{\phi y} \times n_{\phi u}}$ and $K^{\phi} \in \mathbb{R}^{n \times n_{\phi y}}$ are constant matrices and $e(t) := \phi_y(y(t)) - (\phi_y(y(t)) \mid P_t^{\phi} \vee U_t^{\phi})$ is the prediction error.

2.4 Preimage

The state-space model (11), derived in the previous section, represents the output in feature space, $\phi_y(y(t))$; in this section we describe a state-space model for the output $y(t)$ itself. First, we define the feature maps $\phi_x : X_t \to F_x \subset \mathbb{R}^{n_{\phi x}}$, $\phi_{\bar u} : U_t \to F_{\bar u} \subset \mathbb{R}^{n_{\phi \bar u}}$ and the linear spaces $X_t^{\phi}$, $\bar U_t^{\phi}$ generated by $\phi_x(x(t))$, $\phi_{\bar u}(u(t))$. Then the intersection of $X_t^{\phi}$ and $\bar U_t^{\phi}$ satisfies $X_t^{\phi} \cap \bar U_t^{\phi} = 0$, because $X_t \cap U_t^{\phi} = 0$ and $\phi_x$, $\phi_{\bar u}$ are bijective. Therefore, the output $y(t)$ is represented as the direct sum of the oblique projections
$$y(t)/(X_t^{\phi} \vee \bar U_t^{\phi}) = \tilde C^{\phi} \phi_x(x(t)) + \tilde D^{\phi} \phi_{\bar u}(u(t)). \tag{12}$$
As a result, we obtain the following theorem.

Theorem 2. Under assumptions 1 and 2, if $\operatorname{rank}\, \Sigma_{\phi f \phi p|\phi u} = n$, then the output $y$ can be represented in the following state-space model:
$$x(t+1) = A^{\phi} x(t) + B^{\phi} \phi_u(u(t)) + \tilde K^{\phi} \phi_e(\tilde e(t)),$$
$$y(t) = \tilde C^{\phi} \phi_x(x(t)) + \tilde D^{\phi} \phi_{\bar u}(u(t)) + \tilde e(t), \tag{13}$$
where $\tilde e(t) := y(t) - y(t)/(X_t^{\phi} \vee \bar U_t^{\phi})$ is the prediction error and $\tilde K^{\phi} := K^{\phi} A_{\tilde e}$, in which $A_{\tilde e}$ is the coefficient matrix of the nonlinear regression from $\tilde e(t)$ to $e(t)$.(1)

(1) Let $f$ be a map from $\tilde e$ to $e$ that minimizes a regularized risk $c((\tilde e_1, e_1, f(\tilde e_1)), \dots, (\tilde e_m, e_m, f(\tilde e_m))) + \Omega(\|f\|_H)$, where $\Omega : [0, \infty) \to \mathbb{R}$ is a strictly monotonically increasing function and $c : (\tilde E \times \mathbb{R}^2)^m \to \mathbb{R} \cup \{\infty\}$ ($\tilde E \subset \operatorname{span}\{\tilde e\}$) is an arbitrary loss function; then, from the representer theorem [9], $f$ satisfies $f \in \operatorname{span}\{\phi_e(\tilde e(t))\}$, where $\phi_e$ is a feature map with the associated Mercer kernel $k_e$. Therefore, we can represent the nonlinear regression from $\tilde e(t)$ to $e(t)$ as $A_{\tilde e}\, \phi_e(\tilde e(t))$.

3 Approximations

3.1 Realization with Finite Data

In practice, the state vector and the associated state-space model should be constructed from available finite data. Let the past vector $p^{\phi}(t)$ be truncated to finite length, i.e., $p_T^{\phi}(t) := [\phi_w(w(t-1))', \phi_w(w(t-2))', \dots, \phi_w(w(t-T))']' \in \mathbb{R}^{T(n_{\phi y} + n_{\phi u})}$, where $T > 0$, and define $P^{\phi}_{[t-T,t)} := \operatorname{span}\{p_T^{\phi}(\tau) \mid \tau < t\}$. Then the following theorem describes the construction of the state vector and the corresponding state-space system which form the finite-memory predictor $\hat f_T^{\phi}(t) := f^{\phi}(t)/(U_t^{\phi+} \vee P^{\phi}_{[t-T,t)})$.

Theorem 3. Under assumptions 1 and 2, if $\operatorname{rank}(\Sigma_{\phi f \phi p|\phi u}) = n$, then the process $\phi_y(y)$ is expressed by the following nonstationary state-space model:
$$\hat x_T(t+1) = A^{\phi} \hat x_T(t) + B^{\phi} \phi_u(u(t)) + K^{\phi}(t)\, \hat e_T(t),$$
$$\phi_y(y(t)) = C^{\phi} \hat x_T(t) + D^{\phi} \phi_u(u(t)) + \hat e_T(t), \tag{14}$$
where the state vector $\hat x_T(t)$ is a basis for the finite-memory predictor space $Y_t^{\phi+} /_{U_t^{\phi+}} P^{\phi}_{[t-T,t)}$, and $\hat e_T(t) := \phi_y(y(t)) - (\phi_y(y(t)) \mid P^{\phi}_{[t-T,t)} \vee U_t^{\phi+})$ is the prediction error.
The proof can be carried out as in the linear case (cf. [3,5]). In other words, we can obtain the approximated state vector $\hat x_T$ by applying the results of Section 2 to finite data. This state vector differs from $x(t)$ in Eq. (8); however, as $T \to \infty$, the difference between $\hat x_T(t)$ and $x(t)$ converges to zero, and the covariance matrix of the estimation error $P^{\phi}$ converges to the stabilizing solution of the following algebraic Riccati equation (ARE):
$$P^{\phi} = A^{\phi} P^{\phi} A^{\phi\prime} + \bar\Sigma_{\phi w} \bar\Sigma_{\phi w}' - (A^{\phi} P^{\phi} C^{\phi\prime} + \bar\Sigma_{\phi w} \bar\Sigma_{\phi w}')(C^{\phi} P^{\phi} C^{\phi\prime} + \bar\Sigma_{\phi e} \bar\Sigma_{\phi e}')^{-1} (A^{\phi} P^{\phi} C^{\phi\prime} + \bar\Sigma_{\phi w} \bar\Sigma_{\phi w}')'. \tag{15}$$
Moreover, the Kalman gain $K^{\phi}$ converges to
$$K^{\phi} = (A^{\phi} P^{\phi} C^{\phi\prime} + \bar\Sigma_{\phi w} \bar\Sigma_{\phi w}')(C^{\phi} P^{\phi} C^{\phi\prime} + \bar\Sigma_{\phi e} \bar\Sigma_{\phi e}')^{-1}, \tag{16}$$
where $\bar\Sigma_{\phi w}$ and $\bar\Sigma_{\phi e}$ are the covariance matrices of the errors in the state and observation equations, respectively.

3.2 Using Kernel Principal Components

Let $z$ be a random variable, $k_z$ a Mercer kernel with feature map $\phi_z$ and feature space $F_z$, and denote $\Phi_z := [\phi_z(z_1), \dots, \phi_z(z_m)]'$ and the associated Gram matrix $G_z := \Phi_z \Phi_z'$. The first $d_z$ principal components $u_{z,i} \in L\{\Phi_z'\}$ ($i = 1, \dots, d_z$), combined in a matrix $U_z = [u_{z,1}, \dots, u_{z,d_z}]$, form an orthonormal basis of a $d_z$-dimensional subspace $L\{U_z\} \subset L\{\Phi_z'\}$, and can therefore also be described as the linear combination $U_z = \Phi_z' A_z$, where the matrix $A_z \in \mathbb{R}^{m \times d_z}$ holds the expansion coefficients. $A_z$ is found by, for example, the eigendecomposition $G_z = \Gamma_z \Lambda_z \Gamma_z'$, such that $A_z$ consists of the first $d_z$ columns of $\Gamma_z \Lambda_z^{-1/2}$. Then, $\Phi_z$ with respect to the principal components is given by $C_z := \Phi_z U_z = \Phi_z \Phi_z' A_z = G_z A_z$ [11]. From the orthogonality of $\Gamma_z$ (i.e., $\Gamma_z' \Gamma_z = \Gamma_z \Gamma_z' = I_m$), we can derive the following equation:
$$(A_z' G_z G_z A_z)^{-1} = \big((\Gamma_z \Lambda_{z,d}^{-1/2})' (\Gamma_z \Lambda_z \Gamma_z')(\Gamma_z \Lambda_z \Gamma_z')(\Gamma_z \Lambda_{z,d}^{-1/2})\big)^{-1} = \bar A_z' G_z^{-1} G_z^{-1} \bar A_z, \tag{17}$$
where $\Lambda_{z,d}$ is the matrix which consists of the first $d_z$ columns of $\Lambda_z$, and $\bar A_z := \Gamma_z \Lambda_{z,d}^{1/2}$ satisfies $\bar A_z' A_z = A_z' \bar A_z = I_{d_z}$ and $\bar A_z A_z' = A_z \bar A_z' = I_m$.

This property of kernel principal components enables us to approximate the quantities of the previous sections in computable form. First, using Eq. (17), the conditional covariance matrix $\Sigma_{\phi f \phi f|\phi u}$ can be expressed as
$$\Sigma_{\phi f \phi f|\phi u} = \Sigma_{\phi f \phi f} - \Sigma_{\phi f \phi u} \Sigma_{\phi u \phi u}^{-1} \Sigma_{\phi u \phi f} \approx A_f' G_f G_f A_f - (A_f' G_f G_u A_u)(A_u' G_u G_u A_u)^{-1}(A_u' G_u G_f A_f)$$
$$= A_f' \big(G_f G_f - G_f G_u (G_u G_u)^{-1} G_u G_f\big) A_f \;\; (:= A_f' \hat\Sigma_{ff|u} A_f), \tag{18}$$
where $\hat\Sigma_{ff|u}$ may be called the empirical conditional covariance operator, and a regularized variant can be obtained by replacing $G_f G_f$, $G_u G_u$ with $(G_f + \epsilon I_m)^2$, $(G_u + \epsilon I_m)^2$ ($\epsilon > 0$) (cf. [12,13]). $\Sigma_{\phi p \phi p|\phi u}$ and $\Sigma_{\phi f \phi p|\phi u}$ can be approximated in the same way. Moreover, using $L_\xi^{-1} = \hat L_\xi^{-1} \bar A_\xi$, where $\hat L_\xi$ is the square root matrix of $\hat\Sigma_{\xi\xi|u}$ ($\xi = p, f$),(2) we can represent Eqs. (6) and (8) approximately as
$$L_f^{-1}\, \Sigma_{\phi f \phi p|\phi u}\, (L_p')^{-1} \approx (\hat L_f^{-1} \bar A_f)(A_f' \hat\Sigma_{fp|u} A_p)(\bar A_p' (\hat L_p')^{-1}) = \hat L_f^{-1} \hat\Sigma_{fp|u} (\hat L_p')^{-1} = \hat U \hat S \hat V', \tag{19}$$
$$x(t) = S^{1/2} V' L_p^{-1} p^{\phi}(t) \approx \hat S^{1/2} \hat V' (\hat L_p^{-1} \bar A_p)(A_p'\, k(p(t))) = \hat S^{1/2} \hat V' \hat L_p^{-1}\, k(p(t)), \tag{20}$$
where $k(p(t)) := \Phi_p\, p^{\phi}(t) = [k_p(p_1(t), p(t)), \dots, k_p(p_m(t), p(t))]'$. In addition, we can apply this approximation with the kernel PCA to the state-space models derived in the previous sections.

(2) This is given by $(L_\xi^{-1})' L_\xi^{-1} = \Sigma_{\phi\xi\phi\xi|\phi u}^{-1} \approx (A_\xi' \hat\Sigma_{\xi\xi|u} A_\xi)^{-1} = \bar A_\xi' \hat\Sigma_{\xi\xi|u}^{-1} \bar A_\xi = (\hat L_\xi^{-1} \bar A_\xi)' (\hat L_\xi^{-1} \bar A_\xi)$.
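The expansion coefficients $A_z$ above are the one feature-space quantity the final algorithm needs explicitly. A possible NumPy computation, following the eigendecomposition recipe in the text ($A_z$ as the first $d_z$ columns of $\Gamma_z \Lambda_z^{-1/2}$), is sketched below; the function name and the numerical floor on the eigenvalues are my own choices, not the paper's.

```python
import numpy as np

def kpca_coefficients(G, d):
    """Expansion coefficients A_z of the leading d kernel principal
    components, so that U_z = Phi_z' A_z (Section 3.2).

    G is an (m x m) Gram matrix, assumed centered. A_z consists of the
    first d columns of Gamma_z Lambda_z^{-1/2}, where
    G = Gamma_z Lambda_z Gamma_z' is the eigendecomposition.
    """
    lam, gamma = np.linalg.eigh(G)              # ascending eigenvalues
    order = np.argsort(lam)[::-1][:d]           # keep the d largest
    lam, gamma = lam[order], gamma[:, order]
    return gamma / np.sqrt(np.maximum(lam, 1e-12))  # Gamma Lambda^{-1/2}
```

With this convention, the projections of the samples onto the components are simply `G @ kpca_coefficients(G, d)`, i.e., $C_z = G_z A_z$ as in the text.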
First, Eq. (11) can be approximated as
$$x(t+1) = A^{\phi} x(t) + B^{\phi} A_u'\, k_u(u(t)) + K^{\phi} e(t),$$
$$A_y'\, k_y(y(t)) = C^{\phi} x(t) + D^{\phi} A_u'\, k_u(u(t)) + e(t), \tag{21}$$
where $A_u$ and $A_y$ are the expansion coefficient matrices found by the eigendecomposition of $G_u$ and $G_y$, respectively. Also, using the coefficient matrices $A_x$, $A_e$ and $A_{\bar u}$, Eq. (13) can be written as
$$x(t+1) = A^{\phi} x(t) + B^{\phi} A_u'\, k_u(u(t)) + \tilde K^{\phi} A_e'\, k_e(\tilde e(t)),$$
$$y(t) = \tilde C^{\phi} A_x'\, k_x(x(t)) + \tilde D^{\phi} A_{\bar u}'\, k_{\bar u}(u(t)) + \tilde e(t). \tag{22}$$

4 Algorithm

In this section, we give a subspace identification algorithm based on the preceding discussion. Denote the finite input-output data by $\{u(t), y(t),\ t = 1, 2, \dots, N + 2l - 1\}$, where $l > 0$ is an integer larger than the dimension $n$ of the system and $N$ is a sufficiently large integer, and assume that all data is centered. First, using the Gram matrices $G_u$, $G_y$ and $G_w$ associated with the input, the output and the joint input-output, respectively, we must calculate the Gram matrices $G_U$, $G_Y$ and $G_W$ corresponding to the future input, the future output and the past input-output, defined entrywise by
$$(G_U)_{rs} := \sum_{i=l+1}^{2l} G_{u,(i+r-1)(i+s-1)}, \qquad r, s = 1, \dots, N, \tag{23}$$
$$(G_W)_{rs} := \sum_{i=1}^{l} G_{w,(i+r-1)(i+s-1)}, \qquad r, s = 1, \dots, N, \tag{24}$$
and $G_Y$ is defined analogously to $G_U$ (with $G_y$ in place of $G_u$). The procedure is then as follows.

Step 1. Calculate the regularized empirical covariance operators and their square root matrices:
$$\hat\Sigma_{ff|u} = (G_Y + \epsilon I_N)^2 - G_Y G_U (G_U + \epsilon I_N)^{-2} G_U G_Y = \hat L_f \hat L_f',$$
$$\hat\Sigma_{pp|u} = (G_W + \epsilon I_N)^2 - G_W G_U (G_U + \epsilon I_N)^{-2} G_U G_W = \hat L_p \hat L_p', \tag{25}$$
$$\hat\Sigma_{fp|u} = G_Y G_W - G_Y G_U (G_U + \epsilon I_N)^{-2} G_U G_W.$$

Step 2. Calculate the SVD of the normalized covariance matrix (cf. Eq. (19)),
$$\hat L_f^{-1}\, \hat\Sigma_{fp|u}\, (\hat L_p')^{-1} = \hat U \hat S \hat V' \approx \hat U_1 \hat S_1 \hat V_1', \tag{26}$$
where $\hat S_1$ is obtained by neglecting the small singular values, so that the dimension $n$ of the state vector equals the dimension of $\hat S_1$.

Step 3. Estimate the state sequence as (cf. Eq. (20))
$$\hat X_l := [x(l), x(l+1), \dots, x(l+N-1)] = \hat S_1^{1/2} \hat V_1' \hat L_p^{-1} G_W, \tag{27}$$
and define the following matrices consisting of $N-1$ columns:
$$\bar X_{l+1} = \hat X_l(:, 2\!:\!N), \qquad \bar X_l = \hat X_l(:, 1\!:\!N-1). \tag{28}$$

Step 4. Calculate the eigendecompositions of the Gram matrices $G_u$, $G_{\bar u}$, $G_y$ and $G_x$ and the corresponding expansion coefficient matrices $A_u$, $A_{\bar u}$, $A_y$ and $A_x$. Then determine the system matrices $A^{\phi}$, $B^{\phi}$, $C^{\phi}$, $D^{\phi}$, $\tilde C^{\phi}$ and $\tilde D^{\phi}$ by applying regularized least squares regression to the following equations (cf. Eqs. (21) and (22)):
$$\begin{bmatrix} \bar X_{l+1} \\ A_y' G_y(:, 2\!:\!N) \end{bmatrix} = \begin{bmatrix} A^{\phi} & B^{\phi} \\ C^{\phi} & D^{\phi} \end{bmatrix} \begin{bmatrix} \bar X_l \\ A_u' G_u(:, 1\!:\!N-1) \end{bmatrix} + \begin{bmatrix} \rho_w \\ \rho_e \end{bmatrix}, \tag{29}$$
$$Y_{l|l} = \tilde C^{\phi} \big(A_x' G_x(:, 2\!:\!N)\big) + \tilde D^{\phi} \big(A_{\bar u}' G_{\bar u}(:, 2\!:\!N)\big) + \tilde\rho_e, \tag{30}$$
where $\rho_w$, $\rho_e$ and $\tilde\rho_e$ are the residuals.

Step 5. Calculate the covariance matrices of the residuals,
$$\begin{bmatrix} \hat\Sigma_w & \hat\Sigma_{we} \\ \hat\Sigma_{ew} & \hat\Sigma_e \end{bmatrix} = \frac{1}{N-1} \begin{bmatrix} \rho_w \\ \rho_e \end{bmatrix} \begin{bmatrix} \rho_w' & \rho_e' \end{bmatrix}, \tag{31}$$
solve the ARE (15) and, using the stabilizing solution, calculate the Kalman gain $K^{\phi}$ in Eq. (16).
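For concreteness, Steps 1-3 might look as follows in NumPy/SciPy. This is a sketch under my reading of Eqs. (25)-(27), not the authors' implementation; explicit matrix inverses are used for clarity where a production version would use linear solves or Cholesky factors.

```python
import numpy as np
from scipy.linalg import sqrtm, svd

def nsid_state_sequence(G_Y, G_U, G_W, n, eps=0.05):
    """Steps 1-3 of the identification procedure (illustrative sketch).

    G_Y, G_U, G_W are the (N x N) block Gram matrices of Eqs. (23)-(24);
    n is the chosen state dimension and eps the regularization degree.
    Returns the estimated state sequence X of Eq. (27), shape (n, N).
    """
    N = G_Y.shape[0]
    I = np.eye(N)
    R = np.linalg.inv((G_U + eps * I) @ (G_U + eps * I))   # (G_U + eps I)^-2
    Sff = (G_Y + eps * I) @ (G_Y + eps * I) - G_Y @ G_U @ R @ G_U @ G_Y
    Spp = (G_W + eps * I) @ (G_W + eps * I) - G_W @ G_U @ R @ G_U @ G_W
    Sfp = G_Y @ G_W - G_Y @ G_U @ R @ G_U @ G_W            # Eq. (25)

    Lf = np.real(sqrtm(Sff))                               # square roots
    Lp = np.real(sqrtm(Spp))
    M = np.linalg.solve(Lf, Sfp) @ np.linalg.inv(Lp).T     # Lf^-1 Sfp Lp'^-1
    Uh, s, Vh = svd(M)                                     # Eq. (26)
    S1, V1 = np.diag(s[:n]), Vh[:n].T                      # keep n values
    X = np.sqrt(S1) @ V1.T @ np.linalg.inv(Lp) @ G_W       # Eq. (27)
    return X
```

Step 4 then reduces to two ridge regressions on the columns of `X` and the raw Gram matrices, and Step 5 to solving the ARE (15), e.g., with `scipy.linalg.solve_discrete_are`.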
5 Simulation Result

In this section, we illustrate the proposed algorithm for learning nonlinear dynamical systems with synthetic data. The data was generated by simulating the following system [8] using the 4th- and 5th-order Runge-Kutta method with a sampling time of 0.05 seconds:
$$\dot x_1(t) = x_2(t) - 0.1 \cos(x_1(t))\,\big(5x_1(t) - 4x_1^3(t) + x_1^5(t)\big) - 0.5 \cos(x_1(t))\,u(t),$$
$$\dot x_2(t) = -65x_1(t) + 50x_1^3(t) - 15x_1^5(t) - x_2(t) - 100u(t), \tag{32}$$
$$y(t) = x_1(t),$$
where the input was a zero-order-hold white noise signal uniformly distributed between $-0.5$ and $0.5$. We applied our algorithm to a set of 600 data points, and then validated the obtained model on a fresh data set of 400 points. As the kernel function, we used the Gaussian RBF kernel $k(z_i, z_j) = \exp(-\|z_i - z_j\|^2 / 2\sigma_z)$. The parameters to be tuned for our method are thus the kernel widths $\sigma$ for $u$, $y$, $w$ and $x$, the regularization degree $\epsilon$, and the row-block number $l$ of the Hankel matrix. In addition, we must select the order of the system and the number of kernel principal components $n^{pc}$ for $u$, $y$ and $\tilde e$.

Figure 2 shows free-run simulation results of the model acquired by our algorithm, with the parameters set to $\sigma_u = 2.5$, $\sigma_y = 3.5$, $\sigma_w = 4.5$, $\sigma_x = 1.0$, $n^{pc}_u = n^{pc}_y = 4$, $n^{pc}_x = 9$ and $\epsilon = 0.05$, and, for comparison, by the linear subspace identification method [5]. The row-block number $l$ was set to 10 in both identifications. The simulation errors [2],
$$\varepsilon = \frac{100}{n_y} \sum_{c=1}^{n_y} \sqrt{\frac{\sum_{i=1}^{m} \big((y_i)_c - (y_i^s)_c\big)^2}{\sum_{j=1}^{m} \big((y_j)_c\big)^2}}, \tag{33}$$
where $y_i^s$ are the simulated values and the initial state is a least squares estimate obtained from the first few points, improved to 40.2 for our algorithm from 44.1 for the linear method; the accuracy improved by about 10 percent. The system order in this case is 8 for our algorithm, while it is 10 for the linear method. We can see that our method estimates the state sequence with more information and yields a model that captures the dynamics more precisely. However, the parameters required considerable time and effort to tune.

[Figure 2: Comparison of simulated outputs over the 400 validation points. Left: kernel subspace identification (proposed method). Right: linear subspace identification [5]. Broken lines represent the observations; solid lines represent the simulated values.]
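As a rough illustration of this experimental setup (not code from the paper), the benchmark system of Eq. (32) and the error measure of Eq. (33) might be reproduced as follows. The zero-order-hold handling and the use of SciPy's RK45 integrator are assumptions about details the text leaves open.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_system(u, dt=0.05):
    """Integrate Eq. (32) under a zero-order-hold input sequence u,
    sampled every dt seconds; returns the output samples y = x1."""
    def f(t, x, uk):
        x1, x2 = x
        dx1 = (x2 - 0.1 * np.cos(x1) * (5*x1 - 4*x1**3 + x1**5)
               - 0.5 * np.cos(x1) * uk)
        dx2 = -65*x1 + 50*x1**3 - 15*x1**5 - x2 - 100*uk
        return [dx1, dx2]

    x, ys = np.zeros(2), []
    for uk in u:                                  # input held over each step
        sol = solve_ivp(f, (0.0, dt), x, args=(uk,), method="RK45")
        x = sol.y[:, -1]
        ys.append(x[0])                           # y(t) = x1(t)
    return np.asarray(ys)

def simulation_error(y_obs, y_sim):
    """Error of Eq. (33) for a scalar output (n_y = 1)."""
    return 100.0 * np.sqrt(np.sum((y_obs - y_sim)**2) / np.sum(y_obs**2))

u = np.random.default_rng(0).uniform(-0.5, 0.5, size=1000)  # ZOH white noise
y = simulate_system(u)            # first 600 points: training; rest: test
```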
6 Conclusion

A new subspace method for learning nonlinear dynamical systems using reproducing kernel Hilbert spaces has been proposed. This approach is based on approximate solutions of two discrete Wiener-Hopf equations via covariance factorization in kernel feature spaces. The algorithm needs no iterative optimization procedures, and hence solutions can be obtained in a fast and reliable manner. The comparative empirical results showed the high performance of our method; however, the parameters required considerable time and effort to tune. In future work, we will develop the idea for closed-loop systems, for the identification of more realistic applications. Moreover, it should be possible to extend other established subspace identification methods to nonlinear frameworks as well.

Acknowledgments

The present research was supported in part through the 21st Century COE Program, "Mechanical Systems Innovation," by the Ministry of Education, Culture, Sports, Science and Technology.

References

[1] Roweis, S. & Ghahramani, Z. (1999) "A Unifying Review of Linear Gaussian Models." Neural Computation, 11 (2): 305-345.
[2] Van Overschee, P. & De Moor, B. (1996) "Subspace Identification for Linear Systems: Theory, Implementation, Applications." Kluwer Academic Publishers, Dordrecht, Netherlands.
[3] Katayama, T. (2005) "Subspace Methods for System Identification: A Realization Approach." Communications and Control Engineering, Springer Verlag.
[4] Moonen, M., De Moor, B., Vandenberghe, L. & Vandewalle, J. (1989) "On- and Off-line Identification of Linear State Space Models." International Journal of Control, 49 (1): 219-232.
[5] Katayama, T. & Picci, G. (1999) "Realization of Stochastic Systems with Exogenous Inputs and Subspace Identification Methods." Automatica, 35 (10): 1635-1652.
[6] Goethals, I., Pelckmans, K., Suykens, J. A. K. & De Moor, B. (2005) "Subspace Identification of Hammerstein Systems Using Least Squares Support Vector Machines." IEEE Trans. on Automatic Control, 50 (10): 1509-1519.
[7] Ni, X., Verhaegen, M., Krijgsman, A. & Verbruggen, H. B. (1996) "A New Method for Identification and Control of Nonlinear Dynamic Systems." Engineering Applications of Artificial Intelligence, 9 (3): 231-243.
[8] Verdult, V., Suykens, J. A. K., Boets, J., Goethals, I. & De Moor, B. (2004) "Least Squares Support Vector Machines for Kernel CCA in Nonlinear State-Space Identification." Proceedings of the 16th International Symposium on Mathematical Theory of Networks and Systems (MTNS2004).
[9] Schölkopf, B. & Smola, A. (2002) "Learning with Kernels." MIT Press.
[10] Rozanov, N. I. (1963) "Stationary Random Processes." Holden-Day, San Francisco, CA.
[11] Kuss, M. & Graepel, T. (2003) "The Geometry of Kernel Canonical Correlation Analysis." Technical Report 108, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
[12] Bach, F. R. & Jordan, M. I. (2002) "Kernel Independent Component Analysis." Journal of Machine Learning Research (JMLR), 3: 1-48.
[13] Fukumizu, K., Bach, F. R. & Jordan, M. I. (2004) "Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces." Journal of Machine Learning Research (JMLR), 5: 73-99.
Nonnegative Sparse PCA

Ron Zass and Amnon Shashua
School of Engineering and Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel

Abstract

We describe a nonnegative variant of the "Sparse PCA" problem. The goal is to create a low dimensional representation from a collection of points which on the one hand maximizes the variance of the projected points and on the other uses only parts of the original coordinates, thereby creating a sparse representation. What distinguishes our problem from other Sparse PCA formulations is that the projection involves only nonnegative weights of the original coordinates, a desired quality in various fields, including economics, bioinformatics and computer vision. Adding nonnegativity contributes to sparseness, where it enforces a partitioning of the original coordinates among the new axes. We describe a simple yet efficient iterative coordinate-descent type of scheme which converges to a local optimum of our optimization criterion, giving good results on large real world datasets.

1 Introduction

Both nonnegative and sparse decompositions of data are desirable in domains where the underlying factors have a physical interpretation. In economics, sparseness increases the efficiency of a portfolio, while nonnegativity both increases its efficiency and reduces its risk [7]. In biology, each coordinate axis may correspond to a specific gene; sparseness is necessary for finding focalized local patterns hidden in the data, and nonnegativity is required due to the robustness of biological systems, where an observed change in the expression level of a specific gene emerges from either positive or negative influence, rather than a combination of both which would partly cancel each other [1]. In computer vision, coordinates may correspond to pixels, and nonnegative sparse decomposition is related to the extraction of relevant parts from images [10]. In machine learning, sparseness is closely related to feature selection and to improved generalization in learning algorithms, while nonnegativity relates to probability distributions.

Principal Component Analysis (PCA) is a popular, widespread method of data decomposition with applications throughout science and engineering. The decomposition performed by PCA is a linear combination of the input coordinates, where the coefficients of the combination (the principal vectors) form a low-dimensional subspace that corresponds to the directions of maximal variance in the data. PCA is attractive for a number of reasons. The maximum variance property provides a way to compress the data with minimal information loss; in fact, the principal vectors provide the closest (in the least squares sense) linear subspace to the data. Second, the representation of the data in the projected space is uncorrelated, which is a useful property for subsequent statistical analysis. Third, the PCA decomposition can be achieved via an eigenvalue decomposition of the data covariance matrix.

Two particular drawbacks of PCA are the lack of sparseness of the principal vectors, i.e., all the data coordinates participate in the linear combination, and the fact that the linear combination may mix both positive and negative weights, which might partly cancel each other. The purpose of our work is to incorporate both nonnegativity and sparseness into PCA, while maintaining the maximal variance property. In other words, the goal is to find a collection of sparse nonnegative principal
vectors spanning a low-dimensional space that preserves as much as possible of the variance of the data. We present an efficient and simple algorithm for Nonnegative Sparse PCA, and demonstrate good results on real world datasets.

1.1 Related Work

The desire to add a sparseness property to PCA has been a focus of attention over the past decade, starting from the work of [8], which applied axis rotations and component thresholding, to the more recent computational techniques: the SCoTLASS $L_1$-norm approach [9], the elastic net $L_1$ regression of SPCA [14], DSPCA, based on relaxing a hard cardinality cap constraint with a convex approximation [2], and most recently the work of [12], which applies post-processing renormalization steps to improve any approximate solution, in addition to two different algorithms that search for the active coordinates of the principal component based on spectral bounds. These references can be divided into two paradigms: (i) adding $L_1$-norm terms to the PCA formulation, as it is known that $L_1$ approximates $L_0$ much better than $L_2$; (ii) relaxing a hard cardinality ($L_0$-norm) constraint on the principal vectors. In both cases the orthonormality of the principal vector set is severely compromised or even abandoned, and it is left unclear to what degree the resulting principal basis explains most of the variance present in the data.

While the above methods do not deal with nonnegativity at all, other approaches focus on nonnegativity but are neutral to the variance of the resulting factors, and hence recover parts which are not necessarily informative. A popular example is Nonnegative Matrix Factorization (NMF) [10] and its sparse versions [6, 11, 5, 4], which seek the best reconstruction of the input using nonnegative (sparse) prototypes and weights.

We start by adding nonnegativity to PCA. An interesting direct byproduct of nonnegativity in PCA is that the coordinates split among the principal vectors. This makes the principal vectors disjoint, where each coordinate is non-zero in at most one vector. We can therefore view the principal vectors as parts. We then relax the disjoint property, as for most applications some overlap among parts is desired, allowing some overlap among the principal vectors. We further introduce a "sparseness" term to the optimization criterion to cover situations where the part (or semi-part) decomposition is not sufficient to guarantee sparsity (such as when the dimension of the input space far exceeds the number of principal vectors).

The structure of the paper is as follows: in Sections 2 and 3 we introduce the formulation of Nonnegative Sparse PCA. An efficient coordinate-descent algorithm for finding a local optimum is derived in Section 4. Our experiments in Section 5 demonstrate the effectiveness of the approach on large real-world datasets, followed by conclusions in Section 6.

2 Nonnegative (Semi-Disjoint) PCA

To the original PCA objective, which maximizes the variance, we add nonnegativity, showing that this addition alone ensures some sparseness by turning the principal vectors into a disjoint set of vectors, meaning that each coordinate is non-zero in at most one principal vector. We will later relax the disjoint property, as it is too restrictive for most applications. Let $x_1, \dots, x_n \in \mathbb{R}^d$ be a zero-mean collection of data points, arranged as the columns of the matrix $X \in \mathbb{R}^{d \times n}$, and let $u_1, \dots, u_k \in \mathbb{R}^d$ be the desired principal vectors, arranged as the columns of the matrix $U \in \mathbb{R}^{d \times k}$.
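For orientation, the unconstrained baseline that the following formulations modify is ordinary PCA, which (as noted in the introduction) reduces to an eigendecomposition of the data covariance $XX^T$. A minimal sketch, with the function name being my own:

```python
import numpy as np

def pca_baseline(X, k):
    """Standard PCA for reference: the k leading eigenvectors of the
    covariance X X^T, where X is d x n and assumed zero-mean. These are
    the unconstrained maximizers of (1/2)||U^T X||_F^2 with U^T U = I."""
    evals, evecs = np.linalg.eigh(X @ X.T)
    return evecs[:, np.argsort(evals)[::-1][:k]]   # d x k matrix U
```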
Adding a nonnegativity constraint to PCA gives the following optimization problem:
$$\max_{U}\; \tfrac{1}{2}\|U^T X\|_F^2 \quad \text{s.t.} \quad U^T U = I,\; U \ge 0, \tag{1}$$
where $\|A\|_F^2 = \sum_{ij} a_{ij}^2$ is the squared Frobenius norm. Clearly, the combination of $U^T U = I$ and $U \ge 0$ entails that $U$ is disjoint, meaning that each row of $U$ contains at most one non-zero element. While having disjoint principal components may be considered a kind of sparseness, it is too restrictive for most problems. For example, a stock may be part of more than one sector, genes are typically involved in several biological processes [1], a pixel may be shared among several image parts, and so forth. We therefore wish to allow some overlap among the principal vectors.

The degree of coordinate overlap can be represented by an orthonormality distance measure which is nonnegative and vanishes iff $U$ is orthonormal. The function $\|I - U^T U\|_F^2$ is typically used in the literature (cf. [13], pp. 275-277) as a measure of orthonormality, and the relaxed version of eqn. 1 becomes
$$\max_{U}\; \tfrac{1}{2}\|U^T X\|_F^2 - \tfrac{\alpha}{4}\|I - U^T U\|_F^2 \quad \text{s.t.} \quad U \ge 0, \tag{2}$$
where $\alpha > 0$ is a balancing parameter between reconstruction and orthonormality. We see that the tradeoff for relaxing the disjoint property of Nonnegative PCA is also to relax the maximum variance property of PCA: the constrained optimization tries to preserve the variance when possible, but allows higher variance to be traded off against some degree of coordinate overlap among the principal vectors. Next, we add sparseness to this formulation.

3 Nonnegative Sparse PCA (NSPCA)

While semi-disjoint principal components can be considered sparse when the number of coordinates is small, they may be too dense when the number of coordinates greatly exceeds the number of principal vectors; in such a case, the average number of non-zero elements per principal vector would be high. We therefore consider minimizing the number of non-zero elements directly, $\|U\|_{L_0} = \sum_{ij} \delta_{u_{ij}}$, where $\delta_x$ equals one if $x$ is non-zero and zero otherwise. Adding this to the criterion of eqn. 2 we have
$$\max_{U}\; \tfrac{1}{2}\|U^T X\|_F^2 - \tfrac{\alpha}{4}\|I - U^T U\|_F^2 - \beta \|U\|_{L_0} \quad \text{s.t.} \quad U \ge 0,$$
where $\beta \ge 0$ controls the amount of additional sparseness required. The $L_0$ norm can be relaxed by replacing it with an $L_1$ term, and since $U$ is nonnegative we obtain the relaxed sparseness term $\|U\|_{L_1} = \mathbf{1}^T U \mathbf{1}$, where $\mathbf{1}$ is a column vector with all elements equal to one. The relaxed problem becomes
$$\max_{U}\; \tfrac{1}{2}\|U^T X\|_F^2 - \tfrac{\alpha}{4}\|I - U^T U\|_F^2 - \beta\, \mathbf{1}^T U \mathbf{1} \quad \text{s.t.} \quad U \ge 0. \tag{3}$$

4 Algorithm

For certain values of $\alpha$ and $\beta$, solving the problem of eqn. 3 is NP-hard. For example, for large enough values of $\alpha$ and for $\beta = 0$ we obtain the original problem of eqn. 1, a concave quadratic program, which is an NP-hard problem [3]. It is therefore unrealistic to look for a global solution of eqn. 3, and we have to settle for a local maximum. The objective of eqn. 3 as a function of $u_{rs}$ (the entry in row $s$ of the column vector $u_r$) is
$$f(u_{rs}) = -\frac{\alpha}{4} u_{rs}^4 + \frac{c_2}{2} u_{rs}^2 + c_1 u_{rs} + \text{const}, \tag{4}$$
where const stands for terms that do not depend on $u_{rs}$, $A = XX^T$, and
$$c_1 = \sum_{i=1, i \ne s}^{d} a_{si} u_{ri} \;-\; \alpha \sum_{i=1, i \ne r}^{k} \sum_{j=1, j \ne s}^{d} u_{rj} u_{ij} u_{is} \;-\; \beta, \qquad c_2 = a_{ss} + \alpha - \alpha \sum_{i=1, i \ne s}^{d} u_{ri}^2 - \alpha \sum_{i=1, i \ne r}^{k} u_{is}^2.$$
Setting the derivative with respect to $u_{rs}$ to zero, we obtain a cubic equation,
$$\frac{\partial f}{\partial u_{rs}} = -\alpha u_{rs}^3 + c_2 u_{rs} + c_1 = 0. \tag{5}$$
Evaluating eqn. 4 at the nonnegative roots of eqn. 5 and at zero, the nonnegative global maximum of $f(u_{rs})$ can be found (see Fig. 1). Note that as $u_{rs}$ approaches $\infty$ the criterion goes to $-\infty$, and since the function is continuous, a nonnegative maximum must exist.
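The per-entry update this suggests is cheap: solve the cubic of eqn. 5 and compare eqn. 4 at its nonnegative real roots and at zero. A possible NumPy helper is sketched below; the function name and the imaginary-part tolerance are illustrative choices, and it assumes $\alpha > 0$ so the cubic is well defined.

```python
import numpy as np

def best_entry_value(c1, c2, alpha):
    """Global nonnegative maximizer of f(u) in eqn. 4, found by
    evaluating f at the nonnegative real roots of eqn. 5 and at zero."""
    f = lambda u: -0.25 * alpha * u**4 + 0.5 * c2 * u**2 + c1 * u
    roots = np.roots([-alpha, 0.0, c2, c1])   # -alpha u^3 + c2 u + c1 = 0
    cands = [0.0] + [r.real for r in roots
                     if abs(r.imag) < 1e-10 and r.real > 0.0]
    return max(cands, key=f)                  # candidate with largest f
```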
A coordinate-descent scheme that updates each entry of $U$ in this manner, one after the other, converges to a local maximum of the constrained objective function, as summarized below.

[Figure 1: A 4th-order polynomial $f(u_{rs})$ (left) and its derivative (right). To find the global nonnegative maximum, the function has to be inspected at all nonnegative extrema (where the derivative is zero) and at $u_{rs} = 0$.]

Algorithm 1: Nonnegative Sparse PCA (NSPCA)
- Start with an initial guess for $U$.
- Iterate over the entries $(r, s)$ of $U$ until convergence:
  - Set the value of $u_{rs}$ to the global nonnegative maximizer of eqn. 4, found by evaluating it over all nonnegative roots of eqn. 5 and at zero.

Caching some calculation results from the update of one element of $U$ to the next, each update is done in $O(d)$, and the entire matrix $U$ is updated in $O(d^2 k)$. It is easy to see that the gradient at the convergence point of Alg. 1 is orthogonal to the constraints in eqn. 3, and therefore Alg. 1 converges to a local maximum of the problem.

It is also worthwhile to compare this nonnegative coordinate-descent scheme with the nonnegative coordinate-descent scheme of Lee and Seung [10]. The update rule of [10] is multiplicative, which carries two inherent drawbacks. First, it cannot turn positive values into zero or vice versa, and therefore the solution will never lie on the boundary itself, a drawback that does not exist in our scheme. Second, since it is multiplicative, the preservation of nonnegativity relies on the nonnegativity of the input; it therefore cannot be applied to our problem, while our scheme can also be applied to NMF. In other words, a practical aspect of the NSPCA algorithm is that it can handle general (not necessarily nonnegative) input matrices, such as zero-mean covariance matrices.
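Putting the pieces together, a compact (unoptimized) rendering of Algorithm 1 might look as follows. Unlike the complexity argument above, it recomputes the coefficients $c_1$, $c_2$ of eqn. 4 from scratch at every entry rather than caching partial results, and it reuses the `best_entry_value` helper sketched earlier; the iteration count and random initialization are illustrative choices, not the authors'.

```python
import numpy as np

def nspca(X, k, alpha, beta, n_iter=50, seed=0):
    """Sketch of Algorithm 1 (NSPCA). X is d x n and assumed zero-mean;
    returns a nonnegative d x k matrix U of principal vectors."""
    A = X @ X.T                                    # A = X X^T  (d x d)
    d = A.shape[0]
    U = np.abs(np.random.default_rng(seed).standard_normal((d, k)))
    for _ in range(n_iter):
        for r in range(k):                         # principal vector u_r
            for s in range(d):                     # coordinate s
                U[s, r] = 0.0                      # exclude current entry
                # c1, c2 of eqn. 4 with u_rs temporarily set to zero
                c1 = A[s] @ U[:, r] - alpha * (U[s] @ (U.T @ U[:, r])) - beta
                c2 = A[s, s] + alpha * (1.0 - U[:, r] @ U[:, r]
                                        - U[s] @ U[s])
                U[s, r] = best_entry_value(c1, c2, alpha)
    return U
```

Because the updates only require $A = XX^T$, the same loop applies unchanged when a zero-mean covariance matrix is supplied in place of $XX^T$, matching the remark above.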
5 Experiments

We start by demonstrating the role of the $\alpha$ and $\beta$ parameters in the task of extracting face parts. We use the MIT CBCL Face Dataset #1 of 2429 aligned face images, 19 by 19 pixels each, a dataset that has been used extensively to demonstrate the abilities of Nonnegative Matrix Factorization (NMF) methods [10]. We start with $\alpha = 2 \times 10^7$ and $\beta = 0$ to extract the 10 principal vectors in Fig. 2(a), and then increase $\alpha$ to $5 \times 10^8$ to get the principal vectors in Fig. 2(b). Note that as $\alpha$ increases, the overlap among the principal vectors decreases and the holistic nature of some of the vectors in Fig. 2(a) vanishes. The vectors also become sparser, but this is only a byproduct of their non-overlapping nature. Fig. 3(a) shows the amount of overlap $\|I - U^T U\|$ as a function of $\alpha$, showing a consistent drop in the overlap as $\alpha$ increases. We now set $\alpha$ back to $2 \times 10^7$ as in Fig. 2(a), but set the value of $\beta$ to $2 \times 10^6$ to get the factors in Fig. 2(c). The vectors become sparser as $\beta$ increases, but this time the sparseness emerges from dropping the less informative pixels of the original vectors of Fig. 2(a), rather than from replacing the holistic principal vectors with ones that are part-based in nature. The number of non-zero elements in the principal vectors, $\|U\|_{L_0}$, is plotted as a function of $\beta$ in Fig. 3(b).

[Figure 2: The role of $\alpha$ and $\beta$, demonstrated in the task of extracting ten image features using the MIT-CBCL Face Dataset #1. In the top row (a), we use $\alpha = 2 \times 10^7$ and $\beta = 0$. In (b) we increase $\alpha$ to $5 \times 10^8$ while $\beta$ stays zero, obtaining more localized parts with a lower amount of overlap. In (c) we reset $\alpha$ to $2 \times 10^7$ as in (a), but increase $\beta$ to $2 \times 10^6$; as $\beta$ increases, pixels that explain less variance are dropped from the factors, but the overlapping nature of the factors remains (see Fig. 3 for a detailed study). In (d) we show the ten leading principal components of PCA, in (e) the ten factors of NMF, and in (f) the leading principal vectors of GSPCA when allowing 55 active pixels per principal vector.]

[Figure 3: (a) The amount of overlap and orthogonality $\|I - U^T U\|$ as a function of $\log_{10} \alpha$; higher values of $\alpha$ decrease the overlap and increase the orthogonality. (b) The number of non-zero elements $\|U\|_{L_0}$ as a function of $\log_{10} \beta$; higher values of $\beta$ enforce sparseness.]

Next we study how the different dimensionality reduction methods aid the generalization ability of SVM in the task of face detection. To measure generalization ability we use the Receiver Operating Characteristics (ROC) curve, a two-dimensional plot of the classification ability of an algorithm over a dataset, showing the fraction of true positives as a function of the fraction of false positives; the wider the area under this curve, the better the generalization. Again, we use the MIT CBCL Face Dataset #1, where 1000 face images and 2000 non-face images were used as a training set, and the rest of the dataset was used as a test set. The dimensionality reduction was performed over the 1000 face images of the training set. We run linear SVM on the ten features extracted by NSPCA for different values of $\alpha$ and $\beta$, showing in Fig. 4(a) that as the principal factors become less overlapping (higher $\alpha$) and sparser (higher $\beta$), the ROC curve is higher, meaning that SVM is able to generalize better. Next, we compare the ROC curve produced by linear SVM when using the NSPCA-extracted features (with $\alpha = 5 \times 10^8$ and $\beta = 2 \times 10^6$) to the ones produced when using PCA and NMF (the principal vectors are displayed in Fig. 2(d) and Fig. 2(e), respectively). As a representative of the Sparse PCA methods we use the recent Greedy Sparse PCA (GSPCA) of [12], which shows comparable or better results than all other Sparse PCA methods (see the principal vectors in Fig. 2(f)). Fig. 4(b) shows that better generalization is achieved when using the NSPCA-extracted features, and hence more reliable face detection.

[Figure 4: The ROC curve of SVM in the task of face detection over the MIT CBCL Face Dataset #1: (a) for different values of $\alpha$ and $\beta$ (curves for $(\alpha, \beta) = (2 \times 10^7, 0)$, $(5 \times 10^8, 0)$, $(2 \times 10^7, 2 \times 10^6)$ and $(5 \times 10^8, 2 \times 10^6)$), showing improved generalization when using principal vectors that have less overlap (higher $\alpha$) and are sparser (higher $\beta$); and (b) when using NMF, PCA, GSPCA and NSPCA extracted features, showing better generalization when using NSPCA.]

Since NSPCA is limited to nonnegative entries of the principal vectors, it can inherently explain less variance than Sparse PCA algorithms which are not constrained in that way, similarly to the fact that Sparse PCA algorithms can explain less variance than PCA. While this limitation holds, NSPCA still manages to explain a large amount of the variance. We demonstrate this in Fig. 5, where we compare the cumulative explained variance and cumulative cardinality of different Sparse PCA algorithms over the Pit Props dataset, a classic dataset used throughout the Sparse PCA literature. In domains where nonnegativity is intrinsic to the problem, however, using NSPCA-extracted features improves the generalization ability of learning algorithms, as we have demonstrated above for the face detection problem.

[Figure 5: (a) Cumulative explained variance and (b) cumulative cardinality as a function of the number of principal components on the Pit Props dataset, a classic dataset typically used to evaluate Sparse PCA algorithms; PCA, SCoTLASS, SPCA, DSPCA, ESPCA and NSPCA are compared. Although NSPCA is more constrained than other Sparse PCA algorithms, and therefore can explain less variance just as Sparse PCA algorithms can explain less variance than PCA, and although the dataset is not nonnegative in nature, NSPCA shows competitive results as the number of principal components increases.]

6 Summary

Our method differs substantially from previous approaches to sparse PCA, a difference that begins with the definition of the problem itself. Other sparse PCA methods try to limit the cardinality (number of non-zero elements) of each principal vector, and therefore accept as input a (soft) limitation on
8 (b) Figure 3: (a) The amount of overlap and orthogonality as a function of ?, where higher values of ? decrease 100 100 80 80 60 % True Positives % True Positives the overlap and increase the orthogonality, and (b) the amount of non-zero elements as a function of ?, where higher values of ? enforce sparseness. ?=2x107, ?=0 ?=5x108, ?=0 40 ?=2x107, ?=2x106 ?=5x108, ?=2x106 20 0 0 20 40 60 80 % False Positives 60 NSPCA GSPCA NMF PCA 40 20 0 0 100 20 (a) 40 60 80 % False Positives 100 (b) Figure 4: The ROC curve of SVM in the task of face detection over the MIT CBCL Face Dataset #1 (a) when using different values of ? and ?, showing improved generalization when using principal vectors that has less overlap (higher ?) and that are sparser (higher ?); and (b) when using NMF, PCA, GSPCA and NSPCA extracted features, showing better generalization when using NSPCA. 90% 80% 70% 50 PCA SCoTLASS SPCA DSPCA ESPCA NSPCA Cumulative Cardinality Cumulative Variance 100% 60% 50% 40% 30% 20% 1 2 3 4 # of PCs (a) 5 6 40 30 SCoTLASS SPCA DSPCA ESPCA NSPCA 20 10 0 1 2 3 4 # of PCs 5 6 (b) Figure 5: (a) Cumulative explained variance and (b) cumulative cardinality as a function of the number of principal components on the Pit Props dataset, a classic dataset that is typically used to evaluate Sparse PCA algorithms. Although NSPCA is more constrained than other Sparse PCA algorithms, and therefore can explain less variance just like Sparse PCA algorithms can explain less variance than PCA, and although the dataset is not nonnegative in nature, NSPCA shows competitive results when the number of principal components increases. that cardinality. In addition, most sparse PCA methods focus on the task of finding a single principal vector. Our method, on the other hand, splits the coordinates among the different principal vectors, and therefore its input is the number of principal vectors, or parts, rather than the size of each part. As a consequence, the natural way to use our algorithm is to search for all principal vectors together. In that sense, it bears resemblance to the Nonnegative Matrix Factorization problem, from which our method departs significantly in the sense that it focus on informative parts, as it maximizes the variance. Furthermore, the non-negativity of the output does not rely on having non-negative input matrices to the process thereby permitting zero-mean covariance matrices to be fed into the process just as being done with PCA. References [1] Liviu Badea and Doina Tilivea. Sparse factorizations of gene expression guided by binding data. In Pacific Symposium on Biocomputing, 2005. [2] Alexandre d?Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2004. [3] C. A. Floudas and V. Visweswaran. Quadratic optimization. In Handbook of global optimization, pages 217?269. Kluwer Acad. Publ., Dordrecht, 1995. [4] Matthias Heiler and Christoph Schn?orr. Learning non-negative sparse image codes by convex programming. In Proc. of the 10th IEEE Intl. Conf. on Comp. Vision (ICCV), 2005. [5] Patrik O. Hoyer. Non-negative sparse coding. In Neural Networks for Signal Processing, 2002. Proceedings of the 2002 12th IEEE Workshop on, pages 557?565, 2002. [6] Patrik O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457?1469, 2004. [7] Ravi Jagannathan and Tongshu Ma. 
Risk reduction in large portfolios: Why imposing the wrong constraints helps. Journal of Finance, 58(4):1651-1684, 2003.
[8] Ian T. Jolliffe. Rotation of principal components: Choice of normalization constraints. Journal of Applied Statistics, 22(1):29-35, 1995.
[9] Ian T. Jolliffe, Nickolay T. Trendafilov, and Mudassir Uddin. A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12(3):531-547, September 2003.
[10] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788-791, October 1999.
[11] S. Li, X. Hou, H. Zhang, and Q. Cheng. Learning spatially localized, parts-based representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[12] Baback Moghaddam, Yair Weiss, and Shai Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2005.
[13] Beresford N. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1980.
[14] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. 2004.
Temporal and Cross-Subject Probabilistic Models for fMRI Prediction Tasks
Alexis Battle, Gal Chechik, Daphne Koller
Department of Computer Science, Stanford University, Stanford, CA 94305-9010
{ajbattle,gal,koller}@cs.stanford.edu

Abstract

We present a probabilistic model applied to the fMRI video rating prediction task of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) [2]. Our goal is to predict a time series of subjective, semantic ratings of a movie given functional MRI data acquired during viewing by three subjects. Our method uses conditionally trained Gaussian Markov random fields, which model both the relationships between the subjects' fMRI voxel measurements and the ratings, as well as the dependencies of the ratings across time steps and between subjects. We also employed non-traditional methods for feature selection and regularization that exploit the spatial structure of voxel activity in the brain. The model displayed good performance in predicting the scored ratings for the three subjects in test data sets, and a variant of this model was the third place entrant to the 2006 PBAIC.

1 Introduction

In functional Magnetic Resonance Imaging, or fMRI, an MR scanner measures a physiological signal known to be correlated with neural activity, the blood-oxygenation-level dependent (BOLD) signal [12]. Functional scans can be taken during a task of interest, such as the subject viewing images or reading text, thus providing a glimpse of how brain activity changes in response to certain stimuli and tasks. An fMRI session produces scans of the brain volume across time, obtaining BOLD measurements from thousands of small sub-volumes, or voxels, at each time step. Much of the current fMRI research focuses on the goal of identifying brain regions activated in response to some task or stimulus (e.g., [7]). The fMRI signal is typically averaged over many repeated stimulus presentations, multiple time points and even different subjects, in order to find brain regions with statistically significant response. However, in recent years, there has been growing interest in an alternative task, whose goal is to develop models which predict stimuli from functional data, in effect demonstrating the ability to "read" information from the scans. For instance, Tong et al. [9] demonstrated the ability to predict the orientation of edges in a subject's visual field from functional scans of visual cortex, and Mitchell et al. [13] successfully applied machine learning techniques to predict a variety of stimuli, such as the semantic category of words presented to a subject. Such prediction work has demonstrated that, despite the relatively low spatial resolution of fMRI, functional data contains surprisingly reliable and detailed signal [9, 6, 13], even on time scales as short as a few seconds. Going beyond identifying the location of responsive regions, these models begin to demonstrate how the brain encodes states and stimuli [3], often capturing distributed patterns of activation across multiple brain regions simultaneously. This line of research could also eventually provide a mechanism for accurately tracking cognitive processes in a non-invasive way. Another recent innovation is the use of long and rich stimuli in fMRI experiments, such as a commercial movie [8], rather than the traditional controlled, repeating simple stimuli.
These experiments present more difficulty in analysis, but more closely mirror natural stimulation of the brain, which may evoke different brain activity patterns from traditional experiments. The recent Pittsburgh Brain Activity Interpretation Competition (PBAIC) [2] featured both the use of complex stimuli and a prediction task, presenting a unique data set for predicting subjective experiences given functional MRI sessions. Functional scans from three subjects were taken while the subjects watched three video segments. Thus, during the scan, subjects were exposed to rich stimuli including rapidly changing images of people, meaningful sounds such as dialog and music, and even emotional stimuli, all overlapping in time. Each subject also re-viewed each movie multiple times, to rate over a dozen characteristics of the videos over time, such as Amusement, presence of Faces or Body Parts, Language, and Music. Given this data set, the goal was to predict these real-valued subjective ratings for each subject based only on the fMRI scans. In this paper, we present an approach to the PBAIC problem, based on the application of machine learning methods within the framework of probabilistic graphical models. The structured probabilistic framework allowed us to represent many relevant relationships in the data, including the evolution of subjective ratings over time, the likelihood of different subjects rating experiences similarly, and of course the relationship between voxels and ratings. We also explored novel feature selection methods, which exploit the spatial characteristics of brain activity. In particular, we incorporate a bias in favor of jointly selecting nearby voxels. We demonstrate the performance of our model by training from a subset of the movie sessions and predicting ratings for held-out movies. An earlier variant of our model was the third place entrant to the 2006 PBAIC out of forty entries. We demonstrated very good performance in predicting many of the ratings, suggesting that probabilistic modeling for the fMRI domain is a promising approach. An analysis of our learned models, in particular our feature selection results, also provides some insight into the regions of the brain activated by different stimuli and states.

2 Probabilistic Model

Our system for prediction from fMRI data is based on a dynamic, undirected graphical probabilistic model, which defines a large structured conditional Gaussian over time and subjects. The backbone of the model is a conditional linear Gaussian model, capturing the dependence of ratings on voxel measurements. We then extend the basic model to incorporate dependencies between labels across time and between subjects. The variables in our model are voxel activations and ratings. For each subject s and each time point t, we have a collection of ratings R_s(·, t), with R_s(j, t) representing the jth rating type (for instance Language) for s at time t. Note that the rating sequences given by the subjects are actually convolved with a standard hemodynamic response function before use, to account for the delay inherent in the BOLD signal response [4]. For each s and t, we also have the voxel activities V_s(·, t). Both voxels and ratings are continuous variables. For mathematical convenience, we recenter the data such that all variables (ratings and voxels) have mean 0. Each rating R_s(j, t) is modeled as a linear Gaussian random variable, dependent only on voxels from that subject's brain as features. We can express

R_s(j, t) ~ N(w_s(j)^T V_s(·, t), σ_s²).
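As a concrete illustration of the preprocessing step mentioned above, the sketch below convolves a rating time series with a double-gamma hemodynamic response function sampled at the scan rate. The HRF shape parameters are a common canonical choice and the 1.75 s TR matches the scan rate reported later; neither is a value specified by the authors:

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.75, duration=30.0):
    """Double-gamma HRF sampled every `tr` seconds (assumed canonical parameters)."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive lobe, peaking around 5-6 s
    undershoot = gamma.pdf(t, 16)    # delayed negative undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

def convolve_rating(rating, tr=1.75):
    """Convolve a (T,) rating sequence with the HRF, truncated back to length T."""
    return np.convolve(rating, canonical_hrf(tr))[: len(rating)]

# Example: a boxcar 'Faces' rating is blurred and delayed by the BOLD response.
r = np.zeros(100)
r[20:30] = 1.0
r_conv = convolve_rating(r)
```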
We assume that the dependence of the rating on the voxels is time-invariant, so that the same parameters w_s(j) and σ_s are used for every time point. Importantly, however, each rating should not depend on all of the subject's voxels, as this is neither biologically likely nor statistically plausible given the large number of voxels. In Sec. 3.1 we explore a variety of feature selection and regularization methods relevant to this problem. The linear regression model forms a component in a larger model that accounts for dependencies among labels across time and across subjects. This model takes the form of a (dynamic) Gaussian Markov Random Field (GMRF) [15, 11]. A GMRF is an undirected graphical probabilistic model that expresses a multi-dimensional joint Gaussian distribution in a reduced parameter space by making use of conditional independences. Specifically, we employ a standard representation of a GMRF derived from the inverse covariance matrix, or precision matrix Q = Σ^{-1}, of the underlying Gaussian distribution: for X = (X_1, ..., X_n), a zero-mean joint Gaussian distribution over X can be written as P(X) ∝ exp(-(1/2) X^T Q X). The precision matrix maps directly to a Markov network representation, as Q(i, j) = 0 exactly when X_i is independent of X_j given the remaining variables, corresponding to the absence of an edge between X_i and X_j in the Markov network.

[Figure 1: GMRF model for one rating, R_·(j, t), over three subjects and three time steps. Voxel nodes V_s(·, t) connect to the rating node R_s(j, t) within each time slice; rating nodes are linked across time and across subjects.]

In our setting, we want to express a conditional linear Gaussian of the ratings given the voxels. A distribution P(X | Y) can also be parametrized using the joint precision matrix:

P(X | Y) = (1 / Z(Y)) ∏_i exp(-(1/2) Q_XX(i, i) X_i²) ∏_{i,j ∈ E_X} exp(-Q_XX(i, j) X_i X_j) ∏_{i,k ∈ E_Y} exp(-Q_XY(i, k) X_i Y_k),

where E_X is the set of edges between nodes in X, and E_Y represents edges from Y to X. Our particular GMRF is a joint probabilistic model that encompasses, for a particular rating type j, the value of the rating R_s(j, t) for all of the subjects s across all time points t. Our temporal model assumes a stationary distribution, so that both node and edge potentials are invariant across time. This means that several entries in the full precision matrix Q are tied to a single free parameter. We will treat each rating type separately. Thus, the variables in the model are: all of the voxel measurements V_s(l, t), for all s, t and voxels l selected to be relevant to rating j; and all of the ratings R_s(j, t) (for all s, t). As we discuss below, the model is trained conditionally, and therefore encodes a joint distribution over the rating variables, conditional on all of the voxel measurements. Thus, there will be no free parameters corresponding to the voxel nodes due to the use of a conditional model, while rating nodes R_s(j, ·) have an associated node potential parameter Q_node(s, j). Each rating node R_s(j, t) has edges connecting it to a subset of relevant voxels from V_s(·, t) at the same time slice. The set of voxels can vary for different ratings or subjects, but is consistent across time. The precision matrix entry Q_voxel(s, j, v) parametrizes the edge from voxel v to rating j.
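The conditional parameterization above can be exercised numerically: completing the square in the exponent shows that X | Y = y is Gaussian with precision Q_XX and mean -Q_XX^{-1} Q_XY y. A minimal sketch on a toy three-node chain; the numbers are illustrative, not taken from any learned model:

```python
import numpy as np

# Toy precision matrix over X = (X1, X2, X3): a chain X1 - X2 - X3.
# Q_XX[0, 2] == 0, so X1 and X3 are conditionally independent given X2.
Q_XX = np.array([[ 2.0, -0.5,  0.0],
                 [-0.5,  2.0, -0.5],
                 [ 0.0, -0.5,  2.0]])

# Cross-precision entries coupling X1 and X3 to observed variables Y1, Y2.
Q_XY = np.array([[-1.0,  0.0],
                 [ 0.0,  0.0],
                 [ 0.0, -1.0]])

y = np.array([0.8, -0.3])

# X | Y = y is Gaussian with precision Q_XX and mean -Q_XX^{-1} Q_XY y.
mu = -np.linalg.solve(Q_XX, Q_XY @ y)
print(mu)
```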
To encode the dependencies between the rating at different time points, our dynamic model includes edges between each rating R_s(j, t) and the previous and following ratings, R_s(j, t-1) and R_s(j, t+1). The corresponding edge parameters are Q_time(s, j). We also use the GMRF to encode the dependencies between the ratings of different subjects, in a way that does not assume that the subjects gave identical ratings, by introducing appropriate edges in the model. Thus, we also have an edge between R_s(j, t) and R_s'(j, t) for all subject pairs s, s', parametrized by Q_subj(s, s', j). Overall, our model encodes the following conditional distribution:

P(R_·(j, ·) | V_·(·, ·)) = (1 / Z(V_·(·, ·))) ∏_{s,t} exp(-(1/2) Q_node(s, j) R_s(j, t)²) ∏_{t,s,s'} exp(-Q_subj(s, s', j) R_s(j, t) R_s'(j, t)) ∏_{s,t} exp(-Q_time(s, j) R_s(j, t) R_s(j, t+1)) ∏_{s,l,t} exp(-Q_voxel(s, j, l) R_s(j, t) V_s(l, t)).   (1)

3 Learning and Prediction

We learn the parameters of the model above from a data set consisting of all of the voxels and all the subjective ratings for all three subjects. We train the parameters discriminatively [10], to maximize the conditional likelihood of the observed ratings given the observed voxel measurements, as specified in Eq. (1). Conditional training is appropriate in our setting, as our task is precisely to predict the ratings given the voxels; importantly, this form of training allows us to avoid modeling the highly noisy, high-dimensional voxel activation distribution. We split parameter learning into two phases, first learning the dependence of ratings on voxels, and then learning the parameters between rating nodes. The entire joint precision matrix over all voxels and ratings would be prohibitively large for our learning procedure, and this approximation was computationally much more efficient. In the first phase, we learn linear models to predict each rating given only the voxel activations. We then modify our graph, replacing the very large set of voxel nodes with a new, much smaller set of nodes representing the linear combinations of the voxel activations which we just learned. Using the reduced graph, we learn a much smaller precision matrix. We describe each of these steps below.

3.1 From Voxels to Ratings

To learn the dependencies of ratings on voxels for a single subject s, we find parameters w_s(j), using linear regression, which optimize

P(R_s(j, ·) | V_s(·, ·)) ∝ ∏_t exp(-(1/(2σ_s²)) (R_s(j, t) - w_s(j)^T V_s(·, t))²).   (2)

However, to deal with the high dimensionality of the feature space relative to the number of training instances, we utilize feature selection; we also introduce regularization terms into the objective that can be viewed as a spatially-based prior over w_s(j). First, we reduce the number of voxels involved in the objective for each rating using a simple feature selection method: we compute the Pearson correlation coefficient for each voxel and each rating, and select the most highly correlated features. The number of voxels to select is a setting which we tuned, for each rating type individually, using five-fold cross-validation on the training set. We chose to use the same number of voxels across subjects, which is more restrictive but increases the amount of data available for cross-validation. Even following this feature selection process, we often still have a large number (perhaps hundreds) of relevant voxels as features, and these features are quite noisy.
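A minimal sketch of the correlation screening just described: rank voxels by the absolute Pearson correlation between their time course and the (HRF-convolved) rating, and keep the top k. The array sizes and k are illustrative:

```python
import numpy as np

def select_voxels(V, r, k):
    """V: (T, D) voxel time courses; r: (T,) rating. Return indices of the
    k voxels with the largest absolute Pearson correlation with r."""
    Vc = V - V.mean(axis=0)
    rc = r - r.mean()
    denom = V.std(axis=0) * r.std() * len(r)
    corr = (Vc * rc[:, None]).sum(axis=0) / np.maximum(denom, 1e-12)
    return np.argsort(-np.abs(corr))[:k]

# Synthetic example: 500 scans, 3000 candidate voxels, keep the top 200.
rng = np.random.default_rng(0)
V = rng.standard_normal((500, 3000))
r = rng.standard_normal(500)
selected = select_voxels(V, r, 200)
```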
We therefore employ additional regularization over the parameters associated with these voxels. We explored both L2 (ridge) and L1 (lasso) regularization, corresponding to a Gaussian and a Laplacian prior respectively. Introducing both types of regularization, we end up with a log-likelihood objective of the form:

Σ_t (R_s(j, t) - w_s(j)^T V_s(·, t))² + λ Σ_i w_s(j, i)² + γ Σ_i |w_s(j, i)|.   (3)

Finally, we introduce a novel form of regularization, intended to model spatial regularities. Brain activity associated with some types of stimuli, such as language, is believed to be localized to some number of coherent regions, each of which contains multiple activated voxels. We therefore want to bias our feature selection process in favor of selecting multiple voxels that are nearby in space; more precisely, we would prefer to select a voxel which is in the vicinity of other correlated voxels, over a more strongly correlated voxel which is isolated in the brain, as the latter is more likely to result from noise. We therefore define a robust "hinge-loss"-like distance function for voxels. Letting ||v_i - v_k||₂ denote the Euclidean distance between voxels v_i and v_k in the brain, we define:

D(i, k) = 1 if ||v_i - v_k||₂ < d_min; D(i, k) = 0 if ||v_i - v_k||₂ > d_max; and D(i, k) = (d_max - ||v_i - v_k||₂) / (d_max - d_min) otherwise.

We now introduce an additional regularization term

-η Σ_{i,k} |w_s(j, i)| D(i, k) |w_s(j, k)|

into the objective Eq. (3). This term can offset the L1 term by co-activating voxels that are spatially nearby. Thus, it encourages, but does not force, co-selection of nearby voxels. Note that this regularization term is applied to the absolute values of the voxel weights, hence allowing nearby voxels to have opposite effects on the rating; we do observe such cases in our learned model. Note that, according to our definition, the spatial prior uses simple Euclidean distance in the brain. This is clearly too simplistic, as it ignores the structure of the brain, particularly the complex folding of the cortex. A promising extension of this idea would be to apply a geodesic version of distance instead, measuring distance over gray matter only.

3.2 Training the Joint Model

We now describe the use of the regression parameters, as learned in Sec. 3.1, to reduce the size of our joint precision matrix, and to learn the final parameters including the inter-rating edge weights. Given w_s(j), which we consider the optimal linear combination of V_s(·, t) for predicting R_s(j), we remove the voxel nodes V_s(·, t) from our model, and introduce new "summary" nodes U_j(t) = w_s(j)^T V_s(·, t). Now, instead of finding Q_voxel(s, j, v) parameters for every voxel v individually, we only have to find a single parameter Q_u(s, j). Given the structure of our original linear Gaussian model, there is a direct relationship between optimization in the reduced formulation and optimizing using the original formulation. Assuming w_s(j) is the optimal set of regression parameters, the optimal Q_voxel(s, j, l) in the full form would be proportional to Q_u(s, j) w_s(j, l), optimized in the reduced form. This does not guarantee that our two-phase learning results in globally optimal parameter settings, but simply that given w_s(j), the reduction described is valid. The joint optimization of Q_u(s, j), Q_node(s, j), Q_time(s, j), and Q_subj(s, s', j) is performed according to the reduced conditional likelihood. The reduced form of Eq. (1) simply replaces the final terms containing Q_voxel(s, j, ·) with:

∏_{s,t} exp(-Q_u(s, j) R_s(j, t) U_j(t)).   (4)
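The spatial regularizer of Sec. 3.1 is easy to prototype: build the hinge-shaped proximity matrix D(i, k) from voxel coordinates, then evaluate the penalty -η Σ_{i,k} |w_i| D(i, k) |w_k|. The d_min, d_max, and η values are illustrative, and zeroing the diagonal (no self term) is our choice:

```python
import numpy as np

def proximity(coords, d_min=4.0, d_max=12.0):
    """coords: (D, 3) voxel positions in mm. Returns the (D, D) matrix D(i, k):
    1 below d_min, 0 above d_max, linear in between; diagonal zeroed."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    D = np.clip((d_max - dist) / (d_max - d_min), 0.0, 1.0)
    np.fill_diagonal(D, 0.0)
    return D

def spatial_penalty(w, D, eta=0.1):
    """The term -eta * sum_ik |w_i| D(i,k) |w_k| added to the objective:
    it offsets the L1 cost for voxels that have active spatial neighbors."""
    a = np.abs(w)
    return -eta * a @ D @ a

coords = np.random.default_rng(1).uniform(0.0, 60.0, size=(50, 3))
w = np.random.default_rng(2).standard_normal(50)
print(spatial_penalty(w, proximity(coords)))
```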
The final objective is computationally feasible due to the reduced parameter space. The log likelihood is a convex function of all our parameters, with the final joint precision matrix constrained to be positive semi-definite to ensure a legal Gaussian distribution. Thus, we can solve the problem with semi-definite programming using a standard convex optimization package [1]. Last, we combine all learned parameters from both steps, repeated across time steps, for the final joint model.

3.3 Prediction

Prediction of unseen ratings given new fMRI scans can be obtained through probabilistic inference on the models learned for each rating type. We incorporate the observed voxel data from all three subjects as observed values in our GMRF, which induces a Gaussian posterior over the joint set of ratings. We only need to predict the most likely assignment to ratings, which is the mean (or mode) of this Gaussian posterior. The mean can be easily computed using coordinate ascent over the log likelihood of our joint Gaussian model. More precisely, we iterate over the nodes (recall there is one node for each subject at each time step), and update each node's mean to the most likely value given the current estimated means of its neighbors in the GMRF. Let Q_RR be the joint precision matrix, over all nodes over time and subjects, constructed from Q_u(·, ·), Q_time(·, ·), Q_subj(·, ·, ·), and Q_node(·, ·). Then for each node k, with neighbors N_k according to the graph structure of our GMRF, we update μ_k ← -Σ_{j ∈ N_k} μ_j Q_RR(k, j). As the objective is convex, this process is guaranteed to converge to the mode of the posterior Gaussian, providing the most likely ratings for all subjects, at all time points, given the functional data from scans during a new movie.

4 Experimental Results

As described, the fMRI data collected for the PBAIC included fMRI scans of three different subjects, and three sessions each. In each of the sessions, a subject viewed a movie approximately 20 minutes in length, constructed from clips of the Home Improvement sitcom. All three subjects watched the same movies, referred to as Movie1, Movie2 and Movie3. The scans produced volumes with approximately 30,000 brain voxels, each approximately 3.28mm by 3.28mm by 3.5mm, with one volume produced every 1.75 seconds. Subsequently, the subject watched the movie again multiple times (not during an fMRI session), and rated a variety of characteristics at time intervals corresponding to the fMRI volume rate. Before use in prediction, the rating sequences were convolved with a standard hemodynamic response function [4]. The core ratings used in the competition were Amusement, Attention, Arousal, Body Parts, Environmental Sounds, Faces, Food, Language, Laughter, Motion, Music, Sadness, and Tools. Since the ratings are continuous values, competition scoring was based on the correlation (for frames where the movie is playing) of predicted ratings with true ratings, across rating types and all subjects, combined using a z'-transform.

[Figure 2: (a) Average correlation of predicted and true ratings for simple models, the full GMRF, and finally the GMRF including the spatial (Sp) prior. (b) Correlations for individual ratings, for subject 3. (c) Effect of varying the number of voxels used for Language, Amusement, and Body Parts.]
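A sketch of the coordinate-ascent prediction of Sec. 3.3 on a toy precision matrix: each pass sets one rating mean to its exact conditional mean given its current neighbors. For numerical correctness we normalize each update by the diagonal precision entry, an implementation detail not spelled out in the text; the matrices and inputs below are illustrative:

```python
import numpy as np

def posterior_mean(Q_RR, Q_RU, u, n_iters=200):
    """Coordinate ascent to the mode of the Gaussian posterior over ratings:
    solves Q_RR @ mu = -Q_RU @ u by Gauss-Seidel sweeps, each update being
    the exact conditional mean of one node given its current neighbors."""
    b = -Q_RU @ u
    mu = np.zeros(len(b))
    for _ in range(n_iters):
        for k in range(len(mu)):
            mu[k] = (b[k] - Q_RR[k] @ mu + Q_RR[k, k] * mu[k]) / Q_RR[k, k]
    return mu

# Toy example: 3 rating nodes chained over time, with summary inputs u.
Q_RR = np.array([[2.0, -0.8, 0.0], [-0.8, 2.0, -0.8], [0.0, -0.8, 2.0]])
Q_RU = -np.eye(3)
u = np.array([0.5, 1.0, 0.2])
print(posterior_mean(Q_RR, Q_RU, u))       # agrees with the direct solve:
print(np.linalg.solve(Q_RR, -Q_RU @ u))
```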
For consistency, we adhere to the use of correlation as our performance metric. To train our model, we use the fMRI measurements along with all ratings from all subjects' sessions for some set of movies, holding out other movies for testing. We chose to use an entire held-out movie session because the additional variance between fMRI sessions is an important aspect of the prediction task. The training set is used both to learn the model parameters and for the cross-validation step used to select regularization settings. The learned model is then used to predict ratings for the held-out test movies for all subjects, from fMRI data alone. Our GMRF model shows significant improvement over simpler models on the prediction task, and a version of this model was used in our submission to the PBAIC. We also evaluate the results of our feature selection steps, examining which regions of the brain are used for each rating prediction.

4.1 Rating Prediction

For our own evaluation outside of the competition, given that we did not have access to Movie3 ratings for testing, we trained a full model using functional data and ratings from the three subjects viewing Movie1, and then made predictions using the scans from all subjects for Movie2. The predictions made by the dynamic GMRF model were highly correlated with the true ratings. The best overall average correlation achieved for held-out Movie2 was 0.482. For all subjects, the correlation for both Language and Faces was above 0.7, and we achieved correlations of above 0.5 on 19 of the 39 core tasks (three subjects times 13 ratings). To evaluate the contribution of various components of our model, we also tested simpler versions, beginning with a regularized linear regression model. We also constructed two simplified versions of our GMRF: one which includes edges between subjects but no time interactions, and conversely one which includes time interactions but removes subject edges. Finally, we tested our full GMRF model, plus our GMRF model along with the spatial prior. As shown in Fig. 2(a), both the time dependencies and the cross-subject interactions help greatly over the linear regression model. The final combined model, which includes both time and subject edges, demonstrates significant improvement over including either alone. We also see that the addition of a spatial prior (using cross-validation to select which ratings to apply it to) results in a small additional improvement, which we explore further in Sec. 4.2. Performance on each of the rating types individually is shown in Fig. 2(b) for subject 3, for both linear regression and our GMRF. One interesting note is that the relative ordering of rating type accuracy for the different models is surprisingly consistent. As mentioned, we submitted the third place entry to the 2006 PBAIC. For the competition, we used our joint GMRF model, but had not yet developed the spatial prior presented here. We trained the model using data from Movie1 and Movie2 and the corresponding ratings from all three subjects. We submitted predictions for the unseen Movie3. Around 40 groups made final submissions. Our final score in the competition was 0.493, whereas 80% of the entries fell below 0.400.
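Assuming the z'-transform used for scoring is the standard Fisher transform, per-task correlations can be combined as in the sketch below; the sample scores are made up:

```python
import numpy as np

def combined_score(correlations):
    """Average per-task correlations in Fisher z space, then map back to r."""
    r = np.clip(np.asarray(correlations, dtype=float), -0.999999, 0.999999)
    return np.tanh(np.arctanh(r).mean())

print(combined_score([0.72, 0.55, 0.31]))  # hypothetical per-task scores
```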
The first place group, Olivetti et al. [14], employed recurrent neural networks with mutual-information-based feature selection, and scored 0.515. The second group, scoring 0.509, was Chigirev et al. [5]; they applied regularized linear models with smoothing across time and spatially nearby voxels, and averaging across subjects. Some groups employed machine learning techniques such as Support Vector Regression, while others focused on predefined Regions of Interest as features in prediction.

[Figure 3: Voxels selected for various rating predictions, all for Subject 3: (a) Motion, (b) Faces, (c) Arousal.]

4.2 Voxel Selection and Regularization

We also examined the results of feature selection and regularization, looking at the location of voxels used for each rating, and the differences resulting from various techniques. Starting with the approximately 30,000 brain voxels per subject, we apply our feature selection techniques, using cross-validation on training sessions to determine the number of voxels used to predict each rating. The optimal number did vary significantly by rating, as the graph of performance in Fig. 2(c) demonstrates. For instance, a small voxel set (fewer than 100) performs well for the Body Parts rating, while the Language rating does well with several hundred voxels, and Amusement uses an intermediate number. This may reflect the actual size and number of brain regions activated by such stimuli, but likely also reflects voxel noise and the difficulty of the individual predictions. Visualization demonstrates that our selected voxels often occur in regions known to be responsive to relevant stimuli. For instance, voxels selected for Motion in all subjects include voxels in cortical areas known to respond to motion in the visual field (Fig. 3(a)). Likewise, many voxels selected for Language occur in regions linked to language processing (Fig. 4(b)). However, many other voxels were not from expected brain regions, attributable in part to noise in the data, but also due to the intermixed and correlated stimuli in the videos. For instance, the ratings Language and Faces for subject 1 in Movie1 have correlation 0.68, and we observed that the voxels selected for Faces and Language overlapped significantly. Voxels in the language centers of the brain improve the prediction of Faces since the two stimuli are causally related, but it might be preferable to capture this correlation by adding edges between the rating nodes of our GMRF. Interestingly, there was some consistency in voxel selection between subjects, even though our model did not incorporate cross-subject voxel selection. Comparing the Faces voxels for Subject 3 (Fig. 3(b)) to those for Subject 2 (Fig. 4(a)), we see that the respective voxels do come from similar regions. This provides further evidence that the feature selection methods are finding real patterns in the fMRI data. Finally, we discuss the results of applying our spatial prior. We added the prior for those ratings for which it improved cross-validation trials for all subjects: Motion, Language, and Faces. Comparison of the voxels selected with and without our spatial prior reveals that the prior results in more spatially coherent groups of voxels. Note that the total number of voxels selected does not rise in general. As shown in Fig. 4(a), the voxels for Faces for subject 3 include a relevant group of voxels even without the prior, but including the spatial prior results in including additional voxels near this region. Similar results for Language are shown for subject 1.
Arousal prediction was actually hurt by including the spatial prior; looking at the voxels selected for subject 2 for Arousal (Fig. 3(c)), we see that there is almost no spatial grouping originally, so perhaps here the spatial prior is implausible.

5 Discussion

This work, and the other PBAIC entries, demonstrated that a wide range of subjective experiences can be predicted from fMRI data collected during subjects' exposure to rich stimuli. Our probabilistic model in particular demonstrated the value of time-series and multi-subject data, as the use of edges representing correlations across time and correlations between subjects each improved the accuracy of our predictions significantly.

[Figure 4: Effect of applying the spatial prior: (a) Faces, Subject 2; (b) Language, Subject 1. In each pair, the left image is without and the right is with the prior applied.]

Further, while voxels are very noisy, with appropriate regularization and the use of a spatially-based prior, reliable prediction was possible using individual voxels as features. Although voxels were selected from the whole brain, many of the voxels selected as features in our model were located in brain regions known to be activated by relevant stimuli. One natural extension to our work would include the addition of interactions between distinct rating types, such as Language and Faces, which are likely to be correlated. This may improve predictions, and could also result in more targeted voxel selection for each rating. More broadly, though, the PBAIC experiments provided an extremely rich data set, including complex spatial and temporal interactions among brain voxels and among features of the stimuli. There are many aspects of this data we have yet to explore, including modeling the relationships between the voxels themselves across time, perhaps identifying interesting cascading patterns of voxel activity. Another interesting direction would be to determine which temporal aspects of the semantic ratings are best encoded by brain activity; for instance, it is possible that brain activity may respond more strongly to changes in some stimuli rather than simply stimulus presence. Such investigations could provide further insight into brain activity in response to complex stimuli, in addition to improving our ability to make accurate predictions from fMRI data.

Acknowledgments

This work was supported by NSF grant DBI-0345474.

References

[1] CVX Matlab software. http://www.stanford.edu/~boyd/cvx/.
[2] Pittsburgh Brain Activity Interpretation Competition: inferring experience-based cognition from fMRI. http://www.ebc.pitt.edu/competition.html.
[3] What's on your mind? Nature Neuroscience, 9(981), 2006.
[4] G. M. Boynton, S. A. Engel, G. H. Glover, and D. J. Heeger. Linear systems analysis of functional magnetic resonance imaging in human V1. J. Neurosci., 16:4207-4221, 1996.
[5] D. Chigirev, G. Stephens, and the T. P. E. team. Predicting base features with supervoxels. Abstract presented, 12th HBM meeting, Florence, Italy, 2006.
[6] D. D. Cox and R. L. Savoy. Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage, 19:261-270, 2003.
[7] K. J. Friston, A. P. Holmes, K. J. Worsley, J. P. Poline, C. D. Frith, and R. S. J. Frackowiak. Statistical parametric maps in functional imaging: A general linear approach. HBM, 2(4):189-210, 1995.
[8] U. Hasson, Y. Nir, I. Levy, G. Fuhrmann, and R. Malach. Intersubject synchronization of cortical activity during natural vision.
Science, 303(1634), 2004.
[9] Y. Kamitani and F. Tong. Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8:679-685, 2005.
[10] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[11] S. Lauritzen. Graphical Models. Oxford University Press, New York, 1996.
[12] N. K. Logothetis. The underpinnings of the BOLD functional magnetic resonance imaging signal. The Journal of Neuroscience, 23(10):3963-3971, 2003.
[13] T. Mitchell, R. Hutchinson, R. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to decode cognitive states from brain images. Machine Learning, 57(1-2):145-175, 2004.
[14] E. Olivetti, D. Sona, and S. Veeramachaneni. Gaussian process regression and recurrent neural networks for fMRI image classification. Abstract presented, 12th HBM meeting, Florence, Italy, 2006.
[15] T. P. Speed and H. T. Kiiveri. Gaussian Markov distributions over finite graphs. Annals of Statistics, 14, 1986.
2,321
3,106
Attentional Processing on a Spike-Based VLSI Neural Network
Yingxue Wang, Rodney Douglas, and Shih-Chii Liu
Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
yingxue,rjd,shih@ini.phys.ethz.ch

Abstract

The neurons of the neocortex communicate by asynchronous events called action potentials (or "spikes"). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large-Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real time. In this paper we describe such an hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated "pointer-map" rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation, but that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target.

1 Introduction

The network models described in the neuroscience literature have frequently used rate equations to avoid the difficulties of formulating mathematical descriptions of spiking behaviors, and also to avoid the excessive computational resources required for simulating spiking networks. Now, the construction of multi-chip hybrid VLSI (hVLSI) systems that implement large-scale networks of real-time spiking neurons and spike-based sensors is rapidly becoming a reality [3-5, 7], and so it becomes possible to explore the performance of event-based systems in various processing tasks, and the network behavior of populations of spiking, rather than rate, neurons. In this paper we use an hVLSI network to implement a spiking version of the "pointer-map" architecture previously described for rate networks by Hahnloser and colleagues [2]. In this architecture, a small number of pointer neurons are incorporated in the feedback of a recurrently connected network. The pointers steer the feedback onto the map, and so focus processing on the attended map neurons. This is an interesting architecture because it reflects a general computational property of sensorimotor and attentional/intentional processing based on pointing. Directing attention, foveating eyes, and reaching limbs all appeal to a pointer-like interaction with the world, and such pointing is known to modulate the responses of neurons in a number of cortical and subcortical areas. The operation of the pointer-map depends on the steady focusing of feedback on the map neurons during the period of attention. It is easy to see how this steady control can be achieved when the neurons have continuous rate outputs, but it is not obvious whether this behavior can be achieved also with intermittently spiking neural outputs. Our objective was thus to evaluate whether networks of spiking neurons would be able to combine the benefits of both event-based processing and the attentional properties of the pointer-map architecture.

2 Pointer-Map Architecture

A pointer-map network consists of two reciprocally connected populations of excitatory neurons.
Firstly, there is a large population of map neurons that, for example, provide a place encoding of some variable such as the orientation of a visual bar stimulus. A second, small population of pointer neurons exercises attentional control on the map. In addition to the reciprocal connections between the two populations, the map neurons receive feedforward (e.g. sensory) input, and the pointer neurons receive top-down attentional inputs that instruct the pointers to modulate the location and intensity of the processing on the map (see Fig. 1(a)). The important functional difference between conventional recurrent networks (equivalently, "recurrent maps") and the pointer-map is that the pointer neurons are inserted in the feedback loop, and so are able to modulate the effect of the feedback by their top-down inputs. The usual recurrent excitatory connections between neurons are replaced in the pointer-map by recurrent connections between the map neurons and the pointer neurons that have sine and cosine weight profiles. Consequently, the activities of the pointer neurons generate a vectorial pattern of recurrent excitation whose direction points to a particular location on the map (Fig. 1(b)). Global inhibition provides competition between the map neurons, so that overall the pointer-map behaves as an attentionally selective soft winner-take-all network.

[Figure 1: Pointer-map architecture. (a) The network consists of two layers of excitatory neurons. The map layer receives feedforward sensory inputs and inputs from two pointer neurons. The pointer neurons receive top-down attentional inputs and also inputs from the map layer. The recurrent connections between the map neurons and pointer neurons are set according to sine and cosine profiles. (b) The interaction between pointer neurons and map neurons. Each circle indicates the activity of one neuron. Clear circles indicate silent neurons and the sizes of the gray circles are proportional to the activation of the active neurons. The vector formed by the activities of the two pointer neurons on this angular plot points in the direction (the pointer angle θ) of the map neurons where the pointer-to-map input is the largest. The map-to-pointer input is proportional to the population vector of activities of the map neurons.]

3 Spiking Network Chip Architecture

We implemented the pointer-map architecture on a multi-neuron transceiver chip fabricated in a 4-metal, 2-poly 0.35 µm CMOS process. The chip (Fig. 2) has 16 VLSI integrate-and-fire neurons, of which one acts as the global inhibitory neuron. Each neuron has 8 input synapses (excitatory and inhibitory). The circuit details of the soma and synapses are described elsewhere [4]. Input and output spikes are communicated within and between chips using the asynchronous Address Event Representation (AER) protocol [3]. In this protocol, the action potentials that travel along point-to-point axonal connections are replaced by digital addresses on a bus that are usually the labels of source neurons and/or target synapses.
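A minimal sketch of the weight profiles and the pointer read-out described in Sec. 2: map neuron j at preferred angle θ_j connects to the two pointers with weights cos θ_j and sin θ_j, the pointer activities form a population vector whose angle selects the attended location, and the feedback W^T P peaks at the matching map neuron. The 9-neuron layout over [0°, 90°] mirrors the chip experiments reported below; everything else is illustrative:

```python
import numpy as np

n_map = 9
theta = np.linspace(0.0, np.pi / 2, n_map)    # preferred angles of map neurons

# Map<->pointer weights share the cosine/sine profile: row 0 -> P1, row 1 -> P2.
W = np.stack([np.cos(theta), np.sin(theta)])  # shape (2, n_map)

M = np.zeros(n_map)
M[3] = 1.0                                    # an activity bump on the map

P = W @ M                                     # map -> pointer: population vector
pointer_angle = np.degrees(np.arctan2(P[1], P[0]))

feedback = W.T @ P                            # pointer -> map steering input,
print(pointer_angle, feedback.argmax())       # largest at the attended neuron 3
```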
[Figure 2: Architecture of the multi-neuron chip. The chip has 15 integrate-and-fire excitatory neurons and one global inhibitory neuron. Each neuron has 8 input synapses. Input spikes and output spikes are communicated using an asynchronous handshaking protocol called Address Event Representation. When an input spike is to be sent to the chip, the handshaking signals, Req and Ack, are used to ensure that only valid addresses on a common digital bus are latched and decoded by X- and Y-decoders. The arbiter block arbitrates between all outgoing neuron spikes, and the neuron spike is sent off as the address of the neuron on a common digital bus through two handshaking signals (Reqout and Ackout). The synaptic weights of 2 of the 8 synapses can be specified uniquely through an on-chip Digital-to-Analog converter that sets the synaptic weight of each synapse before that particular synapse is stimulated. The synaptic weight is specified as part of the digital address that normally codes the synaptic address.]

In our chip, five bits of the AER address space are used to encode also the synaptic weight [1]. An on-chip Digital-to-Analog Converter (DAC) transforms the digital weights into the analog signals that set the individual efficacies of the excitatory and inhibitory synapses for each neuron (Fig. 2).

[Figure 3: Resulting spatial distribution of activity in the map neurons in response to attentional input to the pointer neurons. The frequencies of the attentional inputs to P1 and P2 are (a) [200Hz, 0Hz] and (b) [0Hz, 200Hz]. The y-axis shows the firing rate (Hz) of the map neurons (1-9) listed on the x-axis. The polar plot beside each panel shows the pointer angle θ described by the pointer neuron activities.]

4 Experiments

Our pointer-map was composed of a total of 12 neurons: 2 served as pointer neurons; 9 as map neurons; and 1 as the global inhibitory neuron. The synaptic weights of these neurons have a coefficient of variation in synaptic efficacy of about 0.25 due to silicon process variations. Through the on-chip DAC, we were able to reduce this variance for the excitatory synapses by a factor of 10. We did not compensate for the variance in the inhibitory synapses because it was technically more challenging to do so. The synaptic weights from each pointer neuron to every map neuron j = 1, 2, ..., 9 (Fig. 4(a)) were set according to the profile shown in Fig. 1(a). We compared the match between the programmed spatial connectivity and the desired sine/cosine profile by activating only the top-down connections from the pointer neurons to the map neurons, while the bottom-up connections from the map neurons to the pointer neurons, and the global inhibition, were inactivated. In fact, because of the lower signal resolution, chip mismatch, and system noise, the measured profiles were only a qualitative match to a sine and cosine (Fig. 3), and the worst-case variation from the ideal value was up to 50% for very small weights. Nevertheless, in spite of the imperfect match, we were able to reproduce most of the observations of [2].

[Figure 4: Network architecture used on the chip for attentional modulation. (a) Originally proposed pointer-map architecture. (b) New network architecture with no requirement for strong excitatory recurrent connections. The global inhibition is now replaced by neuron-specific inhibition.]

4.1 Attentional Input Control

We tested the attentional control of the pointer neurons for this network (Fig. 4(a)) with activated recurrent connections and global inhibition. In addition, a common small constant input was applied to all map neurons. The location and level of activity in the map layer can be steered via the inputs to the pointer neurons, as seen in the three examples of Fig. 5. These results are similar to those observed by Hahnloser and colleagues in their rate model.
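To illustrate the weight-in-address scheme of [1], the sketch below packs a 5-bit synaptic weight alongside the neuron and synapse addresses in one AER word. The field widths for neuron (4 bits, 16 neurons) and synapse (3 bits, 8 synapses) follow the chip description, but the bit ordering is hypothetical; the actual chip's word layout is not specified here:

```python
# Hypothetical AER word layout: | weight (5 bits) | neuron (4 bits) | synapse (3 bits) |
WEIGHT_BITS, NEURON_BITS, SYNAPSE_BITS = 5, 4, 3

def pack_event(neuron, synapse, weight):
    """Pack the target synapse address and the DAC weight into one AER word."""
    assert 0 <= weight < 2 ** WEIGHT_BITS
    assert 0 <= neuron < 2 ** NEURON_BITS and 0 <= synapse < 2 ** SYNAPSE_BITS
    return (weight << (NEURON_BITS + SYNAPSE_BITS)) | (neuron << SYNAPSE_BITS) | synapse

def unpack_event(word):
    synapse = word & (2 ** SYNAPSE_BITS - 1)
    neuron = (word >> SYNAPSE_BITS) & (2 ** NEURON_BITS - 1)
    weight = word >> (NEURON_BITS + SYNAPSE_BITS)
    return neuron, synapse, weight

word = pack_event(neuron=7, synapse=2, weight=21)
assert unpack_event(word) == (7, 2, 21)
```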
4.2 Attentional Target Selection

One computational feature of the rate pointer-map is its multistability: if two or more sensory stimuli are presented to the network, strong attentional inputs to the pointers can select one of these stimuli even if that stimulus is not the strongest one. The preferred stimulus depends on the initial activities of the map and pointer neurons. Moreover, attentional inputs can steer the attention to a different location on the map, even after a stronger stimulus is selected initially. We repeated these experiments (Fig. 4 of [2]) for the spiking network. In our experiments, only two map neurons received feedforward sensory inputs, which consisted of two regular spike trains of different frequencies. As shown in Fig. 6(a), the map neuron with the stronger feedforward input was selected. Attention could be steered to a different part of the map array by providing the necessary attentional inputs, and the map neuron receiving the weaker stimulus could then suppress the activity of the other map neuron. Furthermore, the original rate model can produce attentional memorization effects, that is, the location of the map layer activity is retained even after the inputs to the pointer neurons are withdrawn. However, we were unsuccessful in duplicating the results of these experiments (see Fig. 6(a)) because the recurrent connection strength parameter β had to be greater than 1. To explain why this strong recurrent connection strength was necessary, we first describe the steady-state rate activities M1 and M2 of two arbitrary map neurons that are active:

M1 = ⌊m1 - α(M1 + M2) + β(cos θ1 P1 + sin θ1 P2)⌋₊   (1)
M2 = ⌊m2 - α(M1 + M2) + β(cos θ2 P1 + sin θ2 P2)⌋₊   (2)

where P1, P2 are the steady-state rate activities of the pointer neurons; m1, m2 are the activities of the map neurons due to the sensory inputs; α and β determine the strength of the inhibitory and excitatory connections respectively; and β cos θi and β sin θi determine the connection strengths between the pointer neurons and map neuron i, for θi ∈ [0°, 90°]. The activities of the pointer neurons are given by

P1 = ⌊p1 + β(cos θ1 M1 + cos θ2 M2)⌋₊   (3)
P2 = ⌊p2 + β(sin θ1 M1 + sin θ2 M2)⌋₊   (4)

where p1 and p2 are the activities induced by inputs to the two pointer neurons. Through substitution of Eqns. (3)-(4) into Eqns. (1)-(2) respectively, and assuming p1, p2 = 0, it follows that in order to satisfy the condition that M1 > M2 for m1 < m2, we need

β > 1 / √(1 - cos(θ1 - θ2)) > 1.   (5)

There are several factors that make it difficult for us to reproduce the attentional memorization experiments. Firstly, since we are only using a small number of neurons, each input spike has to create more than one output spike from a neuron in order to satisfy the above condition. On the one hand, this is very hard to implement, because the neurons have a refractory period: any input currents arriving during this time will not influence the neuron. This means that we cannot use self-excitation to get an effective β > 1. On the other hand, even for β = 1 (one input spike causes one output spike), the network can easily become unstable, because the timing of the arrival of the inhibitory and the excitatory inputs becomes a critical factor for system stability. Secondly, the network has to operate in a hard winner-take-all mode because of the variance in the inhibitory synaptic efficacies. This means that a neuron is reset to its resting potential whenever it receives an inhibitory spike, thus removing all memory.
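The multistability claim of Eq. (5) can be checked by relaxing Eqs. (1)-(4) to a fixed point. In the sketch below we start from a state in which the weaker, previously attended neuron is active and then withdraw the pointer input (p1 = p2 = 0): with β just above the bound the attended neuron keeps winning, and just below it the stronger stimulus takes over. All parameter values are illustrative, and α is chosen large enough (α > β² - 1) to keep the loop stable, a choice of ours rather than of the paper:

```python
import numpy as np

def relu(x):
    return max(x, 0.0)

def steady_state(m1, m2, th1, th2, alpha, beta,
                 M1=3.0, M2=0.0, n_iters=5000, lr=0.1):
    """Damped relaxation of Eqs. (1)-(4) with p1 = p2 = 0 (attention withdrawn),
    starting from a state where neuron 1 (weaker input m1) was attended."""
    for _ in range(n_iters):
        P1 = relu(beta * (np.cos(th1) * M1 + np.cos(th2) * M2))   # Eq. (3)
        P2 = relu(beta * (np.sin(th1) * M1 + np.sin(th2) * M2))   # Eq. (4)
        t1 = relu(m1 - alpha * (M1 + M2) + beta * (np.cos(th1) * P1 + np.sin(th1) * P2))
        t2 = relu(m2 - alpha * (M1 + M2) + beta * (np.cos(th2) * P1 + np.sin(th2) * P2))
        M1 += lr * (t1 - M1)
        M2 += lr * (t2 - M2)
    return M1, M2

th1, th2 = np.radians(20.0), np.radians(70.0)
beta_min = 1.0 / np.sqrt(1.0 - np.cos(th1 - th2))   # Eq. (5) bound, about 1.67
alpha = 2.2                                          # strong enough inhibition for stability

print(steady_state(0.8, 1.0, th1, th2, alpha, beta=0.98 * beta_min))
# -> roughly (0, 2): below the bound, the stronger stimulus takes over
print(steady_state(0.8, 1.0, th1, th2, alpha, beta=1.05 * beta_min))
# -> roughly (7, 0): above the bound, the attended weaker stimulus is memorized
```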
4.3 Attentional Memorization

By modifying the network architecture (see Fig. 4(b)), we were able to avoid using the strong excitatory connections required in the original network. In our modified architecture, the inhibition is no longer global. Instead, each neuron inhibits all other neurons in the map population but itself. The steady-state rate activities M1 and M2 are now given by

M1 = ⌊m1 - α M2 + β(cos θ1 P1 + sin θ1 P2)⌋₊   (6)
M2 = ⌊m2 - α M1 + β(cos θ2 P1 + sin θ2 P2)⌋₊   (7)

The equations for the steady-state pointer neuron activities P1 and P2 remain as before. The new condition for β is now

β > (1 - α) / √(1 - cos(θ1 - θ2)),   (8)

which means that β can be smaller than one. The intuitive explanation for the decrease in the required β is that, in the original architecture, the global inhibition inhibits all the map neurons including the winner. Therefore, in order to memorize the attended stimulus, the excitatory connections need to be strengthened to compensate for the self-inhibition. But in the new scenario, we remove the self-inhibition, which relaxes the requirement for strong excitation. Using this new architecture, we performed the same experiments as described in Section 4.2, and we were now able to demonstrate attentional memorization. That is, the attended neuron with the weaker sensory input survived even after the attentional inputs were withdrawn. The same qualitative results were obtained even when all the remaining map neurons had a low background firing rate mimicking the effect of weak sensory inputs at different locations.

[Figure 5: Results of experiments showing responses of the map neurons for 3 settings of input strengths to the pointer neurons. Each map neuron has a background firing rate of 30Hz measured in the absence of activated recurrent connections and global inhibition. The attentional inputs to pointer neurons P1 and P2 are (a) [700Hz, 50Hz], (b) [700Hz, 700Hz], (c) [50Hz, 700Hz]. The y-axis shows the firing rate (Hz) of the map neurons (1-9) listed on the x-axis.]

5 Conclusion

In this paper, we have described a hardware "pointer-map" neural network composed of spiking neurons that performs an interesting task of selective attentional processing previously described in a simulated "pointer-map" rate model by Hahnloser and colleagues. Neural network behaviors seen in computer simulations that use rate equations would likely be observed also in spiking networks if many input spikes can be integrated before the post-synaptic neuron's threshold is reached. However, extensive integration is not possible for practical electronic networks, in which there are relatively small numbers of neurons and synapses. We found that most of the computational features of their simulated rate model could be reproduced in our hardware spiking implementation despite imprecisions in the synaptic weights and the inevitable fabrication-related variability in the performance of individual neurons. One significant difference between our spiking implementation and the rate model is the mechanism required to memorize a previously attended target. In our spike-based implementation, it was necessary to modify the original pointer-map architecture so that the inhibition no longer depends on a single global inhibitory neuron. Instead, each excitatory neuron inhibits all other neurons in the map population but itself.
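A small numerical comparison of the two conditions: Eq. (8) scales the required recurrent gain of Eq. (5) by (1 - α), so with moderate cross-inhibition the modified network no longer needs β > 1, i.e. one input spike need not evoke more than one output spike. The θ and α values are illustrative:

```python
import numpy as np

def beta_min_original(th1, th2):
    """Eq. (5): required gain with global inhibition (the winner inhibits itself)."""
    return 1.0 / np.sqrt(1.0 - np.cos(th1 - th2))

def beta_min_modified(th1, th2, alpha):
    """Eq. (8): required gain when each neuron inhibits all others but itself."""
    return (1.0 - alpha) / np.sqrt(1.0 - np.cos(th1 - th2))

th1, th2 = np.radians(20.0), np.radians(70.0)
print(beta_min_original(th1, th2))            # about 1.67: needs beta > 1
for alpha in (0.2, 0.5, 0.8):
    print(alpha, beta_min_modified(th1, th2, alpha))
# At alpha = 0.5 the modified network only needs beta > 0.84, so a single
# input spike no longer has to evoke more than one output spike.
```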
Unfortunately, this approximate equivalence between excitatory and inhibitory neurons is inconsistent with the anatomical observation that only about 15% of cortical neurons are inhibitory. However, the original architecture could probably work if we had larger populations of map neurons, more synapses, and/or NMDA-like synapses with longer time constants. This is a scenario that we will explore in the future, along with a better characterization of the switching-time dynamics of the attentional memorization experiments.

Figure 6: Results of attentional memorization experiments using the two different architectures in Fig. 4. (a) Results from the original architecture. The sensory inputs to two map neurons M3 and M7 were set to [200 Hz, 230 Hz]. The experiment was divided into 5 phases. In phase 1, the bottom-up connections and inhibitory connections were inactivated. In phase 2, the inhibitory connections were activated, so map neuron M3, which received the weaker input, was suppressed. In phase 3, the bottom-up connections were activated; map neuron M3 was now active because of the steering activity from the pointer neurons. In phase 4, the pointer neurons P1 and P2 were stimulated by attentional inputs of frequencies [700 Hz, 0 Hz], which amplified the activity of M3, but the map activity returned to the activity shown in phase 3 once the attentional inputs were withdrawn in phase 5. (b) Results from the modified architecture. The sensory inputs to M3 and M7 were of frequencies [200 Hz, 230 Hz] for the red curve and [40 Hz, 50 Hz] for the blue curve. The 5 phases in the experiment were as described in (a). However, in phase 5, we could see that map neuron M3 retained its activity even after the attentional inputs were withdrawn (attentional inputs to P1 and P2 were [700 Hz, 0 Hz] for the red curve and [300 Hz, 0 Hz] for the blue curve).

Acknowledgments

The authors would like to thank M. Oster for help with setting up the AER infrastructure, S. Zahnd for the PCB design, and T. Delbrück for discussions on the digital-to-analog converter circuits. This work is partially supported by ETH Research Grant TH-20/04-2 and EU grant "DAISY" FP6-2005-015803.

References

[1] Y. X. Wang and S. C. Liu, "Programmable synaptic weights for an aVLSI network of spiking neurons," in Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, pp. 4531-4534, 2006.
[2] R. Hahnloser, R. J. Douglas, M. A. Mahowald, and K. Hepp, "Feedback interactions between neuronal pointers and maps for attentional processing," Nature Neuroscience, vol. 2, pp. 746-752, 1999.
[3] K. A. Boahen, "Point-to-point connectivity between neuromorphic chips using address events," IEEE Transactions on Circuits and Systems II, vol. 47, pp. 416-434, 2000.
[4] S.-C. Liu and R. Douglas, "Temporal coding in a network of silicon integrate-and-fire neurons," IEEE Transactions on Neural Networks: Special Issue on Temporal Coding for Neural Information Processing, vol. 15, no. 5, Sep., pp. 1305-1314, 2004.
[5] S. R. Deiss, R. J. Douglas, and A. M. Whatley, "A pulse-coded communications infrastructure for neuromorphic systems," in Pulsed Neural Networks, W. Maass and C. M. Bishop, Eds. Boston, MA: MIT Press, 1999, ch. 6, pp. 157-178, ISBN 0-262-13350-4.
[6] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, Apr 1998.
[7] G. Indiveri, T. Horiuchi, E. Niebur, and R.
Douglas, "A competitive network of spiking VLSI neurons," in World Congress on Neuroinformatics, F. Rattay, Ed. Vienna, Austria: ARGESIM/ASIM Verlag, Sept 24-29 2001, ARGESIM Reports.
[8] M. Oster and S.-C. Liu, "Spiking inputs to a winner-take-all network," in Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2006, vol. 18.
Convex Repeated Games and Fenchel Duality

Shai Shalev-Shwartz¹ and Yoram Singer¹,²
¹ School of Computer Sci. & Eng., The Hebrew University, Jerusalem 91904, Israel
² Google Inc., 1600 Amphitheater Parkway, Mountain View, CA 94043, USA

Abstract

We describe an algorithmic framework for an abstract game which we term a convex repeated game. We show that various online learning and boosting algorithms can all be derived as special cases of our algorithmic framework. This unified view explains the properties of existing algorithms and also enables us to derive several new interesting algorithms. Our algorithmic framework stems from a connection that we build between the notions of regret in game theory and weak duality in convex optimization.

1 Introduction and Problem Setting

Several problems arising in machine learning can be modeled as a convex repeated game. Convex repeated games are closely related to online convex programming (see [19, 9] and the discussion in the last section). A convex repeated game is a two players game that is performed in a sequence of consecutive rounds. On round $t$ of the repeated game, the first player chooses a vector $w_t$ from a convex set $S$. Next, the second player responds with a convex function $g_t : S \to \mathbb{R}$. Finally, the first player suffers an instantaneous loss $g_t(w_t)$. We study the game from the viewpoint of the first player. The goal of the first player is to minimize its cumulative loss, $\sum_t g_t(w_t)$. To motivate this rather abstract setting let us first cast the more familiar setting of online learning as a convex repeated game. Online learning is performed in a sequence of consecutive rounds. On round $t$, the learner first receives a question, cast as a vector $x_t$, and is required to provide an answer for this question. For example, $x_t$ can be an encoding of an email message and the question is whether the email is spam or not. The prediction of the learner is performed based on a hypothesis, $h_t : X \to Y$, where $X$ is the set of questions and $Y$ is the set of possible answers. In the aforementioned example, $Y$ would be $\{+1, -1\}$, where $+1$ stands for a spam email and $-1$ stands for a benign one. After predicting an answer, the learner receives the correct answer for the question, denoted $y_t$, and suffers loss according to a loss function $\ell(h_t, (x_t, y_t))$. In most cases, the hypotheses used for prediction come from a parameterized set of hypotheses, $H = \{h_w : w \in S\}$. For example, the set of linear classifiers, which is used for answering yes/no questions, is defined as $H = \{h_w(x) = \mathrm{sign}(\langle w, x \rangle) : w \in \mathbb{R}^n\}$. Thus, rather than saying that on round $t$ the learner chooses a hypothesis, we can say that the learner chooses a vector $w_t$ and its hypothesis is $h_{w_t}$. Next, we note that once the environment chooses a question-answer pair $(x_t, y_t)$, the loss function becomes a function over the hypotheses space or equivalently over the set of parameter vectors $S$. We can therefore redefine the online learning process as follows. On round $t$, the learner chooses a vector $w_t \in S$, which defines a hypothesis $h_{w_t}$ to be used for prediction. Then, the environment chooses a question-answer pair $(x_t, y_t)$, which induces the following loss function over the set of parameter vectors, $g_t(w) = \ell(h_w, (x_t, y_t))$. Finally, the learner suffers the loss $g_t(w_t) = \ell(h_{w_t}, (x_t, y_t))$. We have therefore described the process of online learning as a convex repeated game. In this paper we assess the performance of the first player using the notion of regret.
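To make the reduction concrete, the following is a minimal sketch of the online-learning instance of the game, with a toy data stream and a simple subgradient-descent strategy for the first player. The strategy, step size, and data are illustrative stand-ins, not the framework developed below.

```python
import numpy as np

# A toy instantiation of the convex repeated game as online learning:
# each round the environment poses (x_t, y_t), which induces the hinge
# loss g_t(w) = [1 - y_t <w, x_t>]_+, and player 1 suffers g_t(w_t).
rng = np.random.default_rng(0)
d, T, eta = 5, 1000, 0.1

def g(w, x, y):
    return max(0.0, 1.0 - y * float(np.dot(w, x)))

def g_subgrad(w, x, y):
    return -y * x if g(w, x, y) > 0.0 else np.zeros_like(x)

w = np.zeros(d)                      # player 1's opening move, w_1
cumulative = 0.0
for t in range(T):
    x = rng.normal(size=d)           # the question posed on round t
    y = 1.0 if x[0] > 0 else -1.0    # a (toy) linearly separable answer
    cumulative += g(w, x, y)         # loss g_t(w_t) suffered by player 1
    w -= eta * g_subgrad(w, x, y)    # one simple strategy for choosing w_{t+1}
print("average loss:", cumulative / T)
```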
Given a number of rounds $T$ and a fixed vector $u \in S$, we define the regret of the first player as the excess loss for not consistently playing the vector $u$,

$$\frac{1}{T}\sum_{t=1}^{T} g_t(w_t) \;-\; \frac{1}{T}\sum_{t=1}^{T} g_t(u)\,.$$

Our main result is an algorithmic framework for the first player which guarantees low regret with respect to any vector $u \in S$. Specifically, we derive regret bounds that take the following form

$$\forall u \in S,\qquad \frac{1}{T}\sum_{t=1}^{T} g_t(w_t) \;-\; \frac{1}{T}\sum_{t=1}^{T} g_t(u) \;\le\; \frac{f(u) + L}{\sqrt{T}}\,, \qquad (1)$$

where $f : S \to \mathbb{R}$ and $L \in \mathbb{R}_+$. Informally, the function $f$ measures the "complexity" of vectors in $S$ and the scalar $L$ is related to some generalized Lipschitz property of the functions $g_1, \ldots, g_T$. We defer the exact requirements we impose on $f$ and $L$ to later sections. Our algorithmic framework emerges from a representation of the regret bound given in Eq. (1) using an optimization problem. Specifically, we rewrite Eq. (1) as follows

$$\frac{1}{T}\sum_{t=1}^{T} g_t(w_t) \;\le\; \inf_{u \in S}\; \frac{1}{T}\sum_{t=1}^{T} g_t(u) \;+\; \frac{f(u) + L}{\sqrt{T}}\,. \qquad (2)$$

That is, the average loss of the first player should be bounded above by the minimum value of an optimization problem in which we jointly minimize the average loss of $u$ and the "complexity" of $u$ as measured by the function $f$. Note that the optimization problem on the right-hand side of Eq. (2) can only be solved in hindsight after observing the entire sequence of loss functions. Nevertheless, writing the regret bound as in Eq. (2) implies that the average loss of the first player forms a lower bound for a minimization problem. The notion of duality, commonly used in convex optimization theory, plays an important role in obtaining lower bounds for the minimal value of a minimization problem (see for example [14]). By generalizing the notion of Fenchel duality, we are able to derive a dual optimization problem, which can be optimized incrementally, as the game progresses. In order to derive explicit quantitative regret bounds we make immediate use of the fact that the dual objective lower bounds the primal objective. We therefore reduce the process of playing convex repeated games to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress. By doing so we are able to tie the primal objective value, the average loss of the first player, and the increase in the dual. The rest of this paper is organized as follows. In Sec. 2 we establish our notation and point to a few mathematical tools that we use throughout the paper. Our main tool for deriving algorithms for playing convex repeated games is a generalization of Fenchel duality, described in Sec. 3. Our algorithmic framework is given in Sec. 4 and analyzed in Sec. 5. The generality of our framework allows us to utilize it in different problems arising in machine learning. Specifically, in Sec. 6 we underscore the applicability of our framework for online learning and in Sec. 7 we outline and analyze boosting algorithms based on our framework. We conclude with a discussion and point to related work in Sec. 8. Due to the lack of space, some of the details are omitted from the paper and can be found in [16].

2 Mathematical Background

We denote scalars with lower case letters (e.g. $x$ and $w$), and vectors with bold face letters (e.g. $\mathbf{x}$ and $\mathbf{w}$). The inner product between vectors $x$ and $w$ is denoted by $\langle x, w \rangle$. Sets are designated by upper case letters (e.g. $S$). The set of non-negative real numbers is denoted by $\mathbb{R}_+$. For any $k \ge 1$, the set of integers $\{1, \ldots, k\}$ is denoted by $[k]$.
A norm of a vector $x$ is denoted by $\|x\|$. The dual norm is defined as $\|\lambda\|_* = \sup\{\langle x, \lambda \rangle : \|x\| \le 1\}$. For example, the Euclidean norm, $\|x\|_2 = (\langle x, x \rangle)^{1/2}$, is dual to itself, and the $\ell_1$ norm, $\|x\|_1 = \sum_i |x_i|$, is dual to the $\ell_\infty$ norm, $\|x\|_\infty = \max_i |x_i|$. We next recall a few definitions from convex analysis. The reader familiar with convex analysis may proceed to Lemma 1, while for a more thorough introduction see for example [1]. A set $S$ is convex if for any two vectors $w_1, w_2$ in $S$, all the line between $w_1$ and $w_2$ is also within $S$. That is, for any $\alpha \in [0, 1]$ we have that $\alpha w_1 + (1 - \alpha) w_2 \in S$. A set $S$ is open if every point in $S$ has a neighborhood lying in $S$. A set $S$ is closed if its complement is an open set. A function $f : S \to \mathbb{R}$ is closed and convex if for any scalar $\alpha \in \mathbb{R}$, the level set $\{w : f(w) \le \alpha\}$ is closed and convex. The Fenchel conjugate of a function $f : S \to \mathbb{R}$ is defined as $f^*(\theta) = \sup_{w \in S} \langle w, \theta \rangle - f(w)$. If $f$ is closed and convex then the Fenchel conjugate of $f^*$ is $f$ itself. The Fenchel-Young inequality states that for any $w$ and $\theta$ we have that $f(w) + f^*(\theta) \ge \langle w, \theta \rangle$. A vector $\lambda$ is a sub-gradient of a function $f$ at $w$ if for all $w' \in S$ we have that $f(w') - f(w) \ge \langle w' - w, \lambda \rangle$. The differential set of $f$ at $w$, denoted $\partial f(w)$, is the set of all sub-gradients of $f$ at $w$. If $f$ is differentiable at $w$ then $\partial f(w)$ consists of a single vector which amounts to the gradient of $f$ at $w$ and is denoted by $\nabla f(w)$. Sub-gradients play an important role in the definition of the Fenchel conjugate. In particular, the following lemma states that if $\lambda \in \partial f(w)$ then the Fenchel-Young inequality holds with equality.

Lemma 1 Let $f$ be a closed and convex function and let $\partial f(w')$ be its differential set at $w'$. Then, for all $\lambda' \in \partial f(w')$ we have, $f(w') + f^*(\lambda') = \langle \lambda', w' \rangle$.

A continuous function $f$ is $\sigma$-strongly convex over a convex set $S$ with respect to a norm $\|\cdot\|$ if $S$ is contained in the domain of $f$ and for all $v, u \in S$ and $\alpha \in [0, 1]$ we have

$$f(\alpha v + (1 - \alpha) u) \;\le\; \alpha f(v) + (1 - \alpha) f(u) - \tfrac{1}{2}\,\sigma\,\alpha\,(1 - \alpha)\,\|v - u\|^2\,. \qquad (3)$$

Strongly convex functions play an important role in our analysis primarily due to the following lemma.

Lemma 2 Let $\|\cdot\|$ be a norm over $\mathbb{R}^n$ and let $\|\cdot\|_*$ be its dual norm. Let $f$ be a $\sigma$-strongly convex function on $S$ and let $f^*$ be its Fenchel conjugate. Then, $f^*$ is differentiable with $\nabla f^*(\theta) = \arg\max_{x \in S} \langle \theta, x \rangle - f(x)$. Furthermore, for any $\theta, \lambda \in \mathbb{R}^n$ we have

$$f^*(\theta + \lambda) - f^*(\theta) \;\le\; \langle \nabla f^*(\theta), \lambda \rangle + \frac{1}{2\sigma}\|\lambda\|_*^2\,.$$

Two notable examples of strongly convex functions which we use are as follows.

Example 1 The function $f(w) = \frac{1}{2}\|w\|_2^2$ is 1-strongly convex over $S = \mathbb{R}^n$ with respect to the $\ell_2$ norm. Its conjugate function is $f^*(\theta) = \frac{1}{2}\|\theta\|_2^2$.

Example 2 The function $f(w) = \sum_{i=1}^n w_i \log(w_i / \frac{1}{n})$ is 1-strongly convex over the probabilistic simplex, $S = \{w \in \mathbb{R}_+^n : \|w\|_1 = 1\}$, with respect to the $\ell_1$ norm. Its conjugate function is $f^*(\theta) = \log(\frac{1}{n}\sum_{i=1}^n \exp(\theta_i))$.
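As a quick numerical sanity check of Example 1, the sketch below approximates the Fenchel conjugate of $f(w) = \frac{1}{2}\|w\|_2^2$ by brute force over a finite grid; the grid range, resolution, and test point are arbitrary choices, so the match holds only up to the grid spacing.

```python
import numpy as np

# Example 1 check: f*(theta) = sup_w <w, theta> - f(w) should equal
# 0.5 * ||theta||_2^2 when f(w) = 0.5 * ||w||_2^2. The sup is taken over
# a 2-d grid, an illustrative stand-in for the full domain R^n.
f = lambda w: 0.5 * float(np.dot(w, w))
axis = np.linspace(-5.0, 5.0, 201)
grid = [np.array([a, b]) for a in axis for b in axis]

def conjugate(theta):
    return max(float(np.dot(w, theta)) - f(w) for w in grid)

theta = np.array([1.2, -0.7])
print(conjugate(theta), 0.5 * float(np.dot(theta, theta)))  # approx equal
```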
3 Generalized Fenchel Duality

In this section we derive our main analysis tool. We start by considering the following optimization problem,

$$\inf_{w \in S}\; \Big( c\, f(w) + \sum_{t=1}^{T} g_t(w) \Big)\,,$$

where $c$ is a non-negative scalar. An equivalent problem is

$$\inf_{w_0, w_1, \ldots, w_T}\; c\, f(w_0) + \sum_{t=1}^{T} g_t(w_t) \quad \text{s.t.} \quad w_0 \in S \ \text{ and } \ \forall t \in [T],\; w_t = w_0\,.$$

Introducing $T$ vectors $\lambda_1, \ldots, \lambda_T$, where each $\lambda_t \in \mathbb{R}^n$ is a vector of Lagrange multipliers for the equality constraint $w_t = w_0$, we obtain the following Lagrangian

$$\mathcal{L}(w_0, w_1, \ldots, w_T, \lambda_1, \ldots, \lambda_T) \;=\; c\, f(w_0) + \sum_{t=1}^{T} g_t(w_t) + \sum_{t=1}^{T} \langle \lambda_t, w_0 - w_t \rangle\,.$$

The dual problem is the task of maximizing the following dual objective value,

$$\begin{aligned} D(\lambda_1, \ldots, \lambda_T) &= \inf_{w_0 \in S,\, w_1, \ldots, w_T} \mathcal{L}(w_0, w_1, \ldots, w_T, \lambda_1, \ldots, \lambda_T) \\ &= -\,c \sup_{w_0 \in S}\Big( \big\langle w_0, -\tfrac{1}{c}\textstyle\sum_{t=1}^T \lambda_t \big\rangle - f(w_0) \Big) \;-\; \sum_{t=1}^{T} \sup_{w_t}\big( \langle w_t, \lambda_t \rangle - g_t(w_t) \big) \\ &= -\,c\, f^*\Big(-\tfrac{1}{c}\textstyle\sum_{t=1}^{T} \lambda_t\Big) \;-\; \sum_{t=1}^{T} g_t^*(\lambda_t)\,, \end{aligned}$$

where, following the exposition of Sec. 2, $f^*, g_1^*, \ldots, g_T^*$ are the Fenchel conjugate functions of $f, g_1, \ldots, g_T$. Therefore, the generalized Fenchel dual problem is

$$\sup_{\lambda_1, \ldots, \lambda_T}\; -\,c\, f^*\Big(-\tfrac{1}{c}\textstyle\sum_{t=1}^{T} \lambda_t\Big) \;-\; \sum_{t=1}^{T} g_t^*(\lambda_t)\,. \qquad (4)$$

Note that when $T = 1$ and $c = 1$, the above duality is the so-called Fenchel duality.

4 A Template Learning Algorithm for Convex Repeated Games

In this section we describe a template learning algorithm for playing convex repeated games. As mentioned before, we study convex repeated games from the viewpoint of the first player, which we shortly denote as P1. Recall that we would like our learning algorithm to achieve a regret bound of the form given in Eq. (2). We start by rewriting Eq. (2) as follows

$$\sum_{t=1}^{T} g_t(w_t) - c\,L \;\le\; \inf_{u \in S}\; \Big( c\, f(u) + \sum_{t=1}^{T} g_t(u) \Big)\,, \qquad (5)$$

where $c = \sqrt{T}$. Thus, up to the sublinear term $c\,L$, the cumulative loss of P1 lower bounds the optimum of the minimization problem on the right-hand side of Eq. (5). In the previous section we derived the generalized Fenchel dual of the right-hand side of Eq. (5). Our construction is based on the weak duality theorem stating that any value of the dual problem is smaller than the optimum value of the primal problem. The algorithmic framework we propose is therefore derived by incrementally ascending the dual objective function. Intuitively, by ascending the dual objective we move closer to the optimal primal value and therefore our performance becomes similar to the performance of the best fixed weight vector which minimizes the right-hand side of Eq. (5). Initially, we use the elementary dual solution $\lambda_t^1 = 0$ for all $t$. We assume that $\inf_w f(w) = 0$ and for all $t$, $\inf_w g_t(w) = 0$, which imply that $D(\lambda_1^1, \ldots, \lambda_T^1) = 0$. We assume in addition that $f$ is $\sigma$-strongly convex. Therefore, based on Lemma 2, the function $f^*$ is differentiable. At trial $t$, P1 uses for prediction the vector

$$w_t = \nabla f^*\Big(-\tfrac{1}{c}\textstyle\sum_{i=1}^{T} \lambda_i^t\Big)\,. \qquad (6)$$

After predicting $w_t$, P1 receives the function $g_t$ and suffers the loss $g_t(w_t)$. Then, P1 updates the dual variables as follows. Denote by $\partial_t$ the differential set of $g_t$ at $w_t$, that is,

$$\partial_t = \{\lambda : \forall w \in S,\; g_t(w) - g_t(w_t) \ge \langle \lambda, w - w_t \rangle\}\,. \qquad (7)$$

The new dual variables $(\lambda_1^{t+1}, \ldots, \lambda_T^{t+1})$ are set to be any set of vectors which satisfy the following two conditions:

(i) $\exists \lambda' \in \partial_t$ s.t. $D(\lambda_1^{t+1}, \ldots, \lambda_T^{t+1}) \ge D(\lambda_1^t, \ldots, \lambda_{t-1}^t, \lambda', \lambda_{t+1}^t, \ldots, \lambda_T^t)$;
(ii) $\forall i > t$, $\lambda_i^{t+1} = 0$. $\qquad (8)$

In the next section we show that condition (i) ensures that the increase of the dual at trial $t$ is proportional to the loss $g_t(w_t)$. The second condition ensures that we can actually calculate the dual at trial $t$ without any knowledge of the yet to be seen loss functions $g_{t+1}, \ldots, g_T$. We conclude this section with two update rules that trivially satisfy the above two conditions. The first update scheme simply finds $\lambda' \in \partial_t$ and sets

$$\lambda_i^{t+1} = \begin{cases} \lambda' & \text{if } i = t \\ \lambda_i^t & \text{if } i \ne t \end{cases}\,. \qquad (9)$$

The second update defines

$$(\lambda_1^{t+1}, \ldots, \lambda_T^{t+1}) = \operatorname*{argmax}_{\lambda_1, \ldots, \lambda_T} D(\lambda_1, \ldots, \lambda_T) \quad \text{s.t.} \quad \forall i \ne t,\; \lambda_i = \lambda_i^t\,. \qquad (10)$$
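A minimal sketch of the template algorithm for the special case $f(w) = \frac{1}{2}\|w\|^2$, where $\nabla f^*(\theta) = \theta$ (Example 1 and Lemma 2), so that Eq. (6) becomes $w_t = -\frac{1}{c}\sum_i \lambda_i^t$, combined with the simple update of Eq. (9). The hinge losses and the synthetic stream are illustrative choices.

```python
import numpy as np

# Template algorithm (Sec. 4) with f(w) = 0.5 ||w||^2, whose conjugate
# satisfies grad f*(theta) = theta. The dual update is the simple rule of
# Eq. (9): lambda_t is set to a subgradient of g_t at w_t (zero is a
# valid subgradient when the hinge is inactive).
rng = np.random.default_rng(1)
d, T = 3, 400
c = np.sqrt(T)
lam = np.zeros((T, d))                    # dual variables, all zero at start
for t in range(T):
    w = -lam.sum(axis=0) / c              # Eq. (6); entries with i > t are 0
    x = rng.normal(size=d)
    y = 1.0 if x[0] + x[1] > 0 else -1.0  # g_t(w) = [1 - y <w, x>]_+
    if 1.0 - y * float(np.dot(w, x)) > 0.0:
        lam[t] = -y * x                   # Eq. (9): only the t-th dual moves
print("final predictor:", -lam.sum(axis=0) / c)
```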
5 Analysis

In this section we analyze the performance of the template algorithm given in the previous section. Our proof technique is based on monitoring the value of the dual objective function. The main result is the following lemma, which gives upper and lower bounds for the final value of the dual objective function.

Lemma 3 Let $f$ be a $\sigma$-strongly convex function with respect to a norm $\|\cdot\|$ over a set $S$ and assume that $\min_{w \in S} f(w) = 0$. Let $g_1, \ldots, g_T$ be a sequence of convex and closed functions such that $\inf_w g_t(w) = 0$ for all $t \in [T]$. Suppose that a dual-incrementing algorithm which satisfies the conditions of Eq. (8) is run with $f$ as a complexity function on the sequence $g_1, \ldots, g_T$. Let $w_1, \ldots, w_T$ be the sequence of primal vectors that the algorithm generates and $\lambda_1^{T+1}, \ldots, \lambda_T^{T+1}$ be its final sequence of dual variables. Then, there exists a sequence of sub-gradients $\lambda'_1, \ldots, \lambda'_T$, where $\lambda'_t \in \partial_t$ for all $t$, such that

$$\sum_{t=1}^{T} g_t(w_t) - \frac{1}{2\,\sigma\,c}\sum_{t=1}^{T}\|\lambda'_t\|_*^2 \;\le\; D(\lambda_1^{T+1}, \ldots, \lambda_T^{T+1}) \;\le\; \inf_{w \in S}\, \Big( c\,f(w) + \sum_{t=1}^{T} g_t(w) \Big)\,.$$

Proof The second inequality follows directly from the weak duality theorem. Turning to the leftmost inequality, denote $\Delta_t = D(\lambda_1^{t+1}, \ldots, \lambda_T^{t+1}) - D(\lambda_1^t, \ldots, \lambda_T^t)$ and note that $D(\lambda_1^{T+1}, \ldots, \lambda_T^{T+1})$ can be rewritten as

$$D(\lambda_1^{T+1}, \ldots, \lambda_T^{T+1}) = \sum_{t=1}^{T} \Delta_t - D(\lambda_1^1, \ldots, \lambda_T^1) = \sum_{t=1}^{T} \Delta_t\,, \qquad (11)$$

where the last equality follows from the fact that $f^*(0) = g_1^*(0) = \ldots = g_T^*(0) = 0$. The definition of the update implies that $\Delta_t \ge D(\lambda_1^t, \ldots, \lambda_{t-1}^t, \lambda'_t, 0, \ldots, 0) - D(\lambda_1^t, \ldots, \lambda_{t-1}^t, 0, 0, \ldots, 0)$ for some subgradient $\lambda'_t \in \partial_t$. Denoting $\theta_t = -\frac{1}{c}\sum_{j=1}^{t-1} \lambda_j$, we now rewrite the lower bound on $\Delta_t$ as,

$$\Delta_t \;\ge\; -\,c\,\big(f^*(\theta_t - \lambda'_t/c) - f^*(\theta_t)\big) - g_t^*(\lambda'_t)\,. \qquad (12)$$

Using Lemma 2 and the definition of $w_t$ we get that $\Delta_t \ge \langle w_t, \lambda'_t \rangle - g_t^*(\lambda'_t) - \frac{1}{2\,\sigma\,c}\|\lambda'_t\|_*^2$. Since $\lambda'_t \in \partial_t$ and since we assume that $g_t$ is closed and convex, we can apply Lemma 1 to get that $\langle w_t, \lambda'_t \rangle - g_t^*(\lambda'_t) = g_t(w_t)$. Plugging this equality into Eq. (12) and summing over $t$ we obtain that $\sum_{t=1}^{T} \Delta_t \ge \sum_{t=1}^{T} g_t(w_t) - \frac{1}{2\,\sigma\,c}\sum_{t=1}^{T}\|\lambda'_t\|_*^2$. Combining the above inequality with Eq. (11) concludes our proof.

The following regret bound follows as a direct corollary of Lemma 3.

Theorem 1 Under the same conditions of Lemma 3, denote $L = \frac{1}{T}\sum_{t=1}^{T}\|\lambda'_t\|_*^2$. Then, for all $w \in S$ we have,

$$\frac{1}{T}\sum_{t=1}^{T} g_t(w_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(w) \;\le\; \frac{c\,f(w)}{T} + \frac{L}{2\,\sigma\,c}\,.$$

In particular, if $c = \sqrt{T}$, we obtain the bound,

$$\frac{1}{T}\sum_{t=1}^{T} g_t(w_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(w) \;\le\; \frac{f(w) + L/(2\sigma)}{\sqrt{T}}\,.$$

6 Application to Online Learning

In Sec. 1 we cast the task of online learning as a convex repeated game. We now demonstrate the applicability of our algorithmic framework for the problem of instance ranking. We analyze this setting since several prediction problems, including binary classification, multiclass prediction, multilabel prediction, and label ranking, can be cast as special cases of the instance ranking problem. Recall that on each online round, the learner receives a question-answer pair. In instance ranking, the question is encoded by a matrix $X_t$ of dimension $k_t \times n$ and the answer is a vector $y_t \in \mathbb{R}^{k_t}$. The semantic of $y_t$ is as follows. For any pair $(i, j)$, if $y_{t,i} > y_{t,j}$ then we say that $y_t$ ranks the $i$'th row of $X_t$ ahead of the $j$'th row. We also interpret $y_{t,i} - y_{t,j}$ as the confidence with which the $i$'th row should be ranked ahead of the $j$'th row. For example, each row of $X_t$ encompasses a representation of a movie while $y_{t,i}$ is the movie's rating, expressed as the number of stars this movie has received by a movie reviewer.
The predictions of the learner are determined based on a weight vector $w_t \in \mathbb{R}^n$ and are defined to be $\hat{y}_t = X_t w_t$. Finally, let us define two loss functions for ranking, both of which generalize the hinge-loss used in binary classification problems. Denote by $E_t$ the set $\{(i, j) : y_{t,i} > y_{t,j}\}$. For all $(i, j) \in E_t$ we define a pair-based hinge-loss $\ell_{i,j}(w; (X_t, y_t)) = [(y_{t,i} - y_{t,j}) - \langle w, x_{t,i} - x_{t,j} \rangle]_+$, where $[a]_+ = \max\{a, 0\}$ and $x_{t,i}, x_{t,j}$ are respectively the $i$'th and $j$'th rows of $X_t$. Note that $\ell_{i,j}$ is zero if $w$ ranks $x_{t,i}$ higher than $x_{t,j}$ with a sufficient confidence. Ideally, we would like $\ell_{i,j}(w_t; (X_t, y_t))$ to be zero for all $(i, j) \in E_t$. If this is not the case, we are being penalized according to some combination of the pair-based losses $\ell_{i,j}$. For example, we can set $\ell(w; (X_t, y_t))$ to be the average over the pair losses,

$$\ell_{\mathrm{avg}}(w; (X_t, y_t)) = \frac{1}{|E_t|}\sum_{(i,j) \in E_t} \ell_{i,j}(w; (X_t, y_t))\,.$$

This loss was suggested by several authors (see for example [18]). Another popular approach (see for example [5]) penalizes according to the maximal loss over the individual pairs,

$$\ell_{\max}(w; (X_t, y_t)) = \max_{(i,j) \in E_t} \ell_{i,j}(w; (X_t, y_t))\,.$$

We can apply our algorithmic framework given in Sec. 4 for ranking, using for $g_t(w)$ either $\ell_{\mathrm{avg}}(w; (X_t, y_t))$ or $\ell_{\max}(w; (X_t, y_t))$. The following theorem provides us with a sufficient condition under which the regret bound from Thm. 1 holds for ranking as well.

Theorem 2 Let $f$ be a $\sigma$-strongly convex function over $S$ with respect to a norm $\|\cdot\|$. Denote by $L_t$ the maximum over $(i, j) \in E_t$ of $\|x_{t,i} - x_{t,j}\|_*^2$. Then, for both $g_t(w) = \ell_{\mathrm{avg}}(w; (X_t, y_t))$ and $g_t(w) = \ell_{\max}(w; (X_t, y_t))$, the following regret bound holds:

$$\forall u \in S,\quad \frac{1}{T}\sum_{t=1}^{T} g_t(w_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(u) \;\le\; \frac{f(u) + \frac{1}{T}\sum_{t=1}^{T} L_t/(2\sigma)}{\sqrt{T}}\,.$$
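The two ranking losses are straightforward to compute; the sketch below transcribes their definitions, with a made-up three-movie example:

```python
import numpy as np

# Pair-based hinge losses for a single (X_t, y_t) pair, following the
# definitions above; the 3-movie data and the weight vector are made up.
def pair_losses(w, X, y):
    E = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    return [max(0.0, (y[i] - y[j]) - float(np.dot(w, X[i] - X[j])))
            for i, j in E]

def loss_avg(w, X, y):
    losses = pair_losses(w, X, y)
    return sum(losses) / len(losses)

def loss_max(w, X, y):
    return max(pair_losses(w, X, y))

X = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # one row per movie
y = np.array([3.0, 1.0, 2.0])                        # star ratings
w = np.array([0.4, -0.2])
print(loss_avg(w, X, y), loss_max(w, X, y))
```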
7 The Boosting Game

In this section we describe the applicability of our algorithmic framework to the analysis of boosting algorithms. A boosting algorithm uses a weak learning algorithm that generates weak hypotheses, whose performances are just slightly better than random guessing, to build a strong hypothesis which can attain an arbitrarily low error. The AdaBoost algorithm, proposed by Freund and Schapire [6], receives as input a training set of examples $\{(x_1, y_1), \ldots, (x_m, y_m)\}$ where for all $i \in [m]$, $x_i$ is taken from an instance domain $\mathcal{X}$, and $y_i$ is a binary label, $y_i \in \{+1, -1\}$. The boosting process proceeds in a sequence of consecutive trials. At trial $t$, the booster first defines a distribution, denoted $w_t$, over the set of examples. Then, the booster passes the training set along with the distribution $w_t$ to the weak learner. The weak learner is assumed to return a hypothesis $h_t : \mathcal{X} \to \{+1, -1\}$ whose average error is slightly smaller than $\frac{1}{2}$. That is, there exists a constant $\gamma > 0$ such that

$$\epsilon_t \;\stackrel{\mathrm{def}}{=}\; \sum_{i=1}^{m} w_{t,i}\,\frac{1 - y_i h_t(x_i)}{2} \;\le\; \frac{1}{2} - \gamma\,. \qquad (13)$$

The goal of the boosting algorithm is to invoke the weak learner several times with different distributions, and to combine the hypotheses returned by the weak learner into a final, so-called strong, hypothesis whose error is small. The final hypothesis combines linearly the $T$ hypotheses returned by the weak learner with coefficients $\alpha_1, \ldots, \alpha_T$, and is defined to be the sign of $h_f(x)$ where $h_f(x) = \sum_{t=1}^{T} \alpha_t h_t(x)$. The coefficients $\alpha_1, \ldots, \alpha_T$ are determined by the booster. In AdaBoost, the initial distribution is set to be the uniform distribution, $w_1 = (\frac{1}{m}, \ldots, \frac{1}{m})$. At iteration $t$, the value of $\alpha_t$ is set to be $\frac{1}{2}\log((1 - \epsilon_t)/\epsilon_t)$. The distribution is updated by the rule $w_{t+1,i} = w_{t,i}\exp(-\alpha_t y_i h_t(x_i))/Z_t$, where $Z_t$ is a normalization factor. Freund and Schapire [6] have shown that under the assumption given in Eq. (13), the error of the final strong hypothesis is at most $\exp(-2\gamma^2 T)$. Several authors [15, 13, 8, 4] have proposed to view boosting as a coordinate-wise greedy optimization process. To do so, note first that $h_f$ errs on an example $(x, y)$ iff $y\,h_f(x) \le 0$. Therefore, the exp-loss function, defined as $\exp(-y\,h_f(x))$, is a smooth upper bound of the zero-one error, which equals 1 if $y\,h_f(x) \le 0$ and 0 otherwise. Thus, we can restate the goal of boosting as minimizing the average exp-loss of $h_f$ over the training set with respect to the variables $\alpha_1, \ldots, \alpha_T$. To simplify our derivation in the sequel, we prefer to say that boosting maximizes the negation of the loss, that is,

$$\max_{\alpha_1, \ldots, \alpha_T}\; -\frac{1}{m}\sum_{i=1}^{m} \exp\Big(-y_i \sum_{t=1}^{T} \alpha_t h_t(x_i)\Big)\,. \qquad (14)$$

In this view, boosting is an optimization procedure which iteratively maximizes Eq. (14) with respect to the variables $\alpha_1, \ldots, \alpha_T$. This view of boosting enables the hypotheses returned by the weak learner to be general functions into the reals, $h_t : \mathcal{X} \to \mathbb{R}$ (see for instance [15]). In this paper we view boosting as a convex repeated game between a booster and a weak learner. To motivate our construction, we would like to note that boosting algorithms define weights in two different domains: the vectors $w_t \in \mathbb{R}^m$ which assign weights to examples and the weights $\{\alpha_t : t \in [T]\}$ over weak hypotheses. In the terminology used throughout this paper, the weights $w_t \in \mathbb{R}^m$ are primal vectors while (as we show in the sequel) each weight $\alpha_t$ of the hypothesis $h_t$ is related to a dual vector $\lambda_t$. In particular, we show that Eq. (14) is exactly the Fenchel dual of a primal problem for a convex repeated game, thus the algorithmic framework described thus far for playing games naturally fits the problem of iteratively solving Eq. (14). To derive the primal problem whose Fenchel dual is the problem given in Eq. (14), let us first denote by $v_t$ the vector in $\mathbb{R}^m$ whose $i$th element is $v_{t,i} = y_i h_t(x_i)$. For all $t$, we set $g_t$ to be the function $g_t(w) = [\langle w, v_t \rangle]_+$. Intuitively, $g_t$ penalizes vectors $w$ which assign large weights to examples which are predicted accurately, that is $y_i h_t(x_i) > 0$. In particular, if $h_t(x_i) \in \{+1, -1\}$ and $w_t$ is a distribution over the $m$ examples (as is the case in AdaBoost), $g_t(w_t)$ reduces to $1 - 2\epsilon_t$ (see Eq. (13)). In this case, minimizing $g_t$ is equivalent to maximizing the error of the individual hypothesis $h_t$ over the examples. Consider the problem of minimizing $c\,f(w) + \sum_{t=1}^{T} g_t(w)$ where $f(w)$ is the relative entropy given in Example 2 and $c = 1/(2\gamma)$ (see Eq. (13)). To derive its Fenchel dual, we note that $g_t^*(\lambda_t) = 0$ if there exists $\beta_t \in [0, 1]$ such that $\lambda_t = \beta_t v_t$ and otherwise $g_t^*(\lambda_t) = \infty$ (see [16]). In addition, let us define $\alpha_t = 2\gamma\beta_t$. Since our goal is to maximize the dual, we can restrict $\lambda_t$ to take the form $\lambda_t = \beta_t v_t = \frac{\alpha_t}{2\gamma} v_t$, and get that

$$D(\lambda_1, \ldots, \lambda_T) = -c\, f^*\Big(-\frac{1}{c}\sum_{t=1}^{T} \lambda_t\Big) = -\frac{1}{2\gamma}\log\Big(\frac{1}{m}\sum_{i=1}^{m} e^{-\sum_{t=1}^{T} \alpha_t y_i h_t(x_i)}\Big)\,. \qquad (15)$$

Minimizing the exp-loss of the strong hypothesis is therefore the dual problem of the following primal minimization problem: find a distribution over the examples, whose relative entropy to the uniform distribution is as small as possible while the correlation of the distribution with each $v_t$ is as small as possible.
Since the correlation of $w$ with $v_t$ is inversely proportional to the error of $h_t$ with respect to $w$, we obtain that in the primal problem we are trying to maximize the error of each individual hypothesis, while in the dual problem we minimize the global error of the strong hypothesis. The intuition of finding distributions which in retrospect result in large error rates of individual hypotheses was also alluded to in [15, 8]. We can now apply our algorithmic framework from Sec. 4 to boosting. We describe the game with the parameters $\alpha_t$, where $\alpha_t \in [0, 2\gamma]$, and underscore that in our case, $\lambda_t = \frac{\alpha_t}{2\gamma} v_t$. At the beginning of the game the booster sets all dual variables to be zero, $\forall t\;\alpha_t = 0$. At trial $t$ of the boosting game, the booster first constructs a primal weight vector $w_t \in \mathbb{R}^m$, which assigns importance weights to the examples in the training set. The primal vector $w_t$ is constructed as in Eq. (6), that is, $w_t = \nabla f^*(\theta_t)$, where $\theta_t = -\sum_i \alpha_i v_i$. Then, the weak learner responds by presenting the loss function $g_t(w) = [\langle w, v_t \rangle]_+$. Finally, the booster updates the dual variables so as to increase the dual objective function. It is possible to show that if the range of $h_t$ is $\{+1, -1\}$ then the update given in Eq. (10) is equivalent to the update $\alpha_t = \min\{2\gamma,\ \frac{1}{2}\log((1 - \epsilon_t)/\epsilon_t)\}$. We have thus obtained a variant of AdaBoost in which the weights $\alpha_t$ are capped above by $2\gamma$. A disadvantage of this variant is that we need to know the parameter $\gamma$. We would like to note in passing that this limitation can be lifted by a different definition of the functions $g_t$. We omit the details due to the lack of space. To analyze our game of boosting, we note that the conditions given in Lemma 3 hold and therefore the left-hand side inequality given in Lemma 3 tells us that $\sum_{t=1}^{T} g_t(w_t) - \frac{1}{2c}\sum_{t=1}^{T}\|\lambda'_t\|_*^2 \le D(\lambda_1^{T+1}, \ldots, \lambda_T^{T+1})$. The definition of $g_t$ and the weak learnability assumption given in Eq. (13) imply that $\langle w_t, v_t \rangle \ge 2\gamma$ for all $t$. Thus, $g_t(w_t) = \langle w_t, v_t \rangle \ge 2\gamma$, which also implies that $\lambda'_t = v_t$. Recall that $v_{t,i} = y_i h_t(x_i)$. Assuming that the range of $h_t$ is $[-1, +1]$ we get that $\|\lambda'_t\|_\infty \le 1$. Combining all the above with the left-hand side inequality given in Lemma 3 we get that $2\gamma T - \frac{T}{2c} \le D(\lambda_1^{T+1}, \ldots, \lambda_T^{T+1})$. Using the definition of $D$ (see Eq. (15)), the value $c = 1/(2\gamma)$, and rearranging terms, we recover the original bound for AdaBoost:

$$\frac{1}{m}\sum_{i=1}^{m} e^{-y_i \sum_{t=1}^{T} \alpha_t h_t(x_i)} \;\le\; e^{-2\gamma^2 T}\,.$$
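For illustration, the sketch below implements the resulting capped variant of AdaBoost with threshold-stump weak learners. The stump learner, the toy data, and the small numerical guard on $\epsilon_t$ are our own choices, and $\gamma$ must be supplied by the user, which is precisely the disadvantage noted above.

```python
import numpy as np

# Capped AdaBoost: identical to AdaBoost except that alpha_t is capped
# at 2 * gamma, the update obtained from Eq. (10) when h_t is binary.
def capped_adaboost(X, y, gamma, T):
    m = len(y)
    w = np.full(m, 1.0 / m)              # uniform initial distribution w_1
    ensemble = []
    for _ in range(T):
        # weak learner: exhaustive search over single-feature stumps
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (1.0, -1.0):
                    h = s * np.where(X[:, j] > thr, 1.0, -1.0)
                    eps = float(np.dot(w, (1.0 - y * h) / 2.0))  # Eq. (13)
                    if best is None or eps < best[0]:
                        best = (eps, j, thr, s)
        eps, j, thr, s = best
        alpha = min(2.0 * gamma,
                    0.5 * np.log((1.0 - eps) / max(eps, 1e-12)))  # the cap
        h = s * np.where(X[:, j] > thr, 1.0, -1.0)
        w = w * np.exp(-alpha * y * h)
        w /= w.sum()                     # normalization by Z_t
        ensemble.append((alpha, j, thr, s))
    return ensemble

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + 0.3 * X[:, 1] > 0, 1.0, -1.0)
model = capped_adaboost(X, y, gamma=0.1, T=10)
print([round(a, 3) for a, *_ in model])
```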
8 Related Work and Discussion

We presented a new framework for designing and analyzing algorithms for playing convex repeated games. Our framework was used for the analysis of known algorithms for both online learning and boosting settings. The framework also paves the way to new algorithms. In a previous paper [17], we suggested the use of duality for the design of online algorithms in the context of mistake bound analysis. The contribution of this paper over [17] is threefold, as we now briefly discuss. First, we generalize the applicability of the framework beyond the specific setting of online learning with the hinge-loss to the general setting of convex repeated games. The setting of convex repeated games was formally termed "online convex programming" by Zinkevich [19] and was first presented by Gordon in [9]. There is a voluminous amount of work on unifying approaches for deriving online learning algorithms. We refer the reader to [11, 12, 3] for work closely related to the content of this paper.

By generalizing our previously studied algorithmic framework [17] beyond online learning, we can automatically utilize well known online learning algorithms, such as the EG and p-norm algorithms [12, 11], in the setting of online convex programming. We would like to note that the algorithms presented in [19] can be derived as special cases of our algorithmic framework by setting $f(w) = \frac{1}{2}\|w\|^2$. Parallel and independently to this work, Gordon [10] described another algorithmic framework for online convex programming that is closely related to the potential-based algorithms described by Cesa-Bianchi and Lugosi [3]. Gordon also considered the problem of defining appropriate potential functions. Our work generalizes some of the theorems in [10] while providing a somewhat simpler analysis. Second, the usage of generalized Fenchel duality rather than the Lagrange duality given in [17] enables us to analyze boosting algorithms based on the framework. Many authors have derived unifying frameworks for boosting algorithms [13, 8, 4]. Nonetheless, our general framework and the connection between game playing and Fenchel duality underscore an interesting perspective of both online learning and boosting. We believe that this viewpoint has the potential of yielding new algorithms in both domains. Last, despite the generality of the framework introduced in this paper, the resulting analysis is more distilled than the earlier analysis given in [17] for two reasons. (i) The usage of Lagrange duality in [17] is somewhat restricted while the notion of generalized Fenchel duality is more appropriate to the general and broader problems we consider in this paper. (ii) The strong convexity property we employ both simplifies the analysis and enables more intuitive conditions in our theorems. There are various possible extensions of the work that we did not pursue here due to the lack of space. For instance, our framework can naturally be used for the analysis of other settings such as repeated games (see [7, 19]). The applicability of our framework to online learning can also be extended to other prediction problems such as regression and sequence prediction. Last, we conjecture that our primal-dual view of boosting will lead to new methods for regularizing boosting algorithms, thus improving their generalization capabilities.

References

[1] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] M. Collins, R.E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 2002.
[5] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. JMLR, 7, Mar 2006.
[6] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, 1995.
[7] Y. Freund and R.E. Schapire. Game theory, on-line prediction and boosting. In COLT, 1996.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2), 2000.
[9] G. Gordon. Regret bounds for prediction problems. In COLT, 1999.
[10] G. Gordon. No-regret algorithms for online convex programs. In NIPS, 2006.
[11] A. J. Grove, N. Littlestone, and D. Schuurmans.
General convergence results for linear discriminant updates. Machine Learning, 43(3), 2001.
[12] J. Kivinen and M. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3), 2001.
[13] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 1999.
[14] Y. Nesterov. Primal-dual subgradient methods for convex problems. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 2005.
[15] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1-40, 1999.
[16] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Technical report, The Hebrew University, 2006.
[17] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In COLT, 2006.
[18] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In ESANN, April 1999.
[19] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
Relational Learning with Gaussian Processes

Wei Chu, CCLS, Columbia Univ., New York, NY 10115
Vikas Sindhwani, Dept. of Comp. Sci., Univ. of Chicago, Chicago, IL 60637
Zoubin Ghahramani, Dept. of Engineering, Univ. of Cambridge, Cambridge, UK
S. Sathiya Keerthi, Yahoo! Research, Media Studios North, Burbank, CA 91504

Abstract

Correlation between instances is often modelled via a kernel function using input attributes of the instances. Relational knowledge can further reveal additional pairwise correlations between variables of interest. In this paper, we develop a class of models which incorporates both reciprocal relational information and input attributes using Gaussian process techniques. This approach provides a novel non-parametric Bayesian framework with a data-dependent covariance function for supervised learning tasks. We also apply this framework to semi-supervised learning. Experimental results on several real world data sets verify the usefulness of this algorithm.

1 Introduction

Several recent developments, such as the growth of the world wide web and the maturation of genomic technologies, have brought new domains of application to machine learning research. Many such domains involve relational data in which instances have "links" or inter-relationships between them that are highly informative for learning tasks, e.g. (Taskar et al., 2002). For example, hyper-linked web documents are often about similar topics, even if their textual contents are disparate when viewed as bags of words. In document categorization, the citations are important as well, since two documents referring to the same reference are likely to have similar content. In computational biology, knowledge about physical interactions between proteins can supplement genomic data for developing good similarity measures for protein network inference. In such cases, a learning algorithm can greatly benefit by taking into account the global network organization of such inter-relationships rather than relying on input attributes alone. One simple but general type of relational information can be effectively represented in the form of a graph $G = (V, E)$. The vertex set $V$ represents a collection of input instances (which may contain the labelled inputs as a subset, but is typically a much larger set of instances). The edge set $E \subseteq V \times V$ represents the pairwise relations over these input instances. In this paper, we restrict our attention to undirected edges, i.e., reciprocal relations, though directionality may be an important aspect of some relational datasets. These undirected edges provide useful structural knowledge about correlation between the vertex instances. In particular, we allow edges to be of two types, "positive" or "negative", depending on whether the associated adjacent vertices are positively or negatively correlated, respectively. On many problems, only positive edges may be available. This setting is also applicable to semi-supervised tasks even on traditional "flat" datasets where the linkage structure may be derived from data input attributes. In graph-based semi-supervised methods, $G$ is typically an adjacency graph constructed by linking each instance (including labelled and unlabelled) to its neighbors according to some distance metric in the input space. The graph $G$ then serves as an estimate of the global geometric structure of the data.
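As a concrete example of such a construction, the sketch below builds a set of positive undirected edges from a K-nearest-neighbor rule over raw inputs; the value of K and the Euclidean metric are illustrative choices left to the practitioner.

```python
import numpy as np

# Positive reciprocal relations from a K-nearest-neighbor adjacency
# graph: an undirected edge E_ij = +1 links each instance (labelled or
# unlabelled) to its K nearest neighbors in Euclidean distance.
def knn_positive_edges(X, K):
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise distances
    np.fill_diagonal(d2, np.inf)                          # no self-edges
    edges = set()
    for i in range(n):
        for j in np.argsort(d2[i])[:K]:
            edges.add((min(i, j), max(i, j)))             # undirected edges
    return sorted(edges)

X = np.random.default_rng(3).normal(size=(8, 2))
print(knn_positive_edges(X, K=2))
```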
Many algorithmic frameworks for semi-supervised (Sindhwani et al., 2005) and transductive learning, see e.g. (Zhou et al., 2004; Zhu et al., 2003), have been derived under the assumption that data points nearby on this graph are positively correlated. Several methods have been proposed recently to incorporate relational information within learning algorithms, e.g. for clustering (Basu et al., 2004; Wagstaff et al., 2001), metric learning (Bar-Hillel et al., 2003), and graphical modeling (Getoor et al., 2002). The reciprocal relations over input instances essentially reflect the network structure or the distribution underlying the data, which enrich our prior belief of how instances in the entire input space are correlated. In this paper, we integrate relational information with input attributes in a non-parametric Bayesian framework based on Gaussian processes (GP) (Rasmussen & Williams, 2006), which leads to a data-dependent covariance/kernel function. We highlight the following aspects of our approach: 1) We propose a novel likelihood function for undirected linkages and carry out approximate inference using efficient Expectation Propagation techniques under a Gaussian process prior. The covariance function of the approximate posterior distribution defines a relational Gaussian process, hereafter abbreviated as RGP. RGP provides a novel Bayesian framework with a data-dependent covariance function for supervised learning tasks. We also derive explicit formulae for linkage prediction over pairs of test points. 2) When applied to semi-supervised learning tasks involving labelled and unlabelled data, RGP is closely related to the warped reproducing kernel Hilbert space approach of (Sindhwani et al., 2005) using a novel graph regularizer. Unlike many recently proposed graph-based Bayesian approaches, e.g. (Zhu et al., 2003; Krishnapuram et al., 2004; Kapoor et al., 2005), which are mainly transductive by design, RGP delineates a decision boundary in the input space and provides probabilistic induction over unseen test points. Furthermore, by maximizing the joint evidence of known labels and linkages, we explicitly involve unlabelled data in the model selection procedure. Such a semi-supervised hyper-parameter tuning method can be very useful when there are very few, possibly noisy labels. 3) On a variety of classification tasks, RGP requires very few labels for providing high-quality generalization on unseen test examples as compared to standard GP classification that ignores relational information. We also report experimental results on semi-supervised learning tasks comparing with competitive deterministic methods. The paper is organized as follows. In section 2 we develop relational Gaussian processes. Semi-supervised learning under this framework is discussed in section 3. Experimental results are presented in section 4. We conclude this paper in section 5.

2 Relational Gaussian Processes

In the standard setting of learning from data, instances are usually described by a collection of input attributes, denoted as a column vector $x \in \mathcal{X} \subset \mathbb{R}^d$. The key idea in Gaussian process models is to introduce a random variable $f_x$ for all points in the input space $\mathcal{X}$. The values of these random variables $\{f_x\}_{x \in \mathcal{X}}$ are treated as outputs of a zero-mean Gaussian process. The covariance between $f_x$ and $f_z$ is fully determined by the coordinates of the data pair $x$ and $z$, and is defined by any Mercer kernel function $K(x, z)$. Thus, the prior distribution over $\mathbf{f} = [f_{x_1}, \ldots, f_{x_n}]^{\mathrm{T}}$ associated with any collection of $n$ points $x_1, \ldots, x_n$ is a multivariate Gaussian, written as
$$P(\mathbf{f}) = \frac{1}{(2\pi)^{n/2}\det(\Sigma)^{1/2}}\,\exp\Big(-\tfrac{1}{2}\,\mathbf{f}^{\mathrm{T}}\Sigma^{-1}\mathbf{f}\Big)\,, \qquad (1)$$

where $\Sigma$ is the $n \times n$ covariance matrix whose $ij$-th element is $K(x_i, x_j)$. In the following, we consider the scenario with undirected linkages over a set of instances.

2.1 Undirected Linkages

Let the vertex set $V$ in the relational graph be associated with $n$ input instances $x_1, \ldots, x_n$. Consider a set of observed pairwise undirected linkages on these instances, denoted as $E = \{E_{ij}\}$. Each linkage is treated as a Bernoulli random variable, i.e. $E_{ij} \in \{+1, -1\}$. Here $E_{ij} = +1$ indicates that the instances $x_i$ and $x_j$ are "positively tied" and $E_{ij} = -1$ indicates the instances are "negatively tied". We propose a new likelihood function to capture these undirected linkages, which is defined as follows:

$$P_{\mathrm{ideal}}(E_{ij} \mid f_{x_i}, f_{x_j}) = \begin{cases} 1 & \text{if } f_{x_i} f_{x_j} E_{ij} > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

This formulation is for ideal, noise-free cases; it enforces that the variable values corresponding to positive and negative edges have the same and opposite signs respectively. In the presence of uncertainty in observing $E_{ij}$, we assume the variable values $f_{x_i}$ and $f_{x_j}$ are contaminated with Gaussian noise that allows some tolerance for noisy observations. The Gaussian noise is of zero mean and unknown variance $\sigma^2$.¹ Let $N(\delta; \mu, \varsigma^2)$ denote a Gaussian random variable $\delta$ with mean $\mu$ and variance $\varsigma^2$. Then the likelihood function (2) becomes

$$\begin{aligned} P(E_{ij} = +1 \mid f_{x_i}, f_{x_j}) &= \iint P_{\mathrm{ideal}}(E_{ij} = +1 \mid f_{x_i} + \delta_i,\; f_{x_j} + \delta_j)\, N(\delta_i; 0, \sigma^2)\, N(\delta_j; 0, \sigma^2)\, d\delta_i\, d\delta_j \\ &= \Phi\Big(\frac{f_{x_i}}{\sigma}\Big)\Phi\Big(\frac{f_{x_j}}{\sigma}\Big) + \Big(1 - \Phi\Big(\frac{f_{x_i}}{\sigma}\Big)\Big)\Big(1 - \Phi\Big(\frac{f_{x_j}}{\sigma}\Big)\Big) \end{aligned} \qquad (3)$$

where $\Phi(z) = \int_{-\infty}^{z} N(\delta; 0, 1)\, d\delta$. The integral in (3) evaluates the volume of a joint Gaussian in the first and third quadrants, where $f_{x_i}$ and $f_{x_j}$ have the same sign. Note that $P(E_{ij} = -1 \mid f_{x_i}, f_{x_j}) = 1 - P(E_{ij} = +1 \mid f_{x_i}, f_{x_j})$ and $P(E_{ij} = +1 \mid f_{x_i}, f_{x_j}) = P(E_{ij} = +1 \mid -f_{x_i}, -f_{x_j})$.

Remarks: One may consider other ways to define a likelihood function for the observed edges. For example, we could define $P_l(E_{ij} = +1 \mid f_{x_i}, f_{x_j}) = \frac{1}{1 + \exp(-\varrho f_{x_i} f_{x_j})}$ where $\varrho > 0$. However, the computation of the predictive probability (9) and its derivatives becomes complicated with this form. Instead of treating edges as Bernoulli variables, we could consider a graph itself as a random variable, and then the probability of observing the graph $G$ can be simply evaluated as: $P(G \mid \mathbf{f}) = \frac{1}{Z}\exp(-\frac{1}{2}\mathbf{f}^{\mathrm{T}}\Delta\mathbf{f})$, where $\Delta$ is a graph-regularization matrix (e.g. graph Laplacian) and $Z$ is a normalization factor that depends on the variable values $\mathbf{f}$. Given that there are numerous graph structures over the instances, the normalization factor $Z$ is intractable in general cases. In the rest of this paper, we will use the likelihood function developed in (3).

2.2 Approximate Inference

Combining the Gaussian process prior (1) with the likelihood function (3), we obtain the posterior distribution as follows,

$$P(\mathbf{f} \mid E) = \frac{1}{P(E)}\, P(\mathbf{f}) \prod_{ij} P(E_{ij} \mid f_{x_i}, f_{x_j}) \qquad (4)$$

where $\mathbf{f} = [f_{x_1}, \ldots, f_{x_n}]^{\mathrm{T}}$ and $ij$ runs over the set of observed undirected linkages. The normalization factor $P(E) = \int P(E \mid \mathbf{f})\, P(\mathbf{f})\, d\mathbf{f}$ is known as the evidence of the model parameters, which serves as a yardstick for model selection. The posterior distribution is non-Gaussian and multi-modal with a saddle point at the origin. Clearly the posterior mean is at the origin as well. It is important to note that reciprocal relations update the correlation between examples but never change individual means. To preserve computational tractability and the true posterior mean, we would rather approximate the posterior distribution as a joint Gaussian centered at the true mean than resort to sampling methods.
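The edge likelihood of Eq. (3), which enters the posterior (4) once per observed linkage, is cheap to evaluate; the following sketch computes it for a few illustrative latent values, using only the standard normal CDF:

```python
import math

def Phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_positive_edge(f_i, f_j, sigma):
    # Eq. (3): probability that the two noisy latent values share a sign
    a, b = Phi(f_i / sigma), Phi(f_j / sigma)
    return a * b + (1.0 - a) * (1.0 - b)

print(p_positive_edge( 1.0,  1.0, sigma=0.5))  # same signs: close to 1
print(p_positive_edge( 1.0, -1.0, sigma=0.5))  # opposite signs: close to 0
print(p_positive_edge( 0.0,  1.0, sigma=0.5))  # ambiguous: exactly 0.5
```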
A family of inference techniques can be applied to obtain the Gaussian approximation. Some popular methods include the Laplace approximation, mean-field methods, variational methods and expectation propagation. It is inappropriate to apply the Laplace approximation in this case, since the posterior distribution is not unimodal and has a saddle point at the true posterior mean. The standard mean-field methods are also hard to use due to the pairwise relations in the observations. Both variational methods and the expectation propagation (EP) algorithm (Minka, 2001) can be applied here. In this paper, we employ the EP algorithm to approximate the posterior distribution as a zero-mean Gaussian. Importantly, this still captures the posterior covariance structure, allowing prediction of link presence.

The key idea of our EP algorithm here is to approximate P(f) ∏_{ij} P(E_ij | f_{x_i}, f_{x_j}) as a parametric product distribution of the form

    Q(f) = P(f) ∏_{ij} t̃(f_ij) = P(f) ∏_{ij} s_ij exp(−½ f_ijᵀ Λ_ij f_ij)

where ij runs over the edge set, f_ij = [f_{x_i}, f_{x_j}]ᵀ, and Λ_ij is a symmetric 2 × 2 matrix. (The likelihood function we defined could also be approximated by a Gaussian mixture of two symmetric components, but the difficulty lies in the number of components growing exponentially after multiplication.) The parameters {s_ij, Λ_ij} in {t̃(f_ij)} are successively optimized by locally minimizing the Kullback-Leibler divergence,

    t̃(f_ij)^new = arg min_{t̃(f_ij)} KL( (Q(f)/t̃(f_ij)^old) P(E_ij | f_ij) ‖ (Q(f)/t̃(f_ij)^old) t̃(f_ij) )    (5)

Since Q(f) is in the exponential family, this minimization can be simply solved by moment matching up to the second order. At the equilibrium the EP algorithm returns a Gaussian approximation to the posterior distribution

    P(f | E) ≈ N(0, A)    (6)

where A = (Σ⁻¹ + Π)⁻¹, Π = ∑_{ij} Π_ij, and Π_ij is an n × n matrix with four non-zero entries augmented from Λ_ij. Note that the matrix Π can be very sparse. The normalization factor in this Gaussian approximation serves as an approximate model evidence, which can be explicitly written as

    P(E) ≈ (|A|^{1/2} / |Σ|^{1/2}) ∏_{ij} s_ij    (7)

The detailed updating formulae have to be omitted here to save space. The approximate evidence (7) provides an upper bound on the true value of P(E) (Wainwright et al., 2005). Its partial derivatives with respect to the model parameters can be analytically derived (Seeger, 2003), and then a gradient-based procedure can be employed for hyperparameter tuning. Although the EP algorithm is known to work quite well in practice, there is no guarantee of convergence to the equilibrium in general. Opper and Winther (2005) proposed expectation consistent (EC) inference as a new framework for approximations that requires two tractable distributions matching on a set of moments. We plan to investigate the EC algorithm as future work.

2.3 Data-dependent Covariance Function

After approximate inference as outlined above, the posterior process conditioned on E is explicitly given by a modified covariance function defined in the following proposition.

Proposition: Given (6), for any finite collection of data points X, the latent random variables {f_x}_{x∈X} conditioned on E have a multivariate normal distribution N(0, Σ̃), where Σ̃ is the covariance matrix whose elements are given by evaluating the kernel function K̃(x, z) : X × X → R defined as

    K̃(x, z) = K(x, z) − k_xᵀ (I + ΠΣ)⁻¹ Π k_z    (8)

where I is the n × n identity matrix, k_x is the column vector [K(x_1, x), …, K(x_n, x)]ᵀ, Σ is the n × n covariance matrix of the vertex set V obtained by evaluating the base kernel K, and Π is defined as in (6).

A proof of this proposition involves some simple matrix algebra and is omitted for brevity. RGP is obtained by a Bayesian update of a standard GP using relational knowledge; it is closely related to the warped reproducing kernel Hilbert space approach (Sindhwani et al., 2005), with the novel graph regularizer Π in place of the standard graph Laplacian. Alternatively, we could simply employ the standard graph Laplacian as an approximation of the matrix Π; this efficient approach has been studied by Sindhwani et al. (2007) for semi-supervised classification problems.
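To make the proposition concrete, here is a minimal sketch of evaluating (8) numerically; it assumes a base kernel function, the n linked instances, and the matrix Π returned by EP as inputs, and all names are illustrative:

```python
import numpy as np

def posterior_kernel(base_k, X, Pi, x, z):
    """Evaluate the data-dependent covariance (8):
    K~(x, z) = K(x, z) - k_x^T (I + Pi Sigma)^{-1} Pi k_z,
    where Sigma is the base kernel matrix over the n linked points X
    and Pi is the (typically sparse) n x n matrix from EP."""
    n = X.shape[0]
    Sigma = np.array([[base_k(a, b) for b in X] for a in X])
    k_x = np.array([base_k(xi, x) for xi in X])
    k_z = np.array([base_k(xi, z) for xi in X])
    # solve (I + Pi Sigma) m = Pi k_z instead of forming the inverse
    m = np.linalg.solve(np.eye(n) + Pi @ Sigma, Pi @ k_z)
    return base_k(x, z) - k_x @ m
```

In practice Σ and the factorization of (I + ΠΣ) would be computed once and reused across kernel evaluations; the sketch recomputes them only for clarity.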
2.4 Linkage Prediction

Given a RGP, the joint distribution of the random variables f_rs = [f_{x_r}, f_{x_s}]ᵀ, associated with a test pair x_r and x_s, is a Gaussian as well. The linkage predictive distribution P(f_rs | E) can be explicitly written as a zero-mean bivariate Gaussian N(f_rs; 0, Σ̃_rs) with covariance matrix

    Σ̃_rs = [ K̃(x_r, x_r)  K̃(x_r, x_s) ;
              K̃(x_s, x_r)  K̃(x_s, x_s) ]

where K̃ is defined as in (8). The predictive probability of having a positive edge can be evaluated as

    P(E_rs | E) = ∫∫ P_ideal(E_rs | f_rs) N(f_rs; 0, Σ̃_rs) df_{x_r} df_{x_s}

which can be simplified as

    P(E_rs | E) = 1/2 + arcsin(ρ E_rs)/π    (9)

where ρ = K̃(x_r, x_s) / √(K̃(x_r, x_r) K̃(x_s, x_s)). It essentially evaluates the updated correlation between f_{x_r} and f_{x_s} after we learn from the observed linkages.
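Given an evaluator of K̃, e.g. the posterior_kernel sketch above with the base kernel and Π bound in, the predictive probability (9) is a one-liner; names are again illustrative:

```python
import numpy as np

def link_probability(ktilde, x_r, x_s):
    """Probability (9) of a positive edge between test points x_r and
    x_s, where ktilde(x, z) evaluates the posterior covariance (8).
    rho is the updated correlation between f_{x_r} and f_{x_s}."""
    rho = ktilde(x_r, x_s) / np.sqrt(ktilde(x_r, x_r) * ktilde(x_s, x_s))
    return 0.5 + np.arcsin(rho) / np.pi
```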
3 Semi-supervised Learning

We now apply the RGP framework to semi-supervised learning, where a large collection of unlabelled examples is available and labelled data is scarce. Unlabelled examples often identify data clusters or low-dimensional data manifolds. It is commonly assumed that the labels of points within a cluster or nearby on a manifold are highly correlated (Chapelle et al., 2003; Zhu et al., 2003). To apply RGP, we construct positive reciprocal relations between examples within a K-nearest neighborhood. K can be set heuristically to the smallest integer for which the K-nearest-neighbor graph over labelled and unlabelled examples is connected, i.e. there is a path between each pair of nodes. Learning on these constructed relational data results in a RGP as described in the previous section (see section 4.1 for an illustration).

With the RGP as our new prior, supervised learning can be carried out in a straightforward way. In the following we focus on binary classification, but this procedure is also applicable to regression, multi-class classification and ranking. Given a set of labelled pairs {z_ℓ, y_ℓ}_{ℓ=1}^m where y_ℓ ∈ {+1, −1}, the Gaussian process classifier (Rasmussen & Williams, 2006) relates the variable f_{z_ℓ} at z_ℓ to the label y_ℓ through a probit noise model, i.e. P(y_ℓ | f_{z_ℓ}) = Φ(y_ℓ f_{z_ℓ} / σ_n), where Φ is the cumulative normal and σ_n² specifies the label noise level. Combining the probit likelihood with the RGP prior defined by the covariance function (8), we have the posterior distribution

    P(f_L | Y, E) = (1/P(Y|E)) P(f_L | E) ∏_ℓ P(y_ℓ | f_{z_ℓ})

where f_L = [f_{z_1}, …, f_{z_m}]ᵀ, P(f_L | E) is a zero-mean Gaussian with an m × m covariance matrix Σ̃ whose entries are defined by (8), and P(Y|E) is the normalization factor. The posterior distribution can be approximated as a Gaussian as well, denoted as N(μ, C), and the quantity P(Y|E) can be evaluated accordingly (Seeger, 2003). The predictive distribution of the variable f_{z_t} at a test case z_t then becomes a Gaussian, i.e. P(f_{z_t} | Y, E) ≈ N(μ_t, σ_t²), where μ_t = k_tᵀ Σ̃⁻¹ μ and σ_t² = K̃(z_t, z_t) − k_tᵀ (Σ̃⁻¹ − Σ̃⁻¹ C Σ̃⁻¹) k_t, with k_t = [K̃(z_1, z_t), …, K̃(z_m, z_t)]ᵀ. One can compute the Bernoulli distribution over the test label y_t by

    P(y_t | Y, E) = Φ( y_t μ_t / √(σ_n² + σ_t²) )    (10)

To summarize, we first incorporate linkage information into a standard GP, which leads to a RGP, and then perform standard inference with the RGP as the prior in supervised learning. Although we describe RGP in two separate steps, these procedures can be seamlessly merged within the Bayesian framework. As for model selection, it is advantageous to directly use the joint evidence

    P(Y, E) = P(Y|E) P(E)    (11)

to determine the model parameters (such as the kernel parameter, the edge noise level and the label noise level). Note that P(Y, E) explicitly involves unlabelled data in model selection. This can be particularly useful when labelled data is very scarce and possibly noisy.
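The graph construction step described above, picking the smallest K whose K-nearest-neighbor graph is connected, can be sketched as follows. This is an assumed implementation using SciPy's connected-components routine, not code from the paper:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def knn_positive_edges(X):
    """Construct the 'positive' relations of section 3: symmetric
    K-nearest-neighbor edges over the rows of X, with K the smallest
    integer for which the undirected graph is connected."""
    n = X.shape[0]
    order = np.argsort(cdist(X, X), axis=1)  # order[:, 0] is the point itself
    for k in range(1, n):
        adj = np.zeros((n, n), dtype=bool)
        rows = np.repeat(np.arange(n), k)
        adj[rows, order[:, 1:k + 1].ravel()] = True
        adj |= adj.T  # symmetrize the neighbor relation
        n_comp, _ = connected_components(adj, directed=False)
        if n_comp == 1:
            return [(i, j) for i in range(n)
                    for j in range(i + 1, n) if adj[i, j]]
    return []
```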
4 Numerical Experiments

We start with a synthetic case to illustrate the proposed algorithm (RGP), and then verify the usefulness of this approach on three real-world data sets. Throughout the experiments, we consistently compare with the standard Gaussian process classifier (GPC); RGP and GPC differ in the prior only. We employ the linear kernel K(x, z) = x · z or the Gaussian kernel K(x, z) = exp(−(γ/2) ‖x − z‖₂²), and shift the origin of the kernel space to the empirical mean, i.e. K(x, z) − (1/n) ∑_i K(x, x_i) − (1/n) ∑_i K(z, x_i) + (1/n²) ∑_i ∑_j K(x_i, x_j), where n is the number of available labelled and unlabelled data. The centralized kernel is then used as base kernel in our experiments. The label noise level σ_n² in the GPC and RGP models is fixed at 10⁻⁴. The edge noise level σ² of the RGP models is usually varied from 5 to 0.05. The optimal setting of σ² and of γ in the Gaussian kernel is determined by the joint evidence (11) in each trial. When constructing undirected K-nearest-neighbor graphs, K is fixed at the minimal integer required to have a connected graph.

Figure 1: Results on the synthetic dataset. The 30 samples drawn from the Gaussian mixture are presented as dots in (a) and the two labelled samples are indicated by a diamond and a circle respectively. The best γ value is marked by the cross in (b). The curves in (a) present the semi-supervised predictive distributions. The prior covariance matrix of RGP learnt from the data is presented in (c).

Table 1: The four universities are Cornell University, the University of Texas at Austin, the University of Washington and the University of Wisconsin. The numbers of categorized Web pages and undirected linkages in the four-university dataset are listed in the first columns. The averaged AUC scores of label prediction on unlabelled cases are recorded along with standard deviations over 100 trials.

  Univ.  Stud  Other  All   Link   | Student or Not: GPC / LapSVM / RGP              | Other or Not: GPC / LapSVM / RGP
  Corn.  128   617    865   13177  | 0.825±0.016 / 0.987±0.008 / 0.989±0.009         | 0.708±0.021 / 0.865±0.038 / 0.884±0.025
  Texa.  148   571    827   16090  | 0.899±0.016 / 0.994±0.007 / 0.999±0.001         | 0.799±0.021 / 0.932±0.026 / 0.906±0.026
  Wash.  126   939    1205  15388  | 0.839±0.018 / 0.957±0.014 / 0.961±0.009         | 0.782±0.023 / 0.828±0.025 / 0.877±0.024
  Wisc.  156   942    1263  21594  | 0.883±0.013 / 0.976±0.029 / 0.992±0.008         | 0.839±0.014 / 0.812±0.030 / 0.899±0.015

4.1 Demonstration

Suppose samples are distributed as a Gaussian mixture with two components in one-dimensional space, e.g. 0.4 · N(−2.5, 1) + 0.6 · N(2.0, 1). We randomly collected 30 samples from this distribution, shown as dots on the x axis of Figure 1(a). With K = 3, there are 56 "positive" edges over these 30 samples. We fixed σ² = 1 for all the edges, and varied the parameter γ from 0.01 to 10. At each setting, we carried out the Gaussian approximation by EP as described in section 2.2. Based on the approximate model evidence P(E) (7), presented in Figure 1(b), we located the best γ = 0.4. Figure 1(c) presents the posterior covariance function K̃ (8) at this optimal setting. Compared to the data-independent prior covariance function defined by the Gaussian kernel, the posterior covariance function captures the density information of the unlabelled samples. The pairs within the same cluster become positively correlated, whereas the pairs between the two clusters turn out to be negatively correlated. This is learnt without any explicit assumption on density distributions. Given two labelled samples, one per class, indicated by the diamond and the circle in Figure 1(a), we carried out supervised learning on the basis of the new prior K̃, as described in section 3. The joint model evidence P(Y|E)P(E) is plotted in Figure 1(b). The corresponding predictive distribution (10) with the optimal γ = 0.4 is presented in Figure 1(a). Note that the decision boundary of the standard GPC would be around x = 1; we observed that our decision boundary shifts significantly towards the low-density region, respecting the geometry of the data.

4.2 The Four University Dataset

We considered a subset of the WebKB dataset for categorization tasks. (The dataset comes from the Web-KB project, see http://www-2.cs.cmu.edu/~webkb/.) The subset, collected from the Web sites of the computer science departments of four universities, contains 4160 pages and 9998 hyperlinks interconnecting them. These pages have been manually classified into seven categories: student, course, faculty, staff, department, project and other. The text content of each Web page was preprocessed as bag-of-words, a vector of "term frequency" components scaled by "inverse document frequency", which was used as input attributes. The length of each document vector was normalized to unity. The hyperlinks were translated into 66249 undirected "positive" linkages over the pages, under the assumption that two pages are likely to be positively correlated if they are hyper-linked by the same hub page. Note that there are no "negative" linkages in this case. We considered two classification tasks, student vs. non-student and other vs. non-other, for each of the four universities. The numbers of samples and linkages of the four universities are listed in Table 1. We randomly selected 10% of the samples as labelled data and used the remaining samples as unlabelled data; the selection was repeated 100 times. The linear kernel was used as base kernel in these experiments.

Figure 2: Test AUC results of the two semi-supervised learning tasks, PCMAC in (a) and USPS in (b).
The grouped boxes from left to right represent the results of GPC, LapSVM, and RGP respectively at different percentages of labelled samples over 100 trials. The notched boxes have lines at the lower quartile, median, and upper quartile values. The whiskers are lines extending from each end of the box to the most extreme data value within 1.5 times the interquartile range. Outliers are data with values beyond the ends of the whiskers, displayed as dots.

We conducted this experiment in a transductive setting, where the entire linkage data was used to learn the RGP model, and comparisons were made with GPC for predicting labels of unlabelled samples. We also compare with a discriminant kernel approach to semi-supervised learning, the Laplacian SVM (Sindhwani et al., 2005), using the linear kernel and a graph-Laplacian-based regularizer. We recorded the average AUC for predicting labels of unlabelled cases in Table 1. (AUC stands for the area under the Receiver Operating Characteristic (ROC) curve.) Our RGP models significantly outperform the GPC models by incorporating the linkage information in modelling. RGP is very competitive with LapSVM on "Student or Not", while it yields better results on 3 out of 4 tasks of "Other or Not". As future work, it would be interesting to utilize weighted linkages and to compare with other graph kernels.

4.3 Semi-supervised Learning

We chose a binary classification problem in the 20-newsgroups dataset, 985 PC documents vs. 961 MAC documents. The documents were preprocessed, in the same way as in the previous section, into vectors with 7510 elements. We randomly selected 1460 documents as training data, and tested on the remaining 486 documents. We varied the percentage of labelled data gradually from 0.1% to 10%, and at each percentage repeated the random selection of labelled data 100 times. We used the linear kernel in the RGP and GPC models. With K = 4, we got 4685 edges over the 1460 training samples. The test results on the 486 documents are presented in Figure 2(a) as a boxplot. Model parameters for LapSVM were tuned using cross-validation with 50 labelled samples, since it is difficult for discriminant kernel approaches to carry out cross-validation when labelled samples are scarce. Our algorithm yields much better results than GPC and LapSVM, especially when the fraction of labelled data is less than 5%. When the labelled samples are few (a typical case in semi-supervised learning), cross-validation becomes hard to use, while our approach provides Bayesian model selection via the model evidence.

The U.S. Postal Service dataset (USPS) of handwritten digits consists of 16 × 16 gray-scale images. We focused on constructing a classifier to distinguish digit 3 from digit 5. We used the training/test split generated and used by Lawrence and Jordan (2005) in our experiment, for comparison purposes. This partition contains 1214 training samples (556 samples of digit 3 and 658 samples of digit 5) and 326 test samples. With K = 3, we obtained 2769 edges over the 1214 training samples. We randomly picked a subset of the training samples as labelled data and treated the remaining samples as unlabelled. We varied the percentage of labelled data gradually from 0.1% to 10%, and at each percentage repeated the selection of labelled data 100 times. In this experiment, we employed the Gaussian kernel, varied the edge noise level σ² from 5 to 0.5, and tried the following values for γ: [0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025, 0.05, 0.075, 0.1].
The optimal values of γ and σ² were decided by the joint evidence P(Y, E) (11). We report the error rate and AUC on the 326 test data in Figure 2(b) as a boxplot, along with the test results of GPC and LapSVM. When the percentage of labelled data is less than 5%, our algorithm achieved substantially better performance than GPC, and very competitive results compared with LapSVM (tuned with 50 labelled samples), even though RGP used fewer labelled samples in model selection. Comparing with the performance of the transductive SVM (TSVM) and the null category noise model for binary classification (NCNM) reported in Lawrence and Jordan (2005), we are encouraged to see that our approach outperforms TSVM and NCNM on this experiment.

5 Conclusion

We developed a Bayesian framework to learn from relational data based on Gaussian processes. The resulting relational Gaussian processes provide a unified data-dependent covariance function for many learning tasks. We applied this framework to semi-supervised learning and validated the approach on several real-world datasets. While this paper has focused on modelling symmetric (undirected) relations, the relational Gaussian process framework can be generalized to asymmetric (directed) relations as well as multiple classes of relations. Recently, Yu et al. (2006) have represented each relational pair by a tensor product of the attributes of the associated nodes, and have further proposed efficient algorithms. This is a promising direction.

Acknowledgements

W. Chu is partly supported by a research contract from Consolidated Edison. We thank Dengyong Zhou for sharing the preprocessed Web-KB data.

References

Bar-Hillel, A., Hertz, T., Shental, N., & Weinshall, D. (2003). Learning distance functions using equivalence relations. Proceedings of the International Conference on Machine Learning (pp. 11-18).

Basu, S., Bilenko, M., & Mooney, R. J. (2004). A probabilistic framework for semi-supervised clustering. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 59-68).

Chapelle, O., Weston, J., & Schölkopf, B. (2003). Cluster kernels for semi-supervised learning. Neural Information Processing Systems 15 (pp. 585-592).

Getoor, L., Friedman, N., Koller, D., & Taskar, B. (2002). Learning probabilistic models of link structure. Journal of Machine Learning Research, 3, 679-707.

Kapoor, A., Qi, Y., Ahn, H., & Picard, R. (2005). Hyperparameter and kernel learning for graph-based semi-supervised classification. Neural Information Processing Systems 18.

Krishnapuram, B., Williams, D., Xue, Y., Carin, L., Hartemink, A., & Figueiredo, M. (2004). On semi-supervised classification. Neural Information Processing Systems (NIPS).

Lawrence, N. D., & Jordan, M. I. (2005). Semi-supervised learning via Gaussian processes. Advances in Neural Information Processing Systems 17 (pp. 753-760).

Minka, T. P. (2001). A family of algorithms for approximate Bayesian inference. Ph.D. thesis, Massachusetts Institute of Technology.

Opper, M., & Winther, O. (2005). Expectation consistent approximate inference. Journal of Machine Learning Research, 2117-2204.

Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. The MIT Press.

Seeger, M. (2003). Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. Doctoral dissertation, University of Edinburgh.

Sindhwani, V., Chu, W., & Keerthi, S. S. (2007). Semi-supervised Gaussian process classification. The Twentieth International Joint Conference on Artificial Intelligence, to appear.
Sindhwani, V., Niyogi, P., & Belkin, M. (2005). Beyond the point cloud: from transductive to semi-supervised learning. Proceedings of the 22nd International Conference on Machine Learning (pp. 825-832).

Taskar, B., Abbeel, P., & Koller, D. (2002). Discriminative probabilistic models for relational data. Proceedings of the Conference on Uncertainty in Artificial Intelligence.

Wagstaff, K., Cardie, C., Rogers, S., & Schroedl, S. (2001). Constrained k-means clustering with background knowledge. Proceedings of the International Conference on Machine Learning (pp. 577-584).

Wainwright, M. J., Jaakkola, T., & Willsky, A. S. (2005). A new class of upper bounds on the log partition function. IEEE Trans. on Information Theory, 51, 2313-2335.

Yu, K., Chu, W., Yu, S., Tresp, V., & Xu, Z. (2006). Stochastic relational models for discriminative link prediction. Advances in Neural Information Processing Systems, to appear.

Zhou, D., Bousquet, O., Lal, T., Weston, J., & Schölkopf, B. (2004). Learning with local and global consistency. Advances in Neural Information Processing Systems 18 (pp. 321-328).

Zhu, X., Ghahramani, Z., & Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. Proceedings of the 20th International Conference on Machine Learning.
Learning Motion Style Synthesis from Perceptual Observations

Lorenzo Torresani, Riya, Inc. (lorenzo@riya.com)
Peggy Hackney, Integrated Movement Studies (pjhackney@aol.com)
Christoph Bregler, New York University (chris.bregler@nyu.edu)

Abstract

This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and it can extrapolate styles not originally included in the training data.

1 Introduction

Human motion perception can be generally thought of as the result of the interaction of two factors, traditionally termed content and style. Content generally refers to the nature of the action in the movement (e.g. walking, reaching, etc.), while style denotes the particular way that action is performed. In computer animation, the separation of the underlying content of a movement from its stylistic characteristics is particularly important. For example, a system that can synthesize stylistic variations of a given action would be a useful tool for animators. In this work we address such a problem by proposing a system that applies user-specified styles to motion sequences. Specifically, given as input a target motion style and an arbitrary animation or pre-recorded motion, we want to synthesize a novel sequence that preserves the content of the original input motion but exhibits style similar to the user-specified target.

Our approach is inspired by two classes of methods that have successfully emerged within the genre of data-driven animation: sample-based concatenation methods, and techniques based on learned parametric models. Concatenative synthesis techniques [15, 1, 11] are based on the simple idea of generating novel movements by concatenation of motion capture snippets. Since motion is produced by cutting and pasting pre-recorded examples, the resulting animations achieve realism similar to that of pure motion-capture playback. Snippet concatenation can produce novel content by generating arbitrarily complex new movements. However, this approach is restricted to synthesizing only the subset of styles originally contained in the input database. Sample-based concatenation techniques are unable to produce novel stylistic variations and cannot generalize style differences from the existing examples. In recent years, several machine learning animation systems [2, 12, 9] have been proposed that attempt to overcome some of these limitations. Unfortunately, most of these methods learn simple parametric motion models that are unable to fully capture the subtleties and complexities of human movement. As a consequence, animations resulting from these systems are often plagued by low quality and scarce realism.

The technique introduced in this paper is a compromise between the pure concatenative approaches and the methods based on learned parametric models. The aim is to maintain the animated precision of motion capture data, while introducing the flexibility of style changes achievable by learned parametric models.
Our system builds on the observation that stylistically novel, yet highly realistic animations can be generated via space-time interpolation of pairs of motion sequences. We propose to learn not a parametric function of the motion, but rather a parametric function of how the interpolation or extrapolation weights applied to data snippets relate to the styles of the output sequences. This allows us to create motions with arbitrary styles without compromising animation quality.

Several researchers have previously proposed the use of motion interpolation for synthesis of novel movement [18, 6, 10]. These approaches are based on the naive assumption that motion interpolation produces styles corresponding precisely to the interpolation of the styles of the original sequences. In this paper we experimentally demonstrate that the styles generated through motion interpolation are a rather complex function of the styles and contents of the original snippets. We propose to explicitly learn the mapping between motion blending parameters and the resulting animation styles. This enables our animation system not only to generate arbitrary stylistic variations of a given action, but, more importantly, to synthesize sequences matching user-specified stylistic characteristics.

Our approach bears similarities with the Verbs and Adverbs work of Rose et al. [16], in which interpolation models parameterized by style attributes are learned for several actions, such as walking or reaching. Unlike this previously proposed algorithm, our solution can automatically identify sequences having similar content, and therefore does not require manual categorization of motions into classes of actions. This feature allows our algorithm to be used for style editing of sequences without content specification by the user. Additionally, while the Verbs and Adverbs system characterizes motion styles in terms of difficult-to-measure emotional attributes, such as sad or clueless, our approach relies on a theory of movement observation, Laban Movement Analysis, which describes styles by means of a set of rigorously defined perceptual attributes.

2 The LMA Framework

In the computer animation literature, motion style is a vaguely defined concept. In our work, we describe motion styles according to a movement notation system called Laban Movement Analysis, or LMA [7]. We focus on a subset of Laban Movement Analysis: the "LMA-Effort" dimensions. This system does not attempt to describe the coarse aspects of a motion, e.g. whether someone is walking or swinging his/her arm. Instead, it targets the subtle differences in motion style, e.g. is the movement "bound" or "free"? Each LMA-Effort factor varies in intensity between opposing poles, and takes values in a continuous range. The factors are briefly described as follows:

1. The "LMA-Effort Factor of Flow" defines the continuity of the movement. The two opposing poles are "Free" (fluid, released) and "Bound" (controlled, contained, restrained).

2. The "LMA-Effort Factor of Weight" is about the relationship of the movement to gravity. The two opposing extremes are "Light" (gentle, delicate, fine touch) and "Strong" (powerful, forceful, firm touch).

3. The "LMA-Effort Factor of Time" has to do with the person's inner attitude toward the time available, not with how long it takes to perform the movement. The two opposing poles are "Sudden" (urgent, quick) and "Sustained" (stretching the time, indulging).

4. The "LMA-Effort Factor of Space" describes the directness of the movement.
Generally, additional features not present in motion capture data, such as eye gaze, are necessary to detect this factor. We use only the LMA-Effort factors of Flow, Weight, and Time. We model styles as points in a three-dimensional perceptual space derived by translating the LMA-Effort notations for each of these factors into numerical values ranging in the interval [−3, 3].

3 Overview of the system

The key idea of our work is to learn motion style synthesis from a training set of computer-generated animations. The training animations are observed by a human expert who assigns LMA labels to each sequence. This set of supervised data is used to learn a mapping between the space of motion styles and the animation system parameters. We next provide a high-level description of our system, while the following sections give specific details of each component.

3.1 Training: Learning the Style of Motion Interpolation

In order to train our system to synthesize motion styles, we employ a corpus of human motion sequences recorded with a motion capture system. We represent the motion as a time-varying vector of joint angles. In the training stage each motion sequence is manually segmented by an LMA human expert into fragments corresponding to fundamental actions or units of motion. Let X_i denote the joint angle data of the i-th fragment in the database.

Step 1: Matching motion content. We apply a motion matching algorithm to identify fragment pairs (X_i, X_j) containing similar actions. Our motion matching algorithm is based on dynamic time warping. This allows us to compare kinematic contents while factoring out differences in timing or acceleration, which are more often associated with variations in style.

Step 2: Space-time interpolation. We use these motion matches to augment the database with new synthetically-generated styles: given matching motion fragments X_i, X_j, and an interpolation parameter α, space-time interpolation smoothly blends the kinematics and dynamics of the two fragments to produce a new motion X^α_{i,j} with novel, distinct style and timing.

Step 3: Style interpolation learning. Both the synthesized animations X^α_{i,j} as well as the "seed" motion capture data X_i are labeled with LMA-Effort values by an LMA expert. Let e_i and e^α_{i,j} denote the three-dimensional vectors encoding the LMA-Effort qualities of X_i and X^α_{i,j}, respectively. A non-linear regression model [5] is fitted to the LMA labels and the parameters α of the space-time interpolation algorithm. This regression defines a function f predicting the LMA-Effort factors e^α_{i,j} from the style attributes and joint angle data of fragments i and j:

    e^α_{i,j} = f(X_i, X_j, e_i, e_j, α)    (1)

This function-fitting stage allows us to learn how the knobs of our animation system relate to the perceptual space of movement styles.

3.2 Testing: Style Transfer

At testing stage we are given a motion sequence Y and a user-specified motion style ẽ. The goal is to apply style ẽ to the input sequence Y, without modifying the content of the motion. First, we use dynamic time warping to segment the input sequence into snippets Y_i, such that each snippet matches the content of a set of analogous motions {X_{i_1}, …, X_{i_K}} in the database. Among all possible pairwise blends X^α_{i_k,i_l} of examples in the set {X_{i_1}, …, X_{i_K}}, we determine the one that provides the best approximation to the target style ẽ. This objective can be formulated as
    (α*, k*, l*) = arg min_{α,k,l} ‖ẽ − f(X_{i_k}, X_{i_l}, e_{i_k}, e_{i_l}, α)‖    (2)

The animation resulting from space-time interpolation of fragments X_{i_{k*}} and X_{i_{l*}} with parameter α* will exhibit content similar to that of snippet Y_i and style approximating the target ẽ. Concatenating these artificially-generated snippets will produce the desired output.

4 Matching motion content

The objective of the matching algorithm is to identify pairs of sequences having similar motion content or consisting of analogous activities. The method should ignore variations in the style with which the movements are performed. Previous work [2, 12] has shown that the differences in movement styles can be found by examining the parameters of timing and movement acceleration. By contrast, an action is primarily characterized by changes of body configuration in space rather than over time. Thus we compare the content of two motions by identifying similar spatial body poses while allowing for potentially large differences in timing. Specifically, we define the content similarity between motion snippets X_i and X_j as the minimum sum of their squared joint angle differences SSD(X_i, X_j) under a dynamic time warping path. Let d(p, q) = ‖X_i(p) − X_j(q)‖² be our local measure of the distance between the spatial body configurations of X_i at frame p and X_j at frame q. Let T_i be the number of frames in sequence i and L the variable length of a time path w(n) = (p(n), q(n)) aligning the two snippets. We can then formally define SSD(X_i, X_j) as:

    SSD(X_i, X_j) = min_w ∑_n d(w(n))    (3)

subject to the constraints:

    p(1) = 1, q(1) = 1, p(L) = T_i, q(L) = T_j    (4)
    if w(n) = (p, q) then w(n − 1) ∈ {(p − 1, q), (p − 1, q − 1), (p, q − 1)}    (5)

We say that two motions i and j have similar content if SSD(X_i, X_j) is below a certain value.
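A compact sketch of the content distance (3)-(5), assuming each fragment is stored as a (frames × joint angles) array; this is our own illustrative rendering of standard dynamic time warping:

```python
import numpy as np

def ssd(Xi, Xj):
    """DTW content distance (3) between two motion fragments.
    Constraints (4) and (5) are enforced by the table boundaries and
    the three allowed predecessor cells."""
    Ti, Tj = Xi.shape[0], Xj.shape[0]
    d = ((Xi[:, None, :] - Xj[None, :, :]) ** 2).sum(axis=2)  # d(p, q)
    D = np.full((Ti + 1, Tj + 1), np.inf)
    D[0, 0] = 0.0
    for p in range(1, Ti + 1):
        for q in range(1, Tj + 1):
            D[p, q] = d[p - 1, q - 1] + min(D[p - 1, q],
                                            D[p - 1, q - 1],
                                            D[p, q - 1])
    return D[Ti, Tj]
```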
5 Space-time interpolation

A time warping strategy is also employed to synthesize novel animations from the pairs of content-matching examples found by the algorithm outlined in the previous section. Given matching snippets X_i and X_j, the objective is to generate a stylistically novel sequence that maintains the content of the two original motions. The idea is to induce changes in style by acting primarily on the timings of the motions. Let w* = (p*, q*) be the path minimizing Equation 3. This path defines a time alignment between the two sequences. We can interpret the frame correspondences (p*(n), q*(n)) for n = 1, …, L as discrete samples from a continuous 2D curve parameterized by n. Resampling X_i and X_j along this curve will produce synchronized versions of the two animations, but with new timings. Suppose parameter values n⁰_1, …, n⁰_{T_i} are chosen such that p*(n⁰_k) = k. Then X_i(p*(n⁰_k)) will be replayed with its original timing. However, if we use these same parameter values on sequence X_j (i.e. we estimate the joint angles of X_j at time steps q*(n⁰_k)), then the resampled motion will correspond to playing sequence j with the timing of sequence i. Similarly, values n¹_1, …, n¹_{T_j} can be chosen such that q*(n¹_k) = k, and these parameter values can be used to synthesize motion i with the timing of motion j. It is also possible to smoothly interpolate between these two scenarios according to an interpolation parameter α ∈ [0, 1] to produce intermediate time warps. This will result in a time path of length T^α_{ij} = (1 − α)T_i + αT_j. Let us indicate with n^α_1, …, n^α_{T^α_{ij}} the path parameter values obtained from this time interpolation. New stylistic versions of the motions i and j can be produced by estimating the joint angles X̃_i and X̃_j at p*(n^α_k) and q*(n^α_k), respectively. The two resulting sequences will move in synchrony according to the new intermediate timing. From these two synchronized sequences, a novel motion X^α_{i,j} can be generated by averaging the joint angles according to mixing coefficients (1 − α) and α:

    X^α_{i,j}(k) = (1 − α) X̃_i(p*(n^α_k)) + α X̃_j(q*(n^α_k))

The synthesized motion X^α_{i,j} will display content similar to that of X_i and X_j, but it will have a distinct style. We call this procedure "space-time interpolation", as it modifies both the spatial body configurations and the timings of the sequences.

6 Learning style interpolation

Given a pair of content-matching snippets X_i and X_j, our goal is to determine the parameter α that needs to be applied to space-time interpolation in order to produce a motion X^α_{i,j} exhibiting target style ẽ. We propose to solve this task by learning to predict the LMA-Effort qualities of animations synthesized by space-time interpolation. The training data for this supervised learning task consists of our seed motion sequences {X_i} in the database, a set of interpolated motions {X^α_{i,j}}, and the corresponding LMA-Effort qualities {e_i}, {e^α_{i,j}} observed by an LMA human expert. In order to maintain a consistent data size, we stretch or shrink the time trajectories of the joint angles {X_i} to a set length. In order to avoid overfitting, we compress the motion data further by projecting it onto a low-dimensional linear subspace computed using Principal Component Analysis (PCA). In many of the test cases, we found it was sufficient to retain only the first two or three principal components in order to obtain a discriminative representation of the motion contents. Let c_i denote the vector containing the PCA coefficients computed from X_i. Let z^α_{i,j} = [c_iᵀ, c_jᵀ, e_iᵀ, e_jᵀ, α]ᵀ. We pose the task of predicting LMA-Effort qualities as a function approximation problem: the goal is to learn the optimal parameters θ of a parameterized function f(z^α_{i,j}, θ) that models the dependencies between z^α_{i,j} and the observed LMA-Effort values e^α_{i,j}. The parameters θ are chosen so as to minimize the objective function:

    E(θ) = U ∑ L( f(z^α_{i,j}, θ) − e^α_{i,j} ) + ‖θ‖²    (6)

where L is a general loss function and U is a regularization constant aimed at avoiding overfitting and improving generalization. We experimented with several function parameterizations and loss functions applied to our problem. The simplest of the adopted approaches is linear ridge regression [4], which corresponds to choosing the loss function L to be quadratic (i.e. L(·) = (·)²) and f to be linear in input space:

    f(z, θ) = zᵀ θ    (7)

We also applied kernel ridge regression, resulting from mapping the input vectors z into features of a higher-dimensional space via a nonlinear function φ: z → φ(z). In order to avoid the explicit computation of the vectors φ(z_j) in the high-dimensional feature space, we apply the kernel trick and choose mappings φ such that the inner product φ(z_i)ᵀ φ(z_j) can be computed via a kernel function k(z_i, z_j) of the inputs. We compared the performance of kernel ridge regression with that of support vector regression [5]. While kernel ridge regression requires us to store all training examples in order to evaluate the function f at a given input, support vector regression overcomes this limitation by using an ε-insensitive loss function [17]. The resulting f can be evaluated using only a subset of the training data, the set of support vectors.
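As an illustration of the kernel ridge regression variant, here is a minimal sketch with an RBF kernel; gamma and reg stand in for the kernel and regularization hyperparameters tuned by cross-validation, and all names are ours:

```python
import numpy as np

def fit_kernel_ridge(Z, E, gamma=1.0, reg=1e-2):
    """Kernel ridge regression for the style-prediction function f.
    Z is the (N x d) array of stacked inputs z_{i,j}^alpha; E is the
    (N x 3) array of observed LMA-Effort labels (Flow, Weight, Time).
    Returns the dual coefficients for an RBF kernel."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-gamma * sq)
    return np.linalg.solve(K + reg * np.eye(len(Z)), E)

def predict_style(Z_train, alpha, z, gamma=1.0):
    """Evaluate the learned f at a new input z: returns the predicted
    three-dimensional LMA-Effort vector."""
    k = np.exp(-gamma * ((Z_train - z) ** 2).sum(axis=1))
    return k @ alpha
```

As the text notes, every training example must be kept to evaluate this predictor; support vector regression trades the closed-form solve for a sparse set of support vectors.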
7 Testing: Style Transfer

We can restate our initial objective as follows: given an input motion sequence Y in unknown style, and a target motion style ẽ specified by LMA-Effort values, we want to synthesize a sequence having style ẽ and content analogous to that of motion Y. A naive approach to this problem is to seek in the motion database a pair of sequences having content similar to Y and whose interpolation can approximate style ẽ. The learned function f can be used to determine the pair of motions and the interpolation parameter α that produce the best approximation to ẽ. However, such an approach is destined to fail, as Y can be any arbitrarily long and complex sequence, possibly consisting of several movements performed one after the other. As a consequence, we might not have in the database examples that match sequence Y in its entirety.

7.1 Input segmentation and matching

The solution that we propose is inspired by concatenative methods. The idea is to determine the concatenation of database motion examples [X_1, …, X_N] that best matches the content of the input sequence Y. Our approach relies again on dynamic programming, and can be interpreted as a generalization of the dynamic time warping technique presented in Section 4 to the case when a time alignment is sought between a given sequence and a concatenation of a variable number of examples chosen from a set. Let d(p, q, i) be the sum of squared differences between the joint angles of sequence Y at frame p and those of example X_i at frame q. The goal is to recover the time warping path w(n) = (p(n), q(n), i(n)) that minimizes the global error

    min_w ∑_n d(w(n))    (8)

subject to basic segment transition and endpoint constraints. Transition constraints are enforced to guarantee that time order is preserved and that no time frames are omitted. Endpoint constraints require that the time path starts at the beginning frames and finishes at the ending frames of the sequences. The above conditions can be formalized as follows:

    if w(n) = (p, 1, i), then w(n − 1) ∈ {(p − 1, 1, i), (p − 1, T_j, j) for j = 1, …, J}    (9)
    if w(n) = (p, q, i) and q > 1, then w(n − 1) ∈ {(p − 1, q, i), (p − 1, q − 1, i), (p, q − 1, i)}    (10)
    p(1) = 1, q(1) = 1, p(L) = T, q(L) = T_{i(L)}    (11)

where J denotes the number of fragments in the database, L the length of the time warping path, T the number of frames of the input sequence, and T_j the length of the j-th fragment in the database.

Table 1: Mean squared error of LMA-Effort prediction for different function approximation methods.

  Function Approximation Method   Flow MSE   Weight MSE   Time MSE
  Linear Interpolation            0.65       0.97         1.01
  Linear Ridge Regression         1.03       1.04         1.01
  Kernel Ridge Regression         0.50       0.39         0.60
  Support Vector Regression       0.48       0.48         0.61

The global minimum of the objective in Equation (8), subject to constraints (9), (10), (11), can be found using a dynamic programming method originally developed by Ney [14] for the problem of connected word recognition in speech data. Note that this approach induces a segmentation of the input sequence Y into snippets [Y_1, …, Y_N], matching the examples in the optimal concatenation [X_1, …, X_N].
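A sketch of this Ney-style dynamic program for (8) under constraints (9)-(11); the implementation below is our own illustrative rendering (backpointers for recovering the segmentation and the fragment indices are omitted for brevity), not the authors' code:

```python
import numpy as np

def match_concatenation(Y, fragments):
    """Align input sequence Y (T x D joint angles) against the best
    concatenation of database fragments; returns the optimal cost of
    objective (8)."""
    T = Y.shape[0]
    # cost[i] is a T x T_i table of partial alignment costs for fragment i
    cost = [np.full((T, Xi.shape[0]), np.inf) for Xi in fragments]
    for p in range(T):
        # best cost of any fragment ending exactly at input frame p-1
        best_end = 0.0 if p == 0 else min(c[p - 1, -1] for c in cost)
        for i, Xi in enumerate(fragments):
            d = ((Y[p] - Xi) ** 2).sum(axis=1)  # local distances d(p, q, i)
            c = cost[i]
            for q in range(Xi.shape[0]):
                if q == 0:
                    prev = best_end  # transitions (9): enter fragment i
                    if p > 0:
                        prev = min(prev, c[p - 1, 0])
                else:
                    prev = c[p, q - 1]  # transitions (10)
                    if p > 0:
                        prev = min(prev, c[p - 1, q], c[p - 1, q - 1])
                c[p, q] = d[q] + prev
    # endpoint constraints (11): finish at the last frame of some fragment
    return min(c[T - 1, -1] for c in cost)
```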
7.2 Piecewise Style synthesis

The final step of our algorithm uses the concatenation of examples [X_1, …, X_N] determined by the method outlined in the previous section to synthesize a version of motion Y in style ẽ. For each X_i in [X_1, …, X_N], we identify the K most similar database examples according to the criterion defined in Equation 3. Let {X_{i_1}, …, X_{i_K}} denote the K content-neighbors of X_i and {e_{i_1}, …, e_{i_K}} their LMA-Effort values. {X_{i_1}, …, X_{i_K}} defines a cluster of examples having content similar to that of snippet Y_i. The final goal then is to replace each snippet Y_i with a pairwise blend of examples in its cluster so as to produce a motion exhibiting style ẽ. Formally, this is achieved by determining the pair of examples (i_{k*}, i_{l*}) in Y_i's cluster, and the interpolation weight α*, that provide the best approximation to the target style ẽ according to the learned style-prediction function f:

    (α*, k*, l*) = arg min_{α,k,l} ‖ẽ − f(z^α_{i_k,i_l})‖    (12)

Minimization of this objective is achieved by first finding the optimal α for each possible pair (i_k, i_l) of candidate motion fragments. We then select the pair (i_{k*}, i_{l*}) providing the minimum deviation from the target style ẽ. In order to estimate the optimal value of α for pair (i_k, i_l), we evaluate f(z^α_{i_k,i_l}) for M values of α uniformly sampled in the interval [−0.25, 1.25], and choose the value with the closest fit to the target style. We found that f tends to vary smoothly as a function of α, and thus a good estimate of the global minimum in the specified interval can be obtained even with a modest number M of samples. The approximation is further refined using a golden section search [8] around the initial estimate, as sketched below. Note that, by allowing values of α to be chosen in the range [−0.25, 1.25] rather than [0, 1], we give the algorithm the ability to extrapolate from the existing motion styles.

Given the optimal parameters (α*, k*, l*), space-time interpolation of fragments X_{i_{k*}} and X_{i_{l*}} with parameter value α* produces an animation with content similar to that of Y_i and style approximating the desired target ẽ. This procedure is repeated for all snippets of Y. The final animation is obtained by concatenating all of the fragments generated via interpolation with the optimal parameters.
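A minimal sketch of the one-dimensional search over α for a fixed candidate pair: coarse grid search followed by golden-section refinement. Here style_fn is an assumed closure wrapping the learned f for the pair (i_k, i_l):

```python
import numpy as np

def best_alpha(style_fn, e_target, n_grid=25, tol=1e-3):
    """Minimize ||e_target - style_fn(alpha)|| over alpha in [-0.25, 1.25];
    style_fn(alpha) returns the predicted 3D LMA-Effort vector."""
    def err(a):
        return np.linalg.norm(e_target - style_fn(a))
    grid = np.linspace(-0.25, 1.25, n_grid)
    a0 = grid[np.argmin([err(a) for a in grid])]
    step = grid[1] - grid[0]
    lo, hi = max(a0 - step, -0.25), min(a0 + step, 1.25)  # bracket the grid minimum
    phi = (np.sqrt(5.0) - 1.0) / 2.0  # golden-section reduction factor
    x1, x2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    f1, f2 = err(x1), err(x2)
    while hi - lo > tol:
        if f1 < f2:
            hi, x2, f2 = x2, x1, f1
            x1 = hi - phi * (hi - lo)
            f1 = err(x1)
        else:
            lo, x1, f1 = x1, x2, f2
            x2 = lo + phi * (hi - lo)
            f2 = err(x2)
    return 0.5 * (lo + hi)
```

Because the predicted style varies smoothly with α, the coarse grid reliably brackets the global minimum before the golden-section refinement, which matches the two-stage strategy described above.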
8 Experiments

The system was tested using a motion database consisting of 12 sequences performed by different professional dancers. The subjects were asked to perform a specific movement phrase in their own natural style. Each of the 12 sequences was segmented by an LMA expert into 5 fragments corresponding to the main actions in the phrase. All fragments were then automatically clustered into 5 content groups using the SSD criterion outlined in section 4. The motions were recorded using a marker-based motion capture system. In order to derive joint angles, the 3D trajectories of the markers were fitted to a kinematic chain with 17 joints. The joint angles were represented with exponential maps [13], which have the property of being locally linear and are thus particularly suitable for motion interpolation. From these 60 motion fragments, 105 novel motions were synthesized with space-time interpolation using random values of α in the range [−0.25, 1.25]. All motions, both those recorded and those artificially generated, were annotated with LMA-Effort qualities by an LMA expert.

Figure 1: Sample LMA-Effort attributes estimated by kernel ridge regression on three different pairs of motions (X_i, X_j) and for α varying in [−0.25, 1.25]. Each panel plots the predicted Flow, Weight, and Time values against α. The Flow attribute appears to be almost linearly dependent on α. By contrast, Weight and Time exhibit non-linear relations with the interpolation parameter.

From this set of motions, 85 training examples were randomly selected to train the style regression models. The remaining 20 examples were used for testing. Table 1 summarizes the LMA-Effort prediction performance in terms of mean squared error for the different function approximation models discussed in the paper. Results are reported by averaging over 500 runs of random splitting of the examples into training and testing sets. We include in our analysis the linear style interpolation model commonly used in previous work. This model assumes that the style of a sequence generated via motion interpolation is equal to the interpolation of the styles of the two seed motions: e^α_{i,j} = αe_i + (1 − α)e_j. In all experiments involving kernel-based approximation methods, we used a Gaussian RBF kernel. The hyperparameters (i.e. the kernel and the regularization parameters) were tuned using tenfold cross-validation. Since the size of the training data is not overly large, it was possible to run kernel ridge regression without problems despite the absence of sparsity of this solution.

The simple linear interpolation model performed reasonably well only on the Flow dimension. Overall, the non-linear regression models proved to be much superior to the linear interpolation function, indicating that the style of sequences generated via space-time interpolation is a complex function of the original styles and motions. Figure 1 shows the LMA-Effort qualities predicted by kernel ridge regression while varying α for three different sample values of the inputs (X_i, X_j, e_i, e_j). Note that the shapes of the sample curves learned by kernel ridge regression for the Flow attribute suggest an almost linear dependence of Flow on α. By contrast, the sample functions for the Weight and Time dimensions exhibit non-linear behavior. These results are consistent with the differences in prediction performance between the non-linear function models and the linear approximations, as outlined in Table 1.

Several additional motion examples performed by dancers not included in the training data were used to evaluate the complete pipeline of the motion synthesis algorithm. The input sequences were always correctly segmented by the dynamic programming algorithm into the five fragments associated with the actions in the phrase. Kernel ridge regression was used to estimate the values of α*, k*, l* so as to minimize Equation 12 for different user-specified LMA-Effort vectors ẽ. The recovered parameter values were used to synthesize animations with the specified desired styles. Videos of these automatically generated motions, as well as additional results, can be viewed at http://movement.nyu.edu/learning-motion-styles/ . In order to test the generalization ability of our system, the target styles in this experiment were chosen to be considerably different from those in the training set. All of the synthesized sequences were visually inspected by LMA experts and, for the great majority, they were found to be consistent with the target style labels.

9 Discussions and Future Work

We have presented a novel technique that learns motion style synthesis from artificially-generated examples. Animations produced by our system have quality similar to pure motion capture playback. Furthermore, we have shown that, even with a small database, it is possible to use pair-wise interpolation or extrapolation to generate new styles.
9 Discussions and Future Work

We have presented a novel technique that learns motion style synthesis from artificially-generated examples. Animations produced by our system have quality similar to pure motion capture playback. Furthermore, we have shown that, even with a small database, it is possible to use pair-wise interpolation or extrapolation to generate new styles. In previous LMA-based animation systems [3], heuristic and hand-designed rules have been adopted to implement the style changes associated with LMA-Effort variations. To the best of our knowledge, our work represents the first attempt at automatically learning the mapping between LMA attributes and animation parameters. Although our algorithm has been shown to produce good results with small training data, we expect that larger databases with a wider variety of motion contents and styles are needed in order to build an effective animation system. Multi-way, as opposed to pair-wise, interpolation might lead to the synthesis of more varied motion styles. Our approach could be easily generalized to other languages and notations, and to additional domains, such as facial animation. Our future work will focus on the recognition of LMA categories in motion capture data. Research in this area might point to methods for learning person-specific styles and to techniques for transferring individual movement signatures to arbitrary motion sequences.

Acknowledgments

This work was carried out while LT was at Stanford University and visiting New York University. Thanks to Alyssa Lees for her help on this project and paper. We are grateful to Edward Warburton, Kevin Feeley, and Robb Bifano for assistance with the experimental setup and to Jared Silver for the Maya animations. Special thanks to Jan Burkhardt, Begonia Caparros, Ed Groff, Ellen Goldman and Pamela Schick for LMA observations and notations. This work has been supported by the National Science Foundation.

References

[1] O. Arikan and D. A. Forsyth. Synthesizing constrained motions from examples. ACM Transactions on Graphics, 21(3):483-490, July 2002.
[2] M. Brand and A. Hertzmann. Style machines. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages 183-192, July 2000.
[3] D. Chi, M. Costa, L. Zhao, and N. Badler. The EMOTE model for effort and shape. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, July 2000.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines (and other kernel-based learning methods). Cambridge University Press, 2000.
[5] H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik. Support vector regression machines. In Proc. NIPS 9, 1997.
[6] M. A. Giese and T. Poggio. Morphable models for the analysis and synthesis of complex motion patterns. International Journal of Computer Vision, 38(1):59-73, 2000.
[7] P. Hackney. Making Connections: Total Body Integration Through Bartenieff Fundamentals. Routledge, 2000.
[8] M. T. Heath. Scientific Computing: An Introductory Survey, Second edition. McGraw Hill, 2002.
[9] E. Hsu, K. Pulli, and J. Popovic. Style translation for human motion. ACM Transactions on Graphics, 24(3):1082-1089, 2005.
[10] L. Kovar and M. Gleicher. Automated extraction and parameterization of motions in large data sets. ACM Transactions on Graphics, 23(3):559-568, Aug. 2004.
[11] J. Lee, J. Chai, P. S. A. Reitsma, J. K. Hodgins, and N. S. Pollard. Interactive control of avatars animated with human motion data. ACM Transactions on Graphics, 21(3):491-500, July 2002.
[12] Y. Li, T. Wang, and H.-Y. Shum. Motion texture: A two-level statistical model for character motion synthesis. ACM Transactions on Graphics, 21(3):465-472, July 2002.
[13] R. Murray, Z. Li, and S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[14] H. Ney.
The use of a one-stage dynamic programming algorithm for connected word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(3):263-271, 1984.
[15] K. Pullen and C. Bregler. Motion capture assisted animation: Texturing and synthesis. ACM Transactions on Graphics, 21(3):501-508, July 2002.
[16] C. Rose, M. Cohen, and B. Bodenheimer. Verbs and adverbs: multidimensional motion interpolation. IEEE Computer Graphics and Applications, 18(5):32-40, 1998.
[17] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[18] D. J. Wiley and J. K. Hahn. Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications, 17(6):39-45, 1997.
A Multiscale Adaptive Network Model of Motion Computation in Primates

H. Taichi Wang, Science Center, A18, Rockwell International, 1049 Camino Dos Rios, Thousand Oaks, CA 91360
Bimal Mathur, Science Center, A7A, Rockwell International, 1049 Camino Dos Rios, Thousand Oaks, CA 91360
Christof Koch, Computation & Neural Systems, Caltech, 216-76, Pasadena, CA 91125

Abstract

We demonstrate a multiscale adaptive network model of motion computation in primate area MT. The model consists of two stages: (1) local velocities are measured across multiple spatio-temporal channels, and (2) the optical flow field is computed by a network of direction-selective neurons at multiple spatial resolutions. This model embeds the computational efficiency of multigrid algorithms within a parallel network, and adaptively computes the most reliable estimate of the flow field across different spatial scales. Our model neurons show the same nonclassical receptive field properties as Allman's type I MT neurons. Since local velocities are measured across multiple channels, various channels often provide conflicting measurements to the network. We have incorporated a veto scheme for conflict resolution. This mechanism provides a novel explanation for the spatial frequency dependency of the psychophysical phenomenon called Motion Capture.

1 MOTIVATION

We previously developed a two-stage model of motion computation in the visual system of primates (i.e. the magnocellular pathway from retina to V1 and MT; Wang, Mathur & Koch, 1989). This algorithm has two deficiencies, concerning (1) the optimal spatial scale for velocity measurement, and (2) the optimal spatial scale for the smoothness of the motion field. To address these deficiencies, we have implemented a multi-scale motion network based on multigrid algorithms. All methods of estimating optical flow make a basic assumption about the scale of the velocity relative to the spatial neighborhood and to the temporal discretization step (delay). Thus, if the velocity of the pattern is much larger than the ratio of the spatial to temporal sampling step, an incorrect velocity value will be obtained (Battiti, Amaldi & Koch, 1991). Battiti et al. proposed a coarse-to-fine strategy for adaptively determining the optimal discretization grid by evaluating the local estimate of the relative error in the flow field due to discretization. The optimal spatial grid is the one minimizing this error. This strategy both leads to a superior estimate of the optical flow field and achieves the speedups associated with multigrid methods. This is important, given the large number of iterations needed for relaxation-based algorithms and the remarkable speed with which humans can reliably estimate velocity (on the order of 10 neuronal time constants). Our previous model was based on the standard regularization approach, which involves smoothing with weight λ. This parameter controls the smoothness of the computed motion field. The scale over which the velocity field is smooth depends on the size of the object: the larger the object is, the larger the value of λ has to be. Since a real-life vision system has to deal with objects of various sizes simultaneously, there does not exist an "optimal" smoothness parameter. Our network architecture allows us to circumvent this problem by having the same smoothing weight λ at different resolution grids.

2 NETWORK ARCHITECTURE

The overall architecture of the two-stage model is shown in Figure 1. In the first stage,
local velocities are measured at multiple spatial resolutions. At each spatial resolution p, the local velocities are represented by a set of direction-selective neurons, u(i,j,k,p), whose preferred direction is θ_k (the Component cells; Movshon, Adelson, Gizzi & Newsome, 1985). In the second stage, the optical flow field is computed by a network of direction-selective neurons (Pattern cells) at multiple spatial resolutions, v(i,j,k,p). In the following, we briefly summarize the network. We have used a multiresolution population coding:

v = Σ_{p=0}^{N_res−1} Σ_{k=1}^{N_dir} I^p v(i,j,k,p) θ̂_k    (1)

where N_dir is the number of directions in each grid, N_res is the number of resolutions in the network, and I is a 2-D linear interpolation operator (Brandt, 1982). In our single-resolution model, the input source s_0(i,j,k) to a pattern cell v(i,j,k) was:

∂v(i,j,k)/∂t = s_0(i,j,k) = Σ_{k′} cos(θ_k − θ_{k′}) {u(i,j,k′) − (û · v(i,j))} e(i,j,k′)    (2)

where û is the unit vector in the direction of local velocity and e(i,j,k′) is the local edge strength. For our multiscale network, we have used a convergent multi-channel source term: the source S_0^p to a pattern cell v(i,j,k,p) is

S_0^p = Σ_{p′≤p} R^{p−p′} s_0^{p′}    (3)

where R is a 2-D restriction operator, applied p − p′ times. We use the full weighting operator instead of the injection operator because of the sparse nature of the input data. The computational efficiency of the multigrid algorithms has been embedded in our multiresolution network by a set of spatial-filtering synapses, S_1, written as:

S_1^p = α R v^{p−1} − β I v^{p+1}    (4)

where α and β are constants. As discussed in Section 1, the scale over which the velocity field is smooth depends on the size of the object. Consider, for example, an object of a certain size moving with a given velocity across the field of view. The multiresolution representation and the spatial-frequency-filtering connections will force the velocity field to be represented mostly by the few neurons whose resolution grid matches the size of the object. Therefore, the smoothness constraint should be enforced on the individual resolution grids. If membrane potential is used, the source for the smoothness term, S_2, at resolution grid p can be written as:

S_2^p(i,j,k) = λ Σ_{k′} cos(θ_k − θ_{k′}) {v(i−1,j,k′,p) + v(i+1,j,k′,p) + v(i,j−1,k′,p) + v(i,j+1,k′,p) − 4 v(i,j,k′,p)}    (5)

where λ is the smoothness parameter. The smoothing weight λ in our formulation is the same for each grid and is independent of object size. The network equation becomes

∂v(i,j,k,p)/∂t = S_0^p + S_1^p + S_2^p.    (6)

The multiresolution network architecture has a considerably more complicated synaptic connection pattern but only 33% more neurons compared to the single-resolution model; the convergence is improved by about two orders of magnitude (as measured by the number of iterations needed).

[Figure 1: The network architecture. A multichannel normal-velocity measurement stage (retina), producing u(i,j,k,p) and e(i,j,k,p), feeds the multiresolution motion field v(i,j,k,p).]

[Figure 2: A coarse-to-fine veto scheme.]

3 CONFLICT RESOLUTION

The velocity estimated by our (or any other) motion algorithm depends on the spatial (Δx) and temporal (Δt) discretization step used. Battiti et al. derived the following expression for the relative error in velocity due to incorrect derivative estimation:

δ = |Δu|/u = (2π²/(3λ²)) [(Δx)² − (uΔt)²]    (7)

where u is the velocity and λ is the spatial frequency of the moving pattern.
As the velocity u deviates from Δx/Δt, the velocity measurement becomes less accurate. The scaling factor in (7) depends on the spatial filtering in the retina. Therefore, the choice of spatial discretization and spatial filtering bandwidth has to satisfy the requirements of both the sampling theorem and the velocity measurement accuracy. Even though (7) was derived based on the gradient model, we believe a similar constraint applies to correlation models. We model the receptive field profiles of primate retinal ganglion cells by Laplacian-of-Gaussian (LOG) operators. If we require that the accuracy of the velocity measurement be within 10% over the range u = 0 to u = 2(Δx/Δt), then the standard deviation σ of the Gaussian must be greater than or equal to Δx. What happens if velocity measurement at various scales gives inconsistent results? Consider, for example, an object moving at a speed of 3 pixels/sec across the retina. As shown in Figure 2, channels p = 1 and p = 2 will give the correct measurement, since this speed lies in the reliable ranges of these channels, as depicted by the filled circles. The finest channel, p = 0, on the other hand, will give an erroneous reading. This suggests a coarse-to-fine veto scheme for conflict resolution. We have incorporated this strategy in our network architecture by implementing a shunting term in Eq. (4). In this way, the erroneous input signals from the component cells at grid p = 0 are shunted out (the open circles in Figure 2) by the component cells (the filled circles) at coarser grids.

4 MOTION CAPTURE

How does the human visual system deal with the potential conflicts among various spatial channels? Is there any evidence for the use of such a coarse-to-fine conflict resolution scheme? We believe that the well-known psychophysical phenomenon of Motion Capture is a manifestation of this strategy. When human subjects are presented with a sequence of randomly moving random-dot patterns, they perceive random motion. Ramachandran and Anstis (1983) found, surprisingly, that this perception can be greatly influenced by the movement of a superimposed low-contrast, low-spatial-frequency grating: subjects have a tendency to perceive the random dots as moving with the grating, as if the dots adhere to it. For a given spatial frequency of the grating, the percentage of capture is highest when the phase shift between frames of the grating is about 90°. Even more surprisingly, the lower the spatial frequency of the grating, the higher the percentage of capture. Other researchers (e.g. Yuille & Grzywacz, 1988) and we have attempted to explain this phenomenon based on the smoothness constraint on the velocity field. However, smoothness alone cannot explain the dependencies on the spatial frequency and the phase shift of the gratings. The coarse-to-fine shunting scheme provides a natural explanation of these dependencies. We have simulated the spatial frequency and phase shift dependency; the results are shown in Figure 3. In these simulations, we plotted the relative uniformity of the motion-captured velocity fields. A uniformity of 1 signifies total capture. As can be seen clearly, for a given spatial frequency, the effect of capture increases with phase shift, and for a given phase shift, the effect of capture also increases as the spatial frequency becomes lower. The lower spatial-frequency gratings are more effective because, the coarser the channels are, the more of the finer component cells can be effectively shunted out, as is clear from the receptive field relationship shown in Figure 2.
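The channel-reliability logic behind this veto scheme can be sketched directly from Eq. (7). In the fragment below the grid spacing and filter wavelength double from one channel to the next, and the 15% tolerance is an illustrative choice made so that the worked example above (an object moving at 3 pixels per time step) comes out as in Figure 2; none of these constants are the paper's exact parameters.

```python
import numpy as np

def relative_error(u, dx, dt, lam):
    """Eq. (7): relative velocity error for speed u, grid spacing dx, time
    step dt, and pattern spatial wavelength lam (the absolute value is taken
    here so over- and under-estimates are treated alike)."""
    return (2.0 * np.pi**2 / (3.0 * lam**2)) * abs(dx**2 - (u * dt)**2)

def reliable_channels(u, dt=1.0, dx0=1.0, lam0=8.0, n_channels=3, tol=0.15):
    """Channels p = 0 (finest) upward, with dx and lam doubling per level.
    Returns the channels whose estimate of speed u stays within tolerance;
    finer channels outside it would be vetoed (shunted out) by coarser ones."""
    return [p for p in range(n_channels)
            if relative_error(u, dx0 * 2**p, dt, lam0 * 2**p) <= tol]

# For u = 3 the finest channel (p = 0) is vetoed, while p = 1 and p = 2
# remain reliable, matching the discussion of Figure 2:
print(reliable_channels(3.0))   # -> [1, 2]
```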
5 NONCLASSICAL RECEPTIVE FIELD

Traditionally, physiologists use isolated bars and slits to map out the classical receptive field (CRF) of a neuron, which is the portion of the visual field that can be directly stimulated. Recently, there is mounting evidence that in many visual neurons, stimuli presented outside the CRF strongly and selectively influence neural responses to stimuli presented within the CRF. This is termed the nonclassical receptive field. Allman, Miezin & McGuinness (1985) found that the true receptive field of more than 90% of neurons in the middle temporal (MT) area extends well beyond their CRF. The surrounds commonly have directional and velocity-selective influences that are antagonistic to the response from the CRF.

[Figure 3: Spatial frequency dependency of Motion Capture; percentage of capture as a function of spatial phase (degrees) for gratings of three spatial frequencies.]

[Figure 4: Simulation of Allman's type I nonclassical receptive field properties. Top: center dots move, background dots stationary (normalized response vs. direction of movement of the center dots). Bottom: center dots move in the optimum direction while the background direction varies (response vs. direction of movement of the background dots). Model neuron compared with Allman's type I neuron.]

Based on the surround selectivity, the MT neurons can be classified into three types. Our model neurons show the same type of nonclassical receptive field selectivity as Allman's type I neurons. We have performed a series of simulations similar to Allman's original experiments. After the CRF of a model neuron is determined, the optimal motion stimulus is presented within the CRF. The surrounds are, however, moved by the same amount but in various directions. Clearly, the motion in the surround has a profound effect on the activity of the cell we are monitoring. The effect of the surround motion on the cell, as a function of the direction of surround motion, is plotted in Figure 4(b). When the surround is moved in a similar direction as the center, the activity of the cell is almost totally suppressed. On the other hand, when the surround is moved opposite to the center, the cell's activity is enhanced. Superimposed on Figure 4 are the corresponding plots from Allman's paper.

6 CONCLUSION

In conclusion, we have developed a multi-channel, multi-resolution network model of motion computation in primates. The model MT neurons show nonclassical surround properties similar to those of Allman's type I cells.
We also proposed a novel explanation of the Motion Capture phenomenon based on a coarse-to-fine strategy for conflict resolution among the various input channels.

Acknowledgements

CK acknowledges ONR, NSF and the James McDonnell Foundation for supporting this research.

References

Allman, J., Miezin, F., and McGuinness, E. (1985) "Direction- and velocity-specific responses from beyond the classical receptive field in the middle temporal visual area (MT)", Perception, 14, 105-126.
Battiti, R., Koch, C. and Amaldi, E. (1991) "Computing optical flow across multiple scales: an adaptive coarse-to-fine approach", to appear in Intl. J. Computer Vision.
Brandt, A. (1982) "Guide to multigrid development". In: Multigrid Methods, eds. Dold, A. and Eckmann, B., Springer-Verlag.
Movshon, J.A., Adelson, E.H., Gizzi, M.S., and Newsome, W.T. (1985) "The Analysis of Moving Visual Patterns", in Pattern Recognition Mechanisms, eds. Chagas, C., Gattas, R., Gross, C.G., Rome: Vatican Press.
Ramachandran, V.S. and Anstis, S.M. (1983) "Displacement thresholds for coherent apparent motion in random dot-patterns", Vision Res. 23(12), 1719-1724.
Yuille, A.L. and Grzywacz, N.M. (1988) "A computational theory for the perception of coherent visual motion", Nature, 333, 71-74.
Wang, H. T., Mathur, B. P. and Koch, C. (1989) "Computing optical flow in the primate visual system", Neural Computation, 1(1), 92-103.
A Kernel Method for the Two-Sample-Problem

Arthur Gretton, MPI for Biological Cybernetics, Tübingen, Germany, arthur@tuebingen.mpg.de
Karsten M. Borgwardt, Ludwig-Maximilians-Univ., Munich, Germany, kb@dbs.ifi.lmu.de
Malte Rasch, Graz Univ. of Technology, Graz, Austria, malte.rasch@igi.tu-graz.ac.at
Bernhard Schölkopf, MPI for Biological Cybernetics, Tübingen, Germany, bs@tuebingen.mpg.de
Alexander J. Smola, NICTA, ANU, Canberra, Australia, Alex.Smola@anu.edu.au

Abstract

We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in O(m²) time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.

1 Introduction

We address the problem of comparing samples from two probability distributions, by proposing a statistical test of the hypothesis that these distributions are different (this is called the two-sample or homogeneity problem). This test has application in a variety of areas. In bioinformatics, it is of interest to compare microarray data from different tissue types, either to determine whether two subtypes of cancer may be treated as statistically indistinguishable from a diagnosis perspective, or to detect differences in healthy and cancerous tissue. In database attribute matching, it is desirable to merge databases containing multiple fields, where it is not known in advance which fields correspond: the fields are matched by maximising the similarity in the distributions of their entries. In this study, we propose to test whether distributions p and q are different on the basis of samples drawn from each of them, by finding a smooth function which is large on the points drawn from p, and small (as negative as possible) on the points from q. We use as our test statistic the difference between the mean function values on the two samples; when this is large, the samples are likely from different distributions. We call this statistic the Maximum Mean Discrepancy (MMD). Clearly the quality of MMD as a statistic depends heavily on the class F of smooth functions that define it. On one hand, F must be "rich enough" so that the population MMD vanishes if and only if p = q. On the other hand, for the test to be consistent, F needs to be "restrictive" enough for the empirical estimate of MMD to converge quickly to its expectation as the sample size increases. We shall use the unit balls in universal reproducing kernel Hilbert spaces [22] as our function class, since these will be shown to satisfy both of the foregoing properties. On a more practical note, MMD is cheap to compute: given m points sampled from p and n from q, the cost is O((m + n)²) time. We define two non-parametric statistical tests based on MMD. The first, which uses distribution-independent uniform convergence bounds, provides finite sample guarantees of test performance, at the expense of being conservative in detecting differences between p and q.
The second test is based on the asymptotic distribution of MMD, and is in practice more sensitive to differences in distribution at small sample sizes. These results build on our earlier work in [6] on MMD for the two sample problem, which addresses only the second kind of test. In addition, the present approach employs a more accurate approximation to the asymptotic distribution of the test statistic. We begin our presentation in Section 2 with a formal definition of the MMD, and a proof that the population MMD is zero if and only if p = q when F is the unit ball of a universal RKHS. We also give an overview of hypothesis testing as it applies to the two-sample problem, and review previous approaches. In Section 3, we provide a bound on the deviation between the population and empirical MMD, as a function of the Rademacher averages of F with respect to p and q. This leads to a first hypothesis test. We take a different approach in Section 4, where we use the asymptotic distribution of an unbiased estimate of the squared MMD as the basis for a second test. Finally, in Section 5, we demonstrate the performance of our method on problems from neuroscience, bioinformatics, and attribute matching using the Hungarian marriage approach. Our approach performs well on high dimensional data with low sample size; in addition, we are able to successfully apply our test to graph data, for which no alternative tests exist. Proofs and further details are provided in [13], and software may be downloaded from http://www.kyb.mpg.de/bs/people/arthur/mmd.htm

2 The Two-Sample-Problem

Our goal is to formulate a statistical test that answers the following question:

Problem 1 Let p and q be distributions defined on a domain X. Given observations X := {x_1, ..., x_m} and Y := {y_1, ..., y_n}, drawn independently and identically distributed (i.i.d.) from p and q respectively, is p ≠ q?

To start with, we wish to determine a criterion that, in the population setting, takes on a unique and distinctive value only when p = q. It will be defined based on [10, Lemma 9.3.2].

Lemma 1 Let (X, d) be a separable metric space, and let p, q be two Borel probability measures defined on X. Then p = q if and only if E_p(f(x)) = E_q(f(x)) for all f ∈ C(X), where C(X) is the space of continuous bounded functions on X.

Although C(X) in principle allows us to identify p = q uniquely, such a rich function class is not practical in the finite sample setting. We thus define a more general class of statistic, for as yet unspecified function classes F, to measure the discrepancy between p and q, as proposed in [11].

Definition 2 Let F be a class of functions f : X → R and let p, q, X, Y be defined as above. Then we define the maximum mean discrepancy (MMD) and its empirical estimate as

MMD[F, p, q] := sup_{f∈F} ( E_{x∼p}[f(x)] − E_{y∼q}[f(y)] ),    (1)

MMD[F, X, Y] := sup_{f∈F} ( (1/m) Σ_{i=1}^m f(x_i) − (1/n) Σ_{i=1}^n f(y_i) ).    (2)

We must now identify a function class that is rich enough to uniquely establish whether p = q, yet restrictive enough to provide useful finite sample estimates (the latter property will be established in subsequent sections). To this end, we select F to be the unit ball in a universal RKHS H [22]; we will henceforth use F only to denote this function class. With the additional restriction that X be compact, a universal RKHS is dense in C(X) with respect to the L_∞ norm. It is shown in [22] that Gaussian and Laplace kernels are universal.
Theorem 3 Let F be a unit ball in a universal RKHS H, defined on the compact metric space X, with associated kernel k(·,·). Then MMD[F, p, q] = 0 if and only if p = q.

This theorem is proved in [13]. We next express the MMD in a more easily computable form. This is simplified by the fact that in an RKHS, function evaluations can be written f(x) = ⟨φ(x), f⟩, where φ(x) = k(x, ·). Denote by μ[p] := E_{x∼p}[φ(x)] the expectation of φ(x) (assuming that it exists; a sufficient condition for this is ‖μ[p]‖²_H < ∞, which is rearranged as E_p[k(x, x′)] < ∞, where x and x′ are independent random variables drawn according to p). Since E_p[f(x)] = ⟨μ[p], f⟩, we may rewrite

MMD[F, p, q] = sup_{‖f‖_H ≤ 1} ⟨μ[p] − μ[q], f⟩ = ‖μ[p] − μ[q]‖_H.    (3)

Using μ[X] := (1/m) Σ_{i=1}^m φ(x_i) and k(x, x′) = ⟨φ(x), φ(x′)⟩, an empirical estimate of MMD is

MMD[F, X, Y] = [ (1/m²) Σ_{i,j=1}^m k(x_i, x_j) − (2/(mn)) Σ_{i,j=1}^{m,n} k(x_i, y_j) + (1/n²) Σ_{i,j=1}^n k(y_i, y_j) ]^{1/2}.    (4)

Eq. (4) provides us with a test statistic for p ≠ q. We shall see in Section 3 that this estimate is biased, although it is straightforward to upper bound the bias (we give an unbiased estimate, and an associated test, in Section 4). Intuitively we expect MMD[F, X, Y] to be small if p = q, and the quantity to be large if the distributions are far apart. Computing (4) costs O((m + n)²) time.

Overview of Statistical Hypothesis Testing, and of Previous Approaches

Having defined our test statistic, we briefly describe the framework of statistical hypothesis testing as it applies in the present context, following [9, Chapter 8]. Given i.i.d. samples X ∼ p of size m and Y ∼ q of size n, the statistical test T(X, Y) : X^m × X^n → {0, 1} is used to distinguish between the null hypothesis H_0: p = q and the alternative hypothesis H_1: p ≠ q. This is achieved by comparing the test statistic MMD[F, X, Y] with a particular threshold: if the threshold is exceeded, then the test rejects the null hypothesis (bearing in mind that a zero population MMD indicates p = q). The acceptance region of the test is thus defined as any real number below the threshold. Since the test is based on finite samples, it is possible that an incorrect answer will be returned: we define the Type I error as the probability of rejecting p = q based on the observed sample, despite the null hypothesis being true. Conversely, the Type II error is the probability of accepting p = q despite the underlying distributions being different. The level α of a test is an upper bound on the Type I error: this is a design parameter of the test, and is used to set the threshold to which we compare the test statistic (finding the test threshold for a given α is the topic of Sections 3 and 4). A consistent test achieves a level α, and a Type II error of zero, in the large sample limit. We will see that both of the tests proposed in this paper are consistent. We next give a brief overview of previous approaches to the two sample problem for multivariate data. Since our later experimental comparison is with respect to certain of these methods, we give abbreviated algorithm names in italics where appropriate: these should be used as a key to the tables in Section 5. We provide further details in [13]. A generalisation of the Wald-Wolfowitz runs test to the multivariate domain was proposed and analysed in [12, 17] (Wolf), which involves counting the number of edges in the minimum spanning tree over the aggregated data that connect points in X to points in Y.
The computational cost of this method using Kruskal's algorithm is O((m + n)² log(m + n)), although more modern methods improve on the log(m + n) term. Two possible generalisations of the Kolmogorov-Smirnov test to the multivariate case were studied in [4, 12]. The approach of Friedman and Rafsky (Smir) in this case again requires a minimal spanning tree, and has a similar cost to their multivariate runs test. A more recent multivariate test was proposed in [20], which is based on the minimum distance non-bipartite matching over the aggregate data, at cost O((m + n)³). Another recent test was proposed in [15] (Hall): for each point from p, it requires computing the closest points in the aggregated data, and counting how many of these are from q (the procedure is repeated for each point from q with respect to points from p). The test statistic is costly to compute; [15] consider only tens of points in their experiments. Yet another approach is to use some distance (e.g. L_1 or L_2) between Parzen window estimates of the densities as a test statistic [1, 3], based on the asymptotic distribution of this distance given p = q. When the L_2 norm is used, the test statistic is related to those we present here, although it is arrived at from a different perspective (see [13]: the test in [1] is obtained in a more restricted setting where the RKHS kernel is an inner product between Parzen windows. Since we are not doing density estimation, however, we need not decrease the kernel width as the sample grows. In fact, decreasing the kernel width reduces the convergence rate of the associated two-sample test, compared with the (m + n)^{−1/2} rate for fixed kernels). The L_1 approach of [3] (Biau) requires the space to be partitioned into a grid of bins, which becomes difficult or impossible for high dimensional problems. Hence we use this test only for low-dimensional problems in our experiments.

3 A Test based on Uniform Convergence Bounds

In this section, we establish two properties of the MMD. First, we show that regardless of whether or not p = q, the empirical MMD converges in probability at rate 1/√(m + n) to its population value. This establishes the consistency of statistical tests based on MMD. Second, we give probabilistic bounds for large deviations of the empirical MMD in the case p = q. These bounds lead directly to a threshold for our first hypothesis test. We begin our discussion of the convergence of MMD[F, X, Y] to MMD[F, p, q].

Theorem 4 Let p, q, X, Y be defined as in Problem 1, and assume |k(x, y)| ≤ K. Then

Pr{ |MMD[F, X, Y] − MMD[F, p, q]| > 2((K/m)^{1/2} + (K/n)^{1/2}) + ε } ≤ 2 exp( −ε²mn / (2K(m + n)) ).

Our next goal is to refine this result in a way that allows us to define a test threshold under the null hypothesis p = q. Under this circumstance, the constants in the exponent are slightly improved.

Theorem 5 Under the conditions of Theorem 4, where additionally p = q and m = n,

MMD[F, X, Y] > m^{−1/2} √( 2 E_p[k(x, x) − k(x, x′)] ) + ε    (the bound B_1(F, p) plus ε)

and

MMD[F, X, Y] > 2(K/m)^{1/2} + ε    (the bound B_2(F, p) plus ε),

both with probability less than exp(−ε²m/(4K)) (see [13] for the proof).

In this theorem, we illustrate two possible bounds B_1(F, p) and B_2(F, p) on the bias in the empirical estimate (4). The first inequality is interesting inasmuch as it provides a link between the bias bound B_1(F, p) and kernel size (for instance, if we were to use a Gaussian kernel with large σ, then k(x, x) and k(x, x′) would likely be close, and the bias small).
In the context of testing, however, we would need to provide an additional bound to show convergence of an empirical estimate of B_1(F, p) to its population equivalent. Thus, in the following test for p = q based on Theorem 5, we use B_2(F, p) to bound the bias.

Lemma 6 A hypothesis test of level α for the null hypothesis p = q (equivalently MMD[F, p, q] = 0) has the acceptance region MMD[F, X, Y] < 2 √(K/m) (1 + √(2 log α^{−1})).

We emphasise that Theorem 4 guarantees the consistency of the test, and that the Type II error probability decreases to zero at rate 1/√m (assuming m = n). To put this convergence rate in perspective, consider a test of whether two normal distributions have equal means, given they have unknown but equal variance [9, Exercise 8.41]. In this case, the test statistic has a Student-t distribution with n + m − 2 degrees of freedom, and its error probability converges at the same rate as our test.

4 An Unbiased Test Based on the Asymptotic Distribution of the U-Statistic

We now propose a second test, which is based on the asymptotic distribution of an unbiased estimate of MMD². We begin by defining this test statistic.

Lemma 7 Given x and x′ independent random variables with distribution p, and y and y′ independent random variables with distribution q, the population MMD² is

MMD²[F, p, q] = E_{x,x′∼p}[k(x, x′)] − 2 E_{x∼p, y∼q}[k(x, y)] + E_{y,y′∼q}[k(y, y′)]    (5)

(see [13] for details). Let Z := (z_1, ..., z_m) be m i.i.d. random variables, where z_i := (x_i, y_i) (i.e. we assume m = n). An unbiased empirical estimate of MMD² is

MMD²_u[F, X, Y] = (1/(m(m − 1))) Σ_{i≠j}^m h(z_i, z_j),    (6)

which is a one-sample U-statistic with h(z_i, z_j) := k(x_i, x_j) + k(y_i, y_j) − k(x_i, y_j) − k(x_j, y_i). The empirical statistic is an unbiased estimate of MMD², although it does not have minimum variance (the minimum variance estimate is almost identical: see [21, Section 5.1.4]). We remark that these quantities can easily be linked with a simple kernel between probability measures: (5) is a special case of the Hilbertian metric [16, Eq. (4)] with the associated kernel K(p, q) = E_{p,q} k(x, y) [16, Theorem 4]. The asymptotic distribution of this test statistic under H_1 is given by [21, Section 5.5.1], and the distribution under H_0 is computed based on [21, Section 5.5.2] and [1, Appendix]; see [13] for details.

Theorem 8 We assume E[h²] < ∞. Under H_1, MMD²_u converges in distribution (defined e.g. in [14, Section 7.2]) to a Gaussian according to

m^{1/2} ( MMD²_u − MMD²[F, p, q] ) →_D N(0, σ_u²),

where σ_u² = 4 ( E_z[(E_{z′} h(z, z′))²] − [E_{z,z′}(h(z, z′))]² ), uniformly at rate 1/√m [21, Theorem B, p. 193]. Under H_0, the U-statistic is degenerate, meaning E_{z′} h(z, z′) = 0. In this case, MMD²_u converges in distribution according to

m MMD²_u →_D Σ_{l=1}^∞ λ_l (z_l² − 2),    (7)

where z_l ∼ N(0, 2) i.i.d., the λ_i are the solutions to the eigenvalue equation

∫_X k̃(x, x′) ψ_i(x) dp(x) = λ_i ψ_i(x′),

and k̃(x_i, x_j) := k(x_i, x_j) − E_x k(x_i, x) − E_x k(x, x_j) + E_{x,x′} k(x, x′) is the centred RKHS kernel. Our goal is to determine whether the empirical test statistic MMD²_u is so large as to be outside the 1 − α quantile of the null distribution in (7) (consistency of the resulting test is guaranteed by the form of the distribution under H_1). One way to estimate this quantile is using the bootstrap [2] on the aggregated data. Alternatively, we may approximate the null distribution by fitting Pearson curves to its first four moments [18, Section 18.8].
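As a sketch of the bootstrap route, the fragment below computes MMD²_u of Eq. (6) from the Gram matrix of the aggregated sample and compares it against the null quantile obtained by recomputing the statistic under random relabelings of the pooled data (a permutation-style variant of resampling the aggregate under H_0). The Pearson-curve alternative is omitted, and the function names are ours, not taken from the accompanying software.

```python
import numpy as np

def mmd2_unbiased(K, m):
    """MMD^2_u of Eq. (6) from the (2m x 2m) Gram matrix K of the stacked
    sample [X; Y], with X in the first m rows and Y in the last m."""
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    # each sum runs over i != j, so diagonal terms are removed
    sxx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    syy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    sxy = (Kxy.sum() - np.trace(Kxy)) / (m * (m - 1))
    return sxx + syy - 2.0 * sxy

def mmd2_test(K, m, alpha=0.05, n_boot=1000, seed=0):
    """Reject H0: p = q when MMD^2_u exceeds the empirical (1 - alpha)
    quantile of the statistic under random relabelings of the pooled data."""
    rng = np.random.default_rng(seed)
    stat = mmd2_unbiased(K, m)
    null = np.empty(n_boot)
    for b in range(n_boot):
        perm = rng.permutation(2 * m)
        null[b] = mmd2_unbiased(K[np.ix_(perm, perm)], m)
    return stat > np.quantile(null, 1.0 - alpha)
```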
Taking advantage of the degeneracy of the U-statistic, we obtain (see [13])

E[(MMD²_u)²] = (2/(m(m − 1))) E_{z,z′}[h²(z, z′)]

and

E[(MMD²_u)³] = (8(m − 2)/(m²(m − 1)²)) E_{z,z′}[ h(z, z′) E_{z″}( h(z, z″) h(z′, z″) ) ] + O(m^{−4}).    (8)

The fourth moment E[(MMD²_u)⁴] is not computed, since it is both very small (O(m^{−4})) and expensive to calculate (O(m⁴)). Instead, we replace the kurtosis with its lower bound

kurt(MMD²_u) ≥ (skew(MMD²_u))² + 1.

5 Experiments

We conducted distribution comparisons using our MMD-based tests on datasets from three real-world domains: database applications, bioinformatics, and neurobiology. We investigated the uniform convergence approach (MMD), the asymptotic approach with bootstrap (MMD2u B), and the asymptotic approach with moment matching to Pearson curves (MMD2u M). We also compared against several alternatives from the literature (where applicable): the multivariate t-test, the Friedman-Rafsky Kolmogorov-Smirnov generalisation (Smir), the Friedman-Rafsky Wald-Wolfowitz generalisation (Wolf), the Biau-Györfi test (Biau), and the Hall-Tajvidi test (Hall). Note that we do not apply the Biau-Györfi test to high-dimensional problems (see end of Section 2), and that MMD is the only method applicable to structured data such as graphs. An important issue in the practical application of the MMD-based tests is the selection of the kernel parameters. We illustrate this with a Gaussian RBF kernel, where we must choose the kernel width σ (we use this kernel for univariate and multivariate data, but not for graphs). The empirical MMD is zero both for kernel size σ = 0 (where the aggregate Gram matrix over X and Y is a unit matrix), and also approaches zero as σ → ∞ (where the aggregate Gram matrix becomes uniformly constant). We set σ to be the median distance between points in the aggregate sample, as a compromise between these two extremes: this remains a heuristic, however, and the optimum choice of kernel size is an ongoing area of research.

Data integration As a first application of MMD, we performed distribution testing for data integration: the objective is to aggregate two datasets into a single sample, with the understanding that both original samples are generated from the same distribution. Clearly, it is important to check this last condition before proceeding, or an analysis could detect patterns in the new dataset that are caused by combining the two different source distributions, and not by real-world phenomena. We chose several real-world settings to perform this task: we compared microarray data from normal and tumor tissues (Health status), microarray data from different subtypes of cancer (Subtype), and local field potential (LFP) electrode recordings from the Macaque primary visual cortex (V1) with and without spike events (Neural Data I and II). In all cases, the two data sets have different statistical properties, but the detection of these differences is made difficult by the high data dimensionality. We applied our tests to these datasets in the following fashion. Given two datasets A and B, we either chose one sample from A and the other from B (attributes = different); or both samples from either A or B (attributes = same). We then repeated this process up to 1200 times. Results are reported in Table 1.
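Before turning to the results, here is a minimal sketch of the biased statistic of Eq. (4) with a Gaussian kernel and the median-distance bandwidth heuristic described above. It illustrates the statistic only, not the threshold computation of either test.

```python
import numpy as np

def sq_dists(A, B):
    """Matrix of squared Euclidean distances between the rows of A and B."""
    return (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T

def mmd_biased(X, Y, sigma=None):
    """Biased empirical MMD of Eq. (4) between X (m x d) and Y (n x d)."""
    if sigma is None:                      # median heuristic on pooled sample
        Z = np.vstack([X, Y])
        d2 = sq_dists(Z, Z)
        sigma = np.sqrt(np.median(d2[d2 > 0]))
    k = lambda A, B: np.exp(-sq_dists(A, B) / (2.0 * sigma**2))
    m, n = len(X), len(Y)
    val = (k(X, X).sum() / m**2
           - 2.0 * k(X, Y).sum() / (m * n)
           + k(Y, Y).sum() / n**2)
    return np.sqrt(max(val, 0.0))          # clip tiny negative rounding error
```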
Our asymptotic tests perform better than all competitors besides Wolf: in the latter case, we have greater Type II error for one neural dataset, lower Type II error on the Health Status data (which has very high dimension and low sample size), and identical (error-free) performance on the remaining examples. We note that the Type I error of the bootstrap test on the Subtype dataset is far from its design value of 0.05, indicating that the Pearson curves provide a better threshold estimate for these low sample sizes. For the remaining datasets, the Type I errors of the Pearson and Bootstrap approximations are close. Thus, for larger datasets, the bootstrap is to be preferred, since it costs O(m²), compared with a cost of O(m³) for Pearson (due to the cost of computing (8)). Finally, the uniform convergence-based test is too conservative, finding differences in distribution only for the data with largest sample size.

Dataset         Attr.      MMD    MMD2u B  MMD2u M  t-test  Wolf  Smir  Hall
Neural Data I   Same       100.0  96.5     96.5     100.0   97.0  95.0  96.0
Neural Data I   Different  50.0   0.0      0.0      42.0    0.0   10.0  49.0
Neural Data II  Same       100.0  94.6     95.2     100.0   95.0  94.5  96.0
Neural Data II  Different  100.0  3.3      3.4      100.0   0.8   31.8  5.9
Health status   Same       100.0  95.5     94.4     100.0   94.7  96.1  95.6
Health status   Different  100.0  1.0      0.8      100.0   2.8   44.0  35.7
Subtype         Same       100.0  99.1     96.4     100.0   94.6  97.3  96.5
Subtype         Different  100.0  0.0      0.0      100.0   0.0   28.4  0.2

Table 1: Distribution testing for data integration on multivariate data. Numbers indicate the percentage of repetitions for which the null hypothesis (p=q) was accepted, given α = 0.05. Sample size (dimension; repetitions of experiment): Neural I 4000 (63; 100); Neural II 1000 (100; 1200); Health Status 25 (12,600; 1000); Subtype 25 (2,118; 1000).

Attribute matching Our second series of experiments addresses automatic attribute matching. Given two databases, we want to detect corresponding attributes in the schemas of these databases, based on their data-content (as a simple example, two databases might have respective fields Wage and Salary, which are assumed to be observed via a subsampling of a particular population, and we wish to automatically determine that both Wage and Salary denote the same underlying attribute). We use a two-sample test on pairs of attributes from two databases to find corresponding pairs.¹ This procedure is also called table matching for tables from different databases. We performed attribute matching as follows: first, the dataset D was split into two halves A and B. Each of the n attributes
We used three datasets: the census income dataset from the UCI KDD archive (CNUM), the protein homology dataset from the 2004 KDD Cup (BIO) [8], and the forest dataset from the UCI ML archive [5]. For the final dataset, we performed univariate matching of attributes (FOREST) and multivariate matching of tables (FOREST10D) from two different databases, where each table represents one type of forest. Both our asymptotic MMD2u -based tests perform as well as or better than the alternatives, notably for CNUM, where the advantage of MMD2u is large. Unlike in Table 1, the next best alternatives are not consistently the same across all data: e.g. in BIO they are Wolf or Hall, whereas in FOREST they are Smir, Biau, or the t-test. Thus, MMD2u appears to perform more consistently across the multiple datasets. The Friedman-Rafsky tests do not always return a Type I error close to the design parameter: for instance, Wolf has a Type I error of 9.7% on the BIO dataset (on these data, MMD2u has the joint best Type II error without compromising the designed Type I performance). Finally, our uniform convergence approach performs much better than in Table 1, although surprisingly it fails to detect differences in FOREST10D. A more principled approach to attribute matching is also possible. Assume that ?(A) = (?1 (A1 ), ?2 (A2 ), ..., ?n (An )): in other words, the kernel decomposes into kernels on the individual attributes A (and also decomposes this way on the attributes of B). In this case, M M D2 can be Pof n written i=1 k?i (Ai ) ? ?i (Bi )k2 , where we sum over the MMD terms on each of the attributes. Our goal of optimally assigning attributes from B to attributes of PA via MMD is equivalent to finding the optimal permutation ? of attributes of B that minimizes ni=1 k?i (Ai ) ? ?i (B?(i) )k2 . If we define Cij = k?i (Ai ) ? ?i (Bj )k2 , then this is the same as minimizing the sum over Ci,?(i) . This is the linear assignment problem, which costs O(n3 ) time using the Hungarian method [19]. Dataset BIO FOREST CNUM FOREST10D Attr. Same Different Same Different Same Different Same Different MMD 100.0 20.0 100.0 4.9 100.0 15.2 100.0 100.0 MMD2u B 93.8 17.2 96.4 0.0 94.5 2.7 94.0 0.0 MMD2u M 94.8 17.6 96.0 0.0 93.8 2.5 94.0 0.0 t-test 95.2 36.2 97.4 0.2 94.0 19.17 100.0 0.0 Wolf 90.3 17.2 94.6 3.8 98.4 22.5 93.5 0.0 Smir 95.8 18.6 99.8 0.0 97.5 11.6 96.5 1.0 Hall 95.3 17.9 95.5 50.1 91.2 79.1 97.0 72.0 Biau 99.3 42.1 100.0 0.0 98.5 50.5 100.0 100.0 Table 2: Naive attribute matching on univariate (BIO, FOREST, CNUM) and multivariate data (FOREST10D). Numbers indicate the percentage of accepted null hypothesis (p=q) pooled over attributes. ? = 0.05. Sample size (dimension; attributes; repetitions of experiment): BIO 377 (1; 6; 100); FOREST 538 (1; 10; 100); CNUM 386 (1; 13; 100); FOREST10D 1000 (10; 2; 100). We tested this ?Hungarian approach? to attribute matching via MMD2u B on three univariate datasets (BIO, CNUM, FOREST) and for table matching on a fourth (FOREST10D). To study MMD2u B on structured data, we obtained two datasets of protein graphs (PROTEINS and ENZYMES) and used the graph kernel for proteins from [7] for table matching via the Hungarian method (the other tests were not applicable to this graph data). The challenge here is to match tables representing one functional class of proteins (or enzymes) from dataset A to the corresponding tables (functional classes) in B. Results are shown in Table 3. Besides on the BIO dataset, MMD2u B made no errors. 
6 Summary and Discussion
We have established two simple multivariate tests for comparing two distributions p and q. The test statistics are based on the maximum deviation of the expectation of a function evaluated on each of the random variables, taken over a sufficiently rich function class. We do not require density estimates as an intermediate step. Our method either outperforms competing methods, or is close to the best performing alternative. Finally, our test was successfully used to compare distributions on graphs, for which it is currently the only option.

Dataset     Data type     No. attributes  Sample size  Repetitions  % correct matches
BIO         univariate    6               377          100          90.0
CNUM        univariate    13              386          100          99.8
FOREST      univariate    10              538          100          100.0
FOREST10D   multivariate  2               1000         100          100.0
ENZYME      structured    6               50           50           100.0
PROTEINS    structured    2               200          50           100.0

Table 3: Hungarian method for attribute matching via MMD2u B on univariate (BIO, CNUM, FOREST), multivariate (FOREST10D), and structured data (ENZYMES, PROTEINS) (α = 0.05; "% correct matches" is the percentage of the correct attribute matches detected over all repetitions).

Acknowledgements: The authors thank Matthias Hein for helpful discussions, Patrick Warnat (DKFZ, Heidelberg) for providing the microarray datasets, and Nikos Logothetis for providing the neural datasets. NICTA is funded through the Australian Government's Backing Australia's Ability initiative, in part through the ARC. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References
[1] N. Anderson, P. Hall, and D. Titterington. Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates. Journal of Multivariate Analysis, 50:41-54, 1994.
[2] M. Arcones and E. Giné. On the bootstrap of U and V statistics. The Annals of Statistics, 20(2):655-674, 1992.
[3] G. Biau and L. Györfi. On the asymptotic properties of a nonparametric L1-test statistic of homogeneity. IEEE Transactions on Information Theory, 51(11):3965-3973, 2005.
[4] P. Bickel. A distribution free version of the Smirnov two sample test in the p-variate case. The Annals of Mathematical Statistics, 40(1):1-23, 1969.
[5] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[6] K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. In ISMB, 2006.
[7] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47-i56, Jun 2005.
[8] R. Caruana and T. Joachims. KDD cup. http://kodiak.cs.cornell.edu/kddcup/index.html, 2004.
[9] G. Casella and R. Berger. Statistical Inference. Duxbury, Pacific Grove, CA, 2nd edition, 2002.
[10] R. M. Dudley. Real Analysis and Probability. Cambridge University Press, Cambridge, UK, 2002.
[11] R. Fortet and E. Mourier. Convergence de la répartition empirique vers la répartition théorique. Ann. Scient. École Norm. Sup., 70:266-285, 1953.
[12] J. Friedman and L. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697-717, 1979.
[13] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. Technical Report 157, MPI for Biological Cybernetics, 2007.
[14] G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Oxford University Press, Oxford, third edition, 2001.
[15] P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359-374, 2002.
[16] M. Hein, T. N. Lal, and O. Bousquet. Hilbertian metrics on probability measures and their application in SVMs. In Proceedings of the 26th DAGM Symposium, pages 270-277, Berlin, 2004. Springer.
[17] N. Henze and M. Penrose. On the multivariate runs test. The Annals of Statistics, 27(1):290-298, 1999.
[18] N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions. Volume 1 (Second Edition). John Wiley and Sons, 1994.
[19] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83-97, 1955.
[20] P. Rosenbaum. An exact distribution-free test comparing two multivariate distributions based on adjacency. Journal of the Royal Statistical Society B, 67(4):515-530, 2005.
[21] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[22] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67-93, 2002.
A Humanlike Predictor of Facial Attractiveness
Amit Kagian*1, Gideon Dror†2, Tommer Leyvand*3, Daniel Cohen-Or*4, Eytan Ruppin*5
* School of Computer Sciences, Tel-Aviv University, Tel-Aviv, 69978, Israel.
† The Academic College of Tel-Aviv-Yaffo, Tel-Aviv, 64044, Israel.
Email: {1 kagianam, 3 tommer, 4 dcor, 5 ruppin}@post.tau.ac.il, 2 gideon@mta.ac.il

Abstract
This work presents a method for estimating human facial attractiveness, based on supervised learning techniques. Numerous facial features that describe facial geometry, color and texture, combined with an average human attractiveness score for each facial image, are used to train various predictors. Facial attractiveness ratings produced by the final predictor are found to be highly correlated with human ratings, markedly improving on previous machine learning achievements. Simulated psychophysical experiments with virtually manipulated images reveal preferences in the machine's judgments which are remarkably similar to those of humans. These experiments shed new light on existing theories of facial attractiveness such as the averageness, smoothness and symmetry hypotheses. It is intriguing to find that a machine trained explicitly to capture an operational performance criterion such as attractiveness rating implicitly captures basic human psychophysical biases characterizing the perception of facial attractiveness in general.

1 Introduction
Philosophers, artists and scientists have tried to capture the nature of beauty since the early days of philosophy. Although in modern days a common layman's notion is that judgments of beauty are a matter of subjective opinion, recent findings suggest that people may share a common taste for facial attractiveness and that their preferences may be an innate part of the primary constitution of our nature. Several experiments have shown that 2- to 8-month-old infants prefer looking at faces which adults rate as being more attractive [1]. In addition, attractiveness ratings show very high agreement between groups of raters belonging to the same culture and even across cultures [2]. Such findings give rise to the quest for common factors which determine human facial attractiveness. Accordingly, various hypotheses, from cognitive, evolutionary and social perspectives, have been put forward to describe the common preferences for facial beauty. Inspired by Sir Francis Galton's photographic method of composing faces [3], Rubenstein, Langlois and Roggman created averaged faces by morphing multiple images together and proposed that averageness is the answer to facial attractiveness [4, 5]. Human judges found these averaged faces to be attractive and rated them with attractiveness ratings higher than the mean rating of the component faces composing them. Grammer and Thornhill investigated symmetry and averageness of faces and concluded that symmetry was more important than averageness in facial attractiveness [6]. Little and colleagues agree that average faces are attractive but claim that faces with certain extreme features, such as extreme sexually dimorphic traits, may be more attractive than average faces [7]. Other researchers have suggested various conditions which may contribute to facial attractiveness, such as neonate features, pleasant expressions and familiarity. Cunningham and his associates suggest a multiple fitness model in which there is no single constructing line that determines attractiveness.
Instead, different categories of features signal different desirable qualities of the perceived target [8]. Even so, the multiple fitness model agrees that some facial qualities are universally physically attractive to people. Apart from eliciting the facial characteristics which account for attractiveness, modern researchers try to describe the underlying mechanisms for these preferences. Many contributors refer to the evolutionary origins of attractiveness preferences [9]-[11]. According to this view, facial traits signal mate quality and imply chances for reproductive success and parasite resistance. Some evolutionary theorists suggest that preferred features might not signal mate quality but that the "good taste" by itself is an evolutionary adaptation (individuals with a preference for attractiveness will have attractive offspring that will be favored as mates) [9]. Another mechanism explains attractiveness preferences through a cognitive theory: a preference for attractive faces might be induced as a by-product of general perception or recognition mechanisms [5, 12]. Attractive faces might be pleasant to look at since they are closer to the cognitive representation of the face category in the mind. These cognitive representations are described as part of a cognitive mechanism that abstracts prototypes from distinct classes of objects. These prototypes relate to average faces when considering the averageness hypothesis. A third view has suggested that facial attractiveness originates in a social mechanism, where preferences may depend on the learning history of the individual and even on his social goals [12]. Different studies have tried to use computational methods to analyze facial attractiveness. Averaging faces with morph tools was done in several cases (e.g. [5, 13]). In [14], laser scans of faces were put into complete correspondence with the average face in order to examine the relationship between facial attractiveness, age, and averageness. Another approach was used in [15], where a genetic algorithm, guided by interactive user selections, was programmed to evolve a "most beautiful" female face. [16] used machine learning methods to investigate whether a machine can predict attractiveness ratings by learning a mapping from facial images to their attractiveness scores. Their predictor achieved a significant correlation of 0.6 with average human ratings, demonstrating that facial beauty can be learned by a machine, at least to some degree. However, as human raters still significantly outperform the predictor of [16], the challenge of constructing a facial attractiveness machine with human-level evaluation accuracy has remained open. A primary goal of this study is to surpass these results by developing a machine which obtains human-level performance in predicting facial attractiveness. Having accomplished this, our second main goal is to conduct a series of simulated psychophysical experiments and study the resemblance between human and machine judgments. This latter task carries two potential rewards: A. to determine whether the machine can aid in understanding the psychophysics of human facial attractiveness, capitalizing on the ready accessibility of the analysis of its inner workings, and B. to study whether learning an explicit operational ratings prediction task also entails learning implicit humanlike biases, at least for the case of facial attractiveness.
2 The facial training database: Acquisition, preprocessing and representation

2.1 Rating facial attractiveness
The chosen database was composed of 91 facial images of American females, taken by the Japanese photographer Akira Gomi. All 91 samples were frontal color photographs of young Caucasian females with a neutral expression. All samples were of similar age, skin color and gender. The subjects' portraits had no accessories or other distracting items such as jewelry. All 91 facial images in the dataset were rated for attractiveness by 28 human raters (15 males, 13 females) on a 7-point Likert scale (1 = very unattractive, 7 = very attractive). Ratings were collected with a specifically designed html interface. Each rater was asked to view the entire set before rating in order to acquire a notion of the attractiveness scale. There was no time limit for judging the attractiveness of each sample and raters could go back and adjust the ratings of already rated samples. The images were presented to each rater in a random order and each image was presented on a separate page. The final attractiveness rating of each sample was its mean rating across all raters. To validate that the number of ratings collected adequately represents the ``collective attractiveness rating'' we randomly divided the raters into two disjoint groups of equal size. For each facial image, we calculated the mean rating within each group, and then the Pearson correlation between the mean ratings of the two groups. This process was repeated 1,000 times. The mean correlation between two groups was 0.92 (σ = 0.01). This corresponds well to the known level of consistency among groups of raters reported in the literature (e.g. [2]). Hence, the mean ratings collected are stable indicators of attractiveness that can be used for the learning task. The facial set contained faces in all ranges of attractiveness. Final attractiveness ratings range from 1.42 to 5.75 and the mean rating was 3.33 (σ = 0.94).

2.2 Data preprocessing and representation
Preliminary experimentation with various ways of representing a facial image has systematically shown that features based on measured proportions, distances and angles of faces are most effective in capturing the notion of facial attractiveness (e.g. [16]). To extract facial features we developed an automatic engine that is capable of identifying eyes, nose, lips, eyebrows, and head contour. In total, we measured 84 coordinates describing the locations of those facial features (Figure 1). Several regions are suggested for extracting mean hair color, mean skin color and skin texture. The feature extraction process was basically automatic but some coordinates needed to be manually adjusted in some of the images. The facial coordinates are used to create a distances-vector of all 3,486 distances between all pairs of coordinates in the complete graph created by all coordinates. For each image, all distances are normalized by face length. In a similar manner, a slopes-vector of all the 3,486 slopes of the lines connecting the facial coordinates is computed. Central fluctuating asymmetry (CFA), which is described in [6], is calculated from the coordinates as well. The application also provides, for each face, Hue, Saturation and Value (HSV) values of hair color and skin color, and a measurement of skin smoothness.

Figure 1: Facial coordinates with hair and skin sample regions as represented by the facial feature extractor.
Coordinates are used for calculating geometric features and asymmetry. Sample regions are used for calculating color values and smoothness. The sample image, used for illustration only, is of T.G. and is presented with her full consent.

Combining the distances-vector and the slopes-vector yields a vector representation of 6,972 geometric features for each image. Since strong correlations are expected among the features in such a representation, principal component analysis (PCA) was applied to these geometric features, producing 90 principal components which span the sub-space defined by the 91 image vector representations. The geometric features are projected on those 90 principal components and supply 90 orthogonal eigenfeatures representing the geometric features. Eight measured features were not included in the PCA analysis, including CFA, smoothness, hair color coordinates (HSV) and skin color coordinates. These features are assumed to be directly connected to human perception of facial attractiveness and are hence kept at their original values. These 8 features were added to the 90 geometric eigenfeatures, resulting in a total of 98 image-features representing each facial image in the dataset.

3 Experiments and results

3.1 Predictor construction and validation
We experimented with several induction algorithms including simple Linear Regression, Least Squares Support Vector Machine (LS-SVM) (both linear as well as non-linear) and Gaussian Processes (GP). However, as the LS-SVM and GP showed no substantial advantage over Linear Regression, the latter was used and is presented in the sequel. A key ingredient in our method is a proper image-features selection strategy. To this end we used subset feature selection, implemented by ranking the image-features by their Pearson correlation with the target. Other ranking functions produced no substantial gain. To measure the performance of our method we removed one sample from the whole dataset. This sample served as a test set. We found, for each left-out sample, the optimal number of image-features by performing leave-one-out cross-validation (LOOCV) on the remaining samples and selecting the number of features that minimizes the absolute difference between the algorithm's output and the targets of the training set. In other words, the score for a test example was predicted using a single model based on the training set only. This process was repeated n = 91 times, once for each image sample. The vector of attractiveness predictions of all images is then compared with the true targets. These scores are found to be in a high Pearson correlation of 0.82 with the mean ratings of humans (P-value < 10^-23), which corresponds to a normalized Mean Squared Error of 0.39. This accuracy is a marked improvement over the recently published performance results of a Pearson correlation of 0.6 on a similar dataset [16]. The average correlation of an individual human rater to the mean correlations of all other raters in our dataset is 0.67 and the average correlation between the mean ratings of groups of raters is 0.92 (section 2.1). It should be noted that we tried to use this feature selection and training procedure with the original geometric features instead of the eigenfeatures, ranking them by their correlation to the targets and selecting up to 300 best-ranked features. This, however, failed to produce good predictors due to strong correlations between the original geometric features (the maximal Pearson correlation obtained was 0.26).
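The evaluation procedure above can be summarized in a short sketch. The following is our illustrative reconstruction, not the authors' code: it assumes F holds the 98 image-features per face and y the mean human ratings, and it simplifies the inner cross-validation that selects the number of features to a fixed cap.

```python
import numpy as np

def predict_attractiveness_loocv(F, y, max_features=90):
    """Leave-one-out evaluation of the correlation-ranked linear predictor.

    F: (n_images, n_features) image-features (eigenfeatures + 8 extras);
    y: (n_images,) mean human attractiveness ratings.
    """
    n = len(y)
    preds = np.zeros(n)
    for test in range(n):
        train = np.delete(np.arange(n), test)
        Ftr, ytr = F[train], y[train]
        # Rank features by absolute Pearson correlation with the target.
        r = np.array([np.corrcoef(Ftr[:, j], ytr)[0, 1]
                      for j in range(F.shape[1])])
        top = np.argsort(-np.abs(r))[:max_features]
        # Ordinary least-squares linear regression on the selected features.
        X = np.column_stack([np.ones(len(train)), Ftr[:, top]])
        w, *_ = np.linalg.lstsq(X, ytr, rcond=None)
        preds[test] = np.concatenate(([1.0], F[test, top])) @ w
    # Pearson correlation between the held-out predictions and human ratings.
    return np.corrcoef(preds, y)[0, 1]
```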
3.2 Similarity of machine and human judgments
Each rater (human and machine) has a 91-dimensional rating vector describing its attractiveness ratings of all 91 images. These vectors can be embedded in a 91-dimensional ratings space. The Euclidean distance between all raters (human and machine) in this space was computed. Compared with each of the human raters, the ratings of the machine were the closest, on average, to the ratings of all other human raters (Figure 2). To verify that the machine ratings are not outliers that fall out of clusters of human raters (even though their mean distance from the other ratings is small) we surrounded each of the rating vectors in the ratings space with multidimensional spheres of several radius sizes. The machine had more human neighbors than the mean number of neighbors of human raters, testifying that it does not fall between clusters. Finally, for a graphic display of machine ratings among human ratings we applied PCA to machine and human ratings in the rating space and projected all ratings onto the resulting first 2 and 3 principal components. Indeed, the machine is well placed in a mid-zone of human raters (Figure 3).

Figure 2: Distribution of mean Euclidean distance from each human rater to all other raters in the ratings space. The machine's average distance from all other raters (left bar) is smaller than the average distance of each of the human raters to all others.

Figure 3: Location of machine ratings among the 28 human ratings: Ratings were projected into 2 dimensions (a) and 3 dimensions (b) by performing PCA on all ratings and projecting them onto the first principal components. The projected data explain 29.8% of the variance in (a) and 36.6% in (b).
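The distance analysis behind Figure 2 amounts to a few lines. The sketch below is our own, with an assumed layout of one rating vector per row; it returns each rater's mean Euclidean distance to all other raters, so the machine's row can be compared against the human rows.

```python
import numpy as np

def mean_distance_to_others(R):
    # R: (n_raters, n_images) matrix of rating vectors, e.g. with the
    # machine's ratings in the last row. Pairwise Euclidean distances:
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    # Self-distances are zero, so summing each row and dividing by
    # (n_raters - 1) gives the mean distance to all *other* raters.
    return D.sum(axis=1) / (len(R) - 1)
```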
3.3 Psychophysical experiments in silico
A number of simulated psychophysical experiments reveal humanlike biases in the machine's performance. Rubenstein et al. discuss a morphing technique to create mathematically averaged faces from multiple face images [5]. They reported that averaged faces made of 16 and 32 original component images were rated higher in attractiveness than the mean attractiveness ratings of their component faces and higher than composites consisting of fewer faces. In their experiment, 32-component composites were found to be the most attractive. We used a similar technique to create averaged virtually-morphed faces with various numbers of components, nc, and have let the machine predict their attractiveness. To this end, coordinate values of the original component faces were averaged to create a new set of coordinates for the composite. These coordinates were used to calculate the geometrical features and CFA of the averaged face. Smoothness and HSV values for the composite faces were calculated by averaging the corresponding values of the component faces (HSV values are converted to RGB before averaging). To study the effect of nc on the attractiveness score we produced 1,000 virtual morph images for each value of nc between 2 and 50, and used our attractiveness predictor (section 3.1) to compute the attractiveness scores of the resulting composites. In accordance with the experimental results of [5], the machine manifests a humanlike bias for higher scores of averaged composites over their components' mean score. Figure 4a, presenting these results, shows the percent of components which were rated as less attractive than their corresponding composite, for each number of components nc. As evident, the attractiveness rating of a composite surpasses a larger percent of its components' ratings as nc increases. Figure 4a also shows the mean scores of 1,000 composites and the mean scores of their components, for each nc (scores are normalized to the range [0, 1]); their actual attractiveness scores are reported in Table 1. As expected, the mean scores of the component images are independent of nc, while composites' scores increase with nc. Mean values of smoothness and asymmetry of the composites are presented in Figure 4b.

Figure 4: Mean results over 1,000 composites made of varying numbers of image components: (a) Percent of components which were rated as less attractive than their corresponding composite, accompanied by mean scores of composites and the mean scores of their components (scores are normalized to the range [0, 1]; actual attractiveness scores are reported in Table 1). (b) Mean values of smoothness and asymmetry of 1,000 composites for each number of components, nc.

Number of components in composite           2      4      12     25     50
Composite score                             3.46   3.66   3.74   3.82   3.94
Components mean score                       3.34   3.33   3.32   3.32   3.33
Components rated lower than composite (%)   55     64     70     75     81

Table 1: Mean results over 1,000 composites made of varying numbers of component images.

Recent studies have provided evidence that skin texture influences judgments of facial attractiveness [17]. Since blurring and smoothing of faces occur when faces are averaged together [5], the smooth complexion of composites may underlie the attractiveness of averaged composites. In our experiment, a preference for averageness is found even though our method of virtual morphing does not produce the smoothing effect, and the mean smoothness value of composites corresponds to the mean smoothness value in the original dataset, for all nc (see Figure 4b). Researchers have also suggested that averaged faces are attractive since they are exceptionally symmetric [18]. Figure 4b shows that the mean level of asymmetry is indeed highly correlated with the mean scores of the morphs (Pearson correlation of -0.91, P-value < 10^-19). However, examining the correlation between the rest of the features and the composites' scores reveals that this high correlation is not at all unique to asymmetry. In fact, 45 of the 98 features are strongly correlated with the attractiveness scores (|Pearson correlation| > 0.9). The high correlation between these numerous features and the attractiveness scores of averaged faces indicates that symmetry level is not an exceptional factor in the machine's preference for averaged faces. Instead, it suggests that averaging causes many features, including both geometric features and symmetry, to change in a direction which causes an increase in attractiveness.
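A sketch of how one such virtual morph can be scored follows. Here features_fn and predict_fn are hypothetical stand-ins for the geometric-feature pipeline of section 2.2 and the trained predictor of section 3.1, and the HSV averaging is assumed to happen in RGB upstream, per the note above.

```python
import numpy as np

def score_composite(coords, extras, nc, features_fn, predict_fn, rng):
    """Average the facial coordinates (and the eight extra features) of nc
    randomly chosen faces, then rate the resulting virtual morph.
    coords: (n_faces, n_coord_values); extras: (n_faces, 8)."""
    idx = rng.choice(len(coords), size=nc, replace=False)
    mean_coords = coords[idx].mean(axis=0)  # coordinate set of the composite
    mean_extras = extras[idx].mean(axis=0)  # CFA, smoothness, color values
    f = np.concatenate([features_fn(mean_coords), mean_extras])
    return predict_fn(f)
```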
It has been argued that although averaged faces are found to be attractive, very attractive faces are not average [18]. A virtual composite made of the 12 most attractive faces in the set (as rated by humans) was rated by the machine with a high score of 5.6, while 1,000 composites made of 50 faces got a maximum score of only 5.3. This type of preference resembles the findings of an experiment by Perrett et al. in which a highly attractive composite, morphed from only attractive faces, was preferred by humans over a composite made of 60 images of all levels of attractiveness [13]. Another study by Zaidel et al. examined the asymmetry of attractiveness perception and offered a relationship between facial attractiveness and hemispheric specialization [19]. In this research, right-right and left-left chimeric composites were created by attaching each half of the face to its mirror image. Subjects were asked to look at left-left and right-right composites of the same image and judge which one is more attractive. For women's faces, right-right composites got twice as many "more attractive" responses as left-left composites. Interestingly, similar results were found when simulating the same experiment with the machine: right-right and left-left chimeric composites were created from the extracted coordinates of each image and the machine was used to predict their attractiveness ratings (taking care to exclude the original image used for the chimeric composition from the training set, as it contains many features which are identical to those of the composite). The machine gave 63 out of 91 right-right composites a higher rating than their matching left-left composite, while only 28 left-left composites were judged as more attractive. A paired t-test shows these results to be statistically significant with P-value < 10^-7 (scores of chimeric composites are normally distributed). It is interesting to see that the machine manifests the same kind of asymmetry bias reported by Zaidel et al., though it has never been explicitly trained for that.

4 Discussion
In this work we produced a high quality training set for learning facial attractiveness of human faces. Using supervised learning methodologies we were able to construct the first predictor that achieves accurate, humanlike performance for this task. Our results add the task of facial attractiveness prediction to a collection of abstract tasks that have been successfully accomplished with current machine learning techniques. Examining the machine and human raters' representations in the ratings space places the ratings of the machine in the center of the human raters, and closest, on average, to the other human raters. The similarity between human and machine preferences has prompted us to further study the machine's operation in order to capitalize on the accessibility of its inner workings and learn more about human perception of facial attractiveness. To this end, we have found that the machine favors averaged faces made of several component faces. While this preference is known to be common to humans as well, researchers have previously offered different reasons for favoring averageness. Our analysis has revealed that symmetry is strongly related to the attractiveness of averaged faces, but is definitely not the only factor in the equation, since about half of the image-features relate to the ratings of averaged composites in a similar manner to the symmetry measure. This suggests that a general movement of features toward attractiveness, rather than a simple increase in symmetry, is responsible for the attractiveness of averaged faces.
Obviously, strictly speaking this can be held true only for the machine but, given the remarkable ``humanlike'' behavior of the machine, it also lends important support to the idea that this finding may well extend to human perception of facial attractiveness. Overall, it is quite surprising and pleasing to see that a machine trained explicitly to capture an operational performance criterion such as rating implicitly captures basic human psychophysical biases related to facial attractiveness. It is likely that while the machine learns the ratings in an explicit supervised manner, it also concomitantly and implicitly learns other basic characteristics of human facial ratings, as revealed by studying its "psychophysics".

Acknowledgments
We thank Dr. Bernhard Fink and the Ludwig-Boltzmann Institute for Urban Ethology at the Institute for Anthropology, University of Vienna, Austria, and Prof. Alice J. O'Toole from the University of Texas at Dallas, for kindly letting us use their face databases.

References
[1] Langlois, J.H., Roggman, L.A., Casey, R.J., Ritter, J.M., Rieser-Danner, L.A. & Jenkins, V.Y. (1987) Infant preferences for attractive faces: Rudiments of a stereotype? Developmental Psychology, 23, 363-369.
[2] Cunningham, M.R., Roberts, A.R., Wu, C.-H., Barbee, A.P. & Druen, P.B. (1995) Their ideas of beauty are, on the whole, the same as ours: Consistency and variability in the cross-cultural perception of female physical attractiveness. Journal of Personality and Social Psychology, 68, 261-279.
[3] Galton, F. (1878) Composite portraits. Journal of the Anthropological Institute of Great Britain and Ireland, 8, 132-142.
[4] Langlois, J.H. & Roggman, L.A. (1990) Attractive faces are only average. Psychological Science, 1, 115-121.
[5] Rubenstein, A.J., Langlois, J.H. & Roggman, L.A. (2002) What makes a face attractive and why: The role of averageness in defining facial beauty. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 1-33. Westport, CT: Ablex.
[6] Grammer, K. & Thornhill, R. (1994) Human (Homo sapiens) facial attractiveness and sexual selection: The role of symmetry and averageness. Journal of Comparative Psychology, 108, 233-242.
[7] Little, A.C., Penton-Voak, I.S., Burt, D.M. & Perrett, D.I. (2002) Evolution and individual differences in the perception of attractiveness: How cyclic hormonal changes and self-perceived attractiveness influence female preferences for male faces. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 59-90. Westport, CT: Ablex.
[8] Cunningham, M.R., Barbee, A.P. & Philhower, C.L. (2002) Dimensions of facial physical attractiveness: The intersection of biology and culture. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 193-238. Westport, CT: Ablex.
[9] Thornhill, R. & Gangestad, S.W. (1999) Facial attractiveness. Trends in Cognitive Sciences, 3, 452-460.
[10] Andersson, M. (1994) Sexual Selection. Princeton, NJ: Princeton University Press.
[11] Møller, A.P. & Swaddle, J.P. (1997) Asymmetry, Developmental Stability, and Evolution. Oxford: Oxford University Press.
[12] Zebrowitz, L.A. & Rhodes, G. (2002) Nature let a hundred flowers bloom: The multiple ways and wherefores of attractiveness. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 261-293. Westport, CT: Ablex.
[13] Perrett, D.I., May, K.A. & Yoshikawa, S.
(1994) Facial shape and judgments of female attractiveness. Nature, 368, 239-242.
[14] O'Toole, A.J., Price, T., Vetter, T., Bartlett, J.C. & Blanz, V. (1999) 3D shape and 2D surface textures of human faces: the role of "averages" in attractiveness and age. Image and Vision Computing, 18, 9-19.
[15] Johnston, V.S. & Franklin, M. (1993) Is beauty in the eye of the beholder? Ethology and Sociobiology, 14, 183-199.
[16] Eisenthal, Y., Dror, G. & Ruppin, E. (2006) Facial attractiveness: Beauty and the machine. Neural Computation, 18, 119-142.
[17] Fink, B., Grammer, K. & Thornhill, R. (2001) Human (Homo sapiens) facial attractiveness in relation to skin texture and color. Journal of Comparative Psychology, 115, 92-99.
[18] Alley, T.R. & Cunningham, M.R. (1991) Averaged faces are attractive but very attractive faces are not average. Psychological Science, 2, 123-125.
[19] Zaidel, D.W., Chen, A.C. & German, C. (1995) She is not a beauty even when she smiles: Possible evolutionary basis for a relationship between facial attractiveness and hemispheric specialization. Neuropsychologia, 33(5), 649-655.
Efficient Learning of Sparse Representations with an Energy-Based Model
Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra, Yann LeCun
Courant Institute of Mathematical Sciences, New York University, New York, NY 10003
{ranzato,crispy,sumit,yann}@cs.nyu.edu

Abstract
We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces "stroke detectors" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps.

1 Introduction
Unsupervised learning methods are often used to produce pre-processors and feature extractors for image analysis systems. Popular methods such as Wavelet decomposition, PCA, Kernel-PCA, Non-Negative Matrix Factorization [1], and ICA produce compact representations with somewhat uncorrelated (or independent) components [2]. Most methods produce representations that either preserve or reduce the dimensionality of the input. However, several recent works have advocated the use of sparse-overcomplete representations for images, in which the dimension of the feature vector is larger than the dimension of the input, but only a small number of components are non-zero for any one image [3, 4]. Sparse-overcomplete representations present several potential advantages. Using high-dimensional representations increases the likelihood that image categories will be easily (possibly linearly) separable. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data. Furthermore, there is considerable evidence that biological vision uses sparse representations in early visual areas [5, 6]. It seems reasonable to consider a representation "complete" if it is possible to reconstruct the input from it, because the information contained in the input would need to be preserved in the representation itself. Most unsupervised learning methods for feature extraction are based on this principle, and can be understood in terms of an encoder module followed by a decoder module. The encoder takes the input and computes a code vector, for example a sparse and overcomplete representation. The decoder takes the code vector given by the encoder and produces a reconstruction of the input. Encoder and decoder are trained in such a way that reconstructions provided by the decoder are as similar as possible to the actual input data, when these input data have the same statistics as the training samples.
Methods such as Vector Quantization, PCA, auto-encoders [7], Restricted Boltzmann Machines [8], and others [9] have exactly this architecture but with different constraints on the code and learning algorithms, and different kinds of encoder and decoder architectures. In other approaches, the encoding module is missing but its role is taken by a minimization in code space which retrieves the representation [3]. Likewise, in non-causal models the decoding module is missing and sampling techniques must be used to reconstruct the input from a code [4]. In sec. 2, we describe an energy-based model which has both an encoding and a decoding part. After training, the encoder allows very fast inference because finding a representation does not require solving an optimization problem. The decoder provides an easy way to reconstruct input vectors, thus allowing the trainer to assess directly whether the representation extracts most of the information from the input. Most methods find representations by minimizing an appropriate loss function during training. In order to learn sparse representations, a term enforcing sparsity is added to the loss. This term usually penalizes those code units that are active, aiming to make the distribution of their activities highly peaked at zero with heavy tails [10] [4]. A drawback of these approaches is that some action might need to be taken in order to prevent the system from always activating the same few units and collapsing all the others to zero [3]. An alternative approach is to embed a sparsifying module, e.g. a non-linearity, in the system [11]. This in general forces all the units to have the same degree of sparsity, but it also makes a theoretical analysis of the algorithm more complicated. In this paper, we present a system which achieves sparsity by placing a non-linearity between encoder and decoder. Sec. 2.1 describes this module, dubbed the "Sparsifying Logistic", which is a logistic function with an adaptive bias that tracks the mean of its input. This non-linearity is parameterized in a simple way which allows us to control the degree of sparsity of the representation as well as the entropy of each code unit. Unfortunately, learning the parameters in encoder and decoder cannot be achieved by simple backpropagation of the gradients of the reconstruction error: the Sparsifying Logistic is highly non-linear and resets most of the gradients coming from the decoder to zero. Therefore, in sec. 3 we propose to augment the loss function by considering not only the parameters of the system but also the code vectors as variables over which the optimization is performed. Exploiting the fact that 1) it is fairly easy to determine the weights in encoder and decoder when "good" codes are given, and 2) it is straightforward to compute the optimal codes when the parameters in encoder and decoder are fixed, we describe a simple iterative coordinate descent optimization to learn the parameters of the system. The procedure can be seen as a sort of deterministic version of the EM algorithm in which the code vectors play the role of hidden variables. The learning algorithm described turns out to be particularly simple, fast and robust. No pre-processing is required for the input images, beyond a simple centering and scaling of the data. In sec. 4 we report experiments of feature extraction on handwritten numerals and natural image patches.
When the system has a linear encoder and decoder (remember that the Sparsifying Logistic is a separate module), the filters resemble "object parts" for the numerals, and localized, oriented features for the natural image patches. Applying these features to the classification of the digits in the MNIST dataset, we have achieved by a small margin the best accuracy ever reported in the literature. We conclude by showing a hierarchical extension which suggests the form of simple and complex cell receptive fields, and leads to a topographic layout of the filters which is reminiscent of the topographic maps found in area V1 of the visual cortex.

2 The Model
The proposed model is based on three main components, as shown in fig. 1:
- The encoder: a set of feed-forward filters parameterized by the rows of matrix W_C, that computes a code vector from an image patch X.
- The Sparsifying Logistic: a non-linear module that transforms the code vector Z into a sparse code vector Z̄ with components in the range [0, 1].
- The decoder: a set of reverse filters parameterized by the columns of matrix W_D, that computes a reconstruction of the input image patch from the sparse code vector Z̄.

The energy of the system is the sum of two terms:

E(X, Z, W_C, W_D) = E_C(X, Z, W_C) + E_D(X, Z, W_D)   (1)

The first term is the code prediction energy, which measures the discrepancy between the output of the encoder and the code vector Z. In our experiments, it is defined as:

E_C(X, Z, W_C) = (1/2) ||Z - Enc(X, W_C)||^2 = (1/2) ||Z - W_C X||^2   (2)

The second term is the reconstruction energy, which measures the discrepancy between the reconstructed image patch produced by the decoder and the input image patch X. In our experiments, it is defined as:

E_D(X, Z, W_D) = (1/2) ||X - Dec(Z̄, W_D)||^2 = (1/2) ||X - W_D Z̄||^2   (3)

where Z̄ is computed by applying the Sparsifying Logistic non-linearity to Z.

Figure 1: Architecture of the energy-based model for learning sparse-overcomplete representations. The input image patch X is processed by the encoder to produce an initial estimate of the code vector. The encoding prediction energy E_C measures the squared distance between the code vector Z and its estimate. The code vector Z is passed through the Sparsifying Logistic non-linearity, which produces a sparsified code vector Z̄. The decoder reconstructs the input image patch from the sparse code. The reconstruction energy E_D measures the squared distance between the reconstruction and the input image patch. The optimal code vector Z* for a given patch minimizes the sum of the two energies. The learning process finds the encoder and decoder parameters that minimize the energy for the optimal code vectors averaged over a set of training samples.

Figure 2: Toy example of sparsifying rectification produced by the Sparsifying Logistic for different choices of the parameters η and β (panels: η = 0.01, β = 30; η = 0.1, β = 30; η = 0.1, β = 10). The input is a sequence of Gaussian random variables. The output, computed by using eq. 4, is a sequence of spikes whose rate and amplitude depend on the parameters η and β. In particular, increasing β has the effect of making the output approximately binary, while increasing η increases the firing rate of the output signal.

2.1 The Sparsifying Logistic
The Sparsifying Logistic module is a non-linear front-end to the decoder that transforms the code vector into a sparse vector with positive components. Let us consider how it transforms the k-th training sample.
Let z_i(k) be the i-th component of the code vector and z̄_i(k) be its corresponding output, with i ∈ [1..m], where m is the number of components in the code vector. The relation between these variables is given by:

z̄_i(k) = η e^{β z_i(k)} / ζ_i(k),   i ∈ [1..m],   with   ζ_i(k) = η e^{β z_i(k)} + (1 − η) ζ_i(k − 1)    (4)

where it is assumed that η ∈ [0, 1]. ζ_i(k) is the weighted sum of values of e^{β z_i(n)} corresponding to previous training samples n, with n ≤ k. The weights in this sum are exponentially decaying, as can be seen by unrolling the recursive equation in 4. This non-linearity can be easily understood as a weighted softmax function applied over consecutive samples of the same code unit. This produces a sequence of positive values which, for large values of β and small values of η, is characterized by brief and punctuate activities in time. This behavior is reminiscent of the spiking behavior of neurons. η controls the sparseness of the code by determining the "width" of the time window over which samples are summed up. β controls the degree of "softness" of the function. Large β values yield quasi-binary outputs, while small β values produce more graded responses; fig. 2 shows how these parameters affect the output when the input is a Gaussian random variable.

Another view of the Sparsifying Logistic is as a logistic function with an adaptive bias that tracks the average input; by dividing the right hand side of eq. 4 by η e^{β z_i(k)} we have:

z̄_i(k) = [ 1 + e^{−β ( z_i(k) − (1/β) log( ((1−η)/η) ζ_i(k−1) ) )} ]^{−1},   i ∈ [1..m]    (5)

Notice how β directly controls the gain of the logistic. Large values of this parameter will turn the non-linearity into a step function and will make Z̄(k) a binary code vector. In our experiments, ζ_i is treated as a trainable parameter and kept fixed after the learning phase. In this case, the Sparsifying Logistic reduces to a logistic function with a fixed gain and a learned bias. For large β in the continuous-time limit, the spikes can be shown to follow a homogeneous Poisson process. In this framework, sparsity is a "temporal" property characterizing each single unit in the code, rather than a "spatial" property shared among all the units in a code. Spatial sparsity usually requires some sort of ad-hoc normalization to ensure that the components of the code that are "on" are not always the same ones. Our solution tackles this problem differently: each unit must be sparse when encoding different samples, independently of the activities of the other components in the code vector. Unlike other methods [10], no ad-hoc rescaling of the weights or code units is necessary.

3 Learning

Learning is accomplished by minimizing the energy in eq. 1. Indicating with superscripts the indices referring to the training samples, and making explicit the dependencies on the code vectors, we can rewrite the energy of the system as:

E(W_C, W_D, Z^1, ..., Z^P) = Σ_{i=1}^{P} [ E_D(X^i, Z^i, W_D) + E_C(X^i, Z^i, W_C) ]    (6)

This is also the loss function we propose to minimize during training. The parameters of the system, W_C and W_D, are found by solving the following minimization problem:

{W_C*, W_D*} = argmin_{W_C, W_D} min_{Z^1, ..., Z^P} E(W_C, W_D, Z^1, ..., Z^P)    (7)

It is easy to minimize this loss with respect to W_C and W_D when the Z^i are known and, particularly for our experiments where encoder and decoder are a set of linear filters, this is a convex quadratic optimization problem.
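Before describing the optimization, the update in eqs. 4-5 is simple enough to state in code. The following minimal NumPy sketch is ours, not the authors' implementation; the initialization of ζ and the default values of η and β are placeholders:

```python
import numpy as np

def sparsifying_logistic(codes, eta=0.02, beta=1.0):
    """Apply the Sparsifying Logistic (eq. 4) along a stream of code vectors.

    codes: array of shape (num_samples, m), one code vector Z per training sample.
    Returns the sparsified codes Z_bar, with components in [0, 1].
    """
    num_samples, m = codes.shape
    zeta = np.ones(m)               # running denominator zeta_i(k); initialization is arbitrary
    z_bar = np.empty_like(codes)
    for k in range(num_samples):
        e = eta * np.exp(beta * codes[k])
        zeta = e + (1.0 - eta) * zeta   # zeta_i(k) = eta*e^{beta z_i(k)} + (1-eta)*zeta_i(k-1)
        z_bar[k] = e / zeta             # z_bar_i(k) = eta*e^{beta z_i(k)} / zeta_i(k)
    return z_bar
```

Because ζ_i(k) always contains the current term η e^{β z_i(k)}, the output stays in (0, 1) by construction, matching the fixed-gain logistic view of eq. 5.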
Likewise, when the parameters in the system are fixed it is straightforward to minimize with respect to the codes Z^i. These observations suggest a coordinate descent optimization procedure. First, we find the optimal Z^i for a given set of filters in encoder and decoder. Then, we update the weights in the system, fixing Z^i to the value found at the previous step. We iterate these two steps in alternation until convergence. In our experiments we used an on-line version of this algorithm, which can be summarized as follows:

1. propagate the input X through the encoder to get a codeword Z_init;
2. minimize the loss in eq. 6, the sum of reconstruction and code prediction energy, with respect to Z by gradient descent, using Z_init as the initial value;
3. compute the gradient of the loss with respect to W_C and W_D, and perform a gradient step;

where the superscripts have been dropped because we are referring to a generic training sample.

Since the code vector Z minimizes both energy terms, it not only minimizes the reconstruction energy, but is also as similar as possible to the code predicted by the encoder. After training, the decoder settles on filters that produce low reconstruction errors from minimum-energy, sparsified code vectors Z̄*, while the encoder simultaneously learns filters that predict the corresponding minimum-energy codes Z*. In other words, the system converges to a state where minimum-energy code vectors not only reconstruct the image patch but can also be easily predicted by the encoder filters. Moreover, starting the minimization over Z from the prediction given by the encoder allows convergence in very few iterations. After the first few thousand training samples, the minimization over Z requires just 4 iterations on average. When training is complete, a simple pass through the encoder will produce an accurate prediction of the minimum-energy code vector.

In the experiments, two regularization terms are added to the loss in eq. 6: a "lasso" term equal to the L1 norm of W_C and W_D, and a "ridge" term equal to their L2 norm. These have been added to encourage the filters to localize and to suppress noise.

Notice that we could weight the encoding and the reconstruction energies differently in the loss function. In particular, assigning a very large weight to the encoding energy corresponds to turning the penalty on the encoding prediction into a hard constraint. The code vector would be assigned the value predicted by the encoder, and the minimization would reduce to a mean square error minimization through back-propagation, as in a standard autoencoder. Unfortunately, this autoencoder-like learning fails because the Sparsifying Logistic is almost always highly saturated (otherwise the representation would not be sparse). Hence, the gradients back-propagated to the encoder are likely to be very small. This causes the direct minimization over encoder parameters to fail, but does not seem to adversely affect the minimization over code vectors. We surmise that the large number of degrees of freedom in code vectors (relative to the number of encoder parameters) makes the minimization problem considerably better conditioned. In other words, the alternated descent algorithm performs a minimization over a much larger set of variables than regular back-prop, and hence is less likely to fall victim to local minima.

Figure 3: Results of feature extraction from 12x12 patches taken from the Berkeley dataset, showing the 200 filters learned by the decoder.
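To make the on-line procedure concrete, here is a minimal NumPy sketch of one training step for a single patch. This is our own simplification, not the authors' code: the learning rates and iteration counts are placeholders, the adaptive bias ζ is treated as fixed during the inner minimization (the fixed-gain view of eq. 5), and the regularization terms are omitted:

```python
import numpy as np

def train_step(X, W_C, W_D, zeta, eta=0.02, beta=1.0,
               lr_z=0.1, z_iters=10, lr_c=0.005, lr_d=0.001):
    """One on-line coordinate-descent step: infer Z, then update W_C and W_D."""
    # 1. encoder prediction as initialization of the code
    Z = W_C @ X
    # 2. minimize E_C + E_D w.r.t. Z by gradient descent (zeta held fixed here)
    for _ in range(z_iters):
        e = eta * np.exp(beta * Z)
        Zbar = e / (e + (1.0 - eta) * zeta)        # sparsifying logistic, eq. 4
        R = X - W_D @ Zbar                          # reconstruction residual
        dZbar_dZ = beta * Zbar * (1.0 - Zbar)       # logistic derivative (eq. 5 view)
        grad_Z = (Z - W_C @ X) - (W_D.T @ R) * dZbar_dZ   # d(E_C + E_D)/dZ
        Z -= lr_z * grad_Z
    # 3. gradient step on encoder and decoder weights with Z fixed
    e = eta * np.exp(beta * Z)
    Zbar = e / (e + (1.0 - eta) * zeta)
    W_C -= lr_c * np.outer(W_C @ X - Z, X)          # dE_C/dW_C
    W_D -= lr_d * np.outer(W_D @ Zbar - X, Zbar)    # dE_D/dW_D
    zeta = e + (1.0 - eta) * zeta                   # advance the running average
    return W_C, W_D, zeta
```

The split mirrors the three numbered steps above: encoder pass, inner minimization over Z, and a single gradient step on the weights.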
The alternated descent over code and parameters can be seen as a kind of deterministic EM. It is related to gradient descent over parameters (standard back-prop) in the same way that the EM algorithm is related to gradient ascent for maximum likelihood estimation. This learning algorithm is not only simple but also very fast. For example, in the experiments of sec. 4.1 it takes less than 30 minutes on a 2GHz processor to learn 200 filters from 100,000 patches of size 12x12, and after just a few minutes the filters are already very similar to the final ones. This is much more efficient and robust than what can be achieved using other methods. For example, in Olshausen and Field's [10] linear generative model, inference is expensive because minimization in code space is necessary during testing as well as training. In Teh et al. [4], learning is very expensive because the decoder is missing, and sampling techniques [8] must be used to provide a reconstruction. Moreover, most methods rely on pre-processing of the input patches, such as whitening, PCA and low-pass filtering, in order to improve results and speed up convergence. In our experiments, we need only center the data by subtracting a global mean and scale by a constant.

4 Experiments

In this section we present some applications of the proposed energy-based model. Two standard data sets were used: natural image patches and handwritten digits. As described in sec. 2, the encoder and decoder learn linear filters. As mentioned in sec. 3, the input images were only trivially pre-processed.

4.1 Feature Extraction from Natural Image Patches

In the first experiment, the system was trained on 100,000 gray-level patches of size 12x12 extracted from the Berkeley segmentation data set [12]. Pre-processing of images consists of subtracting the global mean pixel value (which is about 100), and dividing the result by 125. We chose an overcomplete factor approximately equal to 2 by representing the input with 200 code units (overcompleteness must be evaluated by considering the number of code units and the effective dimensionality of the input as given by PCA). The Sparsifying Logistic parameters η and β were equal to 0.02 and 1, respectively. The learning rate for updating W_C was set to 0.005, and for W_D to 0.001. These are decreased progressively during training. The coefficients of the L1 and L2 regularization terms were about 0.001. The learning rate for the minimization in code space was set to 0.1, and was multiplied by 0.8 every 10 iterations, for at most 100 iterations. Some components of the sparse code must be allowed to take continuous values to account for the average value of a patch. For this reason, during training we saturated the running sums ζ to allow some units to be always active. Values of ζ were saturated to 10^9. We verified empirically that subtracting the local mean from each patch eliminates the need for this saturation. However, saturation during training makes testing less expensive. Training on this data set takes less than half an hour on a 2GHz processor.

Examples of learned encoder and decoder filters are shown in figure 3. They are spatially localized, and have different orientations, frequencies and scales. They are somewhat similar to, but more localized than, Gabor wavelets, and are reminiscent of the receptive fields of V1 neurons.
Figure 4: Top: a randomly selected subset of encoder filters learned by our energy-based model when trained on the MNIST handwritten digit dataset. Bottom: an example of reconstruction of a digit randomly extracted from the test data set. The reconstruction is made by adding "parts": it is the additive linear combination of a few basis functions of the decoder with positive coefficients (in the example shown, coefficients of 1 and 0.8).

Interestingly, the encoder and decoder filter values are nearly identical up to a scale factor. After training, inference is extremely fast, requiring only a simple matrix-vector multiplication.

4.2 Feature Extraction from Handwritten Numerals

The energy-based model was trained on 60,000 handwritten digits from the MNIST data set [13], which contains quasi-binary images of size 28x28 (784 pixels). The model is the same as in the previous experiment. The number of components in the code vector was 196. While 196 is less than the 784 inputs, the representation is still overcomplete, because the effective dimension of the digit dataset is considerably less than 784. Pre-processing consisted of dividing each pixel value by 255. Parameters η and β in the temporal softmax were 0.01 and 1, respectively. The other parameters of the system were set to values similar to those of the previous experiment on natural image patches. Each one of the filters, shown in the top part of fig. 4, contains an elementary "part" of a digit. Straight stroke detectors are present, as in the previous experiment, but curly strokes can also be found. Reconstruction of most single digits can be achieved by a linear additive combination of a small number of filters, since the output of the Sparsifying Logistic is sparse and positive. The bottom part of fig. 4 illustrates this reconstruction by "parts".

4.3 Learning Local Features for the MNIST Dataset

Deep convolutional networks trained with back-propagation hold the current record for accuracy on the MNIST dataset [14, 15]. While back-propagation produces good low-level features, it is well known that deep networks are particularly challenging for gradient-descent learning. Hinton et al. [16] have recently shown that initializing the weights of a deep network using unsupervised learning before performing supervised learning with back-propagation can significantly improve the performance of a deep network. This section describes a similar experiment in which we used the proposed method to initialize the first layer of a large convolutional network. We used an architecture essentially identical to LeNet-5 as described in [15]. However, because our model produces sparse features, our network had a considerably larger number of feature maps: 50 for layers 1 and 2, 50 for layers 3 and 4, 200 for layer 5, and 10 for the output layer. The numbers for LeNet-5 were 6, 16, 100, and 10 respectively. We refer to our larger network as the 50-50-200-10 network. We trained this network on 55,000 samples from MNIST, keeping the remaining 5,000 training samples as a validation set. When the error on the validation set reached its minimum, an additional five sweeps were performed on the training set augmented with the validation set (unless this increased the training loss). Then the learning was stopped, and the final error rate on the test set was measured. When the weights are initialized randomly, the 50-50-200-10 network achieves a test error rate of 0.7%, to be compared with the 0.95% obtained by [15] with the 6-16-100-10 network.
In the next experiment, the proposed sparse feature learning method was trained on 5x5 image patches extracted from the MNIST training set. The model had a 50-dimensional code. The encoder filters were used to initialize the first layer of the 50-50-200-10 net. The network was then trained in the usual way, except that the first layer was kept fixed for the first 10 epochs through the training set. The 50 filters after training are shown in fig. 5. The test error rate was 0.6%. To our knowledge, this is the best result ever reported with a method trained on the original MNIST set, without deskewing or augmenting the training set with distorted samples.

The training set was then augmented with samples obtained by elastically distorting the original training samples, using a method similar to [14]. The error rate of the 50-50-200-10 net with random initialization was 0.49% (to be compared to 0.40% reported in [14]). By initializing the first layer with the filters obtained with the proposed method, the test error rate dropped to 0.39%. While this is the best numerical result ever reported on MNIST, it is not statistically different from [14].

Figure 5: Filters in the first convolutional layer after training when the network is randomly initialized (top row) and when the first layer of the network is initialized with the features learned by the unsupervised energy-based model (bottom row).

Architecture        | 20K         | 60K         | 60K + Distortions
6-16-100-10 [15]    | -           | 0.95        | 0.60
5-50-100-10 [14]    | -           | -           | 0.40
50-50-200-10        | 1.01 / 0.89 | 0.70 / 0.60 | 0.49 / 0.39

Table 1: Comparison of test error rates (%) on the MNIST dataset using convolutional networks with various training set sizes: 20,000, 60,000, and 60,000 plus 550,000 elastic distortions. For the 50-50-200-10 network, each cell reports the result with randomly initialized filters followed by the result with first-layer filters initialized using the proposed algorithm (the latter appear in bold face in the original).

4.4 Hierarchical Extension: Learning Topographic Maps

It has already been observed that features extracted from natural image patches resemble Gabor-like filters, see fig. 3. It has recently been pointed out [6] that these filters produce codes with somewhat uncorrelated but not independent components. In order to capture higher-order dependencies among code units, we propose to extend the encoder architecture by adding a second layer of units to the linear filter bank. In this hierarchical model of the encoder, the units produced by the filter bank are laid out on a two-dimensional grid and filtered according to a fixed weighted-mean kernel. This assigns a larger weight to the central unit and smaller weights to the units in the periphery. In order to activate a unit at the output of the Sparsifying Logistic, all the afferent unrectified units in the first layer must agree in giving a strong positive response to the input patch. As a consequence, neighboring filters will exhibit similar features. Also, the top-level units will encode features that are more translation and rotation invariant, de facto modeling complex cells. Using a neighborhood of size 3x3, toroidal boundary conditions, and computing code vectors with 400 units from 12x12 input patches from the Berkeley dataset, we have obtained the topographic map shown in fig. 6. Filters exhibit features that are locally similar in orientation, position, and phase. There are two low-frequency clusters and pinwheel regions similar to what is experimentally found in cortical topography.
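As a sketch of this grouping operation (ours, not the authors' code; the 20x20 grid for 400 units is an assumption), the second encoder layer convolves the level-1 code map with the fixed weighted-mean kernel, with toroidal boundaries, before the Sparsifying Logistic:

```python
import numpy as np

def topographic_pool(z_level1, grid=(20, 20)):
    """Second encoder layer of the hierarchical extension.

    z_level1: flat level-1 code, laid out on a 2-D grid and filtered with a
    fixed weighted-mean kernel using toroidal boundary conditions.
    """
    K = np.array([[0.08, 0.12, 0.08],
                  [0.12, 0.23, 0.12],
                  [0.08, 0.12, 0.08]])   # fixed weighted-mean kernel (values from fig. 6)
    zmap = z_level1.reshape(grid)
    out = np.zeros_like(zmap)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += K[dy + 1, dx + 1] * np.roll(np.roll(zmap, dy, axis=0), dx, axis=1)
    return out.ravel()   # level-2 code, fed to the Sparsifying Logistic
```

np.roll implements the toroidal boundary conditions directly; a unit is strongly activated only when its whole 3x3 neighborhood responds, which is what ties neighboring filters together.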
Figure 6: Example of filter maps learned by the topographic hierarchical extension of the model. The outline of the model is shown on the right: the level-1 code produced by the encoder filter bank is convolved with the fixed 3x3 weighted-mean kernel K = [0.08 0.12 0.08; 0.12 0.23 0.12; 0.08 0.12 0.08] to give the level-2 code, which passes through the Sparsifying Logistic to produce the code Z̄ used by the decoder; E_C and E_D are the Euclidean-distance energy terms.

5 Conclusions

An energy-based model was proposed for unsupervised learning of sparse overcomplete representations. Learning to extract sparse features from data has applications in classification, compression, denoising, inpainting, segmentation, and super-resolution interpolation. The model has none of the inefficiencies and idiosyncrasies of previously proposed sparse-overcomplete feature learning methods. The decoder produces accurate reconstructions of the patches, while the encoder provides a fast prediction of the code without the need for any particular preprocessing of the input images. It seems that a non-linearity that directly sparsifies the code is considerably simpler to control than adding a sparsity term in the loss function, which generally requires ad-hoc normalization procedures [3]. In the current work, we used linear encoders and decoders for simplicity, but the model allows non-linear modules, as long as gradients can be computed and back-propagated through them. As briefly presented in sec. 4.4, it is straightforward to extend the original framework to hierarchical architectures in the encoder, and the same is possible in the decoder. Another possible extension would stack multiple instances of the system described in the paper, with each system as a module in a multi-layer structure where the sparse code produced by one feature extractor is fed to the input of a higher-level feature extractor. Future work will include the application of the model to various tasks, including facial feature extraction, image denoising, image compression, inpainting, classification, and invariant feature extraction for robotics applications.

Acknowledgments

We wish to thank Sebastian Seung and Geoff Hinton for helpful discussions. This work was supported in part by the NSF under grants No. 0325463 and 0535166, and by DARPA under the LAGR program.

References

[1] Lee, D.D. and Seung, H.S. (1999) Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791.
[2] Hyvarinen, A. and Hoyer, P.O. (2001) A 2-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41:2413-2423.
[3] Olshausen, B.A. (2002) Sparse codes and spikes. In R.P.N. Rao, B.A. Olshausen and M.S. Lewicki (Eds.), MIT Press:257-272.
[4] Teh, Y.W., Welling, M., Osindero, S. and Hinton, G.E. (2003) Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235-1260.
[5] Lennie, P. (2003) The cost of cortical computation. Current Biology, 13:493-497.
[6] Simoncelli, E.P. (2005) Statistical modeling of photographic images. Academic Press, 2nd ed.
[7] Hinton, G.E. and Zemel, R.S. (1994) Autoencoders, minimum description length, and Helmholtz free energy. In J.D. Cowan, G. Tesauro and J. Alspector (Eds.), Advances in Neural Information Processing Systems 6, Morgan Kaufmann: San Mateo, CA.
[8] Hinton, G.E. (2002) Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800.
[9] Doi, E., Balcan, D.C. and Lewicki, M.S. (2006) A theoretical analysis of robust coding over noisy overcomplete channels. Advances in Neural Information Processing Systems 18, MIT Press.
[10] Olshausen, B.A. and Field, D.J.
(1997) Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37:3311-3325.
[11] Foldiak, P. (1990) Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64:165-170.
[12] The Berkeley segmentation dataset: http://www.cs.berkeley.edu/projects/vision/grouping/segbench/
[13] The MNIST database of handwritten digits: http://yann.lecun.com/exdb/mnist/
[14] Simard, P.Y., Steinkraus, D. and Platt, J.C. (2003) Best practices for convolutional neural networks. ICDAR.
[15] LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.
[16] Hinton, G.E., Osindero, S. and Teh, Y. (2006) A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554.
A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation

Yee Whye Teh (Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, UK; ywteh@gatsby.ucl.ac.uk)
David Newman and Max Welling (Bren School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA; {newman,welling}@ics.uci.edu)

Abstract

Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA.

1 Introduction

Bayesian networks with discrete random variables form a very general and useful class of probabilistic models. In a Bayesian setting it is convenient to endow these models with Dirichlet priors over the parameters, as they are conjugate to the multinomial distributions over the discrete random variables [1]. This choice has important computational advantages and allows for easy inference in such models. A class of Bayesian networks that has gained significant momentum recently is latent Dirichlet allocation (LDA) [2], otherwise known as multinomial PCA [3]. It has found important applications in both text modeling [4, 5] and computer vision [6]. Training LDA on a large corpus of several million documents can be a challenge and crucially depends on an efficient and accurate inference procedure. A host of inference algorithms have been proposed, ranging from variational Bayesian (VB) inference [2] and expectation propagation (EP) [7] to collapsed Gibbs sampling [5]. Perhaps surprisingly, the collapsed Gibbs sampler proposed in [5] seems to be the preferred choice in many of these large scale applications. In [8] it is observed that EP is not efficient enough to be practical, while VB suffers from a large bias. However, collapsed Gibbs sampling also has its own problems: one needs to assess convergence of the Markov chain and to have some idea of mixing times to estimate the number of samples to collect, and to identify coherent topics across multiple samples. In practice one often ignores these issues and collects as many samples as is computationally feasible, while the question of topic identification is often sidestepped by using just 1 sample. Hence there still seems to be a need for more efficient, accurate and deterministic inference procedures.

In this paper we will leverage the important insight that a Gibbs sampler that operates in a collapsed space, where the parameters are marginalized out, mixes much better than a Gibbs sampler that samples parameters and latent topic variables simultaneously. This suggests that the parameters and latent variables are intimately coupled. As we shall see in the following, marginalizing out the parameters induces new dependencies between the latent variables (which are conditionally independent given the parameters), but these dependencies are spread out over many latent variables. This implies that the dependency between any two latent variables is expected to be small. This is precisely the right setting for a mean field (i.e.
fully factorized variational) approximation: a particular variable interacts with the remaining variables only through summary statistics called the field, and the impact of any single variable on the field is very small [9]. Note that this is not true in the joint space of parameters and latent variables, because fluctuations in parameters can have a significant impact on latent variables. We thus conjecture that the mean field assumptions are much better satisfied in the collapsed space of latent variables than in the joint space of latent variables and parameters. In this paper we leverage this insight and propose a collapsed variational Bayesian (CVB) inference algorithm. In theory, the CVB algorithm requires the calculation of very expensive averages. However, the averages only depend on sums of independent Bernoulli variables, and thus are very closely approximated with Gaussian distributions (even for relatively small sums). Making use of this approximation, the final algorithm is computationally efficient, easy to implement and significantly more accurate than standard VB.

2 Approximate Inference in Latent Dirichlet Allocation

LDA models each document as a mixture over topics. We assume there are K latent topics, each being a multinomial distribution over a vocabulary of size W. For document j, we first draw a mixing proportion θ_j = {θ_jk} over K topics from a symmetric Dirichlet with parameter α. For the ith word in the document, a topic z_ij is drawn with topic k chosen with probability θ_jk, then word x_ij is drawn from the z_ij-th topic, with x_ij taking on value w with probability φ_kw. Finally, a symmetric Dirichlet prior with parameter β is placed on the topic parameters φ_k = {φ_kw}. The full joint distribution over all parameters and variables is:

p(x, z, θ, φ | α, β) = ∏_{j=1}^{D} [ Γ(Kα)/Γ(α)^K ∏_{k=1}^{K} θ_jk^{α−1+n_jk·} ] ∏_{k=1}^{K} [ Γ(Wβ)/Γ(β)^W ∏_{w=1}^{W} φ_kw^{β−1+n_·kw} ]    (1)

where n_jkw = #{i : x_ij = w, z_ij = k}, and a dot means the corresponding index is summed out: n_·kw = Σ_j n_jkw and n_jk· = Σ_w n_jkw.

Given the observed words x = {x_ij}, the task of Bayesian inference is to compute the posterior distribution over the latent topic indices z = {z_ij}, the mixing proportions θ = {θ_j} and the topic parameters φ = {φ_k}. There are three current approaches: variational Bayes (VB) [2], expectation propagation [7] and collapsed Gibbs sampling [5]. We review the VB and collapsed Gibbs sampling methods here, as they are the most popular methods, and to motivate our new algorithm, which combines advantages of both.

2.1 Variational Bayes

Standard VB inference upper bounds the negative log marginal likelihood −log p(x|α, β) using the variational free energy:

−log p(x|α, β) ≤ F̃(q̃(z, θ, φ)) = E_q̃[−log p(x, z, θ, φ | α, β)] − H(q̃(z, θ, φ))    (2)

with q̃(z, θ, φ) an approximate posterior and H(q̃(z, θ, φ)) = E_q̃[−log q̃(z, θ, φ)] the variational entropy, where q̃(z, θ, φ) is assumed to be fully factorized:

q̃(z, θ, φ) = ∏_ij q̃(z_ij | γ̃_ij) ∏_j q̃(θ_j | α̃_j) ∏_k q̃(φ_k | β̃_k)    (3)

q̃(z_ij | γ̃_ij) is multinomial with parameters γ̃_ij, and q̃(θ_j | α̃_j), q̃(φ_k | β̃_k) are Dirichlet with parameters α̃_j and β̃_k respectively. Optimizing F̃(q̃) with respect to the variational parameters gives us a set of updates guaranteed to improve F̃(q̃) at each iteration, converging to a local minimum:

α̃_jk = α + Σ_i γ̃_ijk    (4)
β̃_kw = β + Σ_ij 1(x_ij = w) γ̃_ijk    (5)
γ̃_ijk ∝ exp( Ψ(α̃_jk) + Ψ(β̃_{k x_ij}) − Ψ(Σ_w β̃_kw) )    (6)

where Ψ(y) = ∂ log Γ(y)/∂y is the digamma function and 1 is the indicator function.
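As a concrete illustration of updates (4)-(6), the following minimal NumPy/SciPy sketch (our own illustrative code, not from the paper) performs one sweep of standard VB, storing one responsibility vector per token:

```python
import numpy as np
from scipy.special import digamma

def vb_sweep(docs, gamma, alpha, beta, K, W):
    """One sweep of the VB updates (4)-(6).

    docs:  list of word-id arrays, one per document.
    gamma: list of (len(doc), K) responsibility arrays, one per document.
    """
    # updates (4) and (5): Dirichlet parameters from current responsibilities
    alpha_t = np.array([alpha + g.sum(axis=0) for g in gamma])      # shape (D, K)
    beta_t = beta * np.ones((K, W))
    for doc, g in zip(docs, gamma):
        np.add.at(beta_t.T, doc, g)     # beta_t[k, w] accumulates gamma over tokens with x = w
    # update (6): new responsibilities for every token
    for j, doc in enumerate(docs):
        log_g = (digamma(alpha_t[j])[None, :]
                 + digamma(beta_t[:, doc].T)
                 - digamma(beta_t.sum(axis=1))[None, :])
        g = np.exp(log_g - log_g.max(axis=1, keepdims=True))
        gamma[j] = g / g.sum(axis=1, keepdims=True)
    return gamma, alpha_t, beta_t
```

The log-space normalization at the end is a standard numerical safeguard; it does not change the update itself.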
Although efficient and easily implemented, VB can potentially lead to very inaccurate results. Notice that the latent variables z and parameters θ, φ can be strongly dependent in the true posterior p(z, θ, φ|x) through the cross terms in (1). This dependence is ignored in VB, which assumes that latent variables and parameters are independent instead. As a result, the VB upper bound on the negative log marginal likelihood can be very loose, leading to inaccurate estimates of the posterior.

2.2 Collapsed Gibbs Sampling

Standard Gibbs sampling, which iteratively samples latent variables z and parameters θ, φ, can potentially have slow convergence, due again to strong dependencies between the parameters and latent variables. Collapsed Gibbs sampling improves upon Gibbs sampling by marginalizing out θ and φ instead, therefore dealing with them exactly. The marginal distribution over x and z is:

p(z, x | α, β) = ∏_j [ Γ(Kα)/Γ(Kα + n_j··) ∏_k Γ(α + n_jk·)/Γ(α) ] ∏_k [ Γ(Wβ)/Γ(Wβ + n_·k·) ∏_w Γ(β + n_·kw)/Γ(β) ]    (7)

Given the current state of all but one variable z_ij, the conditional probability of z_ij is:

p(z_ij = k | z^¬ij, x, α, β) = (α + n^¬ij_jk·)(β + n^¬ij_{·k x_ij})(Wβ + n^¬ij_·k·)^{−1} / Σ_{k'=1}^{K} (α + n^¬ij_jk'·)(β + n^¬ij_{·k' x_ij})(Wβ + n^¬ij_·k'·)^{−1}    (8)

where the superscript ¬ij means the corresponding variables or counts with x_ij and z_ij excluded, and the denominator is just a normalization. The conditional distribution of z_ij is multinomial with simple-to-calculate probabilities, so the programming and computational overhead is minimal. Collapsed Gibbs sampling has been observed to converge quickly [5]. Notice from (8) that z_ij depends on z^¬ij only through the counts n^¬ij_jk·, n^¬ij_{·k x_ij}, n^¬ij_·k·. In particular, the dependence of z_ij on any particular other variable z_i'j' is very weak, especially for large datasets. As a result, we expect the convergence of collapsed Gibbs sampling to be fast [10]. However, as with other MCMC samplers, and unlike variational inference, it is often hard to diagnose convergence, and a sufficiently large number of samples may be required to reduce sampling noise.

The argument of rapid convergence of collapsed Gibbs sampling is reminiscent of the argument for when mean field algorithms can be expected to be accurate [9]. The counts n^¬ij_jk·, n^¬ij_{·k x_ij}, n^¬ij_·k· act as fields through which z_ij interacts with other variables. In particular, averaging both sides of (8) with respect to p(z^¬ij | x, α, β) gives us the Callen equations, a set of equations that the true posterior must satisfy:

p(z_ij = k | x, α, β) = E_{p(z^¬ij|x,α,β)} [ (α + n^¬ij_jk·)(β + n^¬ij_{·k x_ij})(Wβ + n^¬ij_·k·)^{−1} / Σ_{k'=1}^{K} (α + n^¬ij_jk'·)(β + n^¬ij_{·k' x_ij})(Wβ + n^¬ij_·k'·)^{−1} ]    (9)

Since the latent variables are already weakly dependent on each other, it is possible to replace (9) by a set of mean field equations where latent variables are assumed independent and still expect these equations to be accurate. This is the idea behind the collapsed variational Bayesian inference algorithm of the next section.
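A minimal sketch of one sweep of the collapsed Gibbs update (8) follows; the array layout (per-document/topic, topic/word and topic count matrices kept incrementally) and variable names are our own choices, and rng is a NumPy Generator:

```python
import numpy as np

def gibbs_sweep(docs, z, n_jk, n_kw, n_k, alpha, beta, rng):
    """One sweep of collapsed Gibbs sampling over all tokens (eq. 8)."""
    W = n_kw.shape[1]
    for j, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k_old = z[j][i]
            # remove token (i, j) from the counts to form the "minus ij" statistics
            n_jk[j, k_old] -= 1; n_kw[k_old, w] -= 1; n_k[k_old] -= 1
            # conditional probabilities from eq. 8 (unnormalized, then normalized)
            p = (alpha + n_jk[j]) * (beta + n_kw[:, w]) / (W * beta + n_k)
            p /= p.sum()
            k_new = rng.choice(len(p), p=p)
            # add the token back with its newly sampled topic
            z[j][i] = k_new
            n_jk[j, k_new] += 1; n_kw[k_new, w] += 1; n_k[k_new] += 1
    return z, n_jk, n_kw, n_k
```

The O(K) cost per token comes entirely from the vectorized probability computation; the counts are never recomputed from scratch.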
We call this algorithm collapsed variational Bayesian (CVB) inference. There are two ways to deal with the parameters in an exact fashion: the first is to marginalize them out of the joint distribution and to start from (7); the second is to explicitly model the posterior of θ, φ given z and x without any assumptions on its form. We will show that these two methods are equivalent. The only assumption we make in CVB is that the latent variables z are mutually independent, thus we approximate the posterior as:

q̃(z, θ, φ) = q̃(θ, φ | z) ∏_ij q̃(z_ij | γ̃_ij)    (10)

where q̃(z_ij | γ̃_ij) is multinomial with parameters γ̃_ij. The variational free energy becomes:

F̂(q̃(z) q̃(θ, φ|z)) = E_{q̃(z) q̃(θ,φ|z)}[−log p(x, z, θ, φ | α, β)] − H(q̃(z) q̃(θ, φ|z))
= E_{q̃(z)}[ E_{q̃(θ,φ|z)}[−log p(x, z, θ, φ | α, β)] − H(q̃(θ, φ|z)) ] − H(q̃(z))    (11)

We minimize the variational free energy with respect to q̃(θ, φ|z) first, followed by q̃(z). Since we do not restrict the form of q̃(θ, φ|z), the minimum is achieved at the true posterior q̃(θ, φ|z) = p(θ, φ | x, z, α, β), and the variational free energy simplifies to:

F̂(q̃(z)) := min_{q̃(θ,φ|z)} F̂(q̃(z) q̃(θ, φ|z)) = E_{q̃(z)}[−log p(x, z | α, β)] − H(q̃(z))    (12)

F̂(q̃(z)) ≤ F̃(q̃(z)) := min F̃(q̃(z) q̃(θ) q̃(φ))    (13)

We see that CVB is equivalent to marginalizing out θ, φ before approximating the posterior over z. As CVB makes a strictly weaker assumption on the variational posterior than standard VB, we have (13), and thus CVB is a better approximation than standard VB.

Finally, we derive the updates for the variational parameters γ̃_ij. Minimizing (12) with respect to γ̃_ijk, we get:

γ̃_ijk = q̃(z_ij = k) = exp( E_{q̃(z^¬ij)}[log p(x, z^¬ij, z_ij = k | α, β)] ) / Σ_{k'=1}^{K} exp( E_{q̃(z^¬ij)}[log p(x, z^¬ij, z_ij = k' | α, β)] )    (14)

Plugging in (7), expanding log Γ(η+n)/Γ(η) = Σ_{l=0}^{n−1} log(η + l) for positive reals η and positive integers n, and cancelling terms appearing in both the numerator and denominator, we get:

γ̃_ijk = exp( E_{q̃(z^¬ij)}[log(α + n^¬ij_jk·) + log(β + n^¬ij_{·k x_ij}) − log(Wβ + n^¬ij_·k·)] ) / Σ_{k'=1}^{K} exp( E_{q̃(z^¬ij)}[log(α + n^¬ij_jk'·) + log(β + n^¬ij_{·k' x_ij}) − log(Wβ + n^¬ij_·k'·)] )    (15)

3.1 Gaussian Approximation for CVB Inference

For completeness, we describe how to compute each expectation term in (15) exactly in the appendix. This exact implementation of CVB is computationally too expensive to be practical, and we propose instead to use a simple Gaussian approximation which works very accurately and which requires minimal computational costs. In this section we describe the Gaussian approximation applied to E_q̃[log(α + n^¬ij_jk·)]; the other two expectation terms are computed similarly.

Assume that n_j·· ≫ 0. Notice that n^¬ij_jk· = Σ_{i'≠i} 1(z_i'j = k) is a sum of a large number of independent Bernoulli variables 1(z_i'j = k), each with mean parameter γ̃_i'jk; thus it can be accurately approximated by a Gaussian. The mean and variance are given by the sums of the means and variances of the individual Bernoulli variables:

E_q̃[n^¬ij_jk·] = Σ_{i'≠i} γ̃_i'jk        Var_q̃[n^¬ij_jk·] = Σ_{i'≠i} γ̃_i'jk (1 − γ̃_i'jk)    (16)

We further approximate the function log(α + n^¬ij_jk·) using a second-order Taylor expansion about E_q̃[n^¬ij_jk·], and evaluate its expectation under the Gaussian approximation:

E_q̃[log(α + n^¬ij_jk·)] ≈ log(α + E_q̃[n^¬ij_jk·]) − Var_q̃(n^¬ij_jk·) / ( 2 (α + E_q̃[n^¬ij_jk·])² )    (17)

Because E_q̃[n^¬ij_jk·] ≫ 0, the third derivative is small and the Taylor series approximation is very accurate.
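In code, the approximation (17) is a one-liner; the small helper below (ours) is reused in the full CVB update sketched after the next paragraph:

```python
import numpy as np

def approx_E_log(mean, var, offset):
    """Second-order approximation of E[log(offset + n)] for Gaussian n (eq. 17)."""
    return np.log(offset + mean) - var / (2.0 * (offset + mean) ** 2)
```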
In fact, we have found experimentally that the Gaussian approximation works very well even when n_j·· is small. The reason is that we often have γ̃_i'jk close to either 0 or 1, thus the variance of n^¬ij_jk· is small relative to its mean and the Gaussian approximation will be accurate. Finally, plugging (17) into (15), we have our CVB updates:

γ̃_ijk ∝ (α + E_q̃[n^¬ij_jk·]) (β + E_q̃[n^¬ij_{·k x_ij}]) (Wβ + E_q̃[n^¬ij_·k·])^{−1}
  × exp( − Var_q̃(n^¬ij_jk·)/(2(α + E_q̃[n^¬ij_jk·])²) − Var_q̃(n^¬ij_{·k x_ij})/(2(β + E_q̃[n^¬ij_{·k x_ij}])²) + Var_q̃(n^¬ij_·k·)/(2(Wβ + E_q̃[n^¬ij_·k·])²) )    (18)

Notice the striking correspondence between (18), (8) and (9), showing that CVB is indeed the mean field version of collapsed Gibbs sampling. In particular, the first line in (18) is obtained from (8) by replacing the fields n^¬ij_jk·, n^¬ij_{·k x_ij} and n^¬ij_·k· by their means (thus the term mean field), while the exponentiated terms are correction factors accounting for the variance in the fields.

CVB with the Gaussian approximation is easily implemented and has minimal computational costs. By keeping track of the mean and variance of n_jk·, n_·kw and n_·k·, and subtracting the mean and variance of the corresponding Bernoulli variables whenever we require the terms with x_ij, z_ij removed, the computational cost scales only as O(K) for each update to q̃(z_ij). Further, we only need to maintain one copy of the variational posterior over the latent variable for each unique document/word pair, thus the overall computational cost per iteration of CVB scales as O(MK), where M is the total number of unique document/word pairs, while the memory requirement is O(MK). This is the same as for VB. In comparison, collapsed Gibbs sampling needs to keep track of the current sample of z_ij for every word in the corpus, thus the memory requirement is O(N), while the computational cost scales as O(NK), where N is the total number of words in the corpus; this is higher than for VB and CVB. Note, however, that the constant factor involved in the O(NK) time cost of collapsed Gibbs sampling is significantly smaller than those for VB and CVB.

4 Experiments

We compared the three algorithms described in the paper: standard VB, CVB and collapsed Gibbs sampling. We used two datasets. The first is "KOS" (www.dailykos.com), which has J = 3430 documents, a vocabulary size of W = 6909, a total of N = 467,714 words in all the documents, and on average 136 words per document. The second is "NIPS" (books.nips.cc), with J = 1675 documents, a vocabulary size of W = 12419, N = 2,166,029 words in the corpus, and on average 1293 words per document. In both datasets stop words and infrequent words were removed. We split both datasets into a training set and a test set by assigning 10% of the words in each document to the test set. In all our experiments we used α = 0.1, β = 0.1, and K = 8 topics for KOS and K = 40 topics for NIPS. We ran each algorithm on each dataset 50 times with different random initializations. Performance was measured in two ways: first using variational bounds of the log marginal probabilities on the training set, and second using log probabilities on the test set. Expressions for the variational bounds are given in (2) for VB and (12) for CVB. For both VB and CVB, test set log probabilities are computed as:

p(x^test) = ∏_ij Σ_k θ̄_jk φ̄_{k x^test_ij},   with   θ̄_jk = (α + E_q[n_jk·]) / (Kα + E_q[n_j··]),   φ̄_kw = (β + E_q[n_·kw]) / (Wβ + E_q[n_·k·])    (19)

Note that we used estimated mean values of θ_jk and φ_kw [11].
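For reference, combining (16)-(18) gives a compact per-token update. The sketch below is our own (using the approx_E_log helper above; the array layouts and in-place maintenance of running means and variances are our choices, not the authors'):

```python
import numpy as np

def cvb_update_token(g, Ejk, Vjk, Ekw, Vkw, Ek, Vk, j, w, alpha, beta, W):
    """Update one responsibility vector g = q(z_ij) per eq. 18.

    Ejk, Vjk: (D, K) running mean/variance of n_{jk.};
    Ekw, Vkw: (K, W) for n_{.kw}; Ek, Vk: (K,) for n_{.k.}.
    """
    # subtract this token's contribution to get the "minus ij" statistics (eq. 16)
    Ejk[j] -= g; Vjk[j] -= g * (1 - g)
    Ekw[:, w] -= g; Vkw[:, w] -= g * (1 - g)
    Ek -= g; Vk -= g * (1 - g)
    # eq. 18: means with second-order variance corrections
    log_g = (approx_E_log(Ejk[j], Vjk[j], alpha)
             + approx_E_log(Ekw[:, w], Vkw[:, w], beta)
             - approx_E_log(Ek, Vk, W * beta))
    g_new = np.exp(log_g - log_g.max())
    g_new /= g_new.sum()
    # add the updated contribution back into the statistics
    Ejk[j] += g_new; Vjk[j] += g_new * (1 - g_new)
    Ekw[:, w] += g_new; Vkw[:, w] += g_new * (1 - g_new)
    Ek += g_new; Vk += g_new * (1 - g_new)
    return g_new
```

Exponentiating approx_E_log reproduces both the mean-field factors and the variance correction factors of (18) at once, which is why the body is only three expectation terms long.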
For collapsed Gibbs sampling, given S samples from the posterior, we used:

p(x^test) = ∏_ij (1/S) Σ_{s=1}^{S} Σ_k θ̂^s_jk φ̂^s_{k x^test_ij},   with   θ̂^s_jk = (α + n^s_jk·) / (Kα + n^s_j··),   φ̂^s_kw = (β + n^s_·kw) / (Wβ + n^s_·k·)    (20)

Figure 1 summarizes our results. We show both quantities as functions of iterations and as histograms of final values, for all algorithms and datasets. CVB converged faster and to significantly better solutions than standard VB; this confirms our intuition that CVB provides much better approximations than VB. CVB also converged faster than collapsed Gibbs sampling, but Gibbs sampling attains a better solution in the end; this is reasonable since Gibbs sampling should be exact with enough samples. We have also applied the exact but much slower version of CVB without the Gaussian approximation, and found that it gave identical results to the one proposed here (not shown).

Figure 1: Left: results for KOS. Right: results for NIPS. First row: per-word variational bounds as functions of numbers of iterations of VB and CVB. Second row: histograms of converged per-word variational bounds across random initializations for VB and CVB. Third row: test set per-word log probabilities as functions of numbers of iterations for VB, CVB and Gibbs. Fourth row: histograms of final test set per-word log probabilities across 50 random initializations.

Figure 2: Left: test set per-word log probabilities. Right: per-word variational bounds. Both as functions of the number of documents for KOS.

We have also studied the dependence of approximation accuracies on the number of documents in the corpus. To conduct this experiment we train on 90% of the words in a (growing) subset of the corpus and test on the corresponding 10% left-out words. In figure 2 we show both variational bounds and test set log probabilities as functions of the number of documents J. We observe that, as expected, the variational methods improve as J increases. However, perhaps surprisingly, CVB does not suffer as much as VB for small values of J, even though one might expect that the Gaussian approximation becomes dubious in that regime.

5 Discussion

We have described a collapsed variational Bayesian (CVB) inference algorithm for LDA. The algorithm is easy to implement, computationally efficient and more accurate than standard VB. The central insight of CVB is that instead of assuming parameters to be independent from latent variables, we treat their dependence on the topic variables in an exact fashion. Because the factorization assumptions made by CVB are weaker than those made by VB, the resulting approximation is more accurate.
Computational efficiency is achieved in CVB with a Gaussian approximation, which was found to be so accurate that there is never a need for exact summation. The idea of integrating out parameters before applying variational inference has been independently proposed by [12]. Unfortunately, because they worked in the context of general conjugate-exponential families, the approach cannot be made generally computationally useful. Nevertheless, we believe the insights of CVB can be applied to a wider class of discrete graphical models beyond LDA. Specific examples include various extensions of LDA [4, 13], hidden Markov models with discrete outputs, and mixed-membership models with Dirichlet distributed mixture coefficients [14]. These models all have the property that they consist of discrete random variables with Dirichlet priors on the parameters, which is the property allowing us to use the Gaussian approximation. We are also exploring CVB on an even more general class of models, including mixtures of Gaussians, Dirichlet processes, and hierarchical Dirichlet processes. Over the years a variety of inference algorithms have been proposed based on a combination of {maximize, sample, assume independent, marginalize out} applied to both parameters and latent variables. We conclude by summarizing these algorithms in Table 1, and note that CVB is located in the "marginalize out parameters and assume latent variables are independent" cell.

Parameters \ Latent variables | maximize       | sample         | assume independent | marginalize out
maximize                      | Viterbi EM     | ?              | ME                 | ME
sample                        | stochastic EM  | Gibbs sampling | ?                  | collapsed Gibbs
assume independent            | variational EM | ?              | VB                 | CVB
marginalize out               | EM             | any MCMC       | EP for LDA         | intractable

Table 1: A variety of inference algorithms for graphical models. Rows give the operation applied to the parameters; columns give the operation applied to the latent variables. Note that not every cell is filled in (marked by ?), while some are simply intractable. "ME" is the maximization-expectation algorithm of [15], and "any MCMC" means that we can use any MCMC sampler for the parameters once latent variables have been marginalized out.

A Exact Computation of Expectation Terms in (15)

We can compute the expectation terms in (15) exactly as follows. Consider E_q̃[log(α + n^¬ij_jk·)], which requires computing q̃(n^¬ij_jk·) (the other expectation terms are similarly computed). Note that n^¬ij_jk· = Σ_{i'≠i} 1(z_i'j = k) is a sum of independent Bernoulli variables 1(z_i'j = k), each with mean parameter γ̃_i'jk. Define vectors v_i'jk = [(1 − γ̃_i'jk), γ̃_i'jk]ᵀ, and let v_jk = v_1jk ∗ ··· ∗ v_{n_j··,jk} be the convolution of all v_i'jk. Finally, let v^¬ij_jk be v_jk deconvolved by v_ijk. Then q̃(n^¬ij_jk· = m) will be the (m+1)-st entry in v^¬ij_jk. The expectation E_q̃[log(α + n^¬ij_jk·)] can now be computed explicitly.

This exact implementation requires an impractical O(n²_j··) time to compute E_q̃[log(α + n^¬ij_jk·)]. At the expense of complicating the algorithm implementation, this can be improved by sparsifying the vectors v_jk (setting small entries to zero), as well as by other computational tricks. We propose instead the Gaussian approximation of Section 3.1, which we have found to give extremely accurate results but with minimal implementation complexity and computational cost.

Acknowledgement

YWT was previously at NUS SoC and supported by the Lee Kuan Yew Endowment Fund. MW was supported by ONR under grant no. N00014-06-1-0734 and by NSF under grant no. 0535278.

References

[1] D. Heckerman. A tutorial on learning with Bayesian networks. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer Academic Publishers, 1999.
[2] D. M.
Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3, 2003.
[3] W. Buntine. Variational extensions to EM and multinomial PCA. In ECML, 2002.
[4] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents. In UAI, 2004.
[5] T. L. Griffiths and M. Steyvers. Finding scientific topics. In PNAS, 2004.
[6] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[7] T. P. Minka and J. Lafferty. Expectation propagation for the generative aspect model. In UAI, 2002.
[8] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In UAI, 2004.
[9] M. Opper and O. Winther. From naive mean field theory to the TAP equations. In D. Saad and M. Opper, editors, Advanced Mean Field Methods: Theory and Practice. The MIT Press, 2001.
[10] G. Casella and C. P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81-94, 1996.
[11] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[12] J. Sung, Z. Ghahramani, and S. Choi. Variational Bayesian EM: A second-order approach. Unpublished manuscript, 2005.
[13] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML, 2006.
[14] E. M. Airoldi, D. M. Blei, E. P. Xing, and S. E. Fienberg. Mixed membership stochastic block models for relational data with application to protein-protein interactions. In Proceedings of the International Biometrics Society Annual Meeting, 2006.
[15] M. Welling and K. Kurihara. Bayesian K-means as a "maximization-expectation" algorithm. In SIAM Conference on Data Mining, 2006.
2,330
3,114
A Theory of Retinal Population Coding

Eizaburo Doi
Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
edoi@cnbc.cmu.edu

Michael S. Lewicki
Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
lewicki@cnbc.cmu.edu

Abstract

Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assumed that the goal is to encode the maximal amount of information about the sensory signal. However, the image sampled by retinal photoreceptors is degraded both by the optics of the eye and by the photoreceptor noise. Therefore, de-blurring and de-noising of the retinal signal should be important aspects of retinal coding. Furthermore, the ideal retinal code should be robust to neural noise and make optimal use of all available neurons. Here we present a theoretical framework to derive codes that simultaneously satisfy all of these desiderata. When optimized for natural images, the model yields filters that show strong similarities to retinal ganglion cell (RGC) receptive fields. Importantly, the characteristics of receptive fields vary with retinal eccentricities, where the optical blur and the number of RGCs are significantly different. The proposed model provides a unified account of retinal coding; more generally, it may be viewed as an extension of the Wiener filter with an arbitrary number of noisy units.

1 Introduction

What are the computational goals of the retina? The retina has numerous specialized classes of retinal ganglion cells (RGCs) that are likely to subserve a variety of different tasks [1]. An important class directly subserving visual perception is the midget RGCs (mRGCs), which constitute 70% of RGCs, with an even greater proportion at the fovea [1]. The problem that mRGCs face should be to maximally preserve signal information in spite of a limited representational capacity, which is imposed both by neural noise and by the population size. This problem was recently addressed (although not specifically as a model of mRGCs) in [2], which derived the theoretically optimal linear coding method for a noisy neural population. That model is not appropriate for the mRGCs, however, because it does not take into account the noise in the retinal image (Fig. 1). Before being projected onto the retina, the visual stimulus is distorted by the optics of the eye in a manner that depends on eccentricity [3]. This retinal image is then sampled by cone photoreceptors whose sampling density also varies with eccentricity [1]. Finally, the sampled image is noisier under dimmer illumination conditions [4]. We conjecture that the computational goal of mRGCs is to represent the maximum amount of information about the underlying, non-degraded image signal subject to limited coding precision and neural population size. Here we propose a theoretical model that achieves this goal. It may be viewed as a generalization of both Wiener filtering [5] and robust coding [2]. One significant characteristic of the proposed model is that it can make optimal use of an arbitrary number of neurons in order to preserve the maximum amount of signal information.
This allows the model to predict theoretically optimal representations at any retinal eccentricity, in contrast to the earlier studies [4, 6, 7, 8].

[Figure 1: Simulation of retinal images at different retinal eccentricities. (a) Undistorted image signal. (b) The convolution kernel at the fovea [3] superimposed on the photoreceptor array, indicated by triangles under the x-axis [1]; the panel plots intensity against visual angle [arc min]. (c) The same as in (b) but at 40 degrees of retinal eccentricity.]

2 The model

First let us define the problem (Fig. 2). We assume that the data sampled by the photoreceptors (referred to as the observation) x \in R^N are blurred versions of the underlying image signal s \in R^N with additive white noise \nu \sim N(0, \sigma_\nu^2 I_N),

    x = Hs + \nu    (1)

where H \in R^{N x N} implements the optical blur. To encode the image, we assume that the observation is linearly transformed into an M-dimensional representation. To model limited neural precision, it is assumed that the representation is subject to additive channel noise, \delta \sim N(0, \sigma_\delta^2 I_M). The noisy neural representation is therefore expressed as

    r = W(Hs + \nu) + \delta    (2)

where each row of W \in R^{M x N} corresponds to a receptive field. To evaluate the amount of signal information preserved in the representation, we consider a linear reconstruction \hat{s} = Ar, where A \in R^{N x M}. The residual is given by

    \epsilon = (I_N - AWH)s - AW\nu - A\delta,    (3)

where I_N is the N-dimensional identity matrix, and the mean squared error (MSE) is

    E = tr[\Sigma_s] - 2 tr[AWH\Sigma_s] + tr[AW(H\Sigma_s H^T + \sigma_\nu^2 I_N)W^T A^T] + \sigma_\delta^2 tr[AA^T]    (4)

with E = tr<\epsilon \epsilon^T> by definition, <.> the average over samples, and \Sigma_s the covariance matrix of the image signal s. The problem is to find W and A that minimize E.

To model limited neural capacity, the representation r must have limited SNR. This constraint is equivalent to fixing the variance of each filter output, <(w_j^T x)^2> = \sigma_u^2, where w_j is the j-th row of W (here we assume all neurons have the same capacity). It is expressed in matrix form as

    diag[W \Sigma_x W^T] = \sigma_u^2 1_M    (5)

where \Sigma_x = H\Sigma_s H^T + \sigma_\nu^2 I_N is the covariance of the observation. It can further be simplified to

    diag[V V^T] = 1_M,    (6)
    W = \sigma_u V S_x^{-1} E^T,    (7)

[Figure 2: The model diagram: the signal s passes through the optical blur H and the sensory noise \nu to give the observation x; the encoder W and the channel noise \delta give the representation r; the decoder A gives the reconstruction \hat{s}. If there is no degradation of the image (H = I and \sigma_\nu^2 = 0), the model is reduced to the original robust coding model [2]. If the channel noise is zero as well (\sigma_\delta^2 = 0), it boils down to conventional block coding such as PCA, ICA, or wavelet transforms.]

where S_x = diag(\sqrt{\phi_1 \lambda_1 + \sigma_\nu^2}, ..., \sqrt{\phi_N \lambda_N + \sigma_\nu^2}) (the square roots of \Sigma_x's eigenvalues), \phi_k and \lambda_k are respectively the eigenvalues of H and \Sigma_s, and the columns of E are their common eigenvectors^1. Note that \sqrt{\phi_k} defines the modulation transfer function of the optical blur H, i.e., the attenuation of the amplitude of the signal along the k-th eigenvector. Now, the problem is to find V and A that minimize E.
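Before turning to the optimal solutions, the error expression (4) can be checked numerically. The sketch below (Python/NumPy; this is an illustration rather than the authors' code, and every size, covariance, and noise level is an arbitrary toy choice) simulates the pipeline of Eqns. 1-3 and compares a Monte Carlo estimate of the MSE with Eqn. 4.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 6, 4                       # signal / representation dimensions (toy values)
    s_nu2, s_d2 = 0.3**2, 0.2**2      # sensory and channel noise variances (assumed)

    G = rng.standard_normal((N, N))
    Sigma_s = G @ G.T / N + 0.5 * np.eye(N)        # some signal covariance
    I = np.eye(N)
    H = 0.8 * I + 0.1 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))  # toy blur
    W = rng.standard_normal((M, N))                # arbitrary encoder
    A = rng.standard_normal((N, M))                # arbitrary decoder

    # Analytic MSE, Eqn. 4
    Sigma_x = H @ Sigma_s @ H.T + s_nu2 * I
    E_analytic = (np.trace(Sigma_s) - 2 * np.trace(A @ W @ H @ Sigma_s)
                  + np.trace(A @ W @ Sigma_x @ W.T @ A.T)
                  + s_d2 * np.trace(A @ A.T))

    # Monte Carlo estimate of E = <eps^T eps> over many samples
    T = 200_000
    L = np.linalg.cholesky(Sigma_s)
    s = L @ rng.standard_normal((N, T))
    x = H @ s + np.sqrt(s_nu2) * rng.standard_normal((N, T))   # Eqn. 1
    r = W @ x + np.sqrt(s_d2) * rng.standard_normal((M, T))    # Eqn. 2
    eps = s - A @ r                                            # residual, Eqn. 3
    print(E_analytic, (eps ** 2).sum(axis=0).mean())           # the two should agree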
The optimal A should satisfy \partial E / \partial A = O, which yields

    A = \Sigma_s H^T W^T [W(H\Sigma_s H^T + \sigma_\nu^2 I_N)W^T + \sigma_\delta^2 I_M]^{-1}    (8)
      = (\gamma^2 / \sigma_u) E S_s P [I_N + \gamma^2 V^T V]^{-1} V^T    (9)

where \gamma^2 = \sigma_u^2 / \sigma_\delta^2 (the neural SNR), S_s = diag(\sqrt{\lambda_1}, ..., \sqrt{\lambda_N}), P = diag(\sqrt{\varrho_1}, ..., \sqrt{\varrho_N}), and \varrho_k = \phi_k \lambda_k / (\phi_k \lambda_k + \sigma_\nu^2) (the power ratio between the attenuated signal and that signal plus sensory noise; as we will see below, \varrho_k characterizes the generalized solutions of robust coding, and if there is neither sensory noise nor optical blur, \varrho_k becomes 1, which reduces the solutions of the current model to those of the original robust coding model [2]). This implies that the optimal A is determined once the optimal V is found.

With Eqns. 7 and 9, E becomes

    E = \sum_{k=1}^{N} \lambda_k (1 - \varrho_k) + tr[S_s^2 P^2 (I_N + \gamma^2 V^T V)^{-1}].    (10)

Finally, the problem is reduced to finding the V that minimizes Eqn. 10.

^1 The eigenvectors of \Sigma_s and H are both Fourier basis functions, because we assume that s are natural images [9] and H is a circulant matrix [10].

Solutions for 2-D data

In this section we present the explicit characterization of the optimal solutions for two-dimensional data. It covers under-complete, complete, and over-complete representations, and provides precise insights into the numerical solutions for the high-dimensional image data (Section 3). This is a generalization of the analysis in [2] with the addition of optical blur and additive sensory noise. From Eqn. 6 we can parameterize V with

    V = [ cos\theta_1  sin\theta_1 ; ... ; cos\theta_M  sin\theta_M ]    (11)

where \theta_j \in [0, 2\pi), j = 1, ..., M, which yields

    E = \sum_{k=1}^{2} \lambda_k (1 - \varrho_k) + [ (\varpi_1 + \varpi_2)((M/2)\gamma^2 + 1) - (\gamma^2/2)(\varpi_1 - \varpi_2) Re(Z) ] / [ ((M/2)\gamma^2 + 1)^2 - (1/4)\gamma^4 |Z|^2 ],    (12)

with \varpi_k \equiv \lambda_k \varrho_k and Z \equiv \sum_j (cos 2\theta_j + i sin 2\theta_j). In the following we analyze the cases \varpi_1 = \varpi_2 and \varpi_1 \neq \varpi_2. Without loss of generality we consider \varpi_1 > \varpi_2 in the latter case. (In the previous analysis of robust coding [2], these cases depend only on the ratio between \lambda_1 and \lambda_2, i.e. the isotropy of the data. In the current, general model, they also depend on the isotropy of the optical blur (\phi_1 and \phi_2) and on the variance of the sensory noise (\sigma_\nu^2), and no simple meaning is attached to the individual cases.)

1). If \varpi_1 = \varpi_2 (\equiv \varpi): E in Eqn. 10 becomes

    E = \sum_{k=1}^{2} \lambda_k (1 - \varrho_k) + 2\varpi ((M/2)\gamma^2 + 1) / [ ((M/2)\gamma^2 + 1)^2 - (1/4)\gamma^4 |Z|^2 ].    (13)

Therefore, E is minimized when |Z|^2 is minimized.

1-a). If M = 1 (single neuron case): By definition |Z|^2 = 1, implying that E is constant for any \theta_1,

    E = \lambda_1 (1 - \varrho_1) + \varpi_1 / (\gamma^2 + 1) + \lambda_2 = \lambda_2 (1 - \varrho_2) + \varpi_2 / (\gamma^2 + 1) + \lambda_1,    (14)
    W = \sigma_u ( cos\theta_1  sin\theta_1 ) diag( 1/\sqrt{\phi_1 \lambda_1 + \sigma_\nu^2}, 1/\sqrt{\phi_2 \lambda_2 + \sigma_\nu^2} ) E^T.    (15)

Because there is only one neuron, only one direction in the two-dimensional space can be reconstructed, and Eqn. 15 implies that any direction is equally good. The first equality in Eqn. 14 can be interpreted as the case when W represents the direction along the first eigenvector; consequently, the whole data variance along the second eigenvector, \lambda_2, is left in the error E.

1-b). If M >= 2 (multiple neuron case): There always exists a Z that satisfies |Z| = 0 if M >= 2, with which E is minimized [2]. Accordingly,

    E = \sum_{k=1}^{2} [ \lambda_k (1 - \varrho_k) + \varpi_k / ((M/2)\gamma^2 + 1) ],    (16)
    W = \sigma_u V diag( 1/\sqrt{\phi_1 \lambda_1 + \sigma_\nu^2}, 1/\sqrt{\phi_2 \lambda_2 + \sigma_\nu^2} ) E^T,    (17)

where V is arbitrary as long as it satisfies |Z| = 0. Note that W takes the same form as for M = 1 except that there is more than one neuron. Also, Eqn. 16 shares its second term with Eqn. 14 except that the SNR of the representation, \gamma^2, is multiplied by M/2. This implies that having n times the neurons is equivalent to increasing the representation SNR by a factor of n (this relation generally holds in the multiple neuron cases below).
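The two expressions for the optimal decoder, Eqns. 8 and 9, can be verified numerically when H and \Sigma_s share the eigenvectors E, as assumed above. The following sketch is illustrative only (all eigenvalues are random toy values and the variable names are assumptions); it builds W in the form of Eqn. 7 and checks that the two forms of A coincide.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 5, 7                                   # over-complete example (M > N)
    s_nu2, s_d2, s_u = 0.2, 0.05, 1.0
    gamma2 = s_u**2 / s_d2                        # neural SNR

    E, _ = np.linalg.qr(rng.standard_normal((N, N)))   # shared eigenvectors
    lam = rng.uniform(0.5, 2.0, N)                # eigenvalues of Sigma_s
    phi = rng.uniform(0.2, 1.0, N)                # power attenuation of the blur
    Sigma_s = E @ np.diag(lam) @ E.T
    H = E @ np.diag(np.sqrt(phi)) @ E.T           # sqrt(phi): amplitude attenuation

    V = rng.standard_normal((M, N))
    V /= np.linalg.norm(V, axis=1, keepdims=True) # diag[V V^T] = 1_M, Eqn. 6
    S_x = np.diag(np.sqrt(phi * lam + s_nu2))
    W = s_u * V @ np.linalg.inv(S_x) @ E.T        # Eqn. 7

    Sigma_x = H @ Sigma_s @ H.T + s_nu2 * np.eye(N)
    A8 = Sigma_s @ H.T @ W.T @ np.linalg.inv(
        W @ Sigma_x @ W.T + s_d2 * np.eye(M))     # Eqn. 8

    rho = phi * lam / (phi * lam + s_nu2)         # varrho_k
    A9 = (gamma2 / s_u) * E @ np.diag(np.sqrt(lam * rho)) @ np.linalg.inv(
        np.eye(N) + gamma2 * V.T @ V) @ V.T       # Eqn. 9; S_s P as one diagonal
    assert np.allclose(A8, A9)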
2). If \varpi_1 > \varpi_2: Eqn. 12 is minimized when Z = Re(Z) >= 0 for a fixed value of |Z|^2. Therefore, the problem is reduced to seeking a real value Z = y \in [0, M] that minimizes

    E = \sum_{k=1}^{2} \lambda_k (1 - \varrho_k) + [ (\varpi_1 + \varpi_2)((M/2)\gamma^2 + 1) - (\gamma^2/2)(\varpi_1 - \varpi_2) y ] / [ ((M/2)\gamma^2 + 1)^2 - (1/4)\gamma^4 y^2 ].    (18)

2-a). If M = 1 (single neuron case): Z = Re(Z) holds iff \theta_1 = 0. Accordingly,

    E = \lambda_1 (1 - \varrho_1) + \varpi_1 / (\gamma^2 + 1) + \lambda_2,    (19)
    W = ( \sigma_u / \sqrt{\phi_1 \lambda_1 + \sigma_\nu^2} ) e_1^T.    (20)

These take the same form as in the case \varpi_1 = \varpi_2, M = 1 (Eqns. 14-15), except that the direction of the representation is specified along the first eigenvector e_1, indicating that all the representational resources (namely, one neuron) are devoted to the direction of largest data variance.

2-b). If M >= 2 (multiple neuron case): From Eqn. 18, the necessary condition for the minimum, dE/dy = 0, yields

    [ ((\varpi_1 - \varpi_2)/(\varpi_1 + \varpi_2)) (M + 2/\gamma^2) - y ] [ ((\varpi_1 + \varpi_2)/(\varpi_1 - \varpi_2)) (M + 2/\gamma^2) - y ] = 0.    (21)

The existence of a root y in the domain [0, M] depends on how \gamma^2 compares to the following quantity, a generalized form of the critical point of neural precision [2]:

    \gamma_c^2 = (1/M) [ \varpi_1 / \varpi_2 - 1 ].    (22)

2-b-i). If \gamma^2 < \gamma_c^2: dE/dy = 0 does not have a root within the domain. Since dE/dy is always negative, E is minimized at y = M. Accordingly,

    E = \lambda_1 (1 - \varrho_1) + \varpi_1 / (M\gamma^2 + 1) + \lambda_2,    (23)
    W = ( \sigma_u / \sqrt{\phi_1 \lambda_1 + \sigma_\nu^2} ) 1_M e_1^T.    (24)

These solutions are the same as for M = 1 (Eqns. 19-20), except that the neural SNR \gamma^2 is multiplied by M to yield a smaller MSE.

2-b-ii). If \gamma^2 >= \gamma_c^2: Eqn. 21 has a root within [0, M],

    y = ((\varpi_1 - \varpi_2)/(\varpi_1 + \varpi_2)) (2/\gamma^2 + M),    (25)

with y = M if \gamma^2 = \gamma_c^2. The optimal solutions are

    E = \sum_{k=1}^{2} \lambda_k (1 - \varrho_k) + ( (\sqrt{\varpi_1} + \sqrt{\varpi_2})^2 / 2 ) / ((M/2)\gamma^2 + 1),    (26)
    W = \sigma_u V diag( 1/\sqrt{\phi_1 \lambda_1 + \sigma_\nu^2}, 1/\sqrt{\phi_2 \lambda_2 + \sigma_\nu^2} ) E^T,    (27)

where V is arbitrary up to satisfying Eqn. 25.

In Fig. 3 we illustrate some examples of explicit solutions for 2-D data with two neurons. The general strategy of the proposed model is to represent the principal axis of the signal s more accurately as the signal is more degraded (by optical blur and/or sensory noise). Specifically, the two neurons come to represent the identical dimension when the degradation is sufficiently large.

[Figure 3: Sensory noise changes the optimal linear filter. The gray (outside) and blue (inside) contours show the variance of the target and the reconstructed signal, respectively, and the red (thick) bars show the optimal linear filters when there are two neurons. The SNR of the observation is varied from 20 to -10 dB (column-wise). The bottom row is the case where the power of the signal's minor component is attenuated as in the optical blur (i.e., low-pass filtering): (\phi_1, \phi_2) = (1, 0.1); the top row is without the blur: (\phi_1, \phi_2) = (1, 1). The neural SNR is fixed at 10 dB.]
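The case analysis above can be turned into a small procedure that returns the optimal two-dimensional filters. The sketch below is one possible reading of Eqns. 22-27 with hypothetical argument names; for the construction of V it pairs neurons at angles +\theta and -\theta so that Im(Z) = 0, which assumes an even M.

    import numpy as np

    def optimal_filters_2d(lam, phi, s_nu2, s_d2, s_u, M):
        """Optimal W for 2-D signals (Eqns. 22-27); assumes varpi_1 >= varpi_2
        and an even number of neurons M."""
        E = np.eye(2)                             # take the common eigenbasis as standard
        gamma2 = s_u**2 / s_d2
        rho = phi * lam / (phi * lam + s_nu2)
        omega = lam * rho                         # varpi_k
        gamma_c2 = (omega[0] / omega[1] - 1.0) / M          # Eqn. 22
        if gamma2 < gamma_c2:
            # below the critical precision: every neuron encodes e_1 (Eqn. 24)
            w1 = s_u / np.sqrt(phi[0] * lam[0] + s_nu2)
            return np.outer(np.full(M, w1), E[0])
        # above the critical precision: choose angles so that Z = y (Eqn. 25)
        y = (omega[0] - omega[1]) / (omega[0] + omega[1]) * (2.0 / gamma2 + M)
        theta = 0.5 * np.arccos(np.clip(y / M, -1.0, 1.0))
        signs = np.where(np.arange(M) % 2 == 0, 1.0, -1.0)  # +theta / -theta pairs
        V = np.column_stack([np.cos(signs * theta), np.sin(signs * theta)])
        S_x_inv = np.diag(1.0 / np.sqrt(phi * lam + s_nu2))
        return s_u * V @ S_x_inv @ E.T            # Eqn. 27

    # Example in the spirit of Fig. 3: strong blur on the minor component.
    W = optimal_filters_2d(np.array([1.0, 0.5]), np.array([1.0, 0.1]),
                           s_nu2=0.1, s_d2=0.1, s_u=1.0, M=4)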
3 Optimal receptive field populations

We applied the proposed model to a natural images data set [11] to obtain the theoretically optimal population coding for mRGCs. The optimal solutions were derived under the following biological constraints on the observation, i.e. the photoreceptor response x (Fig. 2). To model the retinal images at different retinal eccentricities, we used modulation transfer functions of the human eye [3] and cone photoreceptor densities of the human retina [1] (Fig. 1). The retinal image is further corrupted by additive Gaussian noise to model the photon transduction noise, by which the SNR of the observation becomes smaller under dimmer illumination [4]. This yields the observation at different retinal eccentricities.

In the following, we present the optimal solutions for the fovea (where the most accurate visual information is represented, while the receptive field characteristics are difficult to measure experimentally) and at 40 degrees retinal eccentricity (where we can compare the model to recent physiological measurements in the primate retina [12]). The information capacity of neural representations is limited by both the number of neurons and the precision of neural codes. The ratio of cone photoreceptors to mRGCs in the human retina is 1:2 at the fovea and 23:2 at 40 degrees [13]. We did not model neural rectification (separate on and off channels) and thus assumed effective cell ratios of 1:1 and 23:1, respectively. We also fixed the neural SNR at 10 dB, equivalent to assuming about 1.7 bits of coding precision, as in real neurons [14].

The optimal W can be derived by gradient descent on E, and A can then be derived from W using Eqn. 8. As explained in Section 2, the solution must satisfy the variance constraint (Eqn. 6). We formulate this as a constrained optimization problem [15]. The update rule for W is given by

    \Delta W \propto -A^T (AWH - I_N) \Sigma_s H^T - \sigma_\delta^2 A^T A W - \zeta Diag( ln[diag(W \Sigma_x W^T) / \sigma_u^2] / diag(W \Sigma_x W^T) ) W \Sigma_x,    (28)

where \zeta is a positive constant that controls the strength of the variance constraint. Our initial results indicated that the optimal solutions are not unique and that these solutions are equivalent in terms of MSE. We therefore imposed an additional neural resource constraint that penalizes the spatial extent of a receptive field: the constraint for the k-th neuron is defined by \sum_j |W_kj| (\kappa d_kj^2 + 1), where d_kj is the spatial distance between the j-th weight and the center of mass of all weights, and \kappa is a positive constant defining the strength of the spatial constraint. This assumption is consistent with the spatially restricted computation in the retina. If \kappa = 0, it imposes sparse weights [16], though not necessarily spatially localized ones. In our simulations we fixed \kappa = 0.5.

For the fovea, we examined 15x15 pixel image patches sampled from a large set of natural images, where each pixel corresponds to a cone photoreceptor. Since the cell ratio is assumed to be 1:1, there were 225 model neurons in the population. As shown in Fig. 4, the optimal filters show a concentric center-surround organization that is well fit by a difference-of-Gaussians function (a major characteristic of mRGCs). The precise organization of the model receptive field changes with the SNR of the observation: as the SNR decreases, the surround inhibition gradually disappears and the center becomes larger, which serves to remove sensory noise by averaging. As a population, this yields a significant overlap among adjacent receptive fields. In terms of spatial frequency, this change corresponds to a shift from band-pass to low-pass filtering, which is consistent with psychophysical measurements of the human and the macaque [17].

[Figure 4: The model receptive fields at the fovea under different SNRs of the observation (20, 10, 0, and -10 dB). (a) A cross-section of the two-dimensional receptive field. (b) Six examples of receptive fields. (c) The tiling of a population of receptive fields in the visual field. The ellipses show the contour of receptive fields at half the maximum; one pair of adjacent filters is highlighted for clarity, and the scale bar indicates an interval of three photoreceptors. (d) Spatial-frequency profiles (modulation transfer functions) of the receptive fields at different SNRs.]
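As a minimal sketch of the constrained gradient descent above (with assumed variable names; in a full optimization, A would be recomputed from Eqn. 8 between steps and the spatial penalty added separately):

    import numpy as np

    def update_W(W, A, H, Sigma_s, s_nu2, s_d2, s_u2, zeta, step=1e-3):
        """One gradient step on W following Eqn. 28 (a sketch, not the
        authors' implementation)."""
        N = H.shape[0]
        Sigma_x = H @ Sigma_s @ H.T + s_nu2 * np.eye(N)
        v = np.diag(W @ Sigma_x @ W.T)            # current output variances
        grad = (-A.T @ (A @ W @ H - np.eye(N)) @ Sigma_s @ H.T
                - s_d2 * A.T @ A @ W
                - zeta * np.diag(np.log(v / s_u2) / v) @ W @ Sigma_x)
        return W + step * grad

The log-ratio term vanishes exactly when every output variance equals \sigma_u^2, so the constraint (5) is enforced softly during the descent.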
For 40 degrees retinal eccentricity, we examined a 35x35 photoreceptor array projected onto 53 model neurons (so that the cell ratio is 23:1). The general trend of the results is the same as at the fovea, except that the receptive fields are much larger. This allows the fewer neurons in the population to completely tile the visual field. Furthermore, the change of the receptive field with the sensory noise level is not as significant as that predicted for the fovea, suggesting that the SNR is a less significant factor when the number of neurons is severely limited. We also note that the elliptical shape of the extent of the receptive fields matches experimental observations [12].

[Figure 5: The theoretically derived receptive fields for 40 degrees of retinal eccentricity, shown for observation SNRs of 20 and -10 dB. Captions as in Fig. 4.]

Finally, we demonstrate the performance of de-blurring, de-noising, and information preservation by these receptive fields (Fig. 6). The original image is well recovered in spite of both the noisy representation (10% of the code's variation is noise because of the 10 dB precision) and the noisy, degraded observation. Note that the 40 degrees eccentricity is subject to an additional, significant dimensionality reduction, which is why the reconstruction error (e.g., 34.8% at 20 dB) can be greater than the distortion in the observation (30.5%).

[Figure 6: Reconstruction example. For both the fovea and 40 degrees retinal eccentricity, two sensory noise conditions are shown (20 and -10 dB). The percentages indicate the average distortion in the observation and the reconstruction error, respectively, over 60,000 samples. The blocking effect is caused by the implementation of the optical blur on each image patch using a matrix H instead of convolving the whole image.]

4 Discussion

The proposed model is a generalization of the robust coding model [2] and allows a complete characterization of the optimal representation as a function of both image degradation (optical blur and additive sensory noise) and limited neural capacity (neural precision and population size). If there is no sensory noise (\sigma_\nu^2 = 0) and no optical blur (H = I_N), then \varrho_k = 1 for all k, which reduces all the optimal solutions above to those reported in [2].

The proposed model may also be viewed as a generalization of the Wiener filter: if there is no channel noise (\sigma_\delta^2 = 0) and the cell ratio is 1:1, then, assuming A = I_N without loss of generality, the problem is reformulated as finding the W \in R^{N x N} that provides the best estimate of the original signal, \hat{s} = W(Hs + \nu), in terms of the MSE. The optimal solution is given by the Wiener filter:

    W = \Sigma_s H^T [H \Sigma_s H^T + \sigma_\nu^2 I_N]^{-1} = E diag( \sqrt{\phi_1} \lambda_1 / (\phi_1 \lambda_1 + \sigma_\nu^2), ..., \sqrt{\phi_N} \lambda_N / (\phi_N \lambda_N + \sigma_\nu^2) ) E^T,    (29)
    E = tr[\Sigma_s] - tr[WH\Sigma_s] = \sum_{k=1}^{N} \lambda_k (1 - \varrho_k)    (30)

(note that the diagonal matrix in Eqn. 29 corresponds to the Wiener filter formula in the frequency domain [5]). This also implies that the Wiener filter is optimal only in a limiting case of our setting. Here, we have treated the model primarily as a theory of retinal coding, but its generality would allow it to be applied to a wide range of problems in signal processing.
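The reduction to the Wiener filter is easy to check numerically. The sketch below (toy eigenvalues; shared eigenbasis as assumed throughout) verifies that the matrix and diagonal forms of Eqn. 29 agree and that the residual error matches Eqn. 30.

    import numpy as np

    rng = np.random.default_rng(2)
    N, s_nu2 = 6, 0.1
    E, _ = np.linalg.qr(rng.standard_normal((N, N)))
    lam = rng.uniform(0.5, 2.0, N)               # eigenvalues of Sigma_s
    phi = rng.uniform(0.2, 1.0, N)               # power attenuation of H
    Sigma_s = E @ np.diag(lam) @ E.T
    H = E @ np.diag(np.sqrt(phi)) @ E.T

    # Eqn. 29: matrix form vs. the diagonal (frequency-domain) form
    W_mat = Sigma_s @ H.T @ np.linalg.inv(H @ Sigma_s @ H.T + s_nu2 * np.eye(N))
    W_diag = E @ np.diag(np.sqrt(phi) * lam / (phi * lam + s_nu2)) @ E.T
    assert np.allclose(W_mat, W_diag)

    # Eqn. 30: residual MSE
    rho = phi * lam / (phi * lam + s_nu2)
    err = np.trace(Sigma_s) - np.trace(W_mat @ H @ Sigma_s)
    assert np.isclose(err, np.sum(lam * (1.0 - rho)))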
We should also note several limitations. The model assumes Gaussian signal structure; modeling non-Gaussian signal distributions might account for coding efficiency constraints on the retinal population. The model is linear, but the framework allows for the incorporation of non-linear encoding and decoding methods, at the expense of analytic tractability.

There have been earlier approaches to theoretically characterizing the retinal code [4, 6, 7, 8]. Our approach differs from these in several respects. First, it is not restricted to the so-called complete representation (M = N) and can predict properties of mRGCs at any retinal eccentricity. Second, we do not assume a single, translation-invariant filter and can derive the optimal receptive fields for a neural population. Third, we accurately model optical blur, retinal sampling, cell ratio, and neural precision. Finally, we assumed that, as in [4, 8], the objective of retinal coding is to form the neural code that yields the minimum MSE with linear decoding, while others assumed it to be the neural code that maximally preserves information about the signal [6, 7]. We do not know a priori which objective is appropriate for retinal coding; as suggested earlier [8], this issue could be resolved by comparing the different theoretical predictions to physiological data.

References
[1] R. W. Rodieck. The First Steps in Seeing. Sinauer, MA, 1998.
[2] E. Doi, D. C. Balcan, and M. S. Lewicki. A theoretical analysis of robust coding over noisy overcomplete channels. In Advances in Neural Information Processing Systems, volume 18. MIT Press, 2006.
[3] R. Navarro, P. Artal, and D. R. Williams. Modulation transfer of the human eye as a function of retinal eccentricity. Journal of the Optical Society of America A, 10:201-212, 1993.
[4] M. V. Srinivasan, S. B. Laughlin, and A. Dubs. Predictive coding: a fresh view of inhibition in the retina. Proc. R. Soc. Lond. B, 216:427-459, 1982.
[5] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice Hall, 2nd edition, 2002.
[6] J. J. Atick and A. N. Redlich. Towards a theory of early visual processing. Neural Computation, 2:308-320, 1990.
[7] J. H. van Hateren. Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. J. Comp. Physiol. A, 171:157-170, 1992.
[8] D. L. Ruderman. Designing receptive fields for highest fidelity. Network, 5:147-155, 1994.
[9] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4:2379-2394, 1987.
[10] R. M. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2:155-239, 2006.
[11] E. Doi, T. Inui, T.-W. Lee, T. Wachtler, and T. J. Sejnowski. Spatiochromatic receptive field properties derived from information-theoretic analyses of cone mosaic responses to natural scenes. Neural Computation, 15:397-417, 2003.
[12] E. S. Frechette, A. Sher, M. I. Grivich, D. Petrusca, A. M. Litke, and E. J. Chichilnisky. Fidelity of the ensemble code for visual motion in primate retina. Journal of Neurophysiology, 94:119-135, 2005.
[13] C. A. Curcio and K. A. Allen. Topography of ganglion cells in human retina. Journal of Comparative Neurology, 300:5-25, 1990.
[14] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience, 2:947-957, 1999.
[15] E. Doi and M. S. Lewicki. Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in Neural Information Processing Systems, volume 17.
MIT Press, 2005.
[16] B. T. Vincent and R. J. Baddeley. Synaptic energy efficiency in retinal processing. Vision Research, 43:1283-1290, 2003.
[17] R. L. De Valois, H. Morgan, and D. M. Snodderly. Psychophysical studies of monkey vision - III. Spatial luminance contrast sensitivity tests of macaque and human observers. Vision Research, 14:75-81, 1974.
A Local Learning Approach for Clustering

Mingrui Wu, Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
{mingrui.wu, bernhard.schoelkopf}@tuebingen.mpg.de

Abstract

We present a local learning approach for clustering. The basic idea is that a good clustering result should have the property that the cluster label of each data point can be well predicted based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated such that its solution has the above property. Relaxation and eigen-decomposition are applied to solve this optimization problem. We also briefly investigate the parameter selection issue and provide a simple parameter selection method for the proposed algorithm. Experimental results are provided to validate the effectiveness of the proposed approach.

1 Introduction

In the multi-class clustering problem, we are given n data points, x_1, ..., x_n, and a positive integer c. The goal is to partition the given data x_i (1 <= i <= n) into c clusters, such that different clusters are in some sense "distinct" from each other. Here x_i \in X \subset R^d is an input data point and X is the input space.

Clustering has been widely applied for data analysis tasks. It identifies groups of data such that data in the same group are similar to each other, while data in different groups are dissimilar. Many clustering algorithms have been proposed, including the traditional k-means algorithm and the currently very popular spectral clustering approach [3, 10]. Recently the spectral clustering approach has attracted increasing attention due to its promising performance and easy implementation. In spectral clustering, the eigenvectors of a matrix are used to reveal the cluster structure in the data. In this paper, we propose a clustering method that also has this characteristic, but is based on the local learning idea: the cluster label of each data point should be well estimated based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated whose solution can satisfy this property. Relaxation and eigen-decomposition are applied to solve this problem. As will be seen later, the proposed algorithm is also easy to implement, while it shows better performance than the spectral clustering approach in the experiments.

The local learning idea has already been successfully applied in supervised learning problems [1]. This motivates us to incorporate it into clustering, an important unsupervised learning problem. Adapting valuable supervised learning ideas for unsupervised learning problems can be fruitful. For example, in [9] the idea of large margin, which has proved effective in supervised learning, is applied to the clustering problem and good results are obtained.

The remainder of this paper is organized as follows. In section 2, we specify some notation that will be used in later sections. The details of our local learning based clustering algorithm are presented in section 3. Experimental results are then provided in section 4, where we also briefly investigate the parameter selection issue for the proposed algorithm. Finally we conclude the paper in the last section.

2 Notations

In the following, "neighboring points" or "neighbors" of x_i simply refer to the nearest neighbors of x_i according to some distance metric.
n:       the total number of data points.
c:       the number of clusters to be obtained.
C_l:     the set of points contained in the l-th cluster, 1 <= l <= c.
N_i:     the set of neighboring points of x_i, 1 <= i <= n, not including x_i itself.
n_i:     |N_i|, i.e. the number of neighboring points of x_i.
Diag(M): the diagonal matrix with the same size and the same diagonal elements as M, where M is an arbitrary square matrix.

3 Clustering via Local Learning

3.1 Local Learning in Supervised Learning

In supervised learning algorithms, a model is trained with all the labeled training data and is then used to predict the labels of unseen test data. These algorithms can be called global learning algorithms, as the whole training dataset is used for training. In contrast, in local learning algorithms [1], for a given test data point, a model is built only with its neighboring training data, and the label of the given test point is then predicted by this locally learned model. It has been reported that local learning algorithms often outperform global ones [1], as the local models are trained only with the points that are related to the particular test data. And in [8], it is proposed that locality is a crucial parameter that can be used for capacity control, in addition to other capacity measures such as the VC dimension.

3.2 Representation of Clustering Results

The procedure of our clustering approach largely follows that of the clustering algorithms proposed in [2, 10]. We also use a Partition Matrix (PM) P = [p_il] \in {0, 1}^{n x c} to represent a clustering scheme: p_il = 1 if x_i (1 <= i <= n) is assigned to cluster C_l (1 <= l <= c), otherwise p_il = 0. So in each row of P there is one and only one element that equals 1, and all the others equal 0.

As in [2, 10], instead of computing the PM directly to cluster the given data, we compute a Scaled Partition Matrix (SPM) F defined by F = P(P^T P)^{-1/2}. (The reason for this will be given later.) As P^T P is diagonal, the l-th (1 <= l <= c) column of F is just the l-th column of P multiplied by 1/\sqrt{|C_l|}. Clearly we have

    F^T F = (P^T P)^{-1/2} P^T P (P^T P)^{-1/2} = I    (1)

where I is the unit matrix. Given an SPM F, we can easily restore the corresponding PM P with the mapping P(.) defined as

    P = P(F) = Diag(F F^T)^{-1/2} F    (2)

In the following, we will also express F as F = [f^1, ..., f^c] \in R^{n x c}, where f^l = [f_1l, ..., f_nl]^T \in R^n, 1 <= l <= c, is the l-th column of F.
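A small sketch (Python/NumPy, not from the paper; it assumes every cluster is non-empty) of the PM/SPM construction and of the properties (1) and (2):

    import numpy as np

    def scaled_partition(labels, c):
        """Build the partition matrix P and the SPM F = P (P^T P)^{-1/2}
        from integer cluster labels in {0, ..., c-1}."""
        n = len(labels)
        P = np.zeros((n, c))
        P[np.arange(n), labels] = 1.0
        F = P / np.sqrt(P.sum(axis=0))   # column l scaled by 1/sqrt(|C_l|)
        return P, F

    labels = np.array([0, 0, 1, 2, 1, 2, 2])
    P, F = scaled_partition(labels, 3)
    assert np.allclose(F.T @ F, np.eye(3))              # property (1)
    P_back = np.diag(np.diag(F @ F.T) ** -0.5) @ F      # mapping (2)
    assert np.allclose(P_back, P)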
3.3 Basic Idea

The good performance of local learning methods indicates that the label of a data point can be well estimated based on its neighbors. Hence, in order to find a good SPM F (or equivalently a good clustering result), we propose to solve the following optimization problem:

    min_{F \in R^{n x c}}  \sum_{l=1}^{c} \sum_{i=1}^{n} (f_il - o_i^l(x_i))^2 = \sum_{l=1}^{c} ||f^l - o^l||^2    (3)
    subject to  F is a scaled partition matrix    (4)

where o_i^l(.) denotes the output function of a Kernel Machine (KM), trained with some supervised kernel learning algorithm [5] on the training data {(x_j, f_jl)}_{x_j \in N_i}, where f_jl is used as the label of x_j for training this KM. In (3), o^l = [o_1^l(x_1), ..., o_n^l(x_n)]^T \in R^n. Details on how to compute o_i^l(x_i) will be given later. For the function o_i^l(.), the superscript l indicates that it is for the l-th cluster, and the subscript i means that the KM is trained with the neighbors of x_i. Hence, apart from x_i, the training data {(x_j, f_jl)}_{x_j \in N_i} also influence the value of o_i^l(x_i). Note that the f_jl (x_j \in N_i) are also variables of the problem (3)-(4).

To explain the idea behind problem (3)-(4), let us consider the following problem:

Problem 1. For a data point x_i and a cluster C_l, given the values of f_jl at x_j \in N_i, what should be the proper value of f_il at x_i?

This problem can be solved by supervised learning. In particular, we can build a KM with the training data {(x_j, f_jl)}_{x_j \in N_i}. As mentioned before, let o_i^l(.) denote the output function of this locally learned KM; then the good performance of local learning methods implies that o_i^l(x_i) is probably a good guess of f_il, i.e. the proper f_il should be similar to o_i^l(x_i).

Therefore, a good SPM F should have the following property: for any x_i (1 <= i <= n) and any cluster C_l (1 <= l <= c), the value of f_il can be well estimated based on the neighbors of x_i. That is, f_il should be similar to the output of the KM trained locally with the data {(x_j, f_jl)}_{x_j \in N_i}. This suggests that in order to find a good SPM F, we can solve the optimization problem (3)-(4).

We can also explain our approach intuitively as follows. A good clustering method will put the data into well separated clusters. This implies that it is easy to predict the cluster membership of a point based on its neighbors. If, on the other hand, a cluster is split in the middle, then there will be points at the boundary for which it is hard to predict which cluster they belong to. So minimizing the objective function (3) favors clustering schemes that do not split the same group of data into different clusters.

Moreover, it is very difficult to construct local clustering algorithms in the same way as for supervised learning. In [1], a local learning algorithm is obtained by running a standard supervised algorithm on a local training set. This does not transfer to clustering. Rather than simply applying a given clustering algorithm locally and facing the difficulty of combining the local solutions into a global one, problem (3)-(4) seeks a global solution with the property that, locally for each point, its cluster assignment looks like the solution that we would obtain by local learning if we knew the cluster assignments of its neighbors.

3.4 Computing o_i^l(x_i)

Having explained the basic idea, we now have to make the problem (3)-(4) more specific in order to build a concrete clustering algorithm. So we consider, based on x_i and {(x_j, f_jl)}_{x_j \in N_i}, how to compute o_i^l(x_i) with kernel learning algorithms. Applying many kernel learning algorithms to {(x_j, f_jl)}_{x_j \in N_i} will result in a KM according to which o_i^l(x_i) can be calculated as

    o_i^l(x_i) = \sum_{x_j \in N_i} \alpha_ij^l K(x_i, x_j)    (5)

where K : X x X -> R is a positive definite kernel function [5] and the \alpha_ij^l are the expansion coefficients. In general, any kernel learning algorithm can be applied to compute the coefficients \alpha_ij^l. Here we choose the one that makes problem (3)-(4) easy to solve. To this end, we adopt the Kernel Ridge Regression (KRR) algorithm [6], with which we can obtain an analytic expression for o_i^l(x_i) based on {(x_j, f_jl)}_{x_j \in N_i}. Thus for each x_i, we need to solve the following KRR training problem:

    min_{\beta_i^l \in R^{n_i}}  \lambda (\beta_i^l)^T K_i \beta_i^l + ||K_i \beta_i^l - f_i^l||^2    (6)

where \beta_i^l \in R^{n_i} is the vector of the expansion coefficients, i.e. \beta_i^l = [\alpha_ij^l] for x_j \in N_i, \lambda > 0 is the regularization parameter, f_i^l \in R^{n_i} denotes the vector [f_jl] for x_j \in N_i, and K_i \in R^{n_i x n_i} is the kernel matrix over x_j \in N_i, namely K_i = [K(x_u, x_v)] for x_u, x_v \in N_i.

Solving problem (6) leads to \beta_i^l = (K_i + \lambda I)^{-1} f_i^l. Substituting this into (5), we have

    o_i^l(x_i) = k_i^T (K_i + \lambda I)^{-1} f_i^l    (7)

where k_i \in R^{n_i} denotes the vector [K(x_i, x_j)]^T for x_j \in N_i.
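For illustration, the local KRR output of Eqn. 7 can be computed as follows (a sketch with assumed helper names; the Gaussian kernel and all toy values are arbitrary choices):

    import numpy as np

    def local_krr_output(K, f_l, i, neighbors, lam=1.0):
        """o_i^l(x_i) from the KM trained on the neighbors of x_i (Eqn. 7).
        K: full kernel matrix; neighbors: indices of N_i (excluding i)."""
        K_i = K[np.ix_(neighbors, neighbors)]          # kernel matrix over N_i
        k_i = K[i, neighbors]                          # vector k_i
        alpha_i = np.linalg.solve(K_i + lam * np.eye(len(neighbors)), k_i)
        return alpha_i @ f_l[neighbors]                # Eqn. 7

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 2))
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2)                                    # Gaussian kernel, sigma = 1
    f_l = rng.random(20)                               # a current column of F
    print(local_krr_output(K, f_l, i=0, neighbors=np.argsort(D2[0])[1:6]))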
Equation (7) can be written as a linear equation:

    o_i^l(x_i) = \alpha_i^T f_i^l    (8)

where \alpha_i \in R^{n_i} is computed as

    \alpha_i^T = k_i^T (K_i + \lambda I)^{-1}    (9)

It can be seen that \alpha_i is independent of f_i^l and of the cluster index l, and that it differs for different x_i. Note that f_i^l is a sub-vector of f^l, so equation (8) can be written in the compact form

    o^l = A f^l    (10)

where o^l and f^l are the same as in (3), while the matrix A = [a_ij] \in R^{n x n} is constructed as follows: for all x_i and x_j, 1 <= i, j <= n, if x_j \in N_i, then a_ij equals the corresponding element of \alpha_i in (9); otherwise a_ij equals 0. Like \alpha_i, the matrix A is also independent of f^l and the cluster index l. Substituting (10) into (3) results in a more specific optimization problem:

    min_{F \in R^{n x c}}  \sum_{l=1}^{c} ||f^l - A f^l||^2 = \sum_{l=1}^{c} (f^l)^T T f^l = trace(F^T T F)    (11)
    subject to  F is a scaled partition matrix    (12)

where

    T = (I - A)^T (I - A)    (13)

Thus, based on the KRR algorithm, we have transformed the objective function (3) into the quadratic function (11).

3.5 Relaxation

Following the method in [2, 10], we relax F into the continuous domain and combine the property (1) into the problem (11)-(12), so as to turn it into a tractable continuous optimization problem:

    min_{F \in R^{n x c}}  trace(F^T T F)    (14)
    subject to  F^T F = I    (15)

Let F* \in R^{n x c} denote the matrix whose columns consist of the c eigenvectors corresponding to the c smallest eigenvalues of the symmetric matrix T. Then it is known that the global optimum of the above problem is not unique, but rather a subspace spanned by the columns of F* through orthonormal matrices [10]:

    { F* R : R \in R^{c x c}, R^T R = I }    (16)

Now we can see that working on the SPM F allows us to make use of the property (1) to construct the tractable continuous optimization problem (14)-(15), while working directly on the PM P does not have this advantage.

3.6 Discretization: Obtaining the Final Clustering Result

According to [10], to get the final clustering result, we need to find a true SPM F which is close to the subspace (16). To this end, we apply the mapping (2) to F* to obtain a matrix P* = P(F*). It can easily be proved that for any orthogonal matrix R \in R^{c x c}, we have P(F* R) = P* R. This equation implies that if there exists an orthogonal matrix R such that F* R is close to a true SPM F, then P* R should also be near the corresponding discrete PM P. To find such an orthogonal matrix R and the discrete PM P, we can solve the following optimization problem [10]:

    min_{P \in R^{n x c}, R \in R^{c x c}}  ||P - P* R||^2    (17)
    subject to  P \in {0, 1}^{n x c},  P 1_c = 1_n    (18)
                R^T R = I    (19)

where 1_c and 1_n denote the c-dimensional and the n-dimensional vectors of all 1's, respectively. Details on how to find a local minimum of the above problem can be found in [10]. In [3], a method using the k-means algorithm is proposed to find a discrete PM P based on P*. In this paper, we adopt the approach in [10] to get the final clustering result.
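Putting the pieces together, a compact sketch of the whole pipeline might look as follows. This is not the authors' implementation: it uses a dense A (fine for small n), the Gaussian kernel of Eq. (23) below, and, for the final discretization, k-means on the rows of F* (as in [3]) instead of the rotation search of [10]. All helper names are our own.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def llca(X, c, k=10, lam=1.0, sigma=None):
        """Local Learning based Clustering sketch: returns labels, the
        discrete SPM F, and the matrix T of Eqn. 13."""
        n = X.shape[0]
        sq = np.sum(X ** 2, axis=1)
        D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
        if sigma is None:
            sigma = np.mean(np.sqrt(sq)) ** 2          # sigma_0^2 as a default
        K = np.exp(-D2 / sigma)                        # Gaussian kernel
        A = np.zeros((n, n))
        for i in range(n):                             # alpha_i row by row, Eqn. 9
            nb = np.argsort(D2[i])[1:k + 1]            # k nearest neighbors of x_i
            A[i, nb] = np.linalg.solve(K[np.ix_(nb, nb)] + lam * np.eye(k), K[i, nb])
        # Bottom-c eigenvectors of T = right singular vectors of I - A.
        _, _, Vt = np.linalg.svd(np.eye(n) - A)
        F_star = Vt[-c:].T                             # solves (14)-(15)
        _, labels = kmeans2(F_star, c, minit='points') # simplified discretization
        P = np.zeros((n, c))
        P[np.arange(n), labels] = 1.0
        F = P / np.sqrt(np.maximum(P.sum(axis=0), 1.0))  # SPM; guard empty clusters
        T = (np.eye(n) - A).T @ (np.eye(n) - A)
        return labels, F, T

For large n, one would instead store I - A as a sparse matrix with k + 1 nonzero elements per row and compute only the required singular vectors, as discussed in the next section.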
3.7 Comparison with Spectral Clustering

Our Local Learning based Clustering Algorithm (LLCA) also uses the eigenvectors of a matrix (T in (13)) to reveal the cluster structure in the data, and it can therefore be regarded as belonging to the category of spectral clustering approaches. The matrix whose eigenvectors are used for clustering plays the key role in spectral clustering. In LLCA, this matrix is computed based on the local learning idea: a clustering result is evaluated based on whether the label of each point can be well estimated based on its neighbors with a well established supervised learning algorithm. This is different from the graph partitioning based spectral clustering method. As will be seen later, LLCA and spectral clustering show quite different performance in the experiments.

LLCA needs one additional step: computing the matrix T in the objective function (14). The remaining steps, i.e. computing the eigenvectors of T and the discretization (cf. section 3.6), are the same as in the spectral clustering approach. According to equation (13), to compute T we need to compute the matrix A in (10), which in turn requires calculating \alpha_i in (9) for each x_i. This is very easy to implement, and A can be computed with time complexity O(\sum_{i=1}^{n} n_i^3). In practice, just as in the spectral clustering method, the number of neighbors n_i is usually set to a fixed small value k for all x_i in LLCA. In this case, A can be computed efficiently with complexity O(n k^3), which scales linearly with the number of data points n. So in this case the main calculation is obtaining the eigenvectors of T. Furthermore, according to (13), the eigenvectors of T are identical to the right singular vectors of I - A, which can be calculated efficiently because I - A is now sparse, each row containing just k + 1 nonzero elements. Hence in this case we do not need to compute T explicitly. We conclude that LLCA is easy to implement and that, in practice, the main computational load is computing the eigenvectors of T; therefore LLCA and the spectral clustering approach have the same order of time complexity in most practical cases.^1

^1 Sometimes we are also interested in a special case: n_i = n - 1 for all x_i, i.e. all the data points are neighbors of each other. In this case, it can be proved that T = Q^T Q, where Q = (Diag(B))^{-1} B with B = I - K(K + \lambda I)^{-1}, and where K is the kernel matrix over all the data points. So in this case T can be computed with time complexity O(n^3). This is the same as computing the eigenvectors of the non-sparse matrix T. Hence the order of the overall time complexity is not increased by the step of computing T, and the above statements still hold.

4 Experimental Results

In this section, we empirically compare LLCA with the spectral clustering approach of [10] as well as with k-means clustering. For the last discretization step of LLCA (cf. section 3.6), we use the same code contained in the implementation of the spectral clustering algorithm, available at http://www.cis.upenn.edu/?jshi/software/.

4.1 Datasets

The following datasets are used in the experiments.

- USPS-3568: The examples of handwritten digits 3, 5, 6 and 8 from the USPS dataset.
- USPS-49: The examples of handwritten digits 4 and 9 from the USPS dataset.
- UMist: This dataset consists of face images of 20 different persons.
- UMist5: The data from the UMist dataset belonging to classes 4, 8, 12, 16 and 20.
- News4a: The text documents from the 20-newsgroup dataset covering the topics in rec.*, which contains autos, motorcycles, baseball and hockey.
- News4b: The text documents from the 20-newsgroup dataset covering the topics in sci.*, which contains crypt, electronics, med and space.

Further details of these datasets are provided in Table 1.

Table 1: Descriptions of the datasets used in the experiments. For each dataset, the number of data points n, the data dimensionality d and the number of classes c are provided.
Dataset    | n    | d     | c
USPS-3568  | 3082 | 256   | 4
USPS-49    | 1673 | 256   | 2
UMist      | 575  | 10304 | 20
UMist5     | 140  | 10304 | 5
News4a     | 3840 | 4989  | 4
News4b     | 3874 | 5652  | 4

In News4a and News4b, each document is represented by a feature vector whose elements are related to the frequency of occurrence of different words. For these two datasets, we extract a subset of each in the experiments by ignoring the words that occur in 10 or fewer documents and then removing the documents that have 10 or fewer words. This is why the data dimensionalities of these two datasets differ, although both are drawn from the 20-newsgroup dataset.

4.2 Performance Measure

In the experiments, we set the number of clusters equal to the number of classes c for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.

4.2.1 Normalized Mutual Information

The Normalized Mutual Information (NMI) [7] is widely used for determining the quality of clusters. For two random variables X and Y, the NMI is defined as [7]:

    NMI(X, Y) = I(X, Y) / \sqrt{H(X) H(Y)}    (20)

where I(X, Y) is the mutual information between X and Y, while H(X) and H(Y) are the entropies of X and Y respectively. One can see that NMI(X, X) = 1, which is the maximal possible value of the NMI. Given a clustering result, the NMI in (20) is estimated as [7]:

    NMI = [ \sum_{l=1}^{c} \sum_{h=1}^{c} n_{l,h} log( n n_{l,h} / (n_l \hat{n}_h) ) ] / \sqrt{ ( \sum_{l=1}^{c} n_l log(n_l / n) ) ( \sum_{h=1}^{c} \hat{n}_h log(\hat{n}_h / n) ) }    (21)

where n_l denotes the number of data points contained in the cluster C_l (1 <= l <= c), \hat{n}_h is the number of data points belonging to the h-th class (1 <= h <= c), and n_{l,h} denotes the number of data points in the intersection between the cluster C_l and the h-th class. The value calculated in (21) is used as a performance measure for the given clustering result: the larger this value, the better the performance.

4.2.2 Clustering Error

Another performance measure is the Clustering Error. To compute it for a clustering result, we need to build a permutation mapping function map(.) that maps each cluster index to a true class label. The classification error based on map(.) can then be computed as

    err = 1 - (1/n) \sum_{i=1}^{n} \delta(y_i, map(c_i))

where y_i and c_i are the true class label and the obtained cluster index of x_i respectively, and \delta(x, y) is the delta function that equals 1 if x = y and 0 otherwise. The clustering error is defined as the minimal classification error among all possible permutation mappings. The optimal matching can be found with the Hungarian algorithm [4], which is devised for obtaining the maximal weighted matching of a bipartite graph.
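Both measures are straightforward to compute from the contingency table between clusters and classes. A sketch (SciPy's linear_sum_assignment serving as the Hungarian solver; not the authors' evaluation code):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def contingency(labels, classes):
        ls, cs = np.unique(labels), np.unique(classes)
        return np.array([[np.sum((labels == l) & (classes == h)) for h in cs]
                         for l in ls])

    def nmi(labels, classes):
        """Normalized mutual information, Eqn. 21."""
        n = len(labels)
        C = contingency(labels, classes)
        nl, nh = C.sum(axis=1), C.sum(axis=0)
        mask = C > 0
        num = np.sum(C[mask] * np.log(n * C[mask] / np.outer(nl, nh)[mask]))
        den = np.sqrt(np.sum(nl * np.log(nl / n)) * np.sum(nh * np.log(nh / n)))
        return num / den

    def clustering_error(labels, classes):
        """Minimal classification error over all cluster-to-class mappings."""
        C = contingency(labels, classes)
        r, col = linear_sum_assignment(-C)     # maximize the matched counts
        return 1.0 - C[r, col].sum() / len(labels)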
4.3 Parameter Selection

In the spectral clustering algorithm, first a graph of n nodes is constructed, each node of which corresponds to a data point; the clustering problem is then converted into a graph partitioning problem. In the experiments, for the spectral clustering algorithm, a weighted k-nearest neighbor graph is employed, where k is a parameter searched over the grid k \in {5, 10, 20, 40, 80}. On this graph, the edge weight between two connected data points is computed with a kernel function, for which the following two kernel functions are tried in the experiments. The cosine kernel:

    K_1(x_i, x_j) = x_i^T x_j / (||x_i|| ||x_j||)    (22)

and the Gaussian kernel:

    K_2(x_i, x_j) = exp( -||x_i - x_j||^2 / \sigma )    (23)

The parameter \sigma in (23) is searched over \sigma \in {\sigma_0^2/16, \sigma_0^2/8, \sigma_0^2/4, \sigma_0^2/2, \sigma_0^2, 2\sigma_0^2, 4\sigma_0^2, 8\sigma_0^2, 16\sigma_0^2}, where \sigma_0 is the mean norm of the given data points x_i, 1 <= i <= n.

For LLCA, the cosine function (22) and the Gaussian function (23) are likewise adopted as the kernel function in (5). The number of neighbors n_i is set to a single value k for all x_i. The parameters k and \sigma are searched over the same grids as mentioned above. In LLCA, there is another parameter \lambda (cf. (6)), which is selected from the grid \lambda \in {0.1, 1, 1.5}.

Automatic parameter selection for unsupervised learning is still a difficult problem. We propose a simple parameter selection method for LLCA as follows. For a clustering result obtained with a set of parameters (in our case, k and \lambda when the cosine kernel (22) is used, or k, \lambda and \sigma when the Gaussian kernel (23) is used), we compute its corresponding SPM F and then use the objective value (11) as the evaluation criterion. Namely, the clustering result corresponding to the smallest objective value is finally selected for LLCA.

For simplicity, on each dataset we will just report the best result of spectral clustering. For LLCA, both the best result (LLCA1) and the one obtained with the above parameter selection method (LLCA2) will be provided. No parameter selection is needed for the k-means algorithm, since the number of clusters is given.

4.4 Numerical Results

Numerical results are summarized in Table 2. The results on the News4a and News4b datasets show that different kernels may lead to dramatically different performance for both spectral clustering and LLCA. For spectral clustering, the results on USPS-3568 are also significantly different for different kernels. It can also be observed that different performance measures may result in different performance rankings of the clustering algorithms being investigated; this is reflected by the results on USPS-3568 when the cosine kernel is used and the results on News4b when the Gaussian kernel is used. Despite all these phenomena, we can still see from Table 2 that both LLCA1 and LLCA2 outperform spectral clustering and the k-means algorithm in most cases. We can also see that LLCA2 fails to find good parameters on News4a and News4b when the Gaussian kernel is used, while in the remaining cases LLCA2 is either slightly worse than or identical to LLCA1. Analogously to LLCA1, LLCA2 also improves on the results of spectral clustering and the k-means algorithm on most datasets. This illustrates that our parameter selection method for LLCA can work well in many cases, although it clearly still needs improvement. Finally, it can be seen that the k-means algorithm is worse than spectral clustering, except on USPS-3568 with respect to the clustering error criterion when the cosine kernel is used for spectral clustering. This corroborates the advantage of the popular spectral clustering approach over the traditional k-means algorithm.
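To make the selection rule of Section 4.3 concrete, the following sketch runs LLCA over a parameter grid and keeps the result with the smallest objective (11), evaluated on the discrete SPM. It assumes the hypothetical llca() helper from the sketch after Section 3.6 above, which returns (labels, F, T); the grids shown are the ones used in this section.

    import numpy as np
    from itertools import product

    def select_llca(X, c, ks=(5, 10, 20, 40, 80), lams=(0.1, 1.0, 1.5),
                    sigmas=(None,)):
        """Pick the LLCA parameters with the smallest objective (11).
        Relies on llca() defined earlier in this document (our own name)."""
        best_obj, best_labels, best_params = np.inf, None, None
        for k, lam, sigma in product(ks, lams, sigmas):
            labels, F, T = llca(X, c, k=k, lam=lam, sigma=sigma)
            obj = np.trace(F.T @ T @ F)     # objective (11) on the discrete SPM
            if obj < best_obj:
                best_obj, best_labels, best_params = obj, labels, (k, lam, sigma)
        return best_labels, best_params, best_obj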
4.4 Numerical Results

Numerical results are summarized in Table 2. The results on the News4a and News4b datasets show that different kernels may lead to dramatically different performance for both spectral clustering and LLCA. For spectral clustering, the results on USPS-3568 are also significantly different for different kernels. It can also be observed that different performance measures may result in different performance ranks of the clustering algorithms being investigated. This is reflected by the results on USPS-3568 when the cosine kernel is used and the results on News4b when the Gaussian kernel is used. Despite all these phenomena, we can still see from Table 2 that both LLCA1 and LLCA2 outperform the spectral clustering and the k-means algorithm in most cases. We can also see that LLCA2 fails to find good parameters on News4a and News4b when the Gaussian kernel is used, while in the remaining cases, LLCA2 is either slightly worse than or identical to LLCA1. Analogously to LLCA1, LLCA2 also improves the results of the spectral clustering and the k-means algorithm on most datasets. This illustrates that our parameter selection method for LLCA can work well in many cases, although it clearly still needs improvement. Finally, it can be seen that the k-means algorithm is worse than spectral clustering, except on USPS-3568 with respect to the clustering error criterion when the cosine kernel is used for spectral clustering. This corroborates the advantage of the popular spectral clustering approach over the traditional k-means algorithm.

Table 2: Clustering results. Both the normalized mutual information and the clustering error are provided. Two kernel functions (22) and (23) are tried for both spectral clustering and LLCA. On each dataset, the best result of the spectral clustering algorithm is reported (Spec-Clst). For LLCA, both the best result (LLCA1) and the one obtained with the parameter selection method described before (LLCA2) are provided. Note that the results of the k-means algorithm are independent of the kernel function.

                               USPS-3568  USPS-49  UMist   UMist5  News4a  News4b
NMI, cosine        Spec-Clst   0.6575     0.3608   0.7483  0.8810  0.6468  0.5765
                   LLCA1       0.8720     0.6241   0.8003  1       0.7587  0.7125
                   LLCA2       0.8720     0.6241   0.7889  1       0.7587  0.7125
                   k-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380
NMI, Gaussian      Spec-Clst   0.8245     0.4319   0.8099  0.8773  0.4039  0.1861
                   LLCA1       0.8493     0.5980   0.8377  1       0.2642  0.1776
                   LLCA2       0.8467     0.5493   0.8377  1       0.0296  0.0322
                   k-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380
Error (%), cosine  Spec-Clst   32.93      16.56    46.26   9.29    28.26   21.73
                   LLCA1       3.57       8.01     36.00   0       7.99    9.65
                   LLCA2       3.57       8.01     38.43   0       7.99    9.65
                   k-means     22.16      22.30    56.35   36.43   70.62   74.08
Error (%), Gaussian Spec-Clst  5.68       13.51    41.74   10.00   42.34   64.71
                   LLCA1       4.61       8.43     33.91   0       47.24   53.25
                   LLCA2       4.70       9.80     37.22   0       74.38   72.97
                   k-means     22.16      22.30    56.35   36.43   70.62   74.08

5 Conclusion

We have proposed a local learning approach for clustering, where an optimization problem is formulated leading to a solution with the property that the label of each data point can be well estimated based on its neighbors. We have also provided a parameter selection method for the proposed clustering algorithm. Experiments show encouraging results. Future work may include improving the proposed parameter selection method and extending this work to other applications such as image segmentation.

References
[1] L. Bottou and V. Vapnik. Local learning algorithms. Neural Computation, 4:888-900, 1992.
[2] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13:1088-1096, 1994.
[3] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[4] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover, New York, 1998.
[5] B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, Cambridge, MA, 2002.
[6] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK, 2004.
[7] A. Strehl and J. Ghosh. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583-617, 2002.
[8] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[9] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[10] S. X. Yu and J. Shi. Multiclass spectral clustering. In L. D. Raedt and S. Wrobel, editors, International Conference on Computer Vision. ACM, 2003.
Stability of K-Means Clustering

Alexander Rakhlin
Department of Computer Science, UC Berkeley, Berkeley, CA 94720
rakhlin@cs.berkeley.edu

Andrea Caponnetto
Department of Computer Science, University of Chicago, Chicago, IL 60637
and D.I.S.I., Università di Genova, Italy
caponnet@uchicago.edu

Abstract

We phrase K-means clustering as an empirical risk minimization procedure over a class H_K and explicitly calculate the covering number for this class. Next, we show that stability of K-means clustering is characterized by the geometry of H_K with respect to the underlying distribution. We prove that in the case of a unique global minimizer, the clustering solution is stable with respect to complete changes of the data, while for the case of multiple minimizers, the change of Θ(n^{1/2}) samples defines the transition between stability and instability. While for a finite number of minimizers this result follows from multinomial distribution estimates, the case of infinite minimizers requires more refined tools. We conclude by proving that stability of the functions in H_K implies stability of the actual centers of the clusters. Since stability is often used for selecting the number of clusters in practice, we hope that our analysis serves as a starting point for finding theoretically grounded recipes for the choice of K.

1 Introduction

Identification of clusters is the most basic tool for data analysis and unsupervised learning. While people are extremely good at pointing out the relevant structure in the data just by looking at the 2-D plots, learning algorithms struggle to match this performance. Part of the difficulty comes from the absence, in general, of an objective way to assess the clustering quality and to compare two groupings of the data. Ben-David et al [1, 2, 3] put forward the goal of establishing a Theory of Clustering. In particular, attempts have been made by [4, 2, 3] to study and theoretically justify the stability-based approach of evaluating the quality of clustering solutions. Building upon these ideas, we present a characterization of clustering stability in terms of the geometry of the function class associated with minimizing the objective function. To simplify the exposition, we focus on K-means clustering, although analogous results can be derived for K-medians and other clustering algorithms which minimize an objective function.

Let us first motivate the notion of clustering stability. While for a fixed K, two clustering solutions can be compared according to the K-means objective function (see the next section), it is not meaningful to compare the value of the objective function for different K. How can one decide, then, on the value of K? If we assume that the observed data is distributed independently according to some unknown distribution, the number of clusters K should correspond to the number of modes of the associated probability density. Since density estimation is a difficult task, another approach is needed. A stability-based solution has been used for at least a few decades by practitioners. The approach stipulates that, for each K in some range, several clustering solutions should be computed by sub-sampling or perturbing the data. The best value of K is that for which the clustering solutions are most "similar". This rule of thumb is used in practice, although, to our knowledge, there is very little theoretical justification in the literature. The precise details of data sub-sampling in the method described above differ from one paper to another.
For instance, Ben-Hur et al [5] randomly choose overlapping portions of the data and evaluate the distance between the resulting clustering solutions on the common samples. Lange et al [6], on the other hand, divide the sample into disjoint subsets. Similarly, Ben-David et al [3, 2] study stability with respect to complete change of the data (independent draw). These different approaches of choosing K prompted us to give a precise characterization of clustering stability with respect to both complete and partial changes of the data. It has been noted by [6, 4, 3] that the stability of clustering with respect to complete change of the data is characterized by the uniqueness of the minimum of the objective function with respect to the true distribution. Indeed, minimization of the K-means objective function can be phrased as an empirical risk minimization procedure (see [7]). The stability follows, under some regularity assumptions, from the convergence of empirical and expected means over a Glivenko-Cantelli class of functions. We prove stability in the case of a unique minimizer by explicitly computing the covering number in the next section and noting that the resulting class is VC-type. We go further in our analysis by considering the other two interesting cases: finite and infinite number of minimizers of the objective function. With the help of a stability result of [8, 9] for empirical risk minimization, we are able to prove that K-means clustering is stable with respect to changes of o(√n) samples, where n is the total number of samples. In fact, the rate of Θ(√n) changes is a sharp transition between stability and instability in these cases.

2 Preliminaries

Let (Z, A, P) be a probability space with an unknown probability measure P. Let ‖·‖ denote the Euclidean norm. We assume from the outset that the data live in a Euclidean ball in R^m, i.e. Z ⊆ B_2(0, R) ⊂ R^m for some R > 0, and Z is closed. A partition function C : Z → {1, ..., K} assigns to each point Z its "cluster identity". The goal of clustering is to find a good partition based on the sample Z_1, ..., Z_n of n points, distributed independently according to P. In particular, for K-means clustering, the quality of C on Z_1, ..., Z_n is measured by the within-point scatter (see [10]; we have scaled the within-point scatter by 1/n compared to [10]):

\[
W(C) = \frac{1}{2n}\sum_{k=1}^{K}\;\sum_{i,j:\,C(Z_i)=C(Z_j)=k} \|Z_i - Z_j\|^2. \tag{1}
\]

It is easy to verify that the (scaled) within-point scatter can be rewritten as

\[
W(C) = \frac{1}{n}\sum_{k=1}^{K}\;\sum_{i:\,C(Z_i)=k} \|Z_i - c_k\|^2 \tag{2}
\]

where c_k is the mean of the k-th cluster based on the assignment C (see Figure 1). We are interested in the minimizers of the within-point scatter. Such assignments have to map each point to its nearest cluster center. Since in this case the partition function C is completely determined by the K centers, we will often abuse the notation by associating C with the set {c_1, ..., c_K}. The K-means clustering algorithm is an alternating procedure minimizing the within-point scatter W(C). The centers {c_k}_{k=1}^K are computed in the first step, followed by the assignment of each Z_i to its closest center c_k; the procedure is repeated. The algorithm can get trapped in local minima, and various strategies, such as starting with several random assignments, are employed to overcome the problem. In this paper, we are not concerned with the algorithmic issues of the minimization procedure. Rather, we study stability properties of the minimizers of W(C).
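The alternating procedure can be sketched in a few lines of NumPy; as noted above, it may converge to a local minimum of W(C), so in practice one keeps the best of several random initializations. This is a generic sketch, not code from the paper.

```python
import numpy as np

def within_point_scatter(Z, centers, assign):
    """Eq. (2): the scaled within-point scatter of an assignment."""
    return np.sum((Z - centers[assign]) ** 2) / len(Z)

def kmeans(Z, K, iters=100, seed=0):
    """Alternating minimization of W(C): assign each point to its nearest
    center, then recompute each center as its cluster mean."""
    Z = np.asarray(Z, dtype=float)
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=K, replace=False)].copy()
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d.argmin(axis=1)
        for k in range(K):
            if np.any(assign == k):           # leave empty clusters in place
                centers[k] = Z[assign == k].mean(axis=0)
    return centers, assign
```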
The problem of minimizing W(C) can be phrased as empirical risk minimization [7] over the function class

\[
\mathcal{H}_K = \Big\{ h_A(z) = \|z - a_i\|^2,\;\; i = \operatorname*{argmin}_{j \in \{1,\dots,K\}} \|z - a_j\|^2 \;:\; A = \{a_1, \dots, a_K\} \in Z^K \Big\}, \tag{3}
\]

where the functions are obtained by selecting all possible K centers.

Figure 1: The clustering objective is to place the centers c_k so as to minimize the sum of squared distances ‖c_k − Z_i‖² from points to their closest centers.

Functions h_A(z) in H_K can also be written as

\[
h_A(z) = \sum_{i=1}^{K} \|z - a_i\|^2\, I(z \text{ is closest to } a_i),
\]

where ties are broken, for instance, in the order of the a_i's. Hence, functions h_A ∈ H_K are K parabolas glued together with centers at a_1, ..., a_K, as shown in Figure 1. With this notation, one can see that

\[
\min_C W(C) = \min_{h \in \mathcal{H}_K} \frac{1}{n}\sum_{i=1}^{n} h(Z_i).
\]

Moreover, if C minimizes the left-hand side, h_C has to minimize the right-hand side and vice versa. Hence, we will interchangeably use C and h_C as minimizers of the within-point scatter. Several recent papers (e.g. [11]) have addressed the question of finding the distance metric for clustering. Fortunately, in our case there are several natural choices. One choice is to measure the similarity between the centers {a_k}_{k=1}^K and {b_k}_{k=1}^K of clusterings A and B. Another choice is to measure the L_q(P) distance between h_A and h_B for some q ≥ 1. In fact, we show that these two choices are essentially equivalent.

3 Covering Number for H_K

The following technical Lemma shows that a covering of the ball B_2(0, R) induces a cover of H_K in the L_∞ distance, because small shifts of the centers imply small changes of the corresponding functions in H_K.

Lemma 3.1. For any ε > 0,

\[
N(\mathcal{H}_K, L_\infty, \varepsilon) \;\le\; \left(\frac{16 R^2 K + \varepsilon}{\varepsilon}\right)^{mK}.
\]

Proof. It is well-known that a Euclidean ball of radius R in R^m can be covered by N = ((4R + δ)/δ)^m balls of radius δ (see Lemma 2.5 in [12]). Let T = {t_1, ..., t_N} be the set of centers of such a cover. Consider an arbitrary function h_A ∈ H_K with centers at {a_1, ..., a_K}. By the definition of the cover, there exists t_{i_1} ∈ T such that ‖a_1 − t_{i_1}‖ ≤ δ. Let A_1 = {t_{i_1}, a_2, ..., a_K}. Since Z ⊆ B_2(0, R),

\[
\|h_A - h_{A_1}\|_\infty \;\le\; (2R)^2 - (2R - \delta)^2 \;\le\; 4R\delta.
\]

We iterate through all the a_i's, replacing them by the members of T. After K steps, ‖h_A − h_{A_K}‖_∞ ≤ 4RKδ and all centers of A_K belong to T. Hence, each function h_A ∈ H_K can be approximated to within 4RKδ by functions with centers in the finite set T. The upper bound on the number of functions in H_K with centers in T is N^K. Hence, N^K = ((4R + δ)/δ)^{mK} functions cover H_K to within 4RKδ in the L_∞ norm. The Lemma follows by setting ε = 4RKδ.

4 Geometry of H_K and Stability

The above Lemma shows that H_K is not too rich, as its covering numbers are polynomial. This is the first important aspect in the study of clustering stability. The second aspect is the geometry of H_K with respect to the measure P. In particular, stability of K-means clustering depends on the number of functions h ∈ H_K with the minimum expectation Eh. Note that the number of minimizers depends only on P and K, and not on the data. Since Z is closed, the number of minimizers is at least one. The three important cases are: unique minimum, a finite number of minimizers (greater than one), and an infinite number of minimizers. The first case is the simplest one, and is a good starting point.

Definition 4.1. For ε > 0 define

\[
Q_P^{\varepsilon} = \Big\{ h \in \mathcal{H}_K : \mathbf{E}h \le \inf_{h' \in \mathcal{H}_K} \mathbf{E}h' + \varepsilon \Big\},
\]

the set of almost-minimizers of the expected error.
In the case of a unique minimum of Eh, one can show that the diameter of Q_P^ε tends to zero as ε → 0.² Lemma 3.1 implies that the class H_K is VC-type. In particular, it is uniform Donsker, as well as uniform Glivenko-Cantelli. Hence, empirical averages of functions in H_K uniformly converge to their expectations:

\[
\lim_{n \to \infty} P\!\left( \sup_{h \in \mathcal{H}_K} \left| \mathbf{E}h - \frac{1}{n}\sum_{i=1}^{n} h(Z_i) \right| > \varepsilon \right) = 0.
\]

Therefore, for any ε, δ > 0,

\[
P\!\left( \sup_{h \in \mathcal{H}_K} \left| \mathbf{E}h - \frac{1}{n}\sum_{i=1}^{n} h(Z_i) \right| > \varepsilon \right) < \delta
\]

for n > n_{ε,δ}. Denote by h_A the function corresponding to a minimum of W(C) on Z_1, ..., Z_n. Suppose h_{C*} = argmin_{h ∈ H_K} Eh, i.e. C* is the best clustering, which can be computed only with the knowledge of P. Then, with probability at least 1 − δ,

\[
\mathbf{E}h_A \le \frac{1}{n}\sum_{i=1}^{n} h_A(Z_i) + \varepsilon
\]

for n > n_{ε,δ}. Furthermore,

\[
\frac{1}{n}\sum_{i=1}^{n} h_{C^*}(Z_i) \le \mathbf{E}h_{C^*} + \varepsilon
\qquad\text{and}\qquad
\frac{1}{n}\sum_{i=1}^{n} h_A(Z_i) \le \frac{1}{n}\sum_{i=1}^{n} h_{C^*}(Z_i)
\]

by the optimality of h_A on the data. Combining the above, Eh_A ≤ Eh_{C*} + 2ε with probability at least 1 − δ for n > n_{ε,δ}. Another way to state the result is

\[
\mathbf{E}h_A \xrightarrow{\;P\;} \inf_{h' \in \mathcal{H}_K} \mathbf{E}h'.
\]

Assuming the existence of a unique minimizer, i.e. diam_{L_1(P)} Q_P^ε → 0, we obtain ‖h_A − h_{C*}‖_{L_1(P)} → 0 in probability. By the triangle inequality, we immediately obtain the following Proposition.

² This can be easily proved by contradiction. Let us assume that the diameter does not tend to zero. Then there is a sequence of functions {h^{(t)}} in Q_P^{ε(t)} with ε(t) → 0 such that ‖h^{(t)} − h*‖_{L_1(P)} ≥ ρ for some ρ > 0. Hence, by the compactness of H_K, the sequence {h^{(t)}} has an accumulation point h**, and by the continuity of expectation, Eh** = inf_{h'∈H_K} Eh'. Moreover, ‖h* − h**‖_{L_1} ≥ ρ, which contradicts the uniqueness of the minimizer.

Proposition 4.1. Let Z_1, ..., Z_n, Z'_1, ..., Z'_n be i.i.d. samples. Suppose the clustering A minimizes W(C) over the set {Z_1, ..., Z_n} while B is the minimizer over {Z'_1, ..., Z'_n}. Then

\[
\|h_A - h_B\|_{L_1(P)} \xrightarrow{\;P\;} 0.
\]

We have shown that in the case of a unique minimizer of the objective function (with respect to the distribution), two clusterings over independently drawn sets of points become arbitrarily close to each other with increasing probability as the number of points increases. If there is a finite (but greater than one) number of minimizers h* ∈ H_K of Eh, multinomial distribution estimates tell us that we expect stability with respect to o(√n) changes of points, while no stability is expected for Θ(√n) changes, as the next example shows.

Example 1. Consider 1-mean minimization over Z = {x_1, x_2}, x_1 ≠ x_2, and P = ½(δ_{x_1} + δ_{x_2}). It is clear that, given the training set Z_1, ..., Z_n, the center of the minimizer of W(C) is either x_1 or x_2, according to the majority vote over the training set. Since the difference between the number of points on x_1 and x_2 is distributed according to a binomial with zero mean and variance scaling as n, it is clear that by changing Θ(√n) points from Z_1, ..., Z_n, it is possible to swap the majority vote with constant probability. Moreover, with probability approaching one, it is not possible to achieve the swap by a change of o(√n) points. A similar result can be shown for any K-means over a finite Z.

The above example shows that, in general, it is not possible to prove closeness of clusterings over two sets of samples differing on Θ(√n) elements. In fact, this is a sharp threshold.
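Example 1 is easy to probe numerically: with both atoms equally likely, the vote margin has standard deviation on the order of √n, so flipping c·√n points swaps the minimizer with probability bounded away from 0 and 1, roughly independently of n. A small Monte Carlo sketch (the constant c = 1 below is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def swap_probability(n, c=1.0, trials=2000):
    """Draw n points on two atoms, flip c*sqrt(n) points toward the minority
    atom, and measure how often the majority vote (the 1-mean center) swaps."""
    m = int(c * np.sqrt(n))
    swaps = 0
    for _ in range(trials):
        votes = rng.integers(0, 2, size=n).sum()   # count of points on x2
        majority = votes * 2 > n
        votes += -m if majority else m             # adversarial change of m points
        swaps += (votes * 2 > n) != majority
    return swaps / trials

for n in (100, 10_000, 1_000_000):
    print(n, swap_probability(n))   # roughly constant across n
```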
Indeed, by employing the following Theorem, proven in [8, 9], we can show that even in the case of an infinite number of minimizers, clusterings over two sets of samples differing on o(√n) elements become arbitrarily close with increasing probability as the number of samples increases. This result cannot be deduced from the multinomial estimates, as it relies on the control of fluctuations of empirical means over a Donsker class. Recall that a class is Donsker if it satisfies a version of the central limit theorem for function classes.

Theorem 4.1 (Corollary 11 in [9] or Corollary 2 in [8]). Assume that the class of functions F over Z is uniformly bounded and P-Donsker, for some probability measure P over Z. Let f^{(S)} and f^{(T)} be minimizers over F of the empirical averages with respect to the sets S and T of n points i.i.d. according to P. Then, if |S △ T| = o(√n), it holds that

\[
\|f^{(S)} - f^{(T)}\|_{L_1(P)} \xrightarrow{\;P\;} 0.
\]

We apply the above theorem to H_K, which is P-Donsker for any P because its covering numbers in L_∞ scale polynomially (see Lemma 3.1). The boundedness condition is implied by the assumption that Z ⊆ B_2(0, R). We note that if the class H_K were richer than P-Donsker, the stability result would not necessarily hold.

Corollary 4.1. Suppose the clusterings A and B are minimizers of the K-means objective W(C) over the sets S and T, respectively. Suppose that |S △ T| = o(√n). Then

\[
\|h_A - h_B\|_{L_1(P)} \xrightarrow{\;P\;} 0.
\]

The above Corollary holds even if the number of minimizers h ∈ H_K of Eh is infinite. This concludes the analysis of stability of K-means for the three interesting cases: unique minimizer, finite number (greater than one) of minimizers, and infinite number of minimizers. We remark that the distribution P and the number K alone determine which one of the above cases is in evidence. We have proved that stability of K-means clustering is characterized by the geometry of the class H_K with respect to P. It is evident that the choice of K maximizing stability of clustering aims to choose K for which there is a unique minimizer. Unfortunately, for "small" n, stability with respect to a complete change of the data and stability with respect to o(√n) changes are indistinguishable, making this rule of thumb questionable. Moreover, as noted in [3], small changes of P lead to drastic changes in the number of minimizers.

5 Stability of the Centers

Intuitively, stability of the functions h_A with respect to perturbation of the data Z_1, ..., Z_n implies stability of the centers of the clusters. This intuition is made precise in this section. Let us first define a notion of distance between centers of two clusterings.

Definition 5.1. Suppose {a_1, ..., a_K} and {b_1, ..., b_K} are centers of two clusterings A and B, respectively. Define a distance between these clusterings as

\[
d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) := \max_{1 \le i \le K}\; \min_{1 \le j \le K} \big( \|a_i - b_j\| + \|a_j - b_i\| \big).
\]

Lemma 5.1. Assume the density of P (with respect to the Lebesgue measure λ over Z) is bounded away from 0, i.e. dP > c dλ for some c > 0. Suppose ‖h_A − h_B‖_{L_1(P)} ≤ ε. Then

\[
d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \le \left(\frac{\varepsilon}{c_{c,m}}\right)^{\frac{1}{m+2}}
\]

where c_{c,m} depends only on c and m.
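Definition 5.1 is straightforward to evaluate; a NumPy sketch for two (K, m) arrays of centers (generic illustration, not code from the paper):

```python
import numpy as np

def d_max(A, B):
    """d_max between two sets of K cluster centers, each a (K, m) array."""
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # pairwise[i, j] = ||a_i - b_j|| + ||a_j - b_i||
    pairwise = dist + dist.T
    return pairwise.min(axis=1).max()
```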
Proof. First, we note that

\[
d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \le 2 \max\Big( \max_{1 \le i \le K} \min_{1 \le j \le K} \|a_i - b_j\|,\;\; \max_{1 \le j \le K} \min_{1 \le i \le K} \|a_j - b_i\| \Big).
\]

Without loss of generality, assume that the maximum on the right-hand side is attained at a_1 and b_1, such that b_1 is the closest center to a_1 out of {b_1, ..., b_K}. Suppose ‖a_1 − b_1‖ = d. Since d_max({a_1, ..., a_K}, {b_1, ..., b_K}) ≤ 2d, it is enough to show that d is small (scales as a power of ε). Consider B_2(a_1, d/2), a ball of radius d/2 centered at a_1. Since any point z ∈ B_2(a_1, d/2) is closer to a_1 than to b_1, we have ‖z − a_1‖² ≤ ‖z − b_1‖². Refer to Figure 2 for a pictorial representation of the proof. Note that b_j ∉ B_2(a_1, d/2) for any j ∈ {2, ..., K}. Also note that for any z ∈ Z,

\[
\|z - a_1\|^2 \ge \sum_{i=1}^{K} \|z - a_i\|^2\, I(a_i \text{ is closest to } z) = h_A(z).
\]

Figure 2: To prove Lemma 5.1 it is enough to show that the shaded area is upper-bounded by the L_1(P) distance between the functions h_A and h_B and lower-bounded by a power of d. We deduce that d cannot be large.

Combining all the information, we obtain the following chain of inequalities:

\[
\begin{aligned}
\|h_A - h_B\|_{L_1(P)} &= \int_Z |h_A(z) - h_B(z)|\, dP(z) \\
&\ge \int_{B_2(a_1, d/2)} |h_A(z) - h_B(z)|\, dP(z) \\
&\ge \int_{B_2(a_1, d/2)} \big| h_A(z) - \|z - b_1\|^2 \big|\, dP(z) \\
&= \int_{B_2(a_1, d/2)} \big( \|z - b_1\|^2 - h_A(z) \big)\, dP(z) \\
&= \int_{B_2(a_1, d/2)} \Big( \|z - b_1\|^2 - \sum_{i=1}^{K} \|z - a_i\|^2\, I(a_i \text{ is closest to } z) \Big)\, dP(z) \\
&\ge \int_{B_2(a_1, d/2)} \big( \|z - b_1\|^2 - \|z - a_1\|^2 \big)\, dP(z) \\
&\ge \int_{B_2(a_1, d/2)} \big( (d/2)^2 - \|z - a_1\|^2 \big)\, dP(z) \\
&\ge c \int_{B_2(a_1, d/2)} \big( (d/2)^2 - \|z - a_1\|^2 \big)\, d\lambda(z) \\
&= c\, \frac{2\pi^{m/2}}{\Gamma(m/2)} \int_0^{d/2} \big( (d/2)^2 - r^2 \big)\, r^{m-1}\, dr \\
&= c\, \frac{2\pi^{m/2}}{\Gamma(m/2)} \left( \frac{(d/2)^{m+2}}{m} - \frac{(d/2)^{m+2}}{m+2} \right)
 = c\, \frac{2\pi^{m/2}}{\Gamma(m/2)}\, \frac{2\,(d/2)^{m+2}}{m(m+2)} = c_{c,m}\, d^{m+2}.
\end{aligned}
\]

Since, by assumption, ‖h_A − h_B‖_{L_1(P)} ≤ ε, we obtain

\[
d \le \left( \frac{\varepsilon}{c_{c,m}} \right)^{\frac{1}{m+2}}.
\]

From the above lemma, we immediately obtain the following Proposition.

Proposition 5.1. Assume the density of P (with respect to the Lebesgue measure λ over Z) is bounded away from 0, i.e. dP > c dλ for some c > 0. Suppose the clusterings A and B are minimizers of the K-means objective W(C) over the sets S and T, respectively. Suppose that |S △ T| = o(√n). Then

\[
d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \xrightarrow{\;P\;} 0.
\]

Hence, the centers of the minimizers of the within-point scatter are stable with respect to perturbations of o(√n) points. Similar results can be obtained for other procedures which optimize some function of the data, by applying Theorem 4.1.

6 Conclusions

We showed that K-means clustering can be phrased as empirical risk minimization over a class H_K. Furthermore, stability of clustering is determined by the geometry of H_K with respect to P. We proved that in the case of a unique minimizer, K-means is stable with respect to a complete change of the data, while for multiple minimizers, we still expect stability with respect to o(√n) changes. The rule for choosing K by maximizing stability can be viewed then as an attempt to select K such that H_K has a unique minimizer with respect to P. Although used in practice, this choice of K is questionable, especially for small n. We hope that our analysis serves as a starting point for finding theoretically grounded recipes for choosing the number of clusters.

References
[1] Shai Ben-David. A framework for statistical clustering with a constant time approximation algorithms for k-median clustering. In COLT, pages 415-426, 2004.
[2] Ulrike von Luxburg and Shai Ben-David. Towards a statistical theory of clustering. PASCAL Workshop on Statistics and Optimization of Clustering, 2005.
[3] Shai Ben-David, Ulrike von Luxburg, and David Pal. A sober look at clustering stability. In COLT, 2006.
[4] A. Rakhlin. Stability of clustering methods. NIPS Workshop "Theoretical Foundations of Clustering", December 2005.
[5] A. Ben-Hur, A. Elisseeff, and I. Guyon. A stability based method for discovering structure in clustered data. In Pacific Symposium on Biocomputing, volume 7, pages 6-17, 2002.
[6] T. Lange, M. Braun, V. Roth, and J. Buhmann.
Stability-based model selection. In NIPS, 2003.
[7] Joachim M. Buhmann. Empirical risk approximation: An induction principle for unsupervised learning. Technical Report IAI-TR-98-3, 3, 1998.
[8] A. Caponnetto and A. Rakhlin. Some properties of empirical risk minimization over Donsker classes. AI Memo 2005-018, Massachusetts Institute of Technology, May 2005.
[9] A. Caponnetto and A. Rakhlin. Stability properties of empirical risk minimization over Donsker classes. Journal of Machine Learning Research. Accepted. Available at http://cbcl.mit.edu/people/rakhlin/erm.pdf, 2006.
[10] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning - Data Mining, Inference, and Prediction. Springer, 2002.
[11] Marina Meilă. Comparing clusterings: an axiomatic view. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 577-584, New York, NY, USA, 2005. ACM Press.
[12] S.A. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
Using Combinatorial Optimization within Max-Product Belief Propagation

John Duchi, Daniel Tarlow, Gal Elidan, Daphne Koller
Department of Computer Science, Stanford University, Stanford, CA 94305-9010
{jduchi,dtarlow,galel,koller}@cs.stanford.edu

Abstract

In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called COMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. COMPOSE uses combinatorial optimization for computing exact max-marginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.

1 Introduction

Markov random fields (MRFs) [12] have been applied to a wide variety of real-world problems. However, the probabilistic inference task in MRFs (computing the posterior distribution of one or more variables) is tractable only in small tree-width networks, which are not often an appropriate model in practice. Thus, one typically must resort to the use of approximate inference methods, most commonly (in recent years) some variant of loopy belief propagation [11]. An alternative approach, whose popularity has grown in recent years, is based on the maximum a posteriori (MAP) inference problem: computing the single most likely assignment relative to the distribution. Somewhat surprisingly, there are certain classes of networks where MAP inference can be performed very efficiently using combinatorial optimization algorithms, even though posterior probability inference is intractable. So far, two main such classes of networks have been studied. Regular (or associative) networks [18], where the potentials encode a preference for adjacent variables to take the same value, can be solved optimally or almost optimally using a minimum cut algorithm. Conversely, matching networks, where the potentials encode a type of mutual exclusion constraint between values of adjacent variables, can be solved using matching algorithms. These types of networks have been shown to be applicable in a variety of applications, such as stereo reconstruction [13] and segmentation for regular networks, and image correspondence [15] or word alignment for matching networks [19]. In many real-world applications, however, the problem formulation does not fall neatly into one of these tractable subclasses.
The problem may well have a large component that can be well-modeled as regular or as a matching problem, but there may be additional constraints that take it outside this restricted scope. For example, in a task of registering features between two images or 3D scans, we may formulate the task as a matching problem, but may also want to encode constraints that enforce the preservation of local or global geometry [1]. Unfortunately, once the network contains some "non-complying" potentials, it is not clear if and how one can apply the combinatorial optimization algorithm, even if only as a subroutine. In practice, in such networks, one often simply resorts to applying standard inference methods, such as belief propagation. Unfortunately, belief propagation may be far from an ideal procedure for these types of networks. In many cases, the MRF structures associated with the tractable components are quite dense and contain many small loops, leading to convergence problems and bad approximations. Indeed, recent empirical studies [17] show that belief propagation methods perform considerably worse than min-cut-based methods when applied to a variety of (purely) regular MRFs. Thus, falling back on belief propagation methods for these MRFs may result in poor performance.

The main contribution of this paper is a message-passing scheme for max-product inference that can exploit combinatorial optimization algorithms for tractable subnetworks. The basic idea in our algorithm, called COMPOSE (Combinatorial Optimization for Max-Product on Subnetworks), is that the network can often be partitioned into a number of subnetworks whose union is equivalent to the original distribution. If we can efficiently solve the MAP problem for each of these subnetworks, we would like to combine these results in order to find an approximate MAP for the original problem. The obvious difficulty is that a MAP solution, by itself, provides only a single assignment, and one cannot simply combine different assignments. The key insight is that we can combine the information from the different sub-networks by computing max-marginals for each one. A max-marginal for an individual variable X is a vector that specifies, for each value x, the probability of the MAP assignment in which X = x. If we have a black box that computes a max-marginal for each variable X in a subnetwork, we can embed that black box as a subroutine in a max-product belief propagation algorithm, without changing the algorithm's basic properties. In the remainder of this paper, we define the COMPOSE scheme, and show how combinatorial algorithms for both regular networks and matching networks can be embedded in this framework. In particular, we also describe efficient combinatorial optimization algorithms for both types of networks that can compute all the max-marginals in the network at a cost similar to that of finding the single MAP assignment. We evaluate the applicability of COMPOSE on synthetic networks and on an image registration task for scans of a cell obtained using an electron microscope, all of which are matching problems with additional pairwise constraints. We compare COMPOSE to variants of both max-product and sum-product belief propagation, as well as to straight matching. Our results demonstrate that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.
2 Markov Random Fields

In this paper, for simplicity of presentation, we restrict our discussion to pairwise Markov networks (or Markov Random Fields) over discrete variables X = {X_1, ..., X_N}. We emphasize that our results extend easily to the more general case of non-pairwise Markov networks. We denote an assignment of values to X with x, and an assignment of a value to a single variable X_i with x_i. A pairwise Markov network M is defined as a graph G = (V, E) and a set of potentials F that includes both node potentials φ_i(x_i) and edge potentials φ_ij(x_i, x_j). The network encodes a joint probability distribution via an unnormalized density

\[
P_F^0(x) = \prod_{i=1}^{N} \phi_i(x_i) \prod_{(i,j) \in E} \phi_{ij}(x_i, x_j),
\]

defining the distribution as P_F(x) = (1/Z) P_F^0(x), where Z is the partition function given by Z = Σ_{x'} P_F^0(x').

There are different types of queries that one may want to compute on a Markov network. Most common are (conditional) probability queries, where the task is to compute the marginal probability of one or more variables, possibly given some evidence. This type of inference is essentially equivalent to computing the partition function, which sums up exponentially many assignments, a computation which is currently intractable except in networks of low tree width. An alternative type of inference task is the maximum a posteriori (MAP) problem: finding arg max_x P_F(x) = arg max_x P_F^0(x). In the MAP problem, we can avoid computing the partition function, so there are certain classes of networks for which the MAP assignment can be computed effectively, even though computing the partition function can be shown to be intractable; we describe two such important classes in Section 4. In general, however, an exact solution to the MAP problem is also intractable. Max-product belief propagation (MPBP) [20] is a commonly-used method for finding an approximate solution. In this algorithm, each node X_i passes to each of its neighboring nodes N_i a message, which is a vector defining a value for each value x_j:

\[
\mu_{i \to j}(x_j) := \max_{x_i} \Big( \phi_i(x_i)\, \phi_{ij}(x_i, x_j) \prod_{k \in N_i \setminus \{j\}} \mu_{k \to i}(x_i) \Big).
\]

At convergence, each variable can compute its own local belief as:

\[
b_i(x_i) = \phi_i(x_i) \prod_{k \in N_i} \mu_{k \to i}(x_i).
\]

In a tree-structured MRF, if such messages are passed from the leaves towards a single root, the value of the message passed by X_i towards the root encodes a partial max-marginal: the entry for x_i is the probability of the most likely assignment, to the subnetwork emanating from X_i away from the root, where we force X_i = x_i. At the root, we obtain exact max-marginals for the entire joint distribution. However, applied to a network with loops, MPBP often does not converge, even when combined with techniques such as smoothing and asynchronous message passing, and the answers obtained can be quite approximate.
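For concreteness, the message update and the belief computation above can be written as follows (a minimal NumPy sketch of ours; damping and normalization, which are used in the experiments below, are omitted):

```python
import numpy as np

def max_product_message(phi_i, phi_ij, incoming, j):
    """One max-product message mu_{i->j}.
    phi_i: (Vi,) node potential of X_i; phi_ij: (Vi, Vj) edge potential;
    incoming: dict mapping neighbor k -> message mu_{k->i} of shape (Vi,)."""
    prod = phi_i.copy()
    for k, mu in incoming.items():
        if k != j:                       # exclude the recipient's own message
            prod = prod * mu
    return (prod[:, None] * phi_ij).max(axis=0)   # maximize over x_i per x_j

def belief(phi_i, incoming):
    """Local belief b_i(x_i) = phi_i(x_i) * prod over all incoming messages."""
    b = phi_i.copy()
    for mu in incoming.values():
        b = b * mu
    return b
```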
3 Composing Max-Product Inference on Subnetworks

We now describe the COMPOSE scheme for decomposing the network into hopefully more tractable components, allowing approximate max-product computation over the network as a whole to be performed by iteratively computing max-product in one component and passing approximate max-marginals to the other(s). As the unnormalized probability of an assignment in a Markov network is a product of local potentials, we can partition the potentials in an MRF into an ensemble of k subgraphs G_1, ..., G_k over the same set of nodes V, with associated edges E_1, ..., E_k and sets of factors F_1, ..., F_k. We require that the product of the potentials in these subnetworks maintain the same information as the original MRF. That is, if we originally have a factor φ_i ∈ F and associated factors φ_i^{(1)} ∈ F_1, ..., φ_i^{(k)} ∈ F_k, we must have that

\[
\prod_{l=1}^{k} \phi_i^{(l)}(X_i) = \phi_i(X_i).
\]

One method of partitioning that achieves this equality is simply to select, for each potential φ_i, one subgraph in which it appears unchanged, and set all of the other φ_i^{(l)} to be 1.

Even if MAP inference is intractable in the original network, it may be tractable in each of the sub-networks in the ensemble. But how do we combine the results from MAP inference in an ensemble of networks over the same set of variables? Our approach draws its motivation from the MPBP algorithm, which computes messages that correspond to pseudo-max-marginals over single variables (approximate max-marginals, which do not account for the loops in the network). We begin by conceptually reformulating the ensemble as a set of networks over disjoint sets of variables {X_1^{(l)}, ..., X_n^{(l)}} for l = 1, ..., k; we enforce consistency of the joint assignment using a set of "communicator" variables X_1, ..., X_n, such that each X_i^{(l)} must take the same value as X_i. We assume that each subnetwork is associated with an algorithm that can "read in" pseudo-max-marginals over the communicator variables, and compute pseudo-max-marginals over these variables. More precisely, let μ_{(l)→i} be the message sent from subnetwork l to X_i and μ_{i→(l)} the opposite message. Then we define the COMPOSE message passing scheme as follows:

\[
\mu_{(l) \to i}(x_i) = \max_{x^{(l)}:\, X_i^{(l)} = x_i}\; P_{F_l}(x^{(l)}) \prod_{j \ne i} \mu_{j \to (l)}\big(X_j^{(l)}\big) \tag{1}
\]

\[
\mu_{i \to (l)} = \prod_{l' \ne l} \mu_{(l') \to i}. \tag{2}
\]

That is, each subnetwork computes its local pseudo-max-marginals over each of the individual variables, given, as input, the pseudo-max-marginals over the others. The separate pseudo-max-marginals are integrated via the communicator variables. It is not difficult to see that this message passing scheme is equivalent to a particular scheduling algorithm for max-product belief propagation over the ensemble of networks, assuming that the max-product computation in each of the subnetworks is computed exactly using a black-box subroutine. We note that this message passing scheme is somewhat related to the tree-reweighted max-product (TRW) method of Wainwright et al. [8], where the network distribution is partitioned as a weighted combination of trees, which also communicate pseudo-max-marginals with each other.
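Equation (2) is a simple product over the pseudo-max-marginals produced by the other subnetworks; Eq. (1) is delegated to each subnetwork's black-box max-marginal routine. A sketch of the combination step, assuming the per-subnetwork oracles are given (the data layout below is our own choice, not the paper's):

```python
import numpy as np

def communicator_messages(mu_subnet_to_var):
    """Eq. (2): for each subnetwork l and variable i, the incoming message
    mu_{i->(l)} is the elementwise product of the pseudo-max-marginals
    mu_{(l')->i} from all other subnetworks l' != l.
    mu_subnet_to_var[l][i] is the vector mu_{(l)->i}."""
    k, n = len(mu_subnet_to_var), len(mu_subnet_to_var[0])
    mu_var_to_subnet = [[None] * n for _ in range(k)]
    for l in range(k):
        for i in range(n):
            msg = np.ones_like(mu_subnet_to_var[0][i])
            for lp in range(k):
                if lp != l:
                    msg = msg * mu_subnet_to_var[lp][i]
            mu_var_to_subnet[l][i] = msg
    return mu_var_to_subnet
```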
4 Efficient Computation of Max-Marginals

In this section, we describe two important classes of networks where the MAP problem can be solved efficiently using combinatorial algorithms: matching networks, which can be solved using bipartite matching algorithms; and regular networks, which can be solved using (iterated application of) minimum cut algorithms. We show how the same algorithms can be adapted, at minimal computational cost, to compute not only the single MAP assignment, but also the set of max-marginals. This allows these algorithms to be used as one of our "black boxes" in the COMPOSE framework.

Bipartite matching. Many problems can be well-formulated as maximum-score (or minimum-weight) bipartite matching: We are given a graph G = (V, U), whose nodes are partitioned into disjoint sets V = A ∪ B. In G, each edge (a, b) has one endpoint in A and the other in B, and an associated score c(a, b). A bipartite matching is a subset of the edges W ⊆ U such that each node appears in at most one edge. The notion of a matching can be relaxed to include other types of degree constraints, e.g., constraining certain nodes to appear in at most k edges. The score of the matching is simply the sum of the scores of the edges in W. The matching problem can also be formulated as an MRF, in several different ways. For example, in the degree-1 case (each node in A is matched to one node in B), we can have a variable X_a for each a ∈ A whose possible values are all of the nodes in B. The edge scores in the matching graph are then simply singleton potentials in the MRF, where φ_a(X_a = b) = exp(c(a, b)). Unfortunately, while the costs can be easily encoded in an MRF, the degree constraints on the matching induce a set of pairwise mutual-exclusion potentials on all pairs of variables in the MRF, leading to a fully connected network. Thus, standard methods for MRF inference cannot handle the networks associated with matching problems. Nevertheless, finding the maximum score bipartite matching (with any set of degree constraints) can be accomplished easily using standard combinatorial optimization algorithms (e.g., [6]). However, we also need to find all the max-marginals. Fortunately, we can adapt the standard algorithm for finding a single best matching to also find all of the max-marginals.

A standard solution to the max-matching problem reduces it to a max-weight flow problem, by introducing an additional "source" node that connects to all the nodes in A, and an additional "sink" node that connects to all the nodes in B; the capacity of these edges is the degree constraint of the node (1 for a 1-to-1 matching). We now run a standard max-weight flow algorithm, and define an edge to be in the matching if it bears flow. Standard results show that, if the edge capacities are integers, then the flow too is integral, so that it defines a matching. Let w* be the weight of the flow in the graph. A flow in the graph defines a residual graph, where there is an edge in the graph whose capacity is the amount of flow it can carry relative to the current flow. Thus, for example, if the current solution carries a unit of flow along a particular edge (a, b) in the original graph, the residual graph will have an edge with a unit capacity going in the reverse direction, corresponding to the fact that we can now choose to "eliminate" the flow from a to b. The scores on these inverse edges are also negative, corresponding to the fact that score is lost when we reduce the flow. Our goal now is to find, for each pair (a, b), the score of the optimal matching where we force this pair to be matched. If this pair is matched in the current solution, then the score is simply w*. Otherwise, we simply find the highest-scoring path from b to a in the residual graph. Any edges on this new path from A to B will be included in the new matching; any edges from B to A were included in the old matching, but are not in the new matching because of the augmenting path. This path is the best way of changing the flow so as to force flow from a to b. Letting ω be the weight of this augmenting path, the overall score of the new flow is w* + ω. It follows that the cost of this path is necessarily negative, for otherwise it would have been optimal to apply it to the original flow, improving its score. Thus, we can find the highest-scoring path by simply negating all edge costs and finding the shortest path in the graph. Thus, to compute all of the max-marginals, we simply need to find the shortest path from every node a ∈ A to every node b ∈ B.
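The residual-graph method above yields all max-marginals at the cost of an all-pairs shortest-path computation. As a reference point, the following brute-force sketch computes the same quantities directly: the max-marginal for forcing a to match b equals c(a, b) plus the optimal matching of the remaining nodes. It is quadratically slower than the residual-graph method, but it is easy to check against (a generic sketch using SciPy, not code from the paper).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_max_marginals(C):
    """C[a, b] = score of matching a to b (square score matrix).
    Returns M with M[a, b] = score of the best perfect matching in which
    a is forced to match b.  Brute-force reference: O(n^2) matching calls."""
    n = C.shape[0]
    M = np.empty_like(C, dtype=float)
    for a in range(n):
        for b in range(n):
            rest = np.delete(np.delete(C, a, axis=0), b, axis=1)
            r, c = linear_sum_assignment(-rest)    # negate to maximize score
            M[a, b] = C[a, b] + rest[r, c].sum()
    return M
```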
We can find these paths using the Floyd-Warshall all-pairs-shortest-paths algorithm, which runs in O((n_A + n_B)^3) time, for n_A = |A| and n_B = |B|; or we can run a single-source shortest-path algorithm for each node in B, at a total cost of O(n_B · n_A n_B log(n_A n_B)). By comparison, the cost of solving the initial flow problem is O(n_A^3 log(n_A)).

Minimum Cuts. A very different class of networks that admits an efficient solution is based on the application of a minimum cut algorithm to a graph. At a high level, these networks encode situations where adjacent variables like to take "similar" values. There are many variants of this condition. The simplest variant is applied to pairwise MRFs over binary-valued random variables. In this case, a potential is said to be regular if:

\[
\phi_{ij}(X_i = 1, X_j = 1)\;\phi_{ij}(X_i = 0, X_j = 0) \;\ge\; \phi_{ij}(X_i = 0, X_j = 1)\;\phi_{ij}(X_i = 1, X_j = 0).
\]

For MRFs with only regular potentials, the MAP solution can be found as the minimum cut of a weighted graph constructed from the MRF [9]. This construction can be extended in various ways (see [9] for a survey), including to the class of networks with non-binary variables whose negative log-probability is a convex function [5]. Moreover, for a range of conditions on the potentials, an α-expansion procedure [2], which iteratively applies a min-cut to a series of graphs, can be used to find a solution with guaranteed approximation error relative to the optimal MAP assignment. As above, a single joint assignment does not suffice for our purposes. In recent work, Kohli and Torr [7], studying the problem of confidence estimation in MAP problems, showed how all of the max-marginals in a regular network can be computed using dynamic algorithms for flow computations. Their method also applies to non-binary networks with convex potentials (as in [5]), but not to networks for which α-expansion is used to find an approximate MAP assignment.
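The regularity condition is easy to test for a given binary pairwise potential; the sketch below also illustrates why the mutual-exclusion potentials used in matching networks violate it (the numeric values are hypothetical, chosen by us for illustration):

```python
import numpy as np

def is_regular(phi_ij, tol=1e-12):
    """Regularity (submodularity) of a binary pairwise potential, given as a
    2x2 nonnegative array phi_ij[x_i, x_j]:
    phi(1,1) * phi(0,0) >= phi(0,1) * phi(1,0)."""
    return phi_ij[1, 1] * phi_ij[0, 0] >= phi_ij[0, 1] * phi_ij[1, 0] - tol

# An attractive (Potts-style) potential rewarding agreement is regular...
potts = np.array([[np.e, 1.0], [1.0, np.e]])
assert is_regular(potts)
# ...while a mutual-exclusion potential penalizing agreement is not,
# which is why matching networks fall outside the min-cut-solvable class.
mutex = np.array([[0.1, 1.0], [1.0, 0.1]])
assert not is_regular(mutex)
```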
5 Experimental Results

We evaluate COMPOSE on the image correspondence problem, which is characteristic of matching problems with geometric constraints. We compare against both max-product tree-reparameterization (TRMP) [8] and asynchronous max-product (AMP). The axes along which we compare all algorithms are: the ability to achieve convergence, the time it takes to reach a solution, and the quality (the log of the unnormalized likelihood) of the solution found, in the Markov network that defines the problem. We use standard message damping of 0.3 for the max-product algorithms and a convergence threshold of 10^{-3} for all propagation algorithms. All tests were run on a 3.4 GHz Pentium 4 processor with 2GB of memory.

We focus our experiments on an image correspondence task, where the goal is to find a 1-to-1 mapping between landmarks in two images. Here, we have a set of template points S = {x_1, ..., x_n} and a set T of target points, {x'_1, ..., x'_n}. We encode our MRF with a variable X_i for each marker x_i in the source image, whose value corresponds to its aligned candidate x'_j in the target image. Our MRF contains singleton potentials φ_i, which may encode both local appearance information, so that a marker x_i prefers to be aligned to a candidate x'_j in the target image whose neighborhood looks similar to x_i's, or a distance potential, so that markers x_i prefer to be aligned to candidates x'_j in locations close to their own in the source image. The MRF also contains pairwise potentials {φ_ij} that can encode dependencies between the landmark assignments. In particular, we may want to encode geometric potentials, which enforce a preference for preservation of distance or orientation for pairs of markers x_i, x_j and their assigned targets x'_k, x'_l. Finally, as the goal is to find a 1-to-1 mapping between landmarks in the source and target images, we also encode a set of mutual exclusion potentials over pairs of variables, enforcing the constraint that no two markers are assigned to the same candidate x'_k. Our task is to find the MAP solution in this MRF.

Synthetic Networks. We first experimented with synthetically generated networks that follow the above form. To generate the networks, we first create a source "image" that contains a set of template points S = {x_1, ..., x_n}, chosen by uniformly sampling locations from a two-dimensional plane. Next, the target set of points T = {x'_1, ..., x'_n} is generated by generating one point from each template point x_i, sampling from a Gaussian distribution with mean x_i and a diagonal covariance matrix σ²I. As there was no true local information, the matching (or singleton) potentials for both types of synthetic networks were generated uniformly at random on [0, 1). The "correct" matching point, i.e. the one the template variable generates, was given weight 0.7, ensuring that the correct matching gets a non-negligible weight without making the correspondence too obvious.

We consider two different formulations for the geometric potentials. The first utilizes a minimum spanning tree connecting the points in S, and the second simply a chain. In both cases, we generate pairwise geometric potentials φ_ij(X_i, X_j) that are Gaussian with mean μ = (x_i − x_j) and with standard deviation proportional both to the Euclidean distance between x_i and x_j and to σ. Results for the two constructions were similar, so, due to lack of space, we present results only for the line networks.
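The synthetic generation procedure described above is simple to reproduce; a minimal sketch follows (the side length of the sampling square is our assumption, since the paper does not state it):

```python
import numpy as np

def make_correspondence_problem(n, sigma, side=100.0, seed=0):
    """Synthetic instance: template points uniform on a 2-D square of the
    given side, one Gaussian-perturbed target per template, and singleton
    matching potentials uniform on [0, 1) with the true match set to 0.7."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(0.0, side, size=(n, 2))        # template points x_i
    T = S + rng.normal(0.0, sigma, size=(n, 2))    # targets x'_i ~ N(x_i, sigma^2 I)
    phi = rng.uniform(0.0, 1.0, size=(n, n))       # phi[i, j]: match i -> j
    np.fill_diagonal(phi, 0.7)                     # "correct" match weight
    return S, T, phi
```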
Fig. 1(b) shows the average difference in log scores between each algorithm's result and the average log score of AMP, as a function of the number of variables in the networks. COMPOSE clearly outperforms the other algorithms, gaining a larger score margin as the size of the problem increases. In the synthetic tests we ran for (b) and (c), COMPOSE achieved the best score in over 90% of cases. This difference was greatest in more difficult problems, where greater variance in the locations of candidates in the target image makes it harder to achieve a 1-to-1 correspondence. In Fig. 1(c), we further examine scores from individual runs, comparing COMPOSE directly to the strongest competitor, TRMP. COMPOSE consistently outperforms TRMP and never loses by more than a small margin; COMPOSE often achieves scores on the order of 2 to 40 times better than those achieved by TRMP. Interestingly, there appears not to be a strong correlation between relative performance and whether or not the algorithms converged.

Fig. 1(d) examines the scores obtained by COMPOSE and TRMP on intermediate assignments reached during the inference process, for large (100-variable) problems. Though COMPOSE does not reach convergence in messages, it quickly takes large steps to a very good score on the large networks. TRMP also takes larger steps near the beginning, but it is less consistent and it never achieves a score as high as COMPOSE. This indicates that COMPOSE scales better than TRMP to larger problems. This behavior may also help to explain the results from (c), where we see that, even when COMPOSE does not converge in messages, it is still able to achieve good scores. Overall, these results indicate that we can use intermediate results from COMPOSE even before convergence.

Real Networks. We now consider real networks generated for the task of electron microscope tomography: the three-dimensional reconstruction of cell and organelle structures based on a series of images obtained at different tilt angles. The problem is to localize and track markers in images across time, and it is a difficult one; traditional methods like cross-correlation and graph matching often result in many errors. We can encode the problem, however, as an MRF, as described above. In this case, the geometric constraints were more elaborate, and it was not clear how to construct a good set of spanning trees. We therefore used a variant of AMP called residual max-product (RMP) [3] that schedules messages in an informed way over the network; in this work and others, we have found this variant to achieve better performance than TRMP on difficult networks. Fig. 2(a) shows a source set of markers in an electron tomography image; Fig. 2(b) shows the correspondence our algorithm achieves, and Fig. 2(c) shows the correspondence that RMP achieves. Note that, in Fig. 2(c), several points from the source image are assigned to the same point in the target image, whereas COMPOSE does not have the same failing. Of the twelve pairs of images we tested, RMP failed to converge on 11/12 within 20 minutes, whereas COMPOSE failed to converge on only two of the twelve. Because the network structure was difficult for loopy approximate methods, we ran experiments where we replaced the mutual exclusion constraints with soft location constraints on individual landmarks; while convergence improved, actual performance was inferior. Fig. 2(d) shows the scores for the different methods we use to solve these problems.
Using RMP as the baseline score, we see the difference in scores for the different methods. It is clear that, though RMP and TRMP run on a simpler network with soft mutual exclusion constraints are competitive with, and even very slightly better than, COMPOSE on simple problems, as problems become more difficult (more variance in target images), COMPOSE clearly dominates. We also compare COMPOSE to simply finding the best matching of markers to candidates without any geometric information; COMPOSE dominates this approach, never scoring worse than the matching.

Figure 2: (a) Labeled markers in a source electron microscope image. (b) Candidates COMPOSE assigns in the target image. (c) Candidates RMP assigns in the target image (note the Xs through incorrect or duplicate assignments). (d) A score comparison of COMPOSE, matching, and RMP on the image correspondences.

6 Discussion

In this paper, we have presented COMPOSE, an algorithm that exploits the presence of tractable substructures in MRFs within the context of max-product belief propagation. Motivated by the existence of very efficient algorithms to extract all max-marginals from combinatorial substructures, we presented a variation of belief propagation methods that uses the max-marginals to take large steps in inference. We also demonstrated that COMPOSE significantly outperforms state-of-the-art methods on challenging synthetic and real problems.

We believe that one of the major reasons that belief propagation algorithms have difficulty with the augmented matching problems described above is that the mutual exclusion constraints create a phenomenon where small changes to local regions of the network can have strong effects on distant parts of the network, and it is difficult for belief propagation to propagate this information adequately. Some existing variants of belief propagation (such as TRMP) attempt to speed the exchange of information across opposing sides of the network by means of intelligent message scheduling. Even intelligently-scheduled message passing is limited, however, as messages are inherently local. If there are oscillations across a wide diameter, due to global interactions in the network, they might contribute significantly to the poor performance of BP algorithms. COMPOSE slices the network along a different axis, using subnetworks that are global in nature but that do not have all of the information about any subset of variables. If the component of the network that is difficult for belief propagation can be encoded in an efficient special-purpose subnetwork such as a matching, then we have a means of effectively propagating global information. We conjecture that COMPOSE's ability to pass information globally contributes both to its improved convergence and to the better results it obtains even without convergence. Some very recent work explores the case where a regular MRF contains terms that are not regular [14, 13], but this work is largely specific to certain types of "close-to-regular" MRFs. It would be interesting to compare COMPOSE and these methods on a range of networks containing regular subgraphs. Our work is also related to work on solving the quadratic assignment problem (QAP) [10], a class of problems of which our generalized matching networks are a special case.
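For reference, the pure-matching baseline compared against above (markers assigned to candidates using only the singleton potentials, with no geometric terms) is an instance of linear assignment, solvable exactly with the Hungarian algorithm. A minimal sketch using SciPy, which is our tooling choice and not from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_matching(phi):
    """Best 1-to-1 assignment of markers to candidates, maximizing the
    product of singleton potentials phi[i, j] (no geometric terms)."""
    cost = -np.log(phi + 1e-12)  # maximize product <=> minimize -log
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```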
Standard algorithms for QAP include simulated annealing, tabu search, branch and bound, and ant algorithms [16]; the latter have some of the flavor of message passing, walking trails over the graph representing a QAP and iteratively updating scores of different assignments to the QAP. To the best of our knowledge, however, none of these previous methods attempts to use a combinatorial algorithm as a component in a general message-passing algorithm, thereby exploiting the structure of the pairwise constraints.

There are many interesting directions arising from this work. It would be interesting to perform a theoretical analysis of the COMPOSE approach, perhaps providing conditions under which it is guaranteed to provide a certain level of approximation. A second major direction is the identification of other tractable components within real-world MRFs that one can solve using combinatorial optimization methods, or other efficient approaches. For example, the constraint satisfaction community has studied several special-purpose constraint types that can be solved more efficiently than with generic methods [4]; it would be interesting to explore whether these constraints arise within MRFs and, if so, whether the special-purpose procedures can be integrated into the COMPOSE framework. Overall, we believe that real-world MRFs often contain large structured sub-parts that can be solved efficiently with special-purpose algorithms; the combination of special-purpose solvers within a general inference scheme may allow us to solve problems that are intractable to any current method.

Acknowledgments

This research was supported by the Defense Advanced Research Projects Agency (DARPA) under the Transfer Learning Program. We also thank David Karger for useful conversations and insights.

References

[1] D. Anguelov, D. Koller, P. Srinivasan, S. Thrun, H. Pang, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In NIPS, 2004.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In ICCV, 1999.
[3] G. Elidan, I. McGraw, and D. Koller. Residual belief propagation. In UAI, 2006.
[4] J. Hooker, G. Ottosson, E. S. Thorsteinsson, and H. J. Kim. A scheme for unifying optimization and constraint satisfaction methods. Knowledge Engineering Review, 2000.
[5] H. Ishikawa. Exact optimization for Markov random fields with convex priors. PAMI, 2003.
[6] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005.
[7] P. Kohli and P. Torr. Measuring uncertainty in graph cut solutions: efficiently computing min-marginal energies using dynamic graph cuts. In ECCV, 2006.
[8] V. Kolmogorov and M. Wainwright. On the optimality of tree-reweighted max-product message-passing. In UAI, 2005.
[9] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? In ECCV, 2002.
[10] E. Lawler. The quadratic assignment problem. Management Science, 1963.
[11] K. Murphy and Y. Weiss. Loopy belief propagation for approximate inference: an empirical study. In UAI, 1999.
[12] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[13] A. Raj, G. Singh, and R. Zabih. MRF's for MRI's: Bayesian reconstruction of MR images via graph cuts. In CVPR, 2006. To appear.
[14] C. Rother, S. Kumar, V. Kolmogorov, and A. Blake. Digital tapestry. In CVPR, 2005.
[15] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 2000.
[16] T. Stützle and M. Dorigo.
ACO algorithms for the quadratic assignment problem. In New Ideas in Optimization, 1999.
[17] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields. In ECCV, 2006.
[18] B. Taskar, V. Chatalbashev, and D. Koller. Learning associative Markov networks. In ICML, 2004.
[19] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: a large margin approach. In ICML, 2005.
[20] Y. Weiss and W. Freeman. On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47, 2001.
Conditional mean field

Nando de Freitas
Department of Computer Science, University of British Columbia
Vancouver, BC, Canada V6T 1Z4
nando@cs.ubc.ca

Peter Carbonetto
Department of Computer Science, University of British Columbia
Vancouver, BC, Canada V6T 1Z4
pcarbo@cs.ubc.ca

Abstract

Despite all the attention paid to variational methods based on sum-product message passing (loopy belief propagation, tree-reweighted sum-product), these methods are still bound to inference on a small set of probabilistic models. Mean field approximations have been applied to a broader set of problems, but the solutions are often poor. We propose a new class of conditionally-specified variational approximations based on mean field theory. While not usable on their own, combined with sequential Monte Carlo they produce guaranteed improvements over conventional mean field. Moreover, experiments on a well-studied problem, inferring the stable configurations of the Ising spin glass, show that the solutions can be significantly better than those obtained using sum-product-based methods.

1 Introduction

Behind all variational methods for inference in probabilistic models lies a basic principle: treat the quantities of interest, which amount to moments of the random variables, as the solution to an optimization problem obtained via convex duality. Since optimizing the dual is rarely an amelioration over the original inference problem, various strategies have arisen out of statistical physics and machine learning for making principled (and unprincipled) approximations to the objective. One such class of techniques, mean field theory, requires that the solution define a distribution that factorizes in such a way that the statistics of interest are easily derived. Mean field remains a popular tool for statistical inference, mainly because it applies to a wide range of problems. As remarked by Yedidia in [17], however, mean field theory often imposes unrealistic or questionable factorizations, leading to poor solutions. Advances have been made in improving the quality of mean field approximations [17, 22, 26], but their applicability remains limited to specific models. Bethe-Kikuchi approximations overcome some of the severe restrictions on factorizability by decomposing the entropy according to a junction graph [1], for which it is well established that generalized belief propagation updates converge to the stationary points of the resulting optimization problem (provided they converge at all). Related variational approximations based on convex combinations of tree-structured distributions [24] have the added advantage that they possess a unique global optimum (by contrast, we can only hope to discover a local minimum of the Bethe-Kikuchi and mean field objectives). However, both of these methods rely on tractable sum-product messages, and hence are limited to Gaussian Markov random fields or discrete random variables. Expectation propagation projections and Monte Carlo approximations to the sum-product messages get around these limitations, but can be unsuitable for dense graphs or can introduce extraordinary computational costs [5, 23]. Thus, there still exist factorized probabilistic models, such as sigmoid belief networks [21] and latent Dirichlet allocation [5], for which mean field remains to date the tractable approximation of choice.

Several Monte Carlo methods have been proposed to correct for the discrepancy between the factorized variational approximations and the target distribution.
These methods include importance sampling [8, 14] and adaptive Markov chain Monte Carlo (MCMC) [6]. However, none of these techniques scales well to general, high-dimensional state spaces, because the variational approximations tend to be too restrictive when used as a proposal distribution. This is corroborated by the experimental results in those papers, as well as by theoretical results [20].

We propose an entirely new approach that overcomes the problems of the aforementioned methods by constructing a sequence of variational approximations that converges to the target distribution. To accomplish this, we derive a new class of conditionally-specified mean field approximations, and use sequential Monte Carlo (SMC) [7] to obtain samples from them. SMC acts as a mechanism to migrate particles from an easy-to-sample distribution (naive mean field) to a difficult-to-sample one (the distribution of interest), through a sequence of artificial distributions. Each artificial distribution is a conditional mean field approximation, designed in such a way that it is at least as sensible as its predecessor, because it recovers dependencies left out by mean field. Sec. 4 explains these ideas thoroughly.

The idea of constructing a sequence of distributions has a strong tradition in the literature, going back to work on simulating the behaviour of polymer chains [19] and counting and integration problems [12]. Recent advances in stochastic simulation have allowed practitioners to extend these ideas to general probabilistic inference [7, 11, 15]. However, very little is known as to how to come up with a good sequence of distributions. Tempering is perhaps the most widely used strategy, due to its ease of implementation and intuitive appeal. At early stages, high global temperatures smooth the modes and allow easy exploration of the state space. Afterward, the temperature is progressively cooled until the original distribution is recovered. The problem is that the variance of the importance weights tends to degenerate around a system's critical range of temperatures, as observed in [9]. An entirely different approach is to remove constraints (or factors) from the original model, then incrementally reintroduce them. This has been a fruitful approach for approximate counting [12], simulation of protein folding, and inference in the Ising model [9]. If, however, a reintroduced constraint has a large effect on the distribution, the particles may again rapidly deteriorate.

We limit our study to the Ising spin glass model [16]. Ernst Ising developed his model in order to explain the phenomenon of "spontaneous magnetization" in magnets. Here, we use it as a test bed to investigate the viability of our proposed algorithm. Our intent is not to design an algorithm tuned to sampling the states of the Ising model, but rather to tackle factorized graphical models with arbitrary potentials. Conditional mean field raises many questions, and since we can only hope to answer some in this study, the Ising model represents a respectable first step. We hint at how our ideas might generalize in Sec. 6. The next two sections serve as background for the presentation of our main contribution in Sec. 4.

2 Mean field theory

In this study, we restrict our attention to random vectors X = (X_1, ..., X_n)ᵀ, with possible configurations x = (x_1, ..., x_n)ᵀ ∈ Ω, that admit a distribution belonging to the standard exponential family [25]. A member of this family has a probability density of the form

p(x; θ) = exp{ θᵀφ(x) − Φ(θ) },   (1)
where θ is the canonical vector of parameters, and φ(x) is the vector of sufficient statistics [25]. The log-partition function Φ(θ) ensures that p(x; θ) defines a valid probability density, and is given by Φ(θ) = log ∫ exp{θᵀφ(x)} dx. Denoting by E_π{f(X)} the expected value of a function f(x) with respect to a distribution π, Jensen's inequality states that f(E_π{X}) ≤ E_π{f(X)} for any convex function f(x) and distribution π on X. Using the fact that −log(x) is convex, we obtain the variational lower bound

Φ(θ) = log E_{p(·;λ)}{ exp{θᵀφ(X)} / p(X; λ) } ≥ θᵀμ(λ) − ∫ p(x; λ) log p(x; λ) dx,   (2)

where the mean statistics are defined by μ(λ) ≡ E_{p(·;λ)}{φ(X)}. The second term on the right-hand side of (2) is the Boltzmann-Shannon entropy of p(x; λ), which we denote by H(λ). Clearly, some lower bounds of the form (2) are better than others, so the optimization problem is to find a set of parameters λ that leads to the tightest bound on the log-partition function. This defines the variational principle. We emphasize that this lower bound holds for any choice of λ. A more rigorous treatment follows from analyzing the conjugate of the convex, differentiable function Φ(θ) [25]. As it is presented here, the variational principle is of little practical use because no tractable expressions exist for the entropy and mean statistics. There do, however, exist particular choices of the variational parameters λ for which it is possible to compute them both. We shall examine one particular set of choices, naive mean field, in the context of the Ising spin glass model.

At each site i ∈ {1, ..., n}, the random variable X_i is defined to be x_i = +1 if the magnetic dipole is in the "up" spin position, or x_i = −1 if it is "down". Each scalar θ_ij defines the interaction between sites i and j. Setting θ_ij > 0 causes attraction between spins, and θ_ij < 0 induces repulsion. The scalars θ_i define the effect of the external magnetic field on the energy of the system. We use the undirected labelled graph G = (V, E), where V = {1, ..., n}, to represent the conditional independence structure of the probability measure (there is no edge between i and j if and only if X_i and X_j are conditionally independent given the values at all other points of the graph). Associating singleton factors with the nodes of G and pairwise factors with its edges, and setting the entries of the sufficient statistics vector to be x_i, for all i ∈ V, and x_i x_j, for all (i, j) ∈ E, we can write the probability density as

p(x; θ) = exp{ Σ_{i∈V} θ_i x_i + Σ_{(i,j)∈E} θ_ij x_i x_j − Φ(θ) }.   (3)

The corresponding variational lower bound on the log-partition function Φ(θ) then decomposes as

F(λ) ≡ Σ_{i∈V} θ_i μ_i(λ) + Σ_{(i,j)∈E} θ_ij μ_ij(λ) + H(λ),   (4)

where μ_i(λ) and μ_ij(λ) are the expectations of single spins i and pairs of spins (i, j), respectively. Naive mean field restricts the variational parameters λ to belong to {λ | λ_ij = 0 for all (i, j) ∈ E}. We can compute the lower bound (4) for any λ belonging to this subset because we have tractable expressions for the mean statistics and entropy. For the Ising spin glass, the mean statistics are

μ_i(λ) ≡ ∫ x_i p(x; λ) dx = tanh(λ_i),   (5)
μ_ij(λ) ≡ ∫ x_i x_j p(x; λ) dx = μ_i(λ) μ_j(λ),   (6)

and the entropy is derived to be

H(λ) = − Σ_{i∈V} [(1 + μ_i(λ))/2] log[(1 + μ_i(λ))/2] − Σ_{i∈V} [(1 − μ_i(λ))/2] log[(1 − μ_i(λ))/2].   (7)

The standard way to proceed [17, 25] is to derive coordinate ascent updates by equating the derivatives ∂F/∂μ_i to zero and solving for μ_i.
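For the Ising model, this yields the familiar fixed point μ_i = tanh(θ_i + Σ_j θ_ij μ_j). A minimal sketch of the resulting coordinate ascent (a standard construction; the function name and stopping rule are our own choices, not the paper's):

```python
import numpy as np

def naive_mean_field(theta_i, theta_ij, n_sweeps=200, tol=1e-8):
    """Naive mean field for an Ising model with fields theta_i, shape
    (n,), and symmetric couplings theta_ij, shape (n, n), with zero
    diagonal. Returns the mean statistics mu, with mu[i] = tanh(lam[i])."""
    mu = np.zeros_like(theta_i)
    for _ in range(n_sweeps):
        mu_old = mu.copy()
        for i in range(len(mu)):
            # Coordinate ascent on the lower bound F of (4):
            mu[i] = np.tanh(theta_i[i] + theta_ij[i] @ mu)
        if np.max(np.abs(mu - mu_old)) < tol:
            break
    return mu
```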
Since the variables μ_i must be valid mean statistics, they are constrained to lie within an envelope known as the marginal polytope [25]. Alternatively, one can solve the optimization problem with respect to the unconstrained variational parameters λ. Since it is not possible to obtain the fixed-point equations by isolating each λ_i, one can instead easily derive expressions for the gradient ∇F(λ) and Hessian ∇²F(λ) and run a nonlinear optimization routine. This approach, as we will see, is necessary for optimizing the conditional mean field objective.

3 Sequential Monte Carlo

Consider a sequence of two distributions, π(x) and π*(x), where the second represents the target. Assuming familiarity with importance sampling, this will be sufficient to explain the key concepts underlying SMC, and does not overwhelm the reader with subscripts. See [7] for a detailed description. In the first step, samples x^(s) ~ π are drawn from some proposal density q(x) and assigned importance weights w(x) = π(x)/q(x). In the second step, a Markov transition kernel K*(x′ | x) shifts each sample towards the target, and the importance weights w̃(x, x′) compensate for any failure to do so. In effect, the second step consists of extending the path of each particle onto the joint space Ω × Ω. The unbiased importance weights on the joint space are given by

w̃(x, x′) = π̃(x, x′)/q̃(x, x′) = [L(x | x′) π*(x′)] / [K*(x′ | x) π(x)] · w(x),   (8)

where π̃(x, x′) = L(x | x′) π*(x′) is the artificial distribution over the joint space, q̃(x, x′) = K*(x′ | x) q(x) is the corresponding importance distribution, and the "backward-in-time" kernel L(x | x′) is designed so that it admits π(x) as its invariant distribution. Our expectation is that K*(x′ | x) has invariant distribution π*(x), though this is not required. To prevent potential particle degeneracy in the marginal space, we adopt the standard stratified resampling algorithm [13].

Choice of backward-in-time kernel. Mean field tends to be overconfident in its estimates (although not necessarily so). Loosely speaking, this means that if π(x) were a mean field approximation, then it would likely have lighter tails than the target distribution π*(x). If we were to use a suboptimal backward kernel [7, Sec. 3.3.2.3], the importance weights would simplify to

w̃(x, x′) = [π*(x)/π(x)] · w(x).   (9)

Implicitly, this is the choice of backward kernel made in earlier sequential frameworks [11, 15]. Since the mean field approximation π(x) might very well fail to "dominate" the target π*(x), the expression (9) risks having unbounded variance. This is a problem because the weights may change abruptly from one iteration to the next, or give too much importance to too few values x [18]. Instead, Del Moral et al. suggest approximating the optimal backward-in-time kernel [7, Sec. 3.3.2.1] by

L(x | x′) = K*(x′ | x) π(x) / ∫ K*(x′ | x) π(x) dx.   (10)

It offers some hope because the resulting importance weights on the joint space, following (8), are

w̃(x, x′) = π*(x′) / [∫ K*(x′ | x) π(x) dx] · w(x).   (11)

If the transition kernel increases the mass of the proposal in regions where π(x) is weak relative to π*(x), the backward kernel (10) will rectify the problems caused by an overconfident proposal.

Choice of Markov transition kernel. The drawback of the backward kernel (10) is that it limits the choice of transition kernel K*(x′ | x), a crucial ingredient in a successful SMC simulation.
For instance, we cannot use the Metropolis-Hastings algorithm, because its transition kernel involves an integral that does not admit a closed form [18]. One transition kernel which fits our requirements and is widely applicable is a mixture of kernels based on the random-scan Gibbs sampler [18]. Denoting by δ_y(x) the Dirac measure at location y, the transition kernel with invariant distribution π*(x) is

K*(x′ | x) = Σ_k γ_k π*(x′_k | x_{−k}) δ_{x_{−k}}(x′_{−k}),   (12)

where π(x_k | x_{−k}) is the conditional density of x_k given the values at all other sites, and γ_k is the probability of shifting the samples according to the Gibbs kernel at site k. Following (11) and the identity for conditional probability, we arrive at the expression for the importance weights,

w̃(x, x′) = [π*(x′)/π(x′)] [ Σ_k γ_k π*(x′_k | x′_{−k}) / π(x′_k | x′_{−k}) ]^{−1} · w(x).   (13)

Normalized estimator. For almost all problems in Bayesian analysis (and certainly the one considered in this paper), the densities are only known up to a normalizing constant. That is, only f(x) and f*(x) are known pointwise, where π(x) = f(x)/Z and π*(x) = f*(x)/Z*. The normalized importance sampling estimator [18] yields (asymptotically unbiased) importance weights w̄(x, x′) ∝ w̃(x, x′), where the unnormalized importance weights w̃(x, x′) in the joint space remain the same as (13), except that we substitute f(x) for π(x), and f*(x) for π*(x). The normalized estimator can recover a Monte Carlo estimate of the normalizing constant Z* via the recursion

Z* ≈ Z Σ_s w̃^(s),   (14)

provided we already have a good estimate of Z [7].

4 Conditional mean field

We start with a partition R (equivalence relation) of the set of vertices V. Elements of R, which we denote with the capital letters A and B, are disjoint subsets of V. Our strategy is to come up with a good naive mean field approximation to the conditional density p(x_A | x_{−A}; θ) for every equivalence class A ∈ R, and then again for every configuration x_{−A}. Here, we denote by x_A the configuration x restricted to the set A ⊆ V, and by x_{−A} the restriction of x to V \ A. The crux of the matter is that, for any point λ, the functions p(x_A | x_{−A}; λ) only represent valid conditional densities if they correspond to some unique joint, as discussed in [2]. Fortunately, under the Ising model the terms p(x_A | x_{−A}; λ) represent valid conditionals for any λ. What we have is a slight generalization of the auto-logistic model [3], for which the joint is always known. As noted by Besag, "although this is derived classically from thermodynamic principles, it is remarkable that the Ising model follows necessarily as the very simplest non-trivial binary Markov random field" [4].

Conditional mean field forces each conditional p(x_A | x_{−A}; λ) to decompose as a product of marginals p(x_i | x_{−A}; λ), for all i ∈ A. As a result, λ_ij must be zero for every edge (i, j) ∈ E(A), where we define E(A) ≡ {(i, j) | i ∈ A, j ∈ A} to be the set of edges contained by the vertices in subset A. Notice that we have a set of free variational parameters λ_ij defined on the edges (i, j) that straddle subsets of the partition. Formally, these are the edges that belong to C_R ≡ {(i, j) | (i, j) ∉ E(A) for all A ∈ R}. We call C_R the set of "connecting edges". Our variational formulation consists of competing objectives, since the conditionals p(x_A | x_{−A}; λ) share a common set of parameters. We formulate the final objective function as a linear combination of conditional objectives.
A conditional mean field optimization problem with respect to graph partition R and linear weights σ is of the form

maximize   F_{R,σ}(λ) ≡ Σ_{A∈R} Σ_{x_{N(A)}} σ_A(x_{N(A)}) F_A(λ, x_{N(A)})   (15)
subject to λ_ij = 0, for all (i, j) ∈ E \ C_R.

We extend the notion of neighbours to sets, so that N(A) is the Markov blanket of A. The nonnegative scalars σ_A(x_{N(A)}) are defined for every equivalence class A ∈ R and configuration x_{N(A)}. Each conditional objective F_A(λ, x_{N(A)}) represents a naive mean field lower bound on the log-partition function of the conditional density p(x_A | x_{−A}; θ) = p(x_A | x_{N(A)}; θ). For the Ising model, F_A(λ, x_{N(A)}) follows from the exact same steps used in the derivation of the naive mean field lower bound in Sec. 2, except that we replace the joint by a conditional. We obtain the expression

F_A(λ, x_{N(A)}) = Σ_{i∈A} θ_i μ_i(λ, x_{N(A)}) + Σ_{(i,j)∈E(A)} θ_ij μ_ij(λ, x_{N(A)}) + Σ_{i∈A} Σ_{j∈N(i)∩N(A)} θ_ij x_j μ_i(λ, x_{N(A)}) + H_A(λ, x_{N(A)}),   (16)

with the conditional mean statistics for i ∈ A, j ∈ A given by

μ_i(λ, x_{N(A)}) ≡ ∫ x_i p(x_A | x_{N(A)}; λ) dx = tanh( λ_i + Σ_{j∈N(i)∩N(A)} λ_ij x_j ),   (17)
μ_ij(λ, x_{N(A)}) ≡ ∫ x_i x_j p(x_A | x_{N(A)}; λ) dx = μ_i(λ, x_{N(A)}) μ_j(λ, x_{N(A)}).   (18)

The entropy is identical to (7), with the mean statistics replaced by their conditional counterparts. Notice the appearance of the new terms in (16); these terms account for the interaction between the random variables on the border of the partition. We can no longer optimize λ following the standard approach; we cannot treat the μ_i(λ, x_{N(A)}) as independent variables for all x_{N(A)}, as the solution would no longer define an Ising model (or even a valid probability density, as we discussed). Instead, we optimize with respect to the parameters λ, taking the derivatives ∇F_{R,σ}(λ) and ∇²F_{R,σ}(λ). A small sketch of the conditional mean statistics (17) appears below.

We have yet to address the question: how should the scalars σ be selected? It stands to reason that we should place greater emphasis on those conditionals that are realised more often, and set σ_A(x_{N(A)}) ∝ p(x_{N(A)}; θ). Of course, these probabilities aren't available! Equally problematic is the fact that (15) may involve nearly as many terms as there are possible worlds, hence offering little improvement over the naive solution. As it turns out, a greedy choice resolves both issues. Supposing that we are at some intermediate stage of the SMC algorithm (see Sec. 4.1), a greedy but not unreasonable choice is to set σ_A(x_{N(A)}) to be the current Monte Carlo estimate of the marginal p(x_{N(A)}; θ),

σ_A(x_{N(A)}) = Σ_s w^(s) δ_{x^(s)_{N(A)}}(x_{N(A)}).   (19)

Happily, the number of terms in (15) is now on the order of the number of particles. Unlike standard naive mean field, conditional mean field optimizes over the pairwise interactions λ_ij defined on the connecting edges (i, j) ∈ C_R. In our study, we fix these parameters to λ_ij = θ_ij. This choice is convenient for two reasons. First, the objective is then separable over the subsets of the partition. Second, the conditional objective of a singleton subset has a unique maximum at λ_i = θ_i, so any solution to (15) is guaranteed to recover the original distribution when |R| = n.

4.1 The conditional mean field algorithm

We propose an SMC algorithm that produces progressively refined particle estimates of the mean statistics, in which conditional mean field acts in a supporting role. The initial SMC distribution is obtained by solving (15) for R = {V}, which amounts to the mean field approximation derived in Sec. 2.
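For concreteness, the conditional mean statistics (17) can be computed by masking out the within-subset couplings (which the constraint in (15) forces to zero) and absorbing the boundary spins into an effective field. A minimal sketch, under our own representation assumptions:

```python
import numpy as np

def conditional_mean_stats(lam_i, lam_ij, x, in_A):
    """Conditional mean statistics (17) for the sites in subset A of an
    Ising model, holding the boundary spins x_{N(A)} fixed.
    lam_i: variational fields, shape (n,).
    lam_ij: symmetric couplings, shape (n, n); entries within E(A) are
            constrained to zero, so only boundary terms contribute.
    x: full +/-1 configuration, shape (n,).
    in_A: boolean mask selecting the subset A."""
    outside = ~in_A
    # Effective field on each site: lam_i plus the boundary terms
    # sum_j lam_ij * x_j over neighbours j outside A, as in (17).
    h = lam_i + lam_ij[:, outside] @ x[outside]
    mu = np.tanh(h)
    return np.where(in_A, mu, 0.0)
```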
In subsequent steps, we iteratively solve (15), update the estimates of the mean statistics by reweighting (see (20)) and occasionally resampling the particles, and then split the partition, repeating until we cannot split it any more, at which point |R| = n and we recover the target p(x; θ). It is easy to draw samples from the initial fully-factorized distribution. It is also easy to compute its log-partition function, as Φ(λ) = Σ_{i∈V} log(2 cosh(λ_i)). Note that this estimate is not a variational lower bound.

Figure 1: The graphs on the left depict the Markov properties of the conditional mean field approximations in steps 1 to 4. Graph #4 recovers the target. In the right plot, the solid line is the evolution of the estimate of the log-partition function in SMC steps 1 to 4. The dashed line is the true value.

Let us now suppose we are at some intermediate step in the algorithm. We currently have a particle estimate of the R-partition conditional mean field approximation p(x; λ), with samples x^(s) and marginal importance weights w^(s). To construct the next artificial distribution p(x; λ′) in the sequence, we choose a finer partitioning of the graph, R′, set the weights σ′ according to (19), and use a nonlinear solver to find a local minimum λ′ of (15). The solver is initialized to λ′_i = λ_i. We require that the new graph partition satisfies that, for every B ∈ R′, B ⊆ A for some A ∈ R. In this manner, we ensure that the sequence is progressing toward the target (provided R ≠ R′), and that it is always possible to evaluate the importance weights. It is not understood how to tractably choose a good sequence of partitions, so we select them in an arbitrary manner. Next, we use the random-scan Gibbs sampler (12) to shift the particles toward the new distribution, where the Gibbs sites k correspond to the subsets B ∈ R′. We set the mixture probabilities of the Markov transition kernel to γ_B = |B|/n. Following (13), the expression for the unnormalized importance weights is

w̃(x, x′) = [ exp{ Σ_i λ′_i x′_i + Σ_{(i,j)} λ′_ij x′_i x′_j } / exp{ Σ_i λ_i x′_i + Σ_{(i,j)} λ_ij x′_i x′_j } ] [ Σ_{B∈R′} γ_B Π_{i∈B} π(x′_i | x′_{N(B)}; λ′) / π(x′_i | x′_{N(A)}; λ) ]^{−1} · w(x),   (20)

where the single-site conditionals are π(x_i | x_{N(A)}; λ) = (1 + x_i μ_i(λ, x_{N(A)}))/2, and A ∈ R is the unique subset containing B ∈ R′. The new SMC estimate of the log-partition function is Φ(λ′) ≈ Φ(λ) + log Σ_s w̃^(s). To obtain the particle estimate of the new distribution, we normalize the weights w̄^(s) ∝ w̃^(s), assign the marginal importance weights w^(s) ← w̄^(s), and set x^(s) ← (x′)^(s). We are now ready to move to the next iteration. Let us look at a small example to see how this works.

Example. Consider an Ising model with n = 4 and parameters θ_{1:4} = (4, 3, −5, −2)/10, θ_13 = θ_24 = θ_34 = +1/2 and θ_12 = −1/2. We assume we have enough particles to recover the distributions almost perfectly. Setting R = {{1, 2, 3, 4}}, the first artificial distribution is the naive mean field solution λ_{1:4} = (0.09, 0.03, −0.68, −0.48), with Φ(λ) = 3.10. Knowing that the true mean statistics are μ_{1:4} = (0.11, 0.07, −0.40, −0.27), and that Var(X_i) = 1 − μ_i², it is easy to see that naive mean field largely underestimates the variance of the spins. In step 2, we split the partition into R = {{1, 2}, {3, 4}}, and the new conditional mean field approximation is given by λ_{1:4} = (0.39, 0.27, −0.66, −0.43), with potentials λ_13 = θ_13, λ_24 = θ_24 on the connecting edges C_R. The second distribution recovers the two dependencies between the subsets, as depicted in Fig. 1.
Step 3 then splits the subset {1, 2}, and we get λ = (0.40, 0.30, −0.64, −0.42) by setting σ according to the weighted samples from step 2. Notice that λ_1 = θ_1, λ_2 = θ_2. Step 4 recovers the original distribution, at which point the estimate of the log-partition function comes close to the exact solution, as shown in Fig. 1. In this example, Φ(λ) happens to underestimate Φ(θ), but in other examples we may get overestimates.

The random-scan Gibbs sampler can mix poorly, especially on a fine graph partition. Gradually changing the parameters with tempered artificial distributions [7, Sec. 2.3.1], p(x; λ)^{1−β} p(x; λ′)^{β}, gives the transition kernel more opportunity to migrate the samples correctly to the next distribution. To optimize (15), we used a stable modification of Newton's method that maintains a quadratic approximation to the objective with a positive definite Hessian. In light of our experiences, a better choice might have been to sacrifice the quadratic convergence rate for a limited-memory Hessian approximation or conjugate gradient; the optimization routine was the computational bottleneck on dense graphs. Even though the solver is executed at every iteration of SMC, the separability of the objective (15) means that the computational expense decreases significantly at every iteration. To our knowledge, this is the only SMC implementation in which the next distribution in the sequence is constructed dynamically according to the particle approximation from the previous step.

Figure 2: (a) Estimate of the 12 × 12 grid log-partition function for each iteration of SMC. (c) Same, for the fully-connected graph with 26 nodes; we omitted the tree-reweighted upper bound because it is way off the map. Note that these plots will vary slightly for each simulation. (b) Average error of the mean statistics according to the hot coupling (HC), conditional mean field (CMF), Bethe-Kikuchi variational approximation (B-K), and tree-reweighted upper bound (TRW) estimates. The maximum possible average error is 2. For the HC and CMF algorithms, 95% of the estimates fall within the shaded regions according to a sample of 10 simulations.

5 Experiments

We conduct experiments on two Ising models, one defined on a 12 × 12 grid, and the other on a fully-connected graph with 26 nodes. The model sizes approach the limit of what we can compute exactly for the purposes of evaluation. The magnetic fields are generated by drawing each θ_i uniformly from [−1, 1], and the couplings by drawing each θ_ij uniformly from {−1/2, +1/2}. Both models exhibit strong and conflicting pairwise interactions, so it is expected that rudimentary MCMC methods such as Gibbs sampling will get "stuck" in local modes [9]. Our algorithm settings are as follows. We use 1000 particles (as with most particle methods, the running time is proportional to the number of particles), and we temper across successive distributions with a linear inverse-temperature schedule of length 100. The particles are resampled when the effective sample size [18] drops below 1/2. We compare our results with the "hot coupling" SMC algorithm described in [9] (appropriately, using the same algorithm settings), and with two sum-product methods based on Bethe-Kikuchi approximations [1] and tree-reweighted upper bounds [24]. We adopt the simplest formulation of both methods, in which the regions (or junction graph nodes) are defined as the edges E. Since loopy belief propagation failed to converge for the complete graph, we implemented the convergent double-loop algorithm of [10].
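A minimal sketch of how the random Ising instances above can be generated; only the parameter distributions and graph sizes are taken from the text, the rest is our choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_ising(edges, n):
    """Draw fields theta_i ~ Uniform[-1, 1] and couplings
    theta_ij ~ Uniform{-1/2, +1/2} on the given edge set."""
    theta_i = rng.uniform(-1.0, 1.0, size=n)
    theta_ij = np.zeros((n, n))
    for (i, j) in edges:
        theta_ij[i, j] = theta_ij[j, i] = rng.choice([-0.5, 0.5])
    return theta_i, theta_ij

# Edges of the 12 x 12 grid:
side = 12
grid_edges = [(r * side + c, r * side + c + 1)
              for r in range(side) for c in range(side - 1)]
grid_edges += [(r * side + c, (r + 1) * side + c)
               for r in range(side - 1) for c in range(side)]

# Edges of the fully-connected graph with 26 nodes:
full_edges = [(i, j) for i in range(26) for j in range(i + 1, 26)]
```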
The results of the experiments are summarized in Fig. 2. The plots on the left and right show that the estimate of the log-partition function, for the most part, moves to the exact solution as the graph is partitioned into smaller and smaller pieces. Both Bethe-Kikuchi approximations and tree-reweighted upper bounds provide good approximations to the grid model. Indeed, the former recovers the log-partition function almost perfectly. However, these approximations break down as soon as they encounter a dense, frustrated model. This is consistent with the results observed in other experiments [9, 24]. The SMC algorithms proposed here and in [9], by contrast, produce significantly improved estimates of the mean statistics. It is surprising that we achieve performance similar to hot coupling [9], given that we do not exploit the tractability of sum-product messages in the Ising model (which would offer guaranteed improvements due to the Rao-Blackwell theorem).

6 Conclusions and discussion

We presented a sequential Monte Carlo algorithm in which each artificial distribution is the solution to a conditionally-specified mean field optimization problem. We believe that the extra expense of nonlinear optimization at each step may be warranted in the long run, as our method holds promise in solving more difficult inference problems: problems where Monte Carlo and variational methods alone perform poorly. We hypothesize that our approach is superior to methods that "prune" constraints (or factors), but further exploration on other problems is needed to verify this theory.

Beyond mean field. As noted in [22], naive mean field implies complete factorizability, which is not necessary under the Ising model. A number of refinements are possible. However, this is not a research direction we will pursue. Bethe-Kikuchi approximations based on junction graphs have many merits, but they cannot be considered candidates for our framework because they produce estimates of local mean statistics without defining a joint distribution. Tree-reweighted upper bounds are appealing because they tend to be underconfident, but again we have the same difficulty.

Extending to other members of the exponential family. In general, the joint is not available in analytic form given expressions for the conditionals, but there are still some encouraging signs. For one, we can use Brook's lemma [3, Sec. 2] to derive an expression for the importance weights that does not involve the joint. Furthermore, conditions for guaranteeing the validity of conditional densities have been extensively studied in multivariate [2] and spatial statistics [3].

Acknowledgments

We are indebted to Arnaud Doucet and Firas Hamze for invaluable discussions, to Martin Wainwright for providing his code, and to the Natural Sciences and Engineering Research Council of Canada for their support.

References

[1] S. M. Aji and R. J. McEliece. The generalized distributive law and free energy minimization. In Proceedings of the 39th Allerton Conference, pages 672–681, 2001.
[2] B. Arnold, E. Castillo, and J.-M. Sarabia. Conditional Specification of Statistical Models. Springer, 1999.
[3] J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc., Ser. B, 36:192–236, 1974.
[4] J. Besag. Comment to "Conditionally specified distributions". Statist. Sci., 16:265–267, 2001.
[5] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In Uncertainty in Artificial Intelligence, volume 20, pages 59–66, 2004.
[6] N.
de Freitas, P. Højen-Sørensen, M. I. Jordan, and S. Russell. Variational MCMC. In Uncertainty in Artificial Intelligence, volume 17, pages 120–127, 2001.
[7] P. del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. Roy. Statist. Soc., Ser. B, 68:411–436, 2006.
[8] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems, volume 12, pages 449–455, 1999.
[9] F. Hamze and N. de Freitas. Hot Coupling: a particle approach to inference and normalization on pairwise undirected graphs. Advances in Neural Information Processing Systems, 18:491–498, 2005.
[10] T. Heskes, K. Albers, and B. Kappen. Approximate inference and constrained optimization. In Uncertainty in Artificial Intelligence, volume 19, pages 313–320, 2003.
[11] C. Jarzynski. Nonequilibrium equality for free energy differences. Phys. Rev. Lett., 78:2690–2693, 1997.
[12] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. In Approximation Algorithms for NP-hard Problems, pages 482–520. PWS Publishing, 1996.
[13] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Statist., 5:1–25, 1996.
[14] P. Muyan and N. de Freitas. A blessing of dimensionality: measure concentration and probabilistic inference. In Proceedings of the 19th Workshop on Artificial Intelligence and Statistics, 2003.
[15] R. M. Neal. Annealed importance sampling. Statist. and Comput., 11:125–139, 2001.
[16] M. Newman and G. Barkema. Monte Carlo Methods in Statistical Physics. Oxford Univ. Press, 1999.
[17] M. Opper and D. Saad, editors. Advanced Mean Field Methods, Theory and Practice. MIT Press, 2001.
[18] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
[19] M. N. Rosenbluth and A. W. Rosenbluth. Monte Carlo calculation of the average extension of molecular chains. J. Chem. Phys., 23:356–359, 1955.
[20] J. S. Sadowsky and J. A. Bucklew. On large deviations theory and asymptotically efficient Monte Carlo estimation. IEEE Trans. Inform. Theory, 36:579–588, 1990.
[21] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. J. Artificial Intelligence Res., 4:61–76, 1996.
[22] L. K. Saul and M. I. Jordan. Exploiting tractable structures in intractable networks. In Advances in Neural Information Processing Systems, volume 8, pages 486–492, 1995.
[23] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Computer Vision and Pattern Recognition, volume I, pages 605–612, 2003.
[24] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Inform. Theory, 51:2313–2335, 2005.
[25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, EECS Dept., University of California, Berkeley, 2003.
[26] W. Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In Uncertainty in Artificial Intelligence, volume 16, pages 626–633, 2000.
Modelling transcriptional regulation using Gaussian processes

Neil D. Lawrence
School of Computer Science
University of Manchester, U.K.
neill@cs.man.ac.uk

Guido Sanguinetti
Department of Computer Science
University of Sheffield, U.K.
guido@dcs.shef.ac.uk

Magnus Rattray
School of Computer Science
University of Manchester, U.K.
magnus@cs.man.ac.uk

Abstract

Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies.

Introduction

Recent advances in molecular biology have brought about a revolution in our understanding of cellular processes. Microarray technology now allows measurement of mRNA abundance on a genome-wide scale, and techniques such as chromatin immunoprecipitation (ChIP) have largely unveiled the wiring of the cellular transcriptional regulatory network, identifying which genes are bound by which transcription factors. However, a full quantitative description of the regulatory mechanism of transcription requires the knowledge of a number of other biological quantities: first of all the concentration levels of active transcription factor proteins, but also a number of gene-specific constants such as the baseline expression level for a gene, the rate of decay of its mRNA and the sensitivity with which target genes react to a given transcription factor protein concentration. While some of these quantities can be measured (e.g. mRNA decay rates), most of them are very hard to measure with current techniques, and have therefore to be inferred from the available data. This is often done following one of two complementary approaches. One can formulate a large scale simplified model of regulation (for example assuming a linear response to protein concentrations) and then combine network architecture data and gene expression data to infer transcription factors' protein concentrations on a genome-wide scale. This line of research was started in [3] and then extended further to include gene-specific effects in [10, 11]. Alternatively, one can formulate a realistic model of a small subnetwork where few transcription factors regulate a small number of established target genes, trying to include the finer points of the dynamics of transcriptional regulation. In this paper we follow the second approach, focussing on the simplest subnetwork consisting of one transcription factor regulating its target genes, but using a detailed model of the interaction dynamics to infer the transcription factor concentrations and the gene specific constants. This problem was recently studied by Barenco et al. [1] and by Rogers et al. [9].
In these studies, parametric models were developed describing the rate of production of certain genes as a function of the concentration of transcription factor protein at some specified time points. Markov chain Monte Carlo (MCMC) methods were then used to carry out Bayesian inference of the protein concentrations, requiring substantial computational resources and limiting the inference to the discrete time points where the data was collected. We show here how a Gaussian process model provides a simple and computationally efficient method for Bayesian inference of continuous transcription factor concentration profiles and associated model parameters. Gaussian processes have been used effectively in a number of machine learning and statistical applications [8] (see also [2, 6] for the work that is most closely related to ours). Their use in this context is novel, as far as we know, and leads to several advantages. Firstly, it allows for the inference of continuous quantities (concentration profiles) without discretization, therefore accounting naturally for the temporal structure of the data. Secondly, it avoids the use of cumbersome interpolation techniques to estimate mRNA production rates from mRNA abundance data, and it allows us to deal naturally with the noise inherent in the measurements. Finally, it greatly outstrips MCMC techniques in terms of computational efficiency, which we expect to be crucial in future extensions to more complex (and realistic) regulatory networks. The paper is organised as follows: in the first section we discuss linear response models. These are simplified models in which the mRNA production rate depends linearly on the transcription factor protein concentration. Although the linear assumption is not verified in practice, it has the advantage of giving rise to an exactly tractable inference problem. We then discuss how to extend the formalism to model cases where the dependence of mRNA production rate on transcription factor protein concentration is not linear, and propose a MAP-Laplace approach to carry out Bayesian inference. In the third section we test our model on the leukemia data set studied in [1]. Finally, we discuss further extensions of our work. MATLAB code to recreate the experiments is available on-line.

1 Linear Response Model

Let the data set under consideration consist of T measurements of the mRNA abundance of N genes. We consider a linear differential equation that relates a given gene j's expression level $x_j(t)$ at time $t$ to the concentration of the regulating transcription factor protein $f(t)$,

$$\frac{dx_j}{dt} = B_j + S_j f(t) - D_j x_j(t). \qquad (1)$$

Here, $B_j$ is the basal transcription rate of gene $j$, $S_j$ is the sensitivity of gene $j$ to the transcription factor and $D_j$ is the decay rate of the mRNA. Crucially, the dependence of the mRNA transcription rate on the protein concentration (response) is linear. Assuming a linear response is a crude simplification, but it can still lead to interesting results in certain modelling situations. Equation (1) was used by Barenco et al. [1] to model a simple network consisting of the tumour suppressor transcription factor p53 and five of its target genes. We will consider more general models in section 2. The equation given in (1) can be solved to recover

$$x_j(t) = \frac{B_j}{D_j} + k_j \exp(-D_j t) + S_j \exp(-D_j t) \int_0^t f(u)\exp(D_j u)\,du \qquad (2)$$

where $k_j$ arises from the initial conditions, and is zero if we assume an initial baseline expression level $x_j(0) = B_j/D_j$.
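The solution (2) is easy to check numerically: for any candidate latent profile f(t), the integral can be approximated by simple quadrature. The following minimal sketch is ours (not the authors' MATLAB code); the trapezoidal rule and the test profile are illustrative assumptions:

import numpy as np

def predict_expression(f, t_grid, B, S, D):
    """Evaluate eq. (2) with k_j = 0, i.e. x(0) = B/D, by trapezoidal quadrature.

    f      : callable, latent protein concentration f(t)
    t_grid : increasing 1-D array of times at which to return x(t)
    B, S, D: basal rate, sensitivity and mRNA decay rate of one gene
    """
    x = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        u = np.linspace(0.0, t, 200)           # quadrature nodes on [0, t]
        integrand = f(u) * np.exp(D * u)       # f(u) exp(D u)
        x[i] = B / D + S * np.exp(-D * t) * np.trapz(integrand, u)
    return x

# hypothetical latent profile, just for illustration
f = lambda u: np.exp(-0.5 * (u - 4.0) ** 2)
x = predict_expression(f, np.linspace(0.0, 12.0, 50), B=0.2, S=1.0, D=0.4)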
We will model the protein concentration f as a latent function drawn from a Gaussian process prior distribution. It is important to notice that equation (2) involves only linear operations on the function f(t). This implies immediately that the mRNA abundance levels will also be modelled as a Gaussian process, and the covariance function of the marginal distribution $p(x_1, \ldots, x_N)$ can be worked out explicitly from the covariance function of the latent function f. Let us rewrite equation (2) as

$$x_j(t) = \frac{B_j}{D_j} + L_j[f](t) \qquad (3)$$

where we have set the initial conditions such that $k_j$ in equation (2) is equal to zero and

$$L_j[f](t) = S_j \exp(-D_j t)\int_0^t f(u)\exp(D_j u)\,du$$

is the linear operator relating the latent function f to the mRNA abundance of gene j, $x_j(t)$. If the covariance function associated with f(t) is given by $k_{ff}(t, t')$ then elementary functional analysis yields that

$$\mathrm{cov}\left(L_j[f](t),\, L_k[f](t')\right) = L_j \otimes L_k\,[k_{ff}](t, t').$$

Explicitly, this is given by the following formula

$$k_{x_j x_k}(t,t') = S_j S_k \exp(-D_j t - D_k t')\int_0^t\!\!\int_0^{t'} \exp(D_j u)\exp(D_k u')\,k_{ff}(u,u')\,du'\,du. \qquad (4)$$

If the process prior over f(t) is taken to be a squared exponential kernel,

$$k_{ff}(t,t') = \exp\left(-\frac{(t-t')^2}{l^2}\right),$$

where l controls the width of the basis functions(1), the integrals in equation (4) can be computed analytically. The resulting covariances are obtained as

$$k_{x_j x_k}(t,t') = S_j S_k\,\frac{\sqrt{\pi}\,l}{2}\left[h_{kj}(t',t) + h_{jk}(t,t')\right] \qquad (5)$$

where

$$h_{kj}(t',t) = \frac{\exp(\gamma_k^2)}{D_j + D_k}\left\{\exp[-D_k(t'-t)]\left[\mathrm{erf}\!\left(\frac{t'-t}{l}-\gamma_k\right) + \mathrm{erf}\!\left(\frac{t}{l}+\gamma_k\right)\right] - \exp[-(D_k t' + D_j t)]\left[\mathrm{erf}\!\left(\frac{t'}{l}-\gamma_k\right) + \mathrm{erf}(\gamma_k)\right]\right\}.$$

Here $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x \exp(-y^2)\,dy$ and $\gamma_k = \frac{D_k l}{2}$. We can therefore compute a likelihood which relates instantiations from all the observed genes, $\{x_j(t)\}_{j=1}^N$, through dependencies on the parameters $\{B_j, S_j, D_j\}_{j=1}^N$. The effect of f(t) has been marginalised. To infer the protein concentration levels, one also needs the "cross-covariance" terms between $x_j(t)$ and $f(t')$, which is obtained as

$$k_{x_j f}(t,t') = S_j \exp(-D_j t)\int_0^t \exp(D_j u)\,k_{ff}(u,t')\,du. \qquad (6)$$

Again, this can be obtained explicitly for squared exponential priors on the latent function f as

$$k_{x_j f}(t',t) = \frac{\sqrt{\pi}\,l\,S_j}{2}\,\exp(\gamma_j^2)\,\exp[-D_j(t'-t)]\left[\mathrm{erf}\!\left(\frac{t'-t}{l}-\gamma_j\right) + \mathrm{erf}\!\left(\frac{t}{l}+\gamma_j\right)\right].$$

Standard Gaussian process regression techniques [see e.g. 8] then yield the mean and covariance function of the posterior process on f as

$$\langle f \rangle_{\mathrm{post}} = K_{fx} K_{xx}^{-1}\mathbf{x}, \qquad K_{ff}^{\mathrm{post}} = K_{ff} - K_{fx} K_{xx}^{-1} K_{xf} \qquad (7)$$

where $\mathbf{x}$ denotes collectively the $x_j(t)$ observed variables and capital K denotes the matrix obtained by evaluating the covariance function of the processes on every pair of observed time points.

(1) The scale of the process is ignored to avoid a parameterisation ambiguity with the sensitivities.

The model parameters $B_j$, $D_j$ and $S_j$ can be estimated by type II maximum likelihood. Alternatively, they can be assigned vague gamma prior distributions and estimated a posteriori using MCMC sampling. In practice, we will allow the mRNA abundance of each gene at each time point to be corrupted by some noise, so that we can model the observations at times $t_i$ for $i = 1, \ldots, T$ as

$$y_j(t_i) = x_j(t_i) + \epsilon_j(t_i) \qquad (8)$$

with $\epsilon_j(t_i) \sim \mathcal{N}(0, \sigma_{ji}^2)$. Estimates of the confidence levels associated with each mRNA measurement can be obtained for Affymetrix microarrays using probe-level processing techniques such as the mmgMOS model of [4]. The covariance of the noisy process is simply obtained as $K_{yy} = \Sigma + K_{xx}$, with $\Sigma = \mathrm{diag}(\sigma_{11}^2, \ldots, \sigma_{1T}^2, \ldots, \sigma_{N1}^2, \ldots, \sigma_{NT}^2)$.
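The closed-form kernels (5)-(7) translate directly into code. Below is a small, self-contained sketch of our own (not the authors' implementation) of the single-gene covariances and the posterior mean of equation (7); the parameter values, the centred observations, and the small jitter standing in for the noise covariance of equation (8) are all illustrative assumptions:

import numpy as np
from scipy.special import erf

def k_xx(t, tp, Sj, Dj, Sk, Dk, l):
    """Eq. (5): cov(x_j(t), x_k(t')) under a squared exponential prior on f."""
    def h(Dk_, Dj_, tp_, t_):
        g = Dk_ * l / 2.0  # gamma_k = D_k l / 2
        return (np.exp(g**2) / (Dj_ + Dk_)) * (
            np.exp(-Dk_ * (tp_ - t_)) * (erf((tp_ - t_) / l - g) + erf(t_ / l + g))
            - np.exp(-(Dk_ * tp_ + Dj_ * t_)) * (erf(tp_ / l - g) + erf(g)))
    return Sj * Sk * np.sqrt(np.pi) * l / 2.0 * (h(Dk, Dj, tp, t) + h(Dj, Dk, t, tp))

def k_xf(t, tp, Sj, Dj, l):
    """Cross-covariance cov(x_j(t), f(t')) from eq. (6) in closed form."""
    g = Dj * l / 2.0
    return (Sj * np.sqrt(np.pi) * l / 2.0 * np.exp(g**2) * np.exp(-Dj * (t - tp))
            * (erf((t - tp) / l - g) + erf(tp / l + g)))

# posterior mean of f (eq. 7) for one gene observed at times t_obs
Sj, Dj, Bj, l = 1.0, 0.4, 0.2, 1.5                      # illustrative values
t_obs = np.linspace(1.0, 12.0, 7)
x_obs = np.array([0.6, 1.1, 1.8, 2.0, 1.7, 1.2, 0.9])   # made-up data
t_star = np.linspace(0.0, 12.0, 100)

Kxx = k_xx(t_obs[:, None], t_obs[None, :], Sj, Dj, Sj, Dj, l)
Kfx = k_xf(t_obs[None, :], t_star[:, None], Sj, Dj, l)   # rows: t_star, cols: t_obs
# subtract the gene's mean B/D, add jitter in place of the noise term Sigma
f_mean = Kfx @ np.linalg.solve(Kxx + 1e-6 * np.eye(len(t_obs)), x_obs - Bj / Dj)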
2 Non-linear Response Model

While the linear response model presents the advantage of being exactly tractable in the important squared exponential case, a realistic model of transcription should account for effects such as saturation and ultrasensitivity which cannot be captured by a linear function. Also, all the quantities in equation (1) are positive, but one cannot constrain samples from a Gaussian process to be positive. Modelling the response of the transcription rate to protein concentration using a positive nonlinear function is an elegant way to enforce this constraint.

2.1 Formalism

Let the response of the mRNA transcription rate to transcription factor protein concentration levels be modelled by a nonlinear function g with a target-specific vector $\theta_j$ of parameters, so that

$$\frac{dx_j}{dt} = B_j + g(f(t), \theta_j) - D_j x_j, \qquad x_j(t) = \frac{B_j}{D_j} + \exp(-D_j t)\int_0^t g(f(u), \theta_j)\exp(D_j u)\,du, \qquad (9)$$

where we again set $x_j(0) = B_j/D_j$ and assign a Gaussian process prior distribution to f(t). In this case the induced distribution of $x_j(t)$ is no longer a Gaussian process. However, we can derive the functional gradient of the likelihood and prior, and use this to learn the Maximum a Posteriori (MAP) solution for f(t) and the parameters by (functional) gradient descent. Given noise-corrupted data $y_j(t_i)$ as above, the log-likelihood of the data $Y = \{y_j(t_i)\}$ is given by

$$\log p(Y|f, \{B_j, \theta_j, D_j\}, \theta) = -\frac{1}{2}\sum_{i=1}^{T}\sum_{j=1}^{N}\frac{(x_j(t_i) - y_j(t_i))^2}{\sigma_{ji}^2} - \sum_{i=1}^{T}\sum_{j=1}^{N}\log\sigma_{ji} - \frac{NT}{2}\log(2\pi) \qquad (10)$$

where $\theta$ denotes collectively the parameters of the prior covariance on f (in the squared exponential case, $\theta = l^2$). The functional derivative of the log-likelihood with respect to f is then obtained as

$$\frac{\delta\log p(Y|f)}{\delta f(t)} = -\sum_{i=1}^{T}\sum_{j=1}^{N}\frac{x_j(t_i) - y_j(t_i)}{\sigma_{ji}^2}\,\Theta(t_i - t)\,g'(f(t))\,e^{-D_j(t_i - t)} \qquad (11)$$

where $\Theta(x)$ is the Heaviside step function and we have omitted the model parameters for brevity. The negative Hessian of the log-likelihood with respect to f is given by

$$w(t,t') = -\frac{\delta^2\log p(Y|f)}{\delta f(t)\,\delta f(t')} = \delta(t-t')\sum_{i=1}^{T}\sum_{j=1}^{N}\frac{x_j(t_i) - y_j(t_i)}{\sigma_{ji}^2}\,\Theta(t_i - t)\,g''(f(t))\,e^{-D_j(t_i - t)} + \sum_{i=1}^{T}\Theta(t_i - t)\,\Theta(t_i - t')\sum_{j=1}^{N}\sigma_{ji}^{-2}\,g'(f(t))\,g'(f(t'))\,e^{-D_j(2t_i - t - t')} \qquad (12)$$

where $g'(f) = \partial g/\partial f$ and $g''(f) = \partial^2 g/\partial f^2$.

2.2 Implementation

We discretise in time t and compute the gradient and Hessian on a grid using approximate Riemann quadrature. In the simplest case, we choose a uniform grid $[t_p]$, $p = 1, \ldots, M$, so that $\Delta = t_p - t_{p-1}$ is constant. We write $\mathbf{f} = [f_p]$ to be the vector realisation of the function f at the grid points. The gradient of the log-likelihood is then given by

$$\frac{\partial\log p(Y|\mathbf{f})}{\partial f_p} = -\Delta\sum_{i=1}^{T}\sum_{j=1}^{N}\Theta(t_i - t_p)\,\frac{x_j(t_i) - y_j(t_i)}{\sigma_{ji}^2}\,g'(f_p)\,e^{-D_j(t_i - t_p)} \qquad (13)$$

and the negative Hessian of the log-likelihood is

$$W_{pq} = \delta_{pq}\,\Delta\sum_{i=1}^{T}\sum_{j=1}^{N}\Theta(t_i - t_q)\,\frac{x_j(t_i) - y_j(t_i)}{\sigma_{ji}^2}\,g''(f_q)\,e^{-D_j(t_i - t_q)} + \Delta^2\sum_{i=1}^{T}\Theta(t_i - t_p)\,\Theta(t_i - t_q)\sum_{j=1}^{N}\sigma_{ji}^{-2}\,g'(f_q)\,g'(f_p)\,e^{-D_j(2t_i - t_p - t_q)} \qquad (14)$$

where $\delta_{pq}$ is the Kronecker delta. In these and the following formulae $t_i$ is understood to mean the index of the grid point corresponding to the ith data point, whereas $t_p$ and $t_q$ correspond to the grid points themselves. We can then compute the gradient and Hessian of the (discretised) un-normalised log posterior $\Psi(\mathbf{f}) = \log p(Y|\mathbf{f}) + \log p(\mathbf{f})$ [see 8, chapter 3]

$$\nabla\Psi(\mathbf{f}) = \nabla\log p(Y|\mathbf{f}) - K^{-1}\mathbf{f}, \qquad \nabla\nabla\Psi(\mathbf{f}) = -(W + K^{-1}) \qquad (15)$$

where K is the prior covariance matrix evaluated at the grid points. These can be used to find the MAP solution $\hat{\mathbf{f}}$ using Newton's method; a small sketch of the resulting iteration follows below.
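The Newton ascent implied by equation (15) has the standard Laplace-approximation form. In this sketch (ours, not the paper's code) the gradient and negative-Hessian routines are assumed to implement equations (13) and (14):

import numpy as np

def map_latent_f(K, grad_loglik, neg_hess_loglik, n_iter=50, tol=1e-8):
    """Newton ascent on Psi(f) = log p(Y|f) - 0.5 f' K^{-1} f  (eq. 15).

    K               : M x M prior covariance on the grid
    grad_loglik(f)  : d log p(Y|f) / df, eq. (13)
    neg_hess_loglik : W(f), M x M, eq. (14)
    """
    M = K.shape[0]
    f = np.zeros(M)
    K_inv = np.linalg.inv(K)        # fine for small grids; prefer Cholesky otherwise
    W = neg_hess_loglik(f)
    for _ in range(n_iter):
        g = grad_loglik(f) - K_inv @ f           # gradient of Psi
        W = neg_hess_loglik(f)
        step = np.linalg.solve(W + K_inv, g)     # Newton step: (W + K^{-1})^{-1} grad
        f = f + step
        if np.max(np.abs(step)) < tol:
            break
    return f, W   # W at the optimum is reused in the Laplace marginal, eq. (16)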
The Laplace approximation to the log-marginal likelihood is then (ignoring terms that do not involve model parameters)

$$\log p(Y) \simeq \log p(Y|\hat{\mathbf{f}}) - \frac{1}{2}\hat{\mathbf{f}}^{\mathsf{T}}K^{-1}\hat{\mathbf{f}} - \frac{1}{2}\log|I + KW|. \qquad (16)$$

We can also optimise the log-marginal with respect to the model and kernel parameters. The gradient of the log-marginal with respect to the kernel parameters is [8]

$$\frac{\partial\log p(Y|\theta)}{\partial\theta} = \frac{1}{2}\hat{\mathbf{f}}^{\mathsf{T}}K^{-1}\frac{\partial K}{\partial\theta}K^{-1}\hat{\mathbf{f}} - \frac{1}{2}\mathrm{tr}\left[(I + KW)^{-1}W\frac{\partial K}{\partial\theta}\right] + \sum_p\frac{\partial\log p(Y|\theta)}{\partial\hat{f}_p}\frac{\partial\hat{f}_p}{\partial\theta} \qquad (17)$$

where the final term is due to the implicit dependence of $\hat{\mathbf{f}}$ on $\theta$.

2.3 Example: exponential response

As an example, we consider the case in which

$$g(f(t), \theta_j) = S_j\exp(f(t)) \qquad (18)$$

which provides a useful way of constraining the protein concentration to be positive. Substituting equation (18) in equations (13) and (14) one obtains

$$\frac{\partial\log p(Y|\mathbf{f})}{\partial f_p} = -\Delta\sum_{i=1}^{T}\sum_{j=1}^{N}\Theta(t_i - t_p)\,\frac{x_j(t_i) - y_j(t_i)}{\sigma_{ji}^2}\,S_j\,e^{f_p - D_j(t_i - t_p)}$$

$$W_{pq} = -\delta_{pq}\,\frac{\partial\log p(Y|\mathbf{f})}{\partial f_p} + \Delta^2\sum_{i=1}^{T}\Theta(t_i - t_p)\,\Theta(t_i - t_q)\sum_{j=1}^{N}\sigma_{ji}^{-2}\,S_j^2\,e^{f_p + f_q - D_j(2t_i - t_p - t_q)}.$$

The terms required in equation (17) are

$$\frac{\partial\log p(Y|\theta)}{\partial\hat{f}_p} = -\frac{1}{2}(AW)_{pp} - \frac{1}{2}\sum_q A_{qq}W_{qp}, \qquad \frac{\partial\hat{\mathbf{f}}}{\partial\theta} = AK^{-1}\frac{\partial K}{\partial\theta}\,\nabla\log p(Y|\hat{\mathbf{f}}),$$

where $A = (W + K^{-1})^{-1}$.

3 Results

To test the efficacy of our method, we used a recently published biological data set which was studied using a linear response model by Barenco et al. [1]. This study focused on the tumour suppressor protein p53. mRNA abundance was measured at regular intervals in three independent human cell lines using Affymetrix U133A oligonucleotide microarrays. The authors then restricted their interest to five known target genes of p53: DDB2, p21, SESN1/hPA26, BIK and TNFRSF10b. They estimated the mRNA production rates by using quadratic interpolation between any three consecutive time points. They then discretised the model and used MCMC sampling (assuming a log-normal noise model) to obtain estimates of the model parameters $B_j$, $S_j$, $D_j$ and $f(t)$. To make the model identifiable, the value of the mRNA decay of one of the target genes, p21, was measured experimentally. Also, the scale of the sensitivities was fixed by choosing p21's sensitivity to be equal to one, and f(0) was constrained to be zero. Their predictions were then validated by doing explicit protein concentration measurements and growing mutant cell lines where the p53 gene had been knocked out.

3.1 Linear response analysis

We first analysed the data using the simple linear response model used by Barenco et al. [1]. Raw data was processed using the mmgMOS model of [4], which also provides estimates of the credibility associated with each measurement. Data from the different cell lines were treated as independent instantiations of f but sharing the model parameters $\{B_j, S_j, D_j, \theta\}$. We used a squared exponential covariance function for the prior distribution on the latent function f. The inferred posterior mean function for f, together with 95% confidence intervals, is shown in Figure 1(a). The pointwise estimates inferred by Barenco et al. are shown as crosses in the plot. The posterior mean function matches well the prediction obtained by Barenco et al.(2) Notice that the right hand tail of the inferred mean function shows an oscillatory behaviour. We believe that this is an artifact caused by the squared exponential covariance; the steep rise between time zero and time two forces the length scale of the function to be small, hence giving rise to wavy functions [see page 123 in 8].
To avoid this, we repeated the experiment using the "MLP" covariance function for the prior distribution over f [12]. Posterior estimation cannot be obtained analytically in this case so we resorted to the MAP-Laplace approximation described in section 2. The MLP covariance is obtained as the limiting case of an infinite number of sigmoidal neural networks and has the following covariance function

$$k(t,t') = \arcsin\left(\frac{w\,t\,t' + b}{\sqrt{(w\,t^2 + b + 1)(w\,t'^2 + b + 1)}}\right) \qquad (19)$$

where w and b are parameters known as the weight and the bias variance (a small evaluation sketch appears at the end of this section). The results using this covariance function are shown in Figure 1(b). The resulting profile does not show the unexpected oscillatory behaviour and has tighter credibility intervals.

Figure 2 shows the results of inference on the values of the hyperparameters $B_j$, $S_j$ and $D_j$. The columns on the left, shaded grey, show results from our model and the white columns are the estimates obtained in [1]. The hyperparameters were assigned a vague gamma prior distribution (a = b = 0.1, corresponding to a mean of 1 and a variance of 10). Samples from the posterior distribution were obtained using Hybrid Monte Carlo [see e.g. 7]. The results are in good accordance with the results obtained by Barenco et al. Differences in the estimates of the basal transcription rates are probably due to the different methods used for probe-level processing of the microarray data.

3.2 Non-linear response analysis

We then used the non-linear response model of section 2 in order to constrain the protein concentrations inferred to be positive. We achieved this by using an exponential response of the transcription rate to the logged protein concentration. The inferred MAP solutions for the latent function f are plotted in Figure 3 for the squared exponential prior (a) and for the MLP prior (b).

(2) Barenco et al. also constrained the latent function to be zero at time zero.

Figure 1: Predicted protein concentration for p53 using a linear response model: (a) squared exponential prior on f; (b) MLP prior on f. Solid line is mean prediction, dashed lines are 95% credibility intervals. The prediction of Barenco et al. was pointwise and is shown as crosses.

Figure 2: Results of inference on the hyperparameters for p53 data studied in [1]. The bar charts show (a) basal transcription rates: grey are estimates obtained with our model, white are the estimates obtained by Barenco et al.; (b) similar for sensitivities; (c) similar for decay rates.
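The MLP covariance of equation (19) is straightforward to evaluate; a short sketch of ours, with arbitrary weight and bias values, is:

import numpy as np

def mlp_kernel(t, tp, w=1.0, b=1.0):
    """'MLP' covariance of eq. (19); w and b are the weight and bias variances."""
    num = w * t * tp + b
    den = np.sqrt((w * t**2 + b + 1.0) * (w * tp**2 + b + 1.0))
    return np.arcsin(num / den)   # |num/den| <= 1, so arcsin is well defined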
4 Discussion

In this paper we showed how Gaussian processes can be used effectively in modelling the dynamics of a very simple regulatory network motif. This approach has many advantages over standard parametric approaches: first of all, there is no need to restrict the inference to the observed time points, and the temporal continuity of the inferred functions is accounted for naturally. Secondly, Gaussian processes allow noise information to be accounted for in a natural way. It is well known that biological data exhibit a large variability, partly because of technical noise (due to the difficulty of measuring mRNA abundance for low expressed genes, for example), and partly because of the difference between different cell lines. Accounting for these sources of noise in a parametric model can be difficult (particularly when estimates of the derivatives of the measured quantities are required), while Gaussian processes can incorporate this information naturally. Finally, MCMC parameter estimation in a discretised model can be computationally expensive due to the high correlations between variables. This is a consequence of treating the protein concentrations as parameters, and results in many MCMC iterations to obtain reliable samples. Parameter estimation can be achieved easily in our framework by type II maximum likelihood or by using efficient Monte Carlo sampling techniques only on the model hyperparameters.

While the results shown in the paper are encouraging, this is still a very simple modelling situation. For example, it is well known that transcriptional delays can play a significant role in determining the dynamics of many cellular processes [5]. These effects can be introduced naturally in a Gaussian process model; however, the data must be sampled at a reasonably high frequency in order for delays to become identifiable in a stochastic model, which is often not the case with microarray data sets. Another natural extension of our work would be to consider more biologically meaningful nonlinearities, such as the popular Michaelis-Menten model of transcription used in [9]. Finally, networks consisting of a single transcription factor are very useful to study small systems of particular interest such as p53. However, our ultimate goal would be to describe regulatory pathways consisting of more genes. These can be dealt with in the general framework described in this paper, but careful thought will be needed to overcome the greater computational difficulties.

Figure 3: Predicted protein concentration for p53 using an exponential response: (a) shows results of using a squared exponential prior covariance on f; (b) shows results of using an MLP prior covariance on f. Solid line is mean prediction, dashed lines show 95% credibility intervals. The results shown are for exp(f), hence the asymmetry of the credibility intervals. The prediction of Barenco et al. was pointwise and is shown as crosses.

Acknowledgements

We thank Martino Barenco for useful discussions and for providing the data. We gratefully acknowledge support from BBSRC Grant No BBS/B/0076X "Improved processing of microarray data with probabilistic models".

References

[1] M. Barenco, D. Tomescu, D. Brewer, R. Callard, J. Stark, and M. Hubank. Ranked prediction of p53 targets using hidden variable dynamic modeling. Genome Biology, 7(3):R25, 2006.
[2] T. Graepel. Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations. In T. Fawcett and N. Mishra, editors, Proceedings of the International Conference in Machine Learning, volume 20, pages 234-241. AAAI Press, 2003.
[3] J. C. Liao, R. Boscolo, Y.-L. Yang, L. M. Tran, C. Sabatti, and V. P. Roychowdhury. Network component analysis: Reconstruction of regulatory signals in biological systems. Proceedings of the National Academy of Sciences USA, 100(26):15522-15527, 2003.
[4] X. Liu, M. Milo, N. D. Lawrence, and M. Rattray. A tractable probabilistic model for Affymetrix probe-level analysis across multiple chips. Bioinformatics, 21(18):3637-3644, 2005.
[5] N. A. Monk. Unravelling nature's networks. Biochemical Society Transactions, 31:1457-1461, 2003.
[6] R. Murray-Smith and B. A. Pearlmutter.
Transformations of Gaussian process priors. In J. Winkler, N. D. Lawrence, and M. Niranjan, editors, Deterministic and Statistical Methods in Machine Learning, volume 3635 of Lecture Notes in Artificial Intelligence, pages 110-123, Berlin, 2005. Springer-Verlag.
[7] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996. Lecture Notes in Statistics 118.
[8] C. E. Rasmussen and C. K. Williams. Gaussian Processes for Machine Learning. MIT Press, 2005.
[9] S. Rogers, R. Khanin, and M. Girolami. Model based identification of transcription factor activity from microarray data. In Probabilistic Modeling and Machine Learning in Structural and Systems Biology, Tuusula, Finland, 17-18th June 2006.
[10] C. Sabatti and G. M. James. Bayesian sparse hidden components analysis for transcription regulation networks. Bioinformatics, 22(6):739-746, 2006.
[11] G. Sanguinetti, M. Rattray, and N. D. Lawrence. A probabilistic dynamical model for quantitative inference of the regulatory mechanism of transcription. Bioinformatics, 22(14):1753-1759, 2006.
[12] C. K. I. Williams. Computation with infinite neural networks. Neural Computation, 10(5):1203-1216, 1998.
Bumptrees for Efficient Function, Constraint, and Classification Learning

Stephen M. Omohundro
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, California 94704

Abstract

A new class of data structures called "bumptrees" is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined.

1 WHAT IS A BUMPTREE?

A bumptree is a new geometric data structure which is useful for efficiently learning, representing, and evaluating geometric relationships in a variety of contexts. They are a natural generalization of several hierarchical geometric data structures including oct-trees, k-d trees, balltrees and boxtrees. They are useful for many geometric learning tasks including approximating functions, constraint surfaces, classification regions, and probability densities from samples. In the function approximation case, the approach is related to radial basis function neural networks, but supports faster construction, faster access, and more flexible modification. We provide empirical data comparing bumptrees with radial basis functions in section 2.

A bumptree is used to provide efficient access to a collection of functions on a Euclidean space of interest. It is a complete binary tree in which a leaf corresponds to each function of interest. There are also functions associated with each internal node, and the defining constraint is that each interior node's function must be everywhere larger than each of the functions associated with the leaves beneath it. In many cases the leaf functions will be peaked in localized regions, which is the origin of the name. A simple kind of bump function is spherically symmetric about a center and vanishes outside of a specified ball. Figure 1 shows the structure of a two-dimensional bumptree in this setting.

Figure 1: A two-dimensional bumptree (ball supported bump, 2-d leaf functions, tree structure, tree functions).

A particularly important special case of bumptrees is used to access collections of Gaussian functions on multi-dimensional spaces. Such collections are used, for example, in representing smooth probability distribution functions as a Gaussian mixture and arise in many adaptive kernel estimation schemes. It is convenient to represent the quadratic exponents of the Gaussians in the tree rather than the Gaussians themselves. The simplest approach is to use quadratic functions for the internal nodes as well as the leaves as shown in Figure 2, though other classes of internal node functions can sometimes provide faster access.

Figure 2: A bumptree for holding Gaussians.
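The defining invariant (every internal node's function dominates all leaf functions beneath it) is easy to express in code. The sketch below is our own illustration, not the paper's implementation; it represents nodes generically and spot-checks the invariant at sample points:

from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class BumpNode:
    f: Callable[[np.ndarray], float]      # this node's bump function
    left: Optional["BumpNode"] = None     # internal nodes have two children
    right: Optional["BumpNode"] = None

def _leaf_functions(node):
    if node.left is None:
        yield node.f
    else:
        yield from _leaf_functions(node.left)
        yield from _leaf_functions(node.right)

def invariant_holds(node, points):
    """Spot-check the defining constraint at sample points: an internal
    node's function must dominate every leaf function beneath it."""
    if node.left is None:                 # a leaf satisfies it trivially
        return True
    for leaf_f in _leaf_functions(node):
        if any(node.f(p) < leaf_f(p) for p in points):
            return False
    return invariant_holds(node.left, points) and invariant_holds(node.right, points)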
Many of the other hierarchical geometric data structures may be seen as special cases of bumptrees by choosing appropriate internal node functions as shown in Figure 3. Regions may be represented by functions which take the value 1 inside the region and which vanish outside of it. The function shown in Figure 3D is aligned along a coordinate axis and is constant on one side of a specified value and decreases quadratically on the other side. It is represented by specifying the coordinate which is cut, the cut location, the constant value (0 in some situations), and the coefficient of quadratic decrease. Such a function may be evaluated extremely efficiently on a data point and so is useful for fast pruning operations. Such evaluations are effectively what is used in (Sproull, 1990) to implement fast nearest neighbor computation. The bumptree structure generalizes this kind of query to allow for different scales for different points and directions. The empirical results presented in the next section are based on bumptrees with this kind of internal node function.

Figure 3: Internal bump functions for A) oct-trees, kd-trees, boxtrees (Omohundro, 1987), B) and C) for balltrees (Omohundro, 1989), and D) for Sproull's higher performance kd-tree (Sproull, 1990).

There are several approaches to choosing a tree structure to build over given leaf data. Each of the algorithms studied for balltree construction in (Omohundro, 1989) may be applied to the more general task of bumptree construction. The fastest approach is analogous to the basic k-d tree construction technique (Friedman et al., 1977): it proceeds top down and recursively splits the functions into two sets of almost the same size. This is what is used in the simulations described in the next section. The slowest but most effective approach builds the tree bottom up, greedily deciding on the best pair of functions to join under a single parent node. Intermediate in speed and quality are incremental approaches which allow one to dynamically insert and delete leaf functions.

Bumptrees may be used to efficiently support many important queries. The simplest kind of query presents a point in the space and asks for all leaf functions which have a value at that point which is larger than a specified value. The bumptree allows a search from the root to prune any subtrees whose root function is smaller than the specified value at the point. More interesting queries are based on branch and bound and generalize the nearest neighbor queries that k-d trees support. A typical example in the case of a collection of Gaussians is to request all Gaussians in the set whose value at a specified point is within a specified factor (say .001) of the Gaussian whose value is largest at that point. The search proceeds down the most promising branches first, continually maintains the largest value found at any point, and prunes away subtrees which are not within the given factor of the current largest function value. A sketch of the simplest query appears below.
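Both query types prune with the same test: if an internal node's function already falls below the current threshold at the query point, nothing beneath it can exceed that threshold. A minimal sketch of the fixed-threshold query, reusing the BumpNode structure above (again our own illustration), is:

def leaves_above(node, x, threshold, out=None):
    """Collect all leaf functions whose value at x exceeds threshold.

    Because every internal node's function dominates its leaves, a subtree
    can be pruned as soon as its root function drops below the threshold.
    """
    if out is None:
        out = []
    if node.f(x) <= threshold:        # nothing below can beat the threshold
        return out
    if node.left is None:             # leaf: its own value already passed
        out.append(node.f)
        return out
    leaves_above(node.left, x, threshold, out)
    leaves_above(node.right, x, threshold, out)
    return out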
The radial basis function approach to mapping learning is to represent a function as a linear combination of functions which are spherically symmetric around chosen centers f (x) = L wjg j (x - Xj) . In the simplest form. which we use here. the basis functions are j centered on the input points. More recent variations have fewer basis functions than sample points and choose centers by clustering. The timing results given here would be in terms of the number of basis functions rather than the number of sample points for a variation of this type. Many fonns for the basis functions themselves have been suggested. In our study both Gaussian and linearly increasing functions gave similar results. The coefficients of the radial basis functions are chosen so that the sum forms a least squares best fit to the data. Such fits require a time proportional to the cube of the number of parameters in general. The experiments reported here were done using the singular value decomposition to compute the best fit coefficients. The approach to mapping learning based on bumptrees builds local models of the mapping in each region of the space using data associated with only the training samples which are nearest that region. These local models are combined in a convex way according to "influence" functions which are associated with each model. Each influence function is peaked in the region for which it is most salient. The bumptree structure organizes the local models so that only the few models which have a great influence on a query sample need to be evaluated. If the influence functions vanish outside of a compact region. then the tree is used to prune the branches which have no influence. If a model's influence merely dies off with distance, then the branch and bound technique is used to determine contributions that are greater than a specified error bound. If a set of bump functions sum to one at each point in a region of interest. they are called a "partition of unity". We fonn influence bumps by dividing a set of smooth bumps (either Gaussians or smooth bumps that vanish outside a sphere) by their sum to form an easily computed partiton of unity. Our local models are affine functions determined by a least squares fit to local samples. When these are combined according to the partition of unity, the value at each point is a convex combination of the local model values. The error of the full model is therefore bounded by the errors of the local models and yet the full approximation is as smooth as the local bump functions. These results may be used to give precise bounds on the average number of samples needed to achieve a given approximation error for functions with a bounded second derivative. In this approach. linear fits are only done on a small set of local samples. avoiding the computationally expensive fits over the whole data set required by radial basis functions. This locality also allows us to easily update the model online as new data arrives. Bumptrees for Efficient Function, Constraint, and Classification Learning If bi (x) are bump functions such as Gaussians. then ftj (x) = bi(x) fonns a partition Lbj(X) j of unity. If m i (x) are the local affine models. then the final smoothly interpolated approximating function is /(x) = Lfti(x)mi(x). The influence bumps are centered on the i sample points with a width determined by the sample density. 
The affine model associated with each influence bump is determined by a weighted least squares fit of the sample points nearest the bump center in which the weight decreases with distance.

Because it performs a global fit, for a given number of sample points, the radial basis function approach achieves a smaller error than the approach based on bumptrees. In terms of construction time to achieve a given error, however, bumptrees are the clear winner. Figure 5 shows how the mean square error for the robot arm mapping task decreases as a function of the time to construct the mapping.

Figure 5: Mean square error as a function of learning time.

Perhaps even more important for applications than learning time is retrieval time. Retrieval using radial basis functions requires that the value of each basis function be computed on each query input and that these results be combined according to the best fit weight matrix. This time increases linearly as a function of the number of basis functions in the representation. In the bumptree approach, only those influence bumps and affine models which are not pruned away by the bumptree retrieval need perform any computation on an input. Figure 6 shows the retrieval time as a function of number of training samples for the robot mapping task. The retrieval time for radial basis functions crosses that for balltrees at about 100 samples and increases linearly off the graph. The balltree algorithm has a retrieval time which empirically grows very slowly and doesn't require much more time even when 10,000 samples are represented.

While not shown here, the representation may be improved in both size and generalization capacity by a best first merging technique. The idea is to consider merging two local models and their influence bumps into a single model. The pair which increases the error the least
The most basic query is to return the density at a given location. The bumptree may be used with branch and bound to achieve retrieval in logarithmic expected time. It is also possible to quickly fmd marginal probabilities by integrating along certain dimensions. The tree is used to quickly identify the Gaussian which contribute. Conditional distributions may also be represented in this form and bumptrees may be used to compose two such distributions. Above we discussed mapping learning and evaluation. In many situations there are not the natural input and output variables required for a mapping. If a probability distribution is peaked on a lower dimensional surface, it may be thought of as a constraint. Networks of Bumptrees for Efficient Function, Constraint, and Classification Learning constraints which may be imposed in any order among variables are natural for describing many problems. Bumptrees open up several possibilities f(X' efficiently representing and propagating smooth constraints on continuous variables. The most basic query is to specify known external constraints on certain variables and allow the network to further impose whatever constraints it can. Multi-dimensional product Ganssians can be used to represent joint ranges in a set of variables. The operation of imposing a constraint surface may be thought of as multiplying an external constraint Gaussian by the function representing the constraint distribution. Because the product of two Gaussians is a Gaussian, this operation always produces Gaussian mixtures and bumptrees may be used to facilitate the operation. A representation of constraints which is more like that used above for mappings consbUcts surfaces from local affine patches weighted by influence functions. We have developed a local analog of principle components analysis which builds up surfaces from random samples drawn from them. As with the mapping structures, a best-frrst merging operation may be used to discover affine sbUcture in a constraint surface. Finally, bumptrees may be used to enhance the performance of classifiers. One approach is to directly implement Bayes classifiers using the adaptive kernel density estimator described above for each class t s distribution function. A separate bumptree may be used for each class or with a more sophisticated branch and bound. a single tree may be used for the whole set of classes. In summary, bumptrees are a natural generalization of several hierarchical geometric access structures and may be used to enhance the performance of many neural network like algorithms. While we compared radial basis functions against a different mapping learning technique, bumptrees may be used to boost the retrieval performance of radial basis functions directly when the basis functions decay away from their centers. Many other neural network approaches in which much of the network does not perform useful work for every query are also susceptible to sometimes dramatic speedups through the use of this kind of access SbUCture. References L. Devroye and L. Gyorfi. (1985) Nonparametric Density Estimation: The Ll View, New York: Wiley. ]. H. Friedman, ]. L. Bentley and R. A. Finkel. (1977) An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Software 3:209-226. B. Mel. (1990) Connectionist Robot Motion Planning. A Neurally-Inspired Approach to Visually-Guided Reaching. San Diego, CA: Academic Press. S. M. Omohundro. (1987) Efficient algorithms with neural network behavior. Complex Systems 1:273-347. S. 
S. M. Omohundro. (1989) Five balltree construction algorithms. International Computer Science Institute Technical Report TR-89-063.
S. M. Omohundro. (1990) Geometric learning algorithms. Physica D 42:307-321.
R. F. Sproull. (1990) Refinements to Nearest-Neighbor Searching in k-d Trees. Sutherland, Sproull and Associates Technical Report SSAPP #184c, to appear in Algorithmica.
A PAC-Bayes Risk Bound for General Loss Functions

Pascal Germain, Departement IFT-GLO, Universite Laval, Quebec, Canada. Pascal.Germain.1@ulaval.ca
Alexandre Lacasse, Departement IFT-GLO, Universite Laval, Quebec, Canada. Alexandre.Lacasse@ift.ulaval.ca
Francois Laviolette, Departement IFT-GLO, Universite Laval, Quebec, Canada. Francois.Laviolette@ift.ulaval.ca
Mario Marchand, Departement IFT-GLO, Universite Laval, Quebec, Canada. Mario.Marchand@ift.ulaval.ca

Abstract
We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a wide class of loss functions (which includes the exponential loss and the logistic loss). Our numerical experiments with Adaboost indicate that the proposed upper bound, computed on the training set, behaves very similarly as the true loss estimated on the testing set.

1 Introduction
The PAC-Bayes approach [1, 2, 3, 4, 5] has been very effective at providing tight risk bounds for large-margin classifiers such as the SVM [4, 6]. Within this approach, we consider a prior distribution P over a space of classifiers that characterizes our prior belief about good classifiers (before the observation of the data) and a posterior distribution Q (over the same space of classifiers) that takes into account the additional information provided by the training data. A remarkable result that came out from this line of research, known as the "PAC-Bayes theorem", provides a tight upper bound on the risk of a stochastic classifier (defined on the posterior Q) called the Gibbs classifier. In the context of binary classification, the Q-weighted majority vote classifier (related to this stochastic classifier) labels any input instance with the label output by the stochastic classifier with probability more than half. Since at least half of the Q measure of the classifiers err on an example incorrectly classified by the majority vote, it follows that the error rate of the majority vote is at most twice the error rate of the Gibbs classifier. Therefore, given enough training data, the PAC-Bayes theorem will give a small risk bound on the majority vote classifier only when the risk of the Gibbs classifier is small. While the Gibbs classifiers related to the large-margin SVM classifiers have indeed a low risk [6, 4], this is clearly not the case for the majority vote classifiers produced by bagging [7] and boosting [8], where the risk of the associated Gibbs classifier is normally close to 1/2. Consequently, the PAC-Bayes theorem is currently not able to recognize the predictive power of the majority vote in these circumstances.

In an attempt to progress towards a theory giving small risk bounds for low-risk majority votes having a large risk for the associated Gibbs classifier, we provide here a risk bound for convex combinations of classifiers under quite arbitrary loss functions, including those normally used for boosting (like the exponential loss) and those that can give a tighter upper bound to the zero-one loss of weighted majority vote classifiers (like the sigmoid loss). Our numerical experiments with Adaboost [8] indicate that the proposed upper bound for the exponential loss and the sigmoid loss, computed on the training set, behaves very similarly as the true loss estimated on the testing set.

2 Basic Definitions and Motivation
We consider binary classification problems where the input space X consists of an arbitrary subset of R^n and the output space Y = {-1, +1}. An example is an input-output (x, y) pair where x \in X and y \in Y.
Throughout the paper, we adopt the PAC setting where each example (x, y) is drawn according to a fixed, but unknown, probability distribution D on X \times Y. We consider learning algorithms that work in a fixed hypothesis space H of binary classifiers and produce a convex combination f_Q of binary classifiers taken from H. Each binary classifier h \in H contributes to f_Q with a weight Q(h) \geq 0. For any input example x \in X, the real-valued output f_Q(x) is given by

    f_Q(x) = \sum_{h \in H} Q(h) h(x),

where h(x) \in \{-1, +1\}, f_Q(x) \in [-1, +1], and \sum_{h \in H} Q(h) = 1. Consequently, Q(h) will be called the posterior distribution. [Footnote 1: When H is a continuous set, Q(h) denotes a density and the summations over h are replaced by integrals.]

Since f_Q(x) is also the expected class label returned by a binary classifier randomly chosen according to Q, the margin y f_Q(x) of f_Q on example (x, y) is related to the fraction W_Q(x, y) of binary classifiers that err on (x, y) under measure Q as follows. Let I(a) = 1 when predicate a is true and I(a) = 0 otherwise. We then have:

    W_Q(x, y) = E_{h \sim Q} I(h(x) \neq y) = E_{h \sim Q} [ 1/2 - y h(x)/2 ] = 1/2 - (1/2) \sum_{h \in H} Q(h) y h(x) = 1/2 - (1/2) y f_Q(x).

Since E_{(x,y) \sim D} W_Q(x, y) is the Gibbs error rate (by definition), we see that the expected margin is just one minus twice the Gibbs error rate. In contrast, the error of the Q-weighted majority vote is given by

    E_{(x,y) \sim D} I( W_Q(x, y) > 1/2 )
      = E_{(x,y) \sim D} \lim_{\beta \to \infty} [ (1/2) \tanh( \beta [2 W_Q(x, y) - 1] ) + 1/2 ]
      \leq E_{(x,y) \sim D} [ \tanh( \beta [2 W_Q(x, y) - 1] ) + 1 ]        (\forall \beta > 0)
      \leq E_{(x,y) \sim D} \exp( \beta [2 W_Q(x, y) - 1] )                 (\forall \beta > 0).

Hence, for large enough \beta, the sigmoid loss (or tanh loss) of f_Q should be very close to the error rate of the Q-weighted majority vote. Moreover, the error rate of the majority vote is always upper bounded by twice that sigmoid loss for any \beta > 0. The sigmoid loss is, in turn, upper bounded by the exponential loss (which is used, for example, in Adaboost [9]). More generally, we will provide tight risk bounds for any loss function that can be expanded by a Taylor series around W_Q(x, y) = 1/2. Hence we consider any loss function \zeta_Q(x, y) that can be written as

    \zeta_Q(x, y) = 1/2 + (1/2) \sum_{k=1}^{\infty} g(k) (2 W_Q(x, y) - 1)^k                          (1)
                  = 1/2 + (1/2) \sum_{k=1}^{\infty} g(k) ( E_{h \sim Q} -y h(x) )^k,                  (2)

and our task is to provide tight bounds for the expected loss \zeta_Q that depend on the empirical loss \hat{\zeta}_Q measured on a training sequence S = ((x_1, y_1), ..., (x_m, y_m)) of m examples, where

    \zeta_Q  :=  E_{(x,y) \sim D} \zeta_Q(x, y);        \hat{\zeta}_Q  :=  (1/m) \sum_{i=1}^{m} \zeta_Q(x_i, y_i).        (3)

Note that by upper bounding \zeta_Q, we are taking into account all the moments of W_Q. In contrast, the PAC-Bayes theorem [2, 3, 4, 5] currently only upper bounds the first moment E_{(x,y) \sim D} W_Q(x, y).

3 A PAC-Bayes Risk Bound for Convex Combinations of Classifiers
The PAC-Bayes theorem [2, 3, 4, 5] is a statement about the expected zero-one loss of a Gibbs classifier. Given any distribution over a space of classifiers, the Gibbs classifier labels any example x \in X according to a classifier randomly drawn from that distribution. Hence, to obtain a PAC-Bayesian bound for the expected general loss \zeta_Q of a convex combination of classifiers, let us relate \zeta_Q to the zero-one loss of a Gibbs classifier. For this task, let us first write

    E_{(x,y) \sim D} ( E_{h \sim Q} -y h(x) )^k = E_{(x,y) \sim D} E_{h_1 \sim Q} E_{h_2 \sim Q} \cdots E_{h_k \sim Q} (-y)^k h_1(x) h_2(x) \cdots h_k(x).

Note that the product h_1(x) h_2(x) \cdots h_k(x) defines another binary classifier that we denote as h_{1..k}(x). We now define the error rate R(h_{1..k}) of h_{1..k} as
    R(h_{1..k})  :=  E_{(x,y) \sim D} I( (-y)^k h_{1..k}(x) = sgn(g(k)) )                              (4)
                 =  1/2 + (1/2) sgn(g(k)) E_{(x,y) \sim D} (-y)^k h_{1..k}(x),

where sgn(g) = +1 if g > 0 and -1 otherwise. If we now use E_{h_{1..k} \sim Q^k} to denote E_{h_1 \sim Q} E_{h_2 \sim Q} \cdots E_{h_k \sim Q}, Equation 2 now becomes

    \zeta_Q = 1/2 + (1/2) \sum_{k=1}^{\infty} g(k) E_{(x,y) \sim D} ( E_{h \sim Q} -y h(x) )^k
            = 1/2 + (1/2) \sum_{k=1}^{\infty} |g(k)| sgn(g(k)) E_{h_{1..k} \sim Q^k} E_{(x,y) \sim D} (-y)^k h_{1..k}(x)
            = 1/2 + \sum_{k=1}^{\infty} |g(k)| E_{h_{1..k} \sim Q^k} ( R(h_{1..k}) - 1/2 ).            (5)

Apart from constant factors, Equation 5 relates \zeta_Q to the zero-one loss of a new type of Gibbs classifier. Indeed, if we define

    c  :=  \sum_{k=1}^{\infty} |g(k)|,                                                                 (6)

Equation 5 can be rewritten as

    (1/c)(\zeta_Q - 1/2) + 1/2  =  (1/c) \sum_{k=1}^{\infty} |g(k)| E_{h_{1..k} \sim Q^k} R(h_{1..k})  :=  R(G_{\bar{Q}}).        (7)

The new type of Gibbs classifier is denoted above by G_{\bar{Q}}, where \bar{Q} is a distribution over the product classifiers h_{1..k} with variable length k. More precisely, given an example x to be labelled by G_{\bar{Q}}, we first choose at random a number k \in N^+ according to the discrete probability distribution given by |g(k)|/c and then we choose h_{1..k} randomly according to Q^k to classify x with h_{1..k}(x). The risk R(G_{\bar{Q}}) of this new Gibbs classifier is then given by Equation 7.

We will present a tight PAC-Bayesian bound for R(G_{\bar{Q}}) which will automatically translate into a bound for \zeta_Q via Equation 7. This bound will depend on the empirical risk R_S(G_{\bar{Q}}), which relates to the empirical loss \hat{\zeta}_Q (measured on the training sequence S of m examples) through the equation

    (1/c)(\hat{\zeta}_Q - 1/2) + 1/2  =  (1/c) \sum_{k=1}^{\infty} |g(k)| E_{h_{1..k} \sim Q^k} R_S(h_{1..k})  :=  R_S(G_{\bar{Q}}),

where

    R_S(h_{1..k})  :=  (1/m) \sum_{i=1}^{m} I( (-y_i)^k h_{1..k}(x_i) = sgn(g(k)) ).                   (8)

Note that Equations 7 and 8 imply that

    \zeta_Q - \hat{\zeta}_Q = c [ R(G_{\bar{Q}}) - R_S(G_{\bar{Q}}) ].

Hence, any looseness in the bound for R(G_{\bar{Q}}) will be amplified by the scaling factor c in the bound for \zeta_Q. Therefore, within this approach, the bound for \zeta_Q can be tight only for small values of c. Note however that loss functions having a small value of c are commonly used in practice. Indeed, learning algorithms for feed-forward neural networks, and other approaches that construct a real-valued function f_Q(x) \in [-1, +1] from binary classification data, typically use a loss function of the form |f_Q(x) - y|^r / 2, for r \in {1, 2}. In these cases we have

    (1/2) |f_Q(x) - y|^r = (1/2) | E_{h \sim Q} y h(x) - 1 |^r = 2^{r-1} |W_Q(x, y)|^r,

which gives c = 1 for r = 1, and c = 3 for r = 2.

Given a set H of classifiers, a prior distribution P on H, and a training sequence S of m examples, the learner will output a posterior distribution Q on H which, in turn, gives a convex combination f_Q that suffers the expected loss \zeta_Q. Although Equation 7 holds only for a distribution \bar{Q} defined by the absolute values of the Taylor coefficients g(k) and the product distribution Q^k, the PAC-Bayesian theorem will hold for any prior \bar{P} and posterior \bar{Q} defined on

    H^*  :=  \bigcup_{k \in N^+} H^k,                                                                  (9)

and for any zero-one valued loss function \ell(h(x), y) defined \forall h \in H^* and \forall (x, y) \in X \times Y (not just the one defined by Equation 4). This PAC-Bayesian theorem upper-bounds the value of kl( R_S(G_{\bar{Q}}) || R(G_{\bar{Q}}) ), where

    kl(q || p)  :=  q ln(q/p) + (1 - q) ln((1 - q)/(1 - p))

denotes the Kullback-Leibler divergence between the Bernoulli distributions with probability of success q and probability of success p. Note that an upper bound on kl( R_S(G_{\bar{Q}}) || R(G_{\bar{Q}}) ) provides both an upper and a lower bound on R(G_{\bar{Q}}).
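To make the construction of G_{\bar{Q}} concrete, the following Python sketch samples a prediction from this composite Gibbs classifier. The finite hypothesis list, the posterior Q, and the coefficient function g are hypothetical placeholders, and the series over k is truncated at k_max, so this illustrates the sampling scheme rather than reproducing the authors' implementation.

```python
import numpy as np

def sample_gibbs_prediction(x, hypotheses, Q, g, rng, k_max=20):
    """Label x with the composite Gibbs classifier G_Qbar (see Equation 7).

    hypotheses: list of callables h(x) -> {-1, +1}
    Q:          posterior weights over hypotheses (non-negative, sums to 1)
    g:          callable returning the Taylor coefficient g(k), k >= 1
    """
    # Probability of picking length k is |g(k)| / c, with c = sum_k |g(k)|
    # (the series is truncated at k_max for this sketch).
    abs_g = np.array([abs(g(k)) for k in range(1, k_max + 1)])
    c = abs_g.sum()
    k = rng.choice(np.arange(1, k_max + 1), p=abs_g / c)
    # Draw k base classifiers i.i.d. from Q and form the product classifier.
    idx = rng.choice(len(hypotheses), size=k, p=Q)
    return int(np.prod([hypotheses[i](x) for i in idx]))  # in {-1, +1}

# Hypothetical usage: rng = np.random.default_rng(0), g = lambda k: beta**k / factorial(k), etc.
```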
The upper bound on kl( R_S(G_{\bar{Q}}) || R(G_{\bar{Q}}) ) depends on the value of KL(\bar{Q} || \bar{P}), where

    KL(\bar{Q} || \bar{P})  :=  E_{h \sim \bar{Q}} ln( \bar{Q}(h) / \bar{P}(h) )

denotes the Kullback-Leibler divergence between distributions \bar{Q} and \bar{P} defined on H^*. In our case, since we want a bound on R(G_{\bar{Q}}) that translates into a bound for \zeta_Q, we need a \bar{Q} that satisfies Equation 7. To minimize the value of KL(\bar{Q} || \bar{P}), it is desirable to choose a prior \bar{P} having properties similar to those of \bar{Q}. Namely, the probabilities assigned by \bar{P} to the possible values of k will also be given by |g(k)|/c. Moreover, we will restrict ourselves to the case where the k classifiers from H are chosen independently, each according to the prior P on H (however, other choices for \bar{P} are clearly possible). In this case we have

    KL(\bar{Q} || \bar{P}) = (1/c) \sum_{k=1}^{\infty} |g(k)| E_{h_{1..k} \sim Q^k} ln[ (|g(k)|/c) Q^k(h_{1..k}) / ( (|g(k)|/c) P^k(h_{1..k}) ) ]
                           = (1/c) \sum_{k=1}^{\infty} |g(k)| E_{h_1 \sim Q} \cdots E_{h_k \sim Q} \sum_{i=1}^{k} ln( Q(h_i) / P(h_i) )
                           = (1/c) \sum_{k=1}^{\infty} |g(k)| k E_{h \sim Q} ln( Q(h) / P(h) )
                           = \bar{k} KL(Q || P),                                                       (10)

where

    \bar{k}  :=  (1/c) \sum_{k=1}^{\infty} |g(k)| k.                                                   (11)

We then have the following theorem.

Theorem 1. For any set H of binary classifiers, any prior distribution \bar{P} on H^*, and any \delta \in (0, 1], we have

    Pr_{S \sim D^m} ( \forall \bar{Q} on H^*:  kl( R_S(G_{\bar{Q}}) || R(G_{\bar{Q}}) )  \leq  (1/m) [ KL(\bar{Q} || \bar{P}) + ln((m+1)/\delta) ] )  \geq  1 - \delta.

Proof. The proof directly follows from the fact that we can apply the PAC-Bayes theorem of [4] to priors and posteriors defined on the space H^* of binary classifiers with any zero-one valued loss function.

Note that Theorem 1 directly provides upper and lower bounds on \zeta_Q when we use Equations 7 and 8 to relate R(G_{\bar{Q}}) and R_S(G_{\bar{Q}}) to \zeta_Q and \hat{\zeta}_Q, and when we use Equation 10 for KL(\bar{Q} || \bar{P}). Consequently, we have the following theorem.

Theorem 2. Consider any loss function \zeta_Q(x, y) defined by Equation 1. Let \zeta_Q and \hat{\zeta}_Q be, respectively, the expected loss and its empirical estimate (on a sample of m examples) as defined by Equation 3. Let c and \bar{k} be defined by Equations 6 and 11 respectively. Then for any set H of binary classifiers, any prior distribution P on H, and any \delta \in (0, 1], we have

    Pr_{S \sim D^m} ( \forall Q on H:  kl( (1/c)(\hat{\zeta}_Q - 1/2) + 1/2  ||  (1/c)(\zeta_Q - 1/2) + 1/2 )  \leq  (1/m) [ \bar{k} KL(Q || P) + ln((m+1)/\delta) ] )  \geq  1 - \delta.

4 Bound Behavior During Adaboost
We have decided to examine the behavior of the proposed bounds during Adaboost since this learning algorithm generally produces a weighted majority vote having a large Gibbs risk E_{(x,y)} W_Q(x, y) (i.e., a small expected margin) and a small Var_{(x,y)} W_Q(x, y) (i.e., a small variance of the margin). Indeed, recall that one of our main motivations was to find a tight risk bound for the majority vote precisely under these circumstances.

We have used the "symmetric" version of Adaboost [10, 9] where, at each boosting round t, the weak learning algorithm produces a classifier h_t with the smallest empirical error

    \epsilon_t = \sum_{i=1}^{m} D_t(i) I[ h_t(x_i) \neq y_i ]

with respect to the boosting distribution D_t(i) on the indices i \in {1, ..., m} of the training examples. After each boosting round t, this distribution is updated according to

    D_{t+1}(i) = (1/Z_t) D_t(i) exp( -y_i \alpha_t h_t(x_i) ),

where Z_t is the normalization constant required for D_{t+1} to be a distribution, and where

    \alpha_t = (1/2) ln( (1 - \epsilon_t) / \epsilon_t ).

Since our task is not to obtain the majority vote with the smallest possible risk but to investigate the tightness of the proposed bounds, we have used the standard "decision stumps" for the set H of classifiers that can be chosen by the weak learner.
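As a concrete rendering of the update rules above, here is a minimal Python sketch of this symmetric Adaboost variant with decision stumps. The exhaustive stump search and the numerical guard on \epsilon_t are our simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def adaboost_stumps(X, y, T):
    """Symmetric Adaboost with decision stumps (a sketch of the updates above).

    X: (m, n) array of discretized attributes; y: (m,) labels in {-1, +1}.
    Returns a list of (feature, threshold, sign, alpha_t) tuples.
    """
    m, n = X.shape
    D = np.full(m, 1.0 / m)                      # boosting distribution D_1
    ensemble = []
    for t in range(T):
        best = None
        # Weak learner: the stump minimizing the weighted error eps_t.
        for j in range(n):
            for thr in np.unique(X[:, j])[:-1]:  # k(j) - 1 thresholds per attribute
                for s in (+1, -1):               # stump and its boolean complement
                    pred = np.where(X[:, j] > thr, s, -s)
                    eps = D[pred != y].sum()
                    if best is None or eps < best[0]:
                        best = (eps, j, thr, s, pred)
        eps, j, thr, s, pred = best
        eps = np.clip(eps, 1e-12, 1 - 1e-12)     # guard against log(0)
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        # Put more weight on examples misclassified by h_t, then renormalize.
        D = D * np.exp(-y * alpha * pred)
        D /= D.sum()
        ensemble.append((j, thr, s, alpha))
    return ensemble
```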
Each decision stump is a threshold classifier that depends on a single attribute: it outputs +y when the tested attribute exceeds the threshold and predicts -y otherwise, where y \in {-1, +1}. For each decision stump h \in H, its boolean complement is also in H. Hence, we have 2[k(i) - 1] possible decision stumps on an attribute i having k(i) possible (discrete) values. Hence, for data sets having n attributes, we have exactly |H| = \sum_{i=1}^{n} 2[k(i) - 1] classifiers. Data sets having continuous-valued attributes have been discretized in our numerical experiments.

From Theorem 2 and Equation 10, the bound on \zeta_Q depends on KL(Q || P). We have chosen a uniform prior P(h) = 1/|H| \forall h \in H. We therefore have

    KL(Q || P) = \sum_{h \in H} Q(h) ln( Q(h) / P(h) ) = \sum_{h \in H} Q(h) ln Q(h) + ln|H|  :=  -H(Q) + ln|H|.

At boosting round t, Adaboost changes the distribution from D_t to D_{t+1} by putting more weight on the examples that are incorrectly classified by h_t. This strategy is supported by the proposed bound on \zeta_Q since it has the effect of increasing the entropy H(Q) as a function of t. Indeed, apart from tiny fluctuations, the entropy was seen to be nondecreasing as a function of t in all of our boosting experiments.

We have focused our attention on two different loss functions: the exponential loss and the sigmoid loss.

4.1 Results for the Exponential Loss
The exponential loss E_Q(x, y) is the obvious choice for boosting since the typical analysis [8, 10, 9] shows that the empirical estimate of the exponential loss is decreasing at each boosting round. [Footnote 2: In fact, this is true only for the positive linear combination produced by Adaboost. The empirical exponential risk of the convex combination f_Q is not always decreasing, as we shall see.] More precisely, we have chosen

    E_Q(x, y)  :=  (1/2) exp( \beta [2 W_Q(x, y) - 1] ).                                               (12)

For this loss function, we have

    c = e^{\beta} - 1,        \bar{k} = \beta / (1 - e^{-\beta}).

Since c increases exponentially rapidly with \beta, so will the risk upper-bound for E_Q. Hence, unfortunately, we can obtain a tight upper-bound only for small values of \beta.

All the data sets used were obtained from the UCI repository. Each data set was randomly split into two halves of the same size: one for the training set and the other for the testing set. Figure 1 illustrates the typical behavior of the exponential loss bound on the Mushroom and Sonar data sets, containing 8124 examples and 208 examples respectively. We first note that, although the test error of the majority vote (generally) decreases as a function of the number T of boosting rounds, the risk of the Gibbs classifier, E_{(x,y)} W_Q(x, y), increases as a function of T, but its variance Var_{(x,y)} W_Q(x, y) decreases dramatically. Another striking feature is the fact that the exponential loss bound curve, computed on the training set, is essentially parallel to the true exponential loss curve computed on the testing set. This same parallelism was observed for all the UCI data sets we have examined so far. [Footnote 3: These include the following data sets: Wisconsin-breast, breast cancer, German credit, ionosphere, kr-vskp, USvotes, mushroom, and sonar.] Unfortunately, as we can see in Figure 2, the risk bound increases rapidly as a function of \beta. Interestingly however, the risk bound curves remain parallel to the true risk curves.

4.2 Results for the Sigmoid Loss
We have also investigated the sigmoid loss T_Q(x, y) defined by

    T_Q(x, y)  :=  1/2 + (1/2) tanh( \beta [2 W_Q(x, y) - 1] ).                                        (13)
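To illustrate how Theorem 2 is evaluated in practice, the following Python sketch computes c and \bar{k} for the exponential loss of Equation 12 (Section 4.1 above) and numerically inverts the Bernoulli kl divergence by bisection, turning an empirical loss into an upper bound on \zeta_Q. The numerical values in the example call are hypothetical.

```python
import numpy as np

def kl_bernoulli(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p).
    eps = 1e-12
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def zeta_upper_bound(zeta_hat, kl_qp, m, beta, delta=0.05):
    """Upper bound on the expected loss zeta_Q (Theorem 2, exponential loss)."""
    c = np.exp(beta) - 1.0                 # Equation 6 for the exponential loss
    k_bar = beta / (1.0 - np.exp(-beta))   # Equation 11 for the exponential loss
    rs = (zeta_hat - 0.5) / c + 0.5        # empirical risk R_S(G_Qbar), cf. Equation 8
    budget = (k_bar * kl_qp + np.log((m + 1) / delta)) / m
    # Largest p >= rs with kl(rs || p) <= budget, found by bisection
    # (kl(rs || p) is increasing in p on [rs, 1)).
    lo, hi = rs, 1.0 - 1e-12
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kl_bernoulli(rs, mid) <= budget else (lo, mid)
    return c * (lo - 0.5) + 0.5            # map R(G_Qbar) back to zeta_Q (Equation 7)

# Hypothetical numbers: empirical loss 0.40, KL(Q||P) = 2 nats, m = 4000, beta = ln 2.
print(zeta_upper_bound(zeta_hat=0.40, kl_qp=2.0, m=4000, beta=np.log(2)))
```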
Figure 1: Behavior of the exponential risk bound (E_Q bound), the true exponential risk (E_Q on test), the Gibbs risk (E(W_Q) on test), its variance (Var(W_Q) on test), and the test error of the majority vote (MV error on test) as a function of the boosting round T for the Mushroom (left) and the Sonar (right) data sets. The risk bound and the true risk were computed for \beta = ln 2.

Figure 2: Behavior of the true exponential risk (left) and the exponential risk bound (right) for different values of \beta (\beta = 1, 2, 3, 4) on the Mushroom data set.

Since the Taylor series expansion of tanh(x) about x = 0 converges only for |x| < \pi/2, we are limited to \beta \leq \pi/2. Under these circumstances, we have

    c = tan(\beta),        \bar{k} = \beta / ( cos(\beta) sin(\beta) ).

Similarly as in Figure 1, we see in Figure 3 that the sigmoid loss bound curve, computed on the training set, is essentially parallel to the true sigmoid loss curve computed on the testing set. Moreover, the bound appears to be as tight as the one for the exponential risk in Figure 1.

5 Conclusion
By trying to obtain a tight PAC-Bayesian risk bound for the majority vote, we have obtained a PAC-Bayesian risk bound for any loss function \zeta_Q that has a convergent Taylor expansion around W_Q = 1/2 (such as the exponential loss and the sigmoid loss). Unfortunately, the proposed risk bound is tight only for small values of the scaling factor c involved in the relation between the expected loss \zeta_Q of a convex combination of binary classifiers and the zero-one loss of a related Gibbs classifier G_{\bar{Q}}. However, it is quite encouraging to notice in our numerical experiments with Adaboost that the proposed loss bound (for the exponential loss and the sigmoid loss) behaves very similarly as the true loss.

Figure 3: Behavior of the sigmoid risk bound (T_Q bound), the true sigmoid risk (T_Q on test), the Gibbs risk (E(W_Q) on test), its variance (Var(W_Q) on test), and the test error of the majority vote (MV error on test) as a function of the boosting round T for the Mushroom (left) and the Sonar (right) data sets. The risk bound and the true risk were computed for \beta = ln 2.

Acknowledgments
Work supported by NSERC Discovery grants 262067 and 122405.

References
[1] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999.
[2] Matthias Seeger. PAC-Bayesian generalization bounds for Gaussian processes. Journal of Machine Learning Research, 3:233-269, 2002.
[3] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5-21, 2003.
[4] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.
[5] Francois Laviolette and Mario Marchand. PAC-Bayes risk bounds for sample-compressed Gibbs classifiers. Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 481-488, 2005.
[6] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In S. Becker, S. Thrun, and K.
Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 423-430. MIT Press, Cambridge, MA, 2003.
[7] Leo Breiman. Bagging predictors. Machine Learning, 24:123-140, 1996.
[8] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119-139, 1997.
[9] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297-336, 1999.
[10] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26:1651-1686, 1998.
Kernel Maximum Entropy Data Transformation and an Enhanced Spectral Clustering Algorithm

Robert Jenssen(1), Torbjorn Eltoft(1), Mark Girolami(2) and Deniz Erdogmus(3)
(1) Department of Physics and Technology, University of Tromso, Norway
(2) Department of Computing Science, University of Glasgow, Scotland
(3) Department of Computer Science and Engineering, Oregon Health and Science University, USA
(Corresponding author: Robert Jenssen. Phone: (+47) 776 46493. Email: robertj@phys.uit.no.)

Abstract
We propose a new kernel-based data transformation technique. It is founded on the principle of maximum entropy (MaxEnt) preservation, hence named kernel MaxEnt. The key measure is Renyi's entropy estimated via Parzen windowing. We show that kernel MaxEnt is based on eigenvectors, and is in that sense similar to kernel PCA, but may produce strikingly different transformed data sets. An enhanced spectral clustering algorithm is proposed, by replacing kernel PCA by kernel MaxEnt as an intermediate step. This has a major impact on performance.

1 Introduction
Data transformation is of fundamental importance in machine learning, and may greatly improve and simplify tasks such as clustering. Some of the most well-known approaches to data transformation are based on eigenvectors of certain matrices. Traditional techniques include principal component analysis (PCA) and classical multidimensional scaling. These are linear methods. Recent advanced non-linear techniques include locally-linear embedding [1] and isometric mapping [2]. Of special interest to this paper is kernel PCA [3], a member of the kernel-based methods [4]. Recently, it has been shown that there is a close connection between the kernel methods and information theoretic learning [5, 6, 7, 8].

We propose a new kernel-based data transformation technique based on the idea of maximum entropy preservation. The new method, named kernel MaxEnt, is based on Renyi's quadratic entropy estimated via Parzen windowing. The data transformation is obtained using eigenvectors of the data affinity matrix. These eigenvectors are in general not the same as those used in kernel PCA. We show that kernel MaxEnt may produce strikingly different transformed data sets than kernel PCA. We propose an enhanced spectral clustering algorithm, by replacing kernel PCA by kernel MaxEnt as an intermediate step. This seemingly minor adjustment has a huge impact on the performance of the algorithm.

This paper is organized as follows. In section 2, we briefly review kernel PCA. Section 3 is devoted to the kernel MaxEnt method. Some illustrations are given in section 4. The enhanced spectral clustering is discussed in section 5. Finally, we conclude the paper in section 6.

2 Kernel PCA
PCA is a linear data transformation technique based on the eigenvalues and eigenvectors of the (d x d) data correlation matrix, where d is the data dimensionality. A dimensionality reduction from d to l < d is obtained by projecting a data point onto a subspace spanned by the eigenvectors (principal axes) corresponding to the l largest eigenvalues. It is well known that this data transformation preserves the maximum amount of variance in the l-dimensional data compared to the original d-dimensional data. Scholkopf et al. [3] proposed a non-linear extension, by performing PCA implicitly in a kernel feature space which is non-linearly related to the input space via the mapping x_i -> \Phi(x_i), i = 1, ..., N.
Using the kernel trick to compute inner products, k(x_i, x_j) = <\Phi(x_i), \Phi(x_j)>, it was shown that the eigenvalue problem in terms of the feature space correlation matrix reduces to an eigenvalue problem in terms of the kernel matrix K_x, where element (i, j) of K_x equals k(x_i, x_j), i, j = 1, ..., N. This matrix can be eigendecomposed as K_x = E D E^T, where D is a diagonal matrix storing the eigenvalues in descending order, and E is a matrix with the eigenvectors as columns. Let \Phi_pca be a matrix where each column corresponds to the PCA projection of the data points \Phi(x_i), i = 1, ..., N, onto the subspace spanned by the l largest kernel space principal axes. Then \Phi_pca = D_l^{1/2} E_l^T, where the (l x l) matrix D_l stores the l largest eigenvalues, and the (N x l) matrix E_l stores the corresponding eigenvectors. This is the kernel PCA transformed data set. [Footnote 1: In [3] the kernel feature space data was assumed centered, obtained by a centering operation of the kernel matrix. We do not assume centered data here.]

Kernel PCA thus preserves variance in terms of the kernel induced feature space. However, kernel PCA is not easily interpreted in terms of the input space data set. How does variance preservation in the kernel feature space correspond to an operation on the input space data set? To the best of our knowledge, there are no such intuitive interpretations of kernel PCA. In the next section, we introduce kernel MaxEnt, which we show is related to kernel PCA. However, kernel MaxEnt may be interpreted in terms of the input space, and will in general perform a different projection in the kernel space.

3 Kernel MaxEnt
The Renyi quadratic entropy is given by H_2(x) = -log \int f^2(x) dx [9], where f(x) is the density associated with the random variable X. A d-dimensional data set x_i, i = 1, ..., N, generated from f(x), is assumed available. A non-parametric estimator for H_2(x) is obtained by replacing the actual pdf by its Parzen window estimator, given by [10]

    \hat{f}(x) = (1/N) \sum_{i=1}^{N} W_\sigma(x, x_i),        W_\sigma(x, x_i) = (2\pi\sigma^2)^{-d/2} exp( -||x - x_i||^2 / (2\sigma^2) ).        (1)

The Parzen window need not be Gaussian, but it must be a density itself. The following derivation assumes a Gaussian window. Non-Gaussian windows are easily incorporated. Hence, we obtain

    \hat{H}_2(x) = -log \int \hat{f}^2(x) dx
                 = -log (1/N^2) \sum_{i=1}^{N} \sum_{j=1}^{N} \int W_\sigma(x, x_i) W_\sigma(x, x_j) dx
                 = -log (1/N^2) \sum_{i=1}^{N} \sum_{j=1}^{N} W_{\sqrt{2}\sigma}(x_i, x_j),            (2)

where in the last step the convolution theorem for Gaussian functions has been employed. For notational simplicity, we denote W_{\sqrt{2}\sigma}(x_i, x_j) as k(x_i, x_j). Note that since W_{\sqrt{2}\sigma}(., .) is a Gaussian function, it is also a Mercer kernel, and so is k(., .). In the following, we construct the kernel matrix K_x such that element (i, j) of K_x equals k(x_i, x_j), i, j = 1, ..., N.

It is easily shown that the Renyi quadratic entropy may be expressed compactly in terms of the kernel matrix as \hat{H}_2(x) = -log (1/N^2) 1^T K_x 1, where 1 is an (N x 1) ones-vector. Since the logarithm is a monotonic function, we will in the remainder of this paper focus on the quantity V(x) = (1/N^2) 1^T K_x 1. It is thus clear that all the information regarding the Renyi entropy resides in the kernel matrix K_x. Hence, the kernel matrix is the input space quantity of interest in this paper.
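As a quick illustration of Equation 2, this Python sketch estimates V(x) = (1/N^2) 1^T K_x 1 from a sample using a Gaussian Parzen window; the toy sample and the window width are hypothetical.

```python
import numpy as np

def renyi_information_potential(X, sigma):
    """V(x) = (1/N^2) 1^T K_x 1 with k(x_i, x_j) = W_{sqrt(2) sigma}(x_i, x_j)."""
    N, d = X.shape
    # Pairwise squared distances between all samples.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    s2 = 2.0 * sigma**2                      # variance of the convolved window
    K = np.exp(-sq / (2.0 * s2)) / (2.0 * np.pi * s2) ** (d / 2.0)
    return K.sum() / N**2

X = np.random.default_rng(0).normal(size=(200, 2))
V = renyi_information_potential(X, sigma=0.5)
print(V, -np.log(V))   # V(x) and the Renyi quadratic entropy estimate
```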
A well-known input space data transformation principle is founded on the idea of maximum entropy (MaxEnt), usually defined in terms of minimum model complexity. In this paper, we define MaxEnt differently, as a mapping X -> Y such that the entropy associated with Y is maximally similar to the entropy of X. Since we are concerned with Renyi's entropy, it is therefore clear that such a data mapping results in a V(y) = (1/N^2) 1^T K_y 1, in terms of the Y data set, which should be as close as possible to V(x) = (1/N^2) 1^T K_x 1. This means that the kernel matrix K_y must be maximally similar to K_x in some sense. Since our input space quantity of concern is the kernel matrix, we are only implicitly concerned with the Y data set (we do not actually want to obtain Y, nor is its dimensionality of interest).

The kernel matrix can be decomposed as K_x = E D E^T. The kernel matrix is at the same time an inner-product matrix in the Mercer kernel induced feature space. Let \Phi_x be a matrix such that each column represents an approximation to the corresponding kernel feature space data point in the set \Phi(x_1), ..., \Phi(x_N). An approximation which preserves inner products is given by \Phi_x = D^{1/2} E^T, since then K_x = \Phi_x^T \Phi_x = E D E^T. Note that \Phi_x = D^{1/2} E^T is the projection onto all the principal axes in the Mercer kernel feature space, hence defining an N-dimensional data set.

We now describe a dimensionality reduction in the Mercer kernel space, obtaining the k-dimensional \Phi_y from \Phi_x, yielding K_y = \Phi_y^T \Phi_y such that V(y) ~ V(x). Notice that we may rewrite V(x) as follows [8]:

    V(x) = (1/N^2) \sum_{i=1}^{N} \lambda_i (1^T e_i)^2 = (1/N^2) \sum_{i=1}^{N} \lambda_i \gamma_i^2,        (3)

where e_i is the eigenvector corresponding to the i-th column of K_x, and 1^T e_i = \gamma_i. We also assume that the products \lambda_i \gamma_i^2 have been sorted in decreasing order, such that \lambda_1 \gamma_1^2 >= ... >= \lambda_N \gamma_N^2. If we are to approximate V(x) using only k terms (eigenvalues/eigenvectors) of the sum in Eq. (3), we must use the k first terms in order to achieve minimum approximation error. This corresponds to using the k largest \lambda_i \gamma_i^2. Let us define the data set \Phi_y = D_k^{1/2} E_k^T, using the k eigenvalues and eigenvectors of K_x corresponding to the k largest products \lambda_i \gamma_i^2. Hence, K_y = \Phi_y^T \Phi_y = E_k D_k^{1/2} D_k^{1/2} E_k^T = E_k D_k E_k^T, and

    V(y) = (1/N^2) \sum_{i=1}^{k} \lambda_i \gamma_i^2 = (1/N^2) 1^T K_y 1,        (4)

the best approximation to the entropy estimate V(x) using k eigenvalues and eigenvectors. We thus refer to the mapping \Phi_y = D_k^{1/2} E_k^T as a maximum entropy data transformation in a Mercer kernel feature space. Note that this is not the same as the PCA dimensionality reduction in the Mercer kernel feature space, which is defined as \Phi_pca = D_l^{1/2} E_l^T, using the eigenvalues and eigenvectors corresponding to the l largest eigenvalues of K_x. In terms of the eigenvectors of the kernel feature space correlation matrix, we project \Phi(x_i) onto a subspace spanned by different eigenvectors, which is possibly not the most variance preserving (remember that the variance in the kernel feature space data set is given by the sum of the largest eigenvalues).

The kernel MaxEnt procedure, as described above, is summarized in Table 1. It is important to realize that kernel MaxEnt outputs two quantities, which may be used for further data analysis. The kernel space output quantity is the transformed data set \Phi_y = D_k^{1/2} E_k^T. The input space output quantity is the kernel matrix K_y = E_k D_k E_k^T, which is an approximation to the original kernel matrix K_x.

Table 1. Flow of the kernel MaxEnt procedure. There are two possible outputs: the input space kernel matrix K_y, and the kernel space data set \Phi_y.

    Input space:  K_x = E D E^T          -->   Kernel space:  \Phi_x = D^{1/2} E^T
                                                                     |
                                                                     v
    Input space:  K_y = E_k D_k E_k^T    <--   Kernel space:  \Phi_y = D_k^{1/2} E_k^T
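The following Python sketch is a minimal rendering of the procedure in Table 1, assuming a symmetric positive semidefinite kernel matrix K_x has already been computed (e.g., as in the previous sketch); a kernel PCA projection is included for comparison. It is a sketch, not the authors' code.

```python
import numpy as np

def kernel_maxent(K, k):
    """Kernel MaxEnt: keep the k eigenpairs with largest lambda_i * gamma_i^2."""
    lam, E = np.linalg.eigh(K)                # K = E diag(lam) E^T
    gamma = E.sum(axis=0)                     # gamma_i = 1^T e_i
    idx = np.argsort(lam * gamma**2)[::-1][:k]  # sort by entropy contribution
    Phi_y = np.sqrt(np.maximum(lam[idx], 0))[:, None] * E[:, idx].T  # D_k^{1/2} E_k^T
    K_y = E[:, idx] @ np.diag(lam[idx]) @ E[:, idx].T                # E_k D_k E_k^T
    return Phi_y, K_y

def kernel_pca(K, l):
    """Kernel PCA: keep the l eigenpairs with largest eigenvalues."""
    lam, E = np.linalg.eigh(K)
    idx = np.argsort(lam)[::-1][:l]
    return np.sqrt(np.maximum(lam[idx], 0))[:, None] * E[:, idx].T   # D_l^{1/2} E_l^T
```

As a quick consistency check under these assumptions, K_y.sum() / N**2 reproduces V(y) of Equation 4.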
3.1 Interpretation in Terms of Cost Function Minimization
Kernel MaxEnt produces a new kernel matrix K_y = E_k D_k E_k^T such that the sum of the elements of K_y is maximally equal to the sum of the elements of K_x. Hence, kernel MaxEnt picks eigenvectors and eigenvalues in order to minimize the cost function 1^T (K_x - K_y) 1. On the other hand, it is well known that the kernel PCA matrix K_pca = E_l D_l E_l^T, based on the l largest eigenvalues, minimizes the Frobenius norm of (K_x - K_pca), that is, 1^T (K_x - K_pca)^{.2} 1 (where A^{.2} denotes elementwise squaring of the matrix A).

3.2 Kernel MaxEnt Eigenvectors Reveal Cluster Structure
Under "ideal" circumstances, kernel MaxEnt and kernel PCA yield the same result, as shown in the following. Assume that the data consists of C = 2 different maximally compact subsets, such that k(x_i, x_j) = 1 for x_i and x_j in the same subset, and k(x_i, x_j) = 0 for x_i and x_j in different subsets (point clusters). Assume that subset one consists of N_1 data points, and subset two consists of N_2 data points. Then N = N_1 + N_2 and we assume N_1 >= N_2. Then

    K = [ 1_{N_1 x N_1}   0_{N_1 x N_2} ]          E = [ (1/\sqrt{N_1}) 1_{N_1}    0_{N_1} ]
        [ 0_{N_2 x N_1}   1_{N_2 x N_2} ],             [ 0_{N_2}    (1/\sqrt{N_2}) 1_{N_2} ],        (5)

where 1_{M x M} (0_{M x M}) is the (M x M) all-ones (all-zeros) matrix and D = diag(N_1, N_2). Hence, a data point x_i in subgroup one will be represented by x_i -> [1 0]^T and a data point x_j in subgroup two will be represented by x_j -> [0 1]^T, both using \Phi_y and \Phi_pca (see also [11] for a related analysis). Thus, kernel MaxEnt and kernel PCA yield the same data mapping, where each subgroup is mapped into mutually orthogonal points in the kernel space (the clusters were points also in the input space, but not necessarily orthogonal). Hence, in the "ideal" case, the clusters in the transformed data set are spread by 90 degree angles. Also, the eigenvectors carry all necessary information about the cluster structure (cluster memberships can be assigned by a proper thresholding). This kind of "ideal" analysis has been used as a justification for the kernel PCA mapping, where the mapping is based on the C largest eigenvalues/eigenvectors. Such a situation corresponds to maximally concentrated eigenvalues of the kernel matrix.

In practice, however, there will be more than C non-zero eigenvalues, not necessarily concentrated, and corresponding eigenvectors, because there will be no such "ideal" situation. Shawe-Taylor and Cristianini [4] note that kernel PCA can only detect stable patterns if the eigenvalues are concentrated. In practice, the first C eigenvectors may not necessarily be those which carry most information about the clustering structure of the data set. However, kernel MaxEnt will seek to pick those eigenvectors with the blockwise structure corresponding to cluster groupings, because this will make the sum of the elements in K_y as close as possible to the sum of the elements of K_x. Some illustrations of this property follow in the next section.

3.3 Parzen Window Size Selection
The Renyi entropy estimate is directly connected to Parzen windowing. In theory, therefore, an appropriate window, or kernel, size corresponds to an appropriate density estimate. Parzen window size selection has been thoroughly studied in statistics [12]. Many reliable data-driven methods exist, especially for data sets of low to moderate dimensionality. Silverman's rule [12] is one of the simplest, given by

    \hat{\sigma} = \sigma_X [ 4 / ((2d + 1) N) ]^{1/(d+4)},

where \sigma_X^2 = d^{-1} \sum_i \Sigma_{X_{ii}}, and \Sigma_{X_{ii}} are the diagonal elements of the sample covariance matrix. Unless otherwise stated, the window size is determined using this rule.
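A direct Python transcription of Silverman's rule as stated above might look as follows (the variable names are ours):

```python
import numpy as np

def silverman_sigma(X):
    """Silverman's rule for the Parzen window width (Section 3.3)."""
    N, d = X.shape
    # sigma_X^2: average of the diagonal of the sample covariance matrix.
    sigma_X = np.sqrt(np.cov(X, rowvar=False).diagonal().mean() if d > 1
                      else np.var(X, ddof=1))
    return sigma_X * (4.0 / ((2 * d + 1) * N)) ** (1.0 / (d + 4))
```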
4 Illustrations
Fig. 1 (a) shows a ring-shaped data set consisting of C = 3 clusters (marked with different symbols for clarity). The vertical lines in (b) show the 10 largest eigenvalues (normalized). The largest eigenvalue is more than twice as large as the second largest. However, the values of the remaining eigenvalues are not significantly different. The bars in (b) show the entropy terms \lambda_i \gamma_i^2 (normalized) corresponding to these largest eigenvalues. Note that the entropy terms corresponding to the first, fourth and seventh eigenvalues are significantly larger than the rest. This means that kernel MaxEnt is based on the first, fourth and seventh eigenvalue/eigenvector pairs (yielding a 3-dimensional transformed data set). In contrast, kernel PCA is based on the eigenvalue/eigenvector pairs corresponding to the three largest eigenvalues. In (c) the kernel MaxEnt data transformation is shown. Note that the clusters are located along different lines radially from the origin (illustrated by the lines in the figure). These lines are almost orthogonal to each other, hence approximating what would be expected in the "ideal" case. The kernel PCA data transformation is shown in (d). This data set is significantly different. In fact, the mean vectors of the clusters in the kernel PCA representation are not spread angularly. In (e), the first eight eigenvectors are shown. The original data set is ordered, such that the first 63 elements correspond to the innermost ring, the next 126 elements correspond to the ring in the middle, and the final 126 elements correspond to the outermost ring. Observe how eigenvectors one, four and seven are those which carry information about the cluster structure, with their blockwise appearance. The kernel matrix K_x is shown in (f). Ideally, this should be a blockwise matrix. It is not. In (g), the kernel MaxEnt approximation K_y to the original kernel matrix is shown, obtained from eigenvectors one, four and seven. Note the blockwise appearance. In contrast, (h) shows the corresponding K_pca. The same blockwise structure can not be observed.

Fig. 2 (a) shows a ring-shaped data set consisting of two clusters. In (b) and (c) the kernel MaxEnt (eigenvalues/eigenvectors one and five) and kernel PCA transformations are shown, respectively. Again, kernel MaxEnt produces a data set where the clusters are located along almost orthogonal lines, in contrast to kernel PCA. The same phenomenon is observed for the data set shown in (d), with the kernel MaxEnt (eigenvalues/eigenvectors one and four) and kernel PCA transformations shown in (e) and (f), respectively. In addition, (g) and (h) show the kernel MaxEnt (eigenvalues/eigenvectors one, two and five) and kernel PCA transformations of the 16-dimensional pen-based handwritten digit recognition data set (three clusters, digits 0, 1 and 2), extracted from the UCI repository. Again, similar comments can be made.

These illustrations show that kernel MaxEnt produces a different transformed data set than kernel PCA. Also, it produces a kernel matrix K_y having a blockwise appearance. Both the transformed data \Phi_y and the new kernel matrix can be utilized for further data analysis. In the following, we focus on \Phi_y.

5 An Enhanced Spectral Clustering Algorithm
A recent spectral clustering algorithm [7] is based on the Cauchy-Schwarz (CS) pdf divergence measure, which is closely connected to the Renyi entropy.
Let \hat{f}_1(x) and \hat{f}_2(x) be Parzen window estimators of the densities corresponding to two clusters. Then an estimator for the CS measure can be expressed as [6]

    \hat{D}(f_1, f_2) = \int \hat{f}_1(x) \hat{f}_2(x) dx / \sqrt{ \int \hat{f}_1^2(x) dx \int \hat{f}_2^2(x) dx } = cos \angle(m_1, m_2),        (6)

where m_1 and m_2 are the kernel feature space mean vectors of the data points corresponding to the two clusters. Note that \hat{D}(f_1, f_2) \in [0, 1], reaching its maximum value if m_1 = m_2 (\hat{f}_1(x) = \hat{f}_2(x)), and its minimum value if the two vectors (densities) are orthogonal. The measure can easily be extended to more than two pdfs. The clustering is based on computing the cosine of the angle between a data point and the mean vector m_i of each cluster \omega_i, i = 1, ..., C, and then assigning the data point to the cluster corresponding to the maximum value. This procedure minimizes the CS measure as defined above. Kernel PCA was used for representing the data in the kernel feature space. As an illustration of the utility of kernel MaxEnt, we here replace kernel PCA by kernel MaxEnt. This adjustment has a major impact on the performance. The algorithm thus has the following steps:

1) Use some data-driven method from statistics to determine the Parzen window size.
2) Compute the kernel matrix K_x.
3) Obtain a C-dimensional kernel feature space representation using kernel MaxEnt.
4) Initialize mean vectors.
5) For all data points: x_t -> \omega_i : max_i cos \angle(\Phi(x_t), m_i).
6) Update mean vectors.
7) Repeat steps 5-7 until convergence.

For further details (like mean vector initialization, etc.) we refer to [7]; a concrete sketch of steps 4-7 is given at the end of this section.

Figure 1: Examples of data transformations using kernel MaxEnt and kernel PCA.

Fig. 3 (a) shows the clustering performance in terms of the percentage of correct labeling for the data set shown in Fig. 2 (d). There are three curves: our spectral clustering algorithm using kernel MaxEnt (marked by the circle symbol) and kernel PCA (star symbol), and in addition we compare with the state-of-the-art Ng et al. method (NG) [11] using the Laplacian matrix. The clustering is performed over a range of kernel sizes. The vertical line indicates the "optimal" kernel size according to Silverman's rule. Over the whole range, kernel MaxEnt performs equally as well as NG, and better than kernel PCA for small kernel sizes. Fig. 3 (b) shows a similar result for the data set shown in Fig. 1 (a). Kernel MaxEnt has the best performance over most of the kernel range. Fig. 3 (c) shows the mean result for the benchmark thyroid data set [13]. On this data set, kernel MaxEnt performs considerably better than the two other methods, over a wide range of kernel sizes. These preliminary experiments show the potential benefits of kernel MaxEnt in data analysis, especially when the kernel space cost function is based on an angular measure. Using kernel MaxEnt makes the algorithm competitive with spectral clustering using the Laplacian matrix. We note that kernel MaxEnt in theory requires the full eigendecomposition, thus making it more computationally complex than clustering based on only the C largest eigenvectors.
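Steps 4-7 of the algorithm above can be sketched in Python as follows, operating on the C-dimensional kernel MaxEnt representation \Phi_y (columns are data points) produced by the earlier kernel_maxent sketch. The random initialization is a simplification, since the paper defers initialization details to [7], and empty clusters are not handled.

```python
import numpy as np

def cs_angle_clustering(Phi, C, n_iter=50, seed=0):
    """Angle-based clustering in kernel feature space (steps 4-7).

    Phi: (C, N) kernel MaxEnt representation; columns are data points.
    """
    rng = np.random.default_rng(seed)
    N = Phi.shape[1]
    labels = rng.integers(0, C, size=N)          # step 4 (simplified init)
    for _ in range(n_iter):
        # Step 6: cluster mean vectors in the kernel feature space
        # (assumes no cluster becomes empty, a simplification).
        M = np.stack([Phi[:, labels == i].mean(axis=1) for i in range(C)], axis=1)
        # Step 5: assign each point to the cluster maximizing cos(angle).
        cosines = (Phi.T @ M) / (
            np.linalg.norm(Phi, axis=0)[:, None] * np.linalg.norm(M, axis=0)[None, :])
        new_labels = cosines.argmax(axis=1)
        if np.array_equal(new_labels, labels):   # step 7: stop at convergence
            break
        labels = new_labels
    return labels
```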
Figure 2: Examples of data transformations using kernel MaxEnt and kernel PCA.

Figure 3: Clustering results (performance % versus kernel size \sigma for KPCA, ME, and NG, on three data sets).

6 Conclusions
In this paper, we have introduced a new data transformation technique, named kernel MaxEnt, which has a clear theoretical foundation based on the concept of maximum entropy preservation. The new method is similar in structure to kernel PCA, but may produce totally different transformed data sets. We have shown that kernel MaxEnt significantly enhances a recent spectral clustering algorithm. Kernel MaxEnt also produces a new kernel matrix, which may be useful for further data analysis.

Kernel MaxEnt requires the kernel to be a valid Parzen window (i.e., a density). Kernel PCA requires the kernel to be a Mercer kernel (positive semidefinite), hence not necessarily a density. In that sense, kernel PCA may use a broader class of kernels. On the other hand, kernel MaxEnt may use Parzen windows which are not Mercer kernels (indefinite), such as the Epanechnikov kernel. Kernel MaxEnt based on indefinite kernels will be studied in future work.

Acknowledgements
RJ is supported by NFR grant 171125/V30 and MG is supported by EPSRC grant EP/C010620/1.

References
[1] S. Roweis and L. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, pp. 2323-2326, 2000.
[2] J. Tenenbaum, V. de Silva, and J. C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, pp. 2319-2323, 2000.
[3] B. Scholkopf, A. J. Smola, and K. R. Muller, "Nonlinear Component Analysis as a Kernel Eigenvalue Problem," Neural Computation, vol. 10, pp. 1299-1319, 1998.
[4] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
[5] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "The Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space," in Advances in Neural Information Processing Systems 17, MIT Press, Cambridge, 2005, pp. 625-632.
[6] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "Some Equivalences between Kernel Methods and Information Theoretic Methods," Journal of VLSI Signal Processing, to appear, 2006.
[7] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "Information Theoretic Angle-Based Spectral Clustering: A Theoretical Analysis and an Algorithm," in Proceedings of the International Joint Conference on Neural Networks, Vancouver, Canada, July 16-21, 2006, pp. 4904-4911.
[8] M. Girolami, "Orthogonal Series Density Estimation and the Kernel Eigenvalue Problem," Neural Computation, vol. 14, no. 3, pp. 669-688, 2002.
[9] A. Renyi, "On Measures of Entropy and Information," Selected Papers of Alfred Renyi, Akademiai Kiado, Budapest, vol. 2, pp. 565-580, 1976.
[10] E. Parzen, "On the Estimation of a Probability Density Function and the Mode," The Annals of Mathematical Statistics, vol. 32, pp. 1065-1076, 1962.
[11] A. Y. Ng, M. Jordan, and Y.
Weiss, "On Spectral Clustering: Analysis and an Algorithm," in Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, 2002, pp. 849-856.
[12] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and Hall, London, 1986.
[13] G. Ratsch, T. Onoda, and K. R. Muller, "Soft Margins for Adaboost," Machine Learning, vol. 42, pp. 287-320, 2001.
A Nonparametric Approach to Bottom-Up Visual Saliency

Wolf Kienzle, Felix A. Wichmann, Bernhard Scholkopf, and Matthias O. Franz
Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tubingen, Germany
{kienzle,felix,bs,mof}@tuebingen.mpg.de

Abstract
This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that, despite the lack of any biological prior knowledge, our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

1 Introduction
The human visual system samples images through saccadic eye movements, which rapidly change the point of fixation. It is believed that the underlying mechanism is driven by both top-down strategies, such as the observer's task, thoughts, or intentions, and by bottom-up effects. The latter are usually attributed to early vision, i.e., to a system that responds to simple, and often local, image features, such as a bright spot in a dark scene. During the past decade, several studies have explored which image features attract eye movements. For example, Reinagel and Zador [18] found that contrast was substantially higher at gaze positions, and Krieger et al. [10] reported differences in the intensity bispectra. Parkhurst, Law, and Niebur [13] showed that a saliency map [9], computed by a model similar to the widely used framework by Itti, Koch and Niebur [3, 4], is significantly correlated with human fixation patterns. Numerous other hypotheses were tested [1, 5, 6, 10, 12, 14, 16, 17, 19, 21], including intensity, edge content, orientation, symmetry, and entropy.

Each of the above models is built on a particular choice of image features that are believed to be relevant to visual saliency. A common approach is to compute several feature maps from linear filters that are biologically plausible, e.g., Difference of Gaussians (DoG) or Gabor filters, and nonlinearly combine the feature maps into a single saliency map [1, 3, 4, 13, 16, 21]. This makes it straightforward to construct complex models from simple, biologically plausible components. A downside of this parametric approach, however, is that the feature maps are chosen manually by the designer. As a consequence, any such model is biased towards certain image structure, and therefore discriminates against features that might not seem plausible at first sight, but may well play a significant role.

Figure 1: Eye movement data.
Figure 1: Eye movement data. (a) shows 20 (out of 200) of the natural scenes that were presented to the 14 subjects. (b) shows the top right image from (a), together with the recorded fixation locations from all 14 subjects. The average viewing time per subject was approximately 3 seconds.

Another problem comes from the large number of additional design parameters that are necessary in any implementation, such as the precise filter shapes, sizes, weights, nonlinearities, etc. While choices for these parameters are often only vaguely justified in terms of their biological plausibility, they greatly affect the behavior of the system as a whole and thus its predictive power. The latter, however, is often used as a measure of plausibility. This is clearly an undesirable situation, since it makes a fair comparison between models very difficult. In fact, we believe that this may explain the conflicting results in the debate about whether edges or contrast filters are more relevant [1, 6, 13].

In this paper we present a nonparametric approach to bottom-up saliency, which does not (or only to a far lesser extent) suffer from the shortcomings described above. Instead of using a predefined set of feature maps, our saliency model is learned directly from human eye movement data. The model consists of a nonlinear mapping from an image patch to a real value, trained to yield positive outputs on fixated, and negative outputs on randomly selected, image patches. The main difference to previous models is that our saliency function is essentially determined by the fact that it maximizes the prediction performance on the observed data. Below, we show that the prediction performance of our model is comparable to that of biologically motivated models. Furthermore, we analyze the system in terms of the features it has learned, and compare our findings to previous results.

2 Eye Movement Data

Eye movement data were taken from [8]. They consist of 200 natural images (1024×768, 8-bit grayscale) and 18,065 fixation locations recorded from 14 naïve subjects. The subjects freely viewed each image for about three seconds on a 19-inch CRT at full screen size and 60 cm distance, which corresponds to 37° × 27° of visual angle. For more details about the recording setup, please refer to [8]. Figure 1 illustrates the data set.¹

Below, we are going to formulate saliency learning as a classification problem. This requires negative examples, i.e., a set of non-fixated, or background, locations. As pointed out in [18, 21], care must be taken that no spurious differences in the local image statistics are generated by using different spatial distributions for positive and negative examples. As an example, fixation locations are usually biased towards the center of the image, probably due to the reduced physical effort when looking straight ahead. At the same time, it is known that local image statistics can be correlated with image location [18, 21], e.g., due to the photographer's bias of keeping objects at the center of the image.

¹ In our initial study [8], these data were preprocessed further. In order to reduce the noise due to varying top-down effects, only those locations that are consistent among subjects were used. Unfortunately, while this leads to higher prediction scores, the resulting model is only valid for the reduced data set, which in that case is less than ten percent of the fixations. To better explain the entire data set, in the present work we instead retain all 18,065 fixations, i.e., we trade performance for generality.
If we sampled background locations uniformly over the image, our system might learn the difference between pixel statistics at the image center and towards the boundary, instead of the desired difference between fixated and non-fixated locations. Moreover, the learning algorithm might be misled by simple boundary effects. To avoid this, we use the 18,065 fixation locations to generate an equal number of background locations by using the same image coordinates, but with the corresponding image numbers shuffled. This ensures that the spatial distributions of both classes are identical.

The proposed model computes saliency based on local image structure. To represent fixations and background locations accordingly, we cut out a square image patch at each location and store the pixel values in a feature vector x_i together with a label y_i ∈ {1, −1}, indicating fixation or background. Unfortunately, choosing an appropriate patch size and resolution is not straightforward, as there might be a wide range of reasonable values. To remedy this, we follow the approach proposed in [8], which is a simple compromise between computational tractability and generality: we fix the resolution to 13 × 13 pixels, but leave the patch size d unspecified, i.e., we construct a separate data set for each value of d. Later, we determine the size d which leads to the best generalization performance estimate. For each image location, 11 patches were extracted, with sizes ranging between d = 0.47° and d = 27° of visual angle, equally spaced on a logarithmic scale. Each patch was subsampled to 13 × 13 pixels, after low-pass filtering to reduce aliasing effects. The range of sizes was chosen such that pixels in the smallest patch correspond to image pixels at full screen resolution, and that the largest patch has full screen height. Finally, for each patch we subtracted the mean intensity, and stored the normalized pixel values in a 169-dimensional feature vector x_i.

The data were divided into a training (two thirds) and a test set (one third). This was done such that both sets contained data from all 200 images, but never from the same subject on the same image. For model selection (Section 4.1) and assessment (Section 4.2), which rely on cross-validation estimates of the generalization error, further splits were required. These splits were done image-wise, i.e., such that no validation or test fold contained any data from images in the corresponding training fold. This is necessary, since image patches from different locations can overlap, leading to a severe over-estimation of the generalization performance.

3 Model and Learning Method

From the eye movement data described in Section 2, we learn a bottom-up saliency map f(x): ℝ^169 → ℝ using a support vector machine (SVM) [2]. We model saliency as a linear combination of Gaussian radial basis functions (RBFs), centered at the training points x_i:

    f(x) = Σ_{i=1}^{m} α_i y_i exp(−‖x − x_i‖² / (2σ²)).    (1)

The SVM algorithm determines non-negative coefficients α_i such that the regularized risk R(f) = D(f) + λS(f) is minimized. Here, D(f) denotes the data fit Σ_{i=1}^{m} max(0, 1 − y_i f(x_i)), and S(f) is the standard SVM regularizer (1/2)‖f‖² [2]. The tradeoff between data fit and smoothness is controlled by the parameter λ. As described in Section 4.1, this design parameter, as well as the RBF bandwidth σ and the patch size d, is determined by maximizing the model's estimated prediction performance.
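To make the model concrete, the following minimal sketch fits Eq. (1) with an off-the-shelf SVM solver. It is our illustration, not code from the paper: the use of scikit-learn's SVC is an assumption, and the mapping of the regularization weight λ to the SVM trade-off parameter as C = 1/λ is only approximate (conventions differ by scaling).

```python
import numpy as np
from sklearn.svm import SVC

# X: rows are 169-dim patch vectors (13x13 pixels, mean-subtracted);
# y: +1 for fixated patches, -1 for shuffled background patches.
def train_saliency_model(X, y, sigma=1.0, lam=1.0):
    # sklearn's RBF kernel is exp(-gamma * ||x - x'||^2), so gamma = 1/(2 sigma^2);
    # the soft-margin parameter C plays the role of 1/lambda (assumed mapping).
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2), C=1.0 / lam)
    clf.fit(X, y)
    return clf

def saliency(clf, patches):
    # Real-valued output corresponding to f(x) in Eq. (1) (plus an SVM bias term).
    return clf.decision_function(patches)
```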
It is insightful to compare our model (1) to existing models. Similar to most existing approaches, our model is based on linear filters whose outputs are nonlinearly combined into a real-valued saliency measure. This is a common model for the early visual system, and receptive-field estimation techniques such as reverse correlation usually make the same assumptions. It differs from existing approaches in its nonparametric nature, i.e., the basic linear filters are the training samples themselves. That way, the system is not restricted to the designer's choice of feature maps, but learns relevant structure from the data. For the nonlinear component, we found the Gaussian RBF appropriate for two reasons: first, it is a universal SVM kernel [20], allowing the model to approximate any smooth function on the data points; second, it carries no information about the spatial ordering of the pixels within an image patch x: if we consistently permuted the pixels of the training and test patches, the model output would be identical. This implies that the system has no a priori preference for particular image structures. The SVM algorithm was chosen primarily because it is a powerful standard method for binary classification. In light of its resemblance to regularized logistic regression, our method is therefore related to the one proposed in [1]. Their model is parametric, however.

Figure 2: Selection of the parameters d, σ, and λ. Each of the eleven panels (d = 0.47° to d = 27°) shows the estimated model performance for a fixed d, over all σ (vertical axes, label values denote log10 σ) and λ (horizontal axes, label values denote log10 λ). Darker shades of gray denote higher accuracy; the legend on the lower right ranges from 0.500 to 0.550. Based on these results, we fixed d = 5.4°, log10 σ = 0, and log10 λ = 0.

4 Experiments

4.1 Selection of d, σ, and λ

For fixing d, σ, and λ, we conducted an exhaustive search on an 11 × 9 × 13 grid with the grid points equally spaced on a log scale, such that d = 0.47°, ..., 27°, σ = 0.01, ..., 100, and λ = 0.001, ..., 10,000. In order to make the search computationally tractable, we divided the training set (Section 2) into eight parts. Within each part, and for each point on the parameter grid, we computed a cross-validation estimate of the classification accuracy (i.e., the relative frequency of sign f(x_i) = y_i). The eight estimates were then averaged to yield one performance estimate for each grid point. Figure 2 illustrates the results. Each panel shows the model performance for one (σ, λ) slice of the parameter space. The performance peaks at 0.55 (0.013 standard error of the mean, SEM) at d = 5.4°, σ = 1, λ = 1, which is in agreement with [8], up to their slightly different d = 6.2°.² Note that while 0.55 is not much, it is four standard errors above chance level. Furthermore, all (σ, λ) plots show the same, smooth pattern which is known to be characteristic of RBF-SVM model selection [7]. This further suggests that, despite the low absolute performance, our choice of parameters is well justified. Model performance (Section 4.2) and interpretation (Section 4.3) were qualitatively stable within at least one step in any direction of the parameter grid.
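The grid search above could be organized as in the following sketch (ours; the function and variable names are hypothetical). It assumes that patch data sets have been pre-extracted for each scale d and split image-wise into folds, and it reuses train_saliency_model and saliency from the previous sketch.

```python
import numpy as np
from itertools import product

# Log-spaced grids as described in the text.
sigmas = np.logspace(-2, 2, 9)     # sigma = 0.01 ... 100 (9 values)
lams = np.logspace(-3, 4, 13)      # lambda = 0.001 ... 10,000 (13 values)

def grid_search(folds_by_scale, sigmas, lams):
    """folds_by_scale[d] is a list of image-wise (X_tr, y_tr, X_va, y_va)
    splits for patches of size d. Returns the best (d, sigma, lam) triple
    and its mean cross-validation accuracy."""
    best, best_acc = None, -np.inf
    for d, sigma, lam in product(folds_by_scale, sigmas, lams):
        accs = [np.mean(np.sign(saliency(
                    train_saliency_model(X_tr, y_tr, sigma, lam), X_va)) == y_va)
                for X_tr, y_tr, X_va, y_va in folds_by_scale[d]]
        if np.mean(accs) > best_acc:
            best, best_acc = (d, sigma, lam), np.mean(accs)
    return best, best_acc
```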
² Due to the subsampling (Section 2), the optimal patch size of d = 5.4° leads to an effective saliency map resolution of 89 × 66 (the original image is 1024 × 768), which corresponds to 2.4 pixels per degree of visual angle. While this might seem low, note that similar resolutions have been suggested for bottom-up saliency: using Itti's model with default parameters leads to a resolution of 64 × 48.

Figure 3: Saliency maps. (a) shows a natural scene from our database, together with the recorded eye movements from all 14 subjects. Itti's saliency map, using "standard" normalization, is shown in (b). Brighter regions denote more salient areas. The picture in (c) shows our learned saliency map, which was re-built for this example with the image in (a) excluded from the training data. Note that the differing boundary effects are of no concern for our performance measurements, since hardly any fixations are that close to the boundary.

4.2 Model Performance

To test the model's performance with the optimal parameters (d = 5.4°, σ = 1, λ = 1) and more training examples, we divided the test set into eight folds. Again, this was done image-wise, i.e., such that each fold comprised the data from 25 images (cf. Section 2). For each fold we trained our model on all training data not coming from the respective 25 images. As expected, the use of more training data significantly improved the accuracy, to 0.60 (0.011 SEM). For a comparison with other work, we also computed the mean ROC score of our system, 0.64 (0.010 SEM). This performance is lower than the 0.67 reported in [8]. However, their model explains only about 10% of the "simplest" fixations in the data. Another recent study yielded 0.63 [21], although on a different data set. Itti's model [4] was tested in [15], who report ROC scores around 0.65 (taken from a graph; no actual numbers are given). Scores of up to 0.70 were achieved with an extended version that uses more elaborate long-range interactions and eccentricity-dependent processing. We also ran Itti's model on our test set, using the code from [22]. We tried both the "standard" [3] and the "iterative" [4] normalization scheme. The best performing setting was the earlier "standard" method, which yielded 0.62 (0.022 SEM). The more recent iterative scheme did not improve on this result, also not when only the first, or first few, fixations were considered. For a qualitative comparison, Figure 3 shows our learned saliency map and Itti's model evaluated on a sample image. It is important to mention that the purpose of the above comparison is not to show that our model makes better predictions than existing models, which would be a weak statement anyway since the data sets are different. The main insight here is that our nonparametric model performs at the same level as existing, biologically motivated models, which implement plausible multi-scale front-end filters, carefully designed non-linearities, and even global effects.
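The ROC scores quoted above can be computed from the model outputs alone; one standard route is the Wilcoxon-Mann-Whitney formulation sketched here (our code, not the authors').

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve: the probability that a randomly drawn
    fixated patch receives a higher saliency than a randomly drawn
    background patch (ties count one half)."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)
```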
4.3 Feature Analysis

In the previous section we have shown that our model generalizes to unseen data, i.e., that it has learned regularities in the data that are relevant to the human fixation selection mechanism. This section addresses the question of what the learned regularities are, and how they are related to existing models. As mentioned in Section 1, characterizing a nonlinear model solely by the feature maps at its basis is insufficient. In fact, our SVM-based model is an example where this would be particularly wrong. An SVM assigns smaller weights α_i (down to zero) the more easily the respective training samples x_i can be classified. Describing f by its support vectors {x_i | α_i > 0} is therefore misleading, since they represent unusual examples rather than prototypes. To avoid this, we instead characterize the learned function by means of inputs x that are particularly excitatory or inhibitory to the entire system.

As a first test, we collected 20,000 image patches from random locations in natural scenes (not in the training set) and presented them to our system. The top and bottom 100 patches sorted by model output, and a histogram over all 20,000 saliency values, are shown in Figure 4. Note that since our model is unbiased towards any particular image structure, the different patterns observed in high and low output patches are solely due to differences between pixel statistics at fixated and background regions.

Figure 4: Natural image patches ranked by saliency according to our model. Panels (a) and (b) show the bottom and top 100 of 20,000 patches, respectively (the dots in between denote the 18,800 patches which are not shown). A histogram of all 20,000 saliency values (frequency in thousands versus saliency, from −2 to 2) is given on the lower right. The outputs in (a) range from −2.0 to −1.7, the ones in (b) from 0.99 to 1.8.

The high output patches seem to have higher contrast, which is in agreement with previous results, e.g., [8, 10, 14, 18]. In fact, the correlation coefficient of the model output (all 20,000 values) with r.m.s. contrast is 0.69. Another result from [14, 18] is that in natural images the correlation between pixel values decays faster at fixated locations than at randomly chosen locations. Figure 4 shows this trend as well: as we move away from the patch center, the pixels' correlation with the center intensity decays faster for patches with high predicted salience. Moreover, a study on bispectra at fixated image locations [10] suggested that "the saccadic selection system avoids image regions, which are dominated by a single oriented structure. Instead, it selects regions containing different orientations, like occlusions, corners, etc." A closer look at Figure 4 reveals that our model tends to attribute saliency not to contrast alone, but also to non-trivial image structure. Extremely prominent examples of this effect are the high contrast edges appearing among the bottom 100 patches, e.g., in the patches at position (7,2) or (10,10).

To further characterize the system, we explicitly computed the maximally excitatory and inhibitory stimuli. This amounts to solving the unconstrained optimization problems arg max_x f(x) and arg min_x f(x), respectively. Since f is differentiable, we can use a simple gradient method. The only problem is that f(x) can have multiple extrema in x. A common way to deal with local optima is to run the search several times with different initial values for x. Here, we repeated the search 1,000 times each for minima and maxima. The initial x were constructed by drawing 169 pixel values from a normal distribution with zero mean and then normalizing the patch standard deviation to 0.11 (the average value over the training patches). The 1,000 optimal values were then clustered using k-means. The number of clusters k was found by increasing k until the clusters were stable.
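A gradient-based search of this kind could look as follows (a sketch under our own conventions: coef holds the products α_i y_i, sv the training patches, and the step size and iteration count are illustrative, not values from the paper).

```python
import numpy as np

def grad_f(x, sv, coef, sigma):
    # Gradient of f(x) = sum_i coef_i * exp(-||x - sv_i||^2 / (2 sigma^2)).
    diff = sv - x                                            # shape (m, 169)
    k = np.exp(-np.sum(diff**2, axis=1) / (2.0 * sigma**2))  # kernel values
    return (coef * k) @ diff / sigma**2

def optimal_stimulus(sv, coef, sigma, sign=+1, lr=0.1, steps=2000, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(sv.shape[1])
    x *= 0.11 / x.std()      # normalize the patch standard deviation, as in the text
    for _ in range(steps):
        x += sign * lr * grad_f(x, sv, coef, sigma)   # +1: ascent, -1: descent
    return x

# Repeating optimal_stimulus from many random starts and clustering the
# results (e.g., with sklearn.cluster.KMeans) identifies the distinct optima.
```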
Interestingly, the clusters for both minima and maxima were already highly concentrated for k = 2, i.e., within each cluster, the average variance of a pixel was less than 0.03% of the pixel variance of its cluster center patch. This result could also be confirmed visually, i.e., despite the randomized initial values, both optimization problems had only two visually distinct outcomes. We also re-ran this experiment with natural image patches as starting values, with identical results. This indicates that our saliency function has essentially two minima and two maxima in x. The four optimal stimuli are shown in Figure 5. The first two images, (a) and (b), show the maximally inhibitory stimuli.

Figure 5: Maximally inhibitory and excitatory stimuli of the learned model. Note the large magnitude of the saliency values compared to the typical model output (cf. the histogram in Figure 4). (a) and (b): the two maximally inhibitory stimuli (lowest possible saliency; saliency = −4.9 and −4.5). (c) and (d): the two maximally excitatory stimuli (highest possible saliency; saliency = 5.0 and 5.5). (e) and (f): the radial averages of (c) and (d), respectively.

The inhibitory stimuli are rather difficult to interpret, other than that no particular structure is visible. On the other hand, the maximally excitatory stimuli, denoted by (c) and (d), have center-surround structure. All four stimuli have zero mean, which is not surprising since during gradient search, both the initial value and the step directions, which are linear combinations of the training data, have zero mean. As a consequence, the surrounds of (c) and (d) are inhibitory w.r.t. their centers, which can also be seen from the different signs in their radial averages (e) and (f).³ The optimal stimuli thus bear a close resemblance to receptive fields in the early visual system [11]. To see that the optimal stimuli have in fact prototype character, note how the histogram in Figure 4 reflects the typical distribution of natural image patches along the learned saliency function. It illustrates that the saliency values of unseen natural image patches usually lie between −2.0 and 1.8 (for the training data, they are between −1.8 and 2.2). In contrast, our optimal stimuli have saliency values of 5.0 and 5.5, indicating that they represent the difference between fixated and background locations in a much more articulated way than any of the noisy measurements in our data set.

5 Discussion

We have presented a nonparametric model for bottom-up visual saliency, trained on human eye movement data. A major goal of this work was to complement existing approaches in that we keep the number of assumptions low, and instead learn as much as possible from the data. In order to make this tractable, the model is rather simplistic, e.g., it implements no long-range interactions within feature maps. Nevertheless, we found that the prediction performance of our system is comparable to that of parametric, biologically motivated models. Although no such information was used in the design of our model, we found that the learned features are consistent with earlier results on bottom-up saliency. For example, the outputs of our model are strongly correlated with local r.m.s. contrast [18]. Also, we found that the maximally excitatory stimuli of our system have center-surround structure, similar to the DoG filters commonly used in early vision models [3, 13, 21].
This is a nontrivial result, since our model has no preference for any particular image features, i.e., a priori, any 13 × 13 image patch is equally likely to be an optimal stimulus. Recently, several authors have explored whether oriented (Gabor) or center-surround (DoG) features are more relevant to human eye movements. As outlined in Section 1, this is a difficult task: while some results indicate that both features perform equally well [21], others suggest that one [1] or the other [6, 13] is more relevant. Our results shed additional light on this discussion, in favor of center-surround features.

³ Please note that the radial average curves in Figure 5 (e) and (f) do not necessarily sum to zero, since the patch area in (c) and (d) grows quadratically with the corresponding radius.

References

[1] R. J. Baddeley and B. W. Tatler. High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research, 46(18):2824–2833, 2006.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998.
[3] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[4] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, 2000.
[5] L. Itti. Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. Visual Cognition, 12(6):1093–1123, 2005.
[6] L. Itti. Quantitative modeling of perceptual salience at human eye position. Visual Cognition (in press), 2006.
[7] S. S. Keerthi and C. J. Lin. Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15:1667–1689, 2003.
[8] W. Kienzle, F. A. Wichmann, B. Schölkopf, and M. O. Franz. Learning an interest operator from human eye movements. In Beyond Patches Workshop, International Conference on Computer Vision and Pattern Recognition, 2006.
[9] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4(4):219–227, 1985.
[10] G. Krieger, I. Rentschler, G. Hauske, K. Schill, and C. Zetzsche. Object and scene analysis by saccadic eye-movements: an investigation with higher-order statistics. Spatial Vision, 13(2,3):201–214, 2000.
[11] S. W. Kuffler. Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology, 16(1):37–68, 1953.
[12] S. K. Mannan, K. H. Ruddock, and D. S. Wooding. The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. Spatial Vision, 10(3):165–188, 1996.
[13] D. J. Parkhurst, K. Law, and E. Niebur. Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1):107–123, 2002.
[14] D. J. Parkhurst and E. Niebur. Scene content selected by active vision. Spatial Vision, 16(2):125–154, 2003.
[15] R. J. Peters, A. Iyer, C. Koch, and L. Itti. Components of bottom-up gaze allocation in natural scenes (poster). In Vision Sciences Society (VSS) Annual Meeting, 2005.
[16] C. M. Privitera and L. W. Stark. Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):970–982, 2000.
[17] R. Raj, W. S. Geisler, R. A. Frazor, and A. C. Bovik. Contrast statistics for foveated visual systems: Fixation selection by minimizing contrast entropy. Journal of the Optical Society of America A, 22(10):2039–2049, 2005.
[18] P. Reinagel and A. M. Zador. Natural scene statistics at the center of gaze. Network: Computation in Neural Systems, 10(4):341–350, 1999.
[19] L. W. Renninger, J. Coughlan, P. Verghese, and J. Malik. An information maximization model of eye movements. In Advances in Neural Information Processing Systems, volume 17, pages 1121–1128, 2005.
[20] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
[22] D. Walther. Interactions of visual attention and object recognition: computational modeling, algorithms, and psychophysics. PhD thesis, California Institute of Technology, 2006.
Temporal Coding using the Response Properties of Spiking Neurons

Thomas Voegtlin
INRIA - Campus Scientifique, B.P. 239
F-54506 Vandoeuvre-lès-Nancy Cedex, FRANCE
voegtlin@loria.fr

Abstract

In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, which depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule.

1 Introduction

The temporal coding hypothesis states that information is encoded in the precise timing of action potentials sent by neurons. In order to achieve computations in the time domain, it is thus necessary to have neurons spike at desired times. However, at a more fundamental level, it is also necessary to describe how the timings of action potentials received by a neuron are combined together, in a way that is consistent with the neural code. So far, the main theory has posited that the shape of post-synaptic potentials (PSPs) is relevant for computations [1, 2, 3]. In these models, the membrane potential at the soma of a neuron is a weighted sum of PSPs arriving from dendrites at different times. The spike time of the neuron is defined as the time when its membrane potential first reaches a firing threshold, and it depends on the precise temporal arrangement of PSPs, thus enabling computations in the time domain. Hence, the nature of the temporal code is closely tied to the shape of PSPs. A consequence is that the length of the rising segment of post-synaptic potentials limits the available coding interval [1, 2].

Here we propose a new theory, based on the non-linear dynamics of integrate-and-fire neurons. This theory takes advantage of the fact that the effect of synaptic currents depends on the internal state of the postsynaptic neuron. For neurons spiking regularly, this dependency is classically described by the Phase Response Curve (PRC) [4]. We use theta neurons, which are mathematically equivalent to quadratic integrate-and-fire neurons [5, 6]. In these neuron models, once the potential has crossed the firing threshold, the neuron is still sensitive to incoming currents, which may change the timing of the next spike. In the proposed model, computations do not rely on the shape of PSPs, which alleviates the restriction imposed by the length of their rising segment. Therefore, we may use a simplified model of synaptic currents; we model synaptic currents as Diracs, which means that we do not take into account synaptic time constants. Another advantage of our model is that computations do not rely on the delays imposed by inter-neuron transmission; this means that it is not necessary to fine-tune delays in order to learn desired spike times.

2 Description of the model

2.1 The Theta Neuron

The theta neuron is described by the following differential equation:

    dθ/dt = (1 − cos θ) + βI(1 + cos θ),    (1)

where θ is the "potential" of the neuron, and I is a variable input current, measured in radians per unit of time. For convenience, we call the units of time "milliseconds". The neuron is said to fire every time θ crosses π. The dynamics of the model can be represented on a phase circle (Figure 1). The effect of an input current is not uniform across the circle; currents that occur late (for θ close to π) have little effect on θ, while currents that arrive when θ is close to zero have a much greater effect.

Figure 1: Phase circle of the theta model. The neuron fires every time θ crosses π. For I < 0 there are two fixed points: an unstable point θ₀⁺ = arccos((1 + βI)/(1 − βI)), and an attractor θ₀⁻ = −θ₀⁺.
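Section 2.4 below simulates Eq. (1) by Euler integration; a minimal sketch of such a simulation (our code, with illustrative parameter values, not taken from the paper) is:

```python
import numpy as np

def simulate_theta(I_of_t, theta0=-np.pi, beta=1.0, dt=0.05, T=20.0):
    """Euler integration of Eq. (1). I_of_t(t) is the input current (rad/ms);
    returns the firing times, i.e., the times at which theta crosses pi."""
    theta, spikes = theta0, []
    for t in np.arange(0.0, T, dt):
        dtheta = (1.0 - np.cos(theta)) + beta * I_of_t(t) * (1.0 + np.cos(theta))
        theta += dt * dtheta
        if theta >= np.pi:
            spikes.append(t)
            theta -= 2.0 * np.pi   # wrap around the phase circle
    return spikes

# Example: a constant suprathreshold current gives regular firing.
# spikes = simulate_theta(lambda t: 0.05, T=100.0)
```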
2.2 Synaptic interactions

The input current I is the sum of a constant current I₀ and transient synaptic currents I_i(t), where i ∈ 1..N indexes the synapses:

    I = I₀ + Σ_{i=1}^{N} I_i(t).    (2)

Synaptic currents are modeled as Diracs: I_i(t) = w_i δ(t − t_i), where t_i is the firing time of presynaptic neuron i, and w_i is the weight of the synapse. Transmission delays are not taken into account.

Figure 2: Response properties of the theta model. Each curve shows the change of firing time t_f of a neuron receiving a Dirac current of weight w at time t. Left: For I₀ > 0, the neuron spikes regularly (I₀ = 0.005, θ(0) = −π). If w is small, the curves corresponding to w > 0 and w < 0 are symmetric; the positive curve is called the Phase Response Curve (PRC). If w is large, the curves are no longer symmetric; the portions corresponding to the ascending (resp. descending) phase of sin θ have different slopes. Right: Response for I₀ < 0. The initial condition is slightly above the unstable equilibrium point (I₀ = −0.005, θ(0) = θ₀⁺ + 0.0001), so that the neuron fires if not perturbed. For w > 0, the response curve is approximately linear, until it reaches zero. For w < 0, the current might cancel the spike if it occurs early.

Figure 2 shows how the firing time of a theta neuron changes with the time of arrival of a synaptic current. In our time coding model, we view this curve as the transfer function of the neuron; it describes how the neuron converts input spike times into output spike times.
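The curves of Figure 2 can be reproduced numerically by combining Euler integration with the instantaneous phase jump θ⁺ = θ⁻ + βw(1 + cos θ⁻) that a Dirac input of weight w produces (this is Eq. (6) in the next subsection). The following sketch (our code) measures the shift of the firing time as a function of the arrival time of a single Dirac input.

```python
import numpy as np

def next_spike(theta0, I0, beta=1.0, w=0.0, t_in=None, dt=0.001, T=2000.0):
    """Time of the first spike under constant current I0, with an optional
    Dirac input of weight w arriving at time t_in (jump rule of Eq. (6))."""
    theta = theta0
    for t in np.arange(0.0, T, dt):
        if t_in is not None and abs(t - t_in) < dt / 2:
            theta += beta * w * (1.0 + np.cos(theta))      # instantaneous jump
        theta += dt * ((1.0 - np.cos(theta)) + beta * I0 * (1.0 + np.cos(theta)))
        if theta >= np.pi:
            return t
    return T   # no spike within the trial

# Left panel of Figure 2: shift of the firing time versus input arrival time.
# t_free = next_spike(-np.pi, 0.005)
# shifts = [next_spike(-np.pi, 0.005, w=0.1, t_in=ti) - t_free
#           for ti in np.linspace(0.0, t_free, 50)]
```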
2.3 Learning rule

We derive a spike-timing dependent learning rule from the objective of learning a set of target firing times. Following [2], we consider the mean squared error, E, between desired spike times t*_s and actual spike times t_s:

    E = ⟨(t_s − t*_s)²⟩,    (3)

where ⟨.⟩ denotes the mean. Gradient descent on E yields the following stochastic learning rule:

    Δw_i = −ε ∂E/∂w_i = −2ε (t_s − t*_s) ∂t_s/∂w_i.    (4)

The partial derivative ∂t_s/∂w_i expresses the credit assignment problem for synapses.

Figure 3: Notations used in the text. An incoming spike at time t_i triggers an instantaneous change of the potential θ. θ_i⁻ (resp. θ_i⁺) denotes the postsynaptic potential before (resp. after) the presynaptic spike. A small modification dw_i of the synaptic weight w_i induces a change dθ_i⁺.

Let F denote the "remaining time", that is, the time that remains before the neuron will fire:

    F(t) = ∫_{θ(t)}^{π} dθ / [(1 − cos θ) + βI(1 + cos θ)].    (5)

In our model, I is not continuous, because of the Dirac synaptic currents. For the moment, we assume that θ is between the unstable point θ₀⁺ and π. In addition, we assume that the neuron receives one spike on each of its synapses, and that all synaptic weights are positive. Let t_j denote the time of arrival of the action potential on synapse j. Let θ_j⁻ (resp. θ_j⁺) denote the potential before (resp. after) the synaptic current:

    θ_j⁻ = θ(t_j⁻),
    θ_j⁺ = θ(t_j⁺) = θ_j⁻ + βw_j (1 + cos θ_j⁻).    (6)

We consider the effect of a small change of the weight w_i. We shall rewrite the integral (5) on the intervals where the integrand is continuous. To keep notations simple, we assume that action potentials are ordered, i.e., t_j ≤ t_{j+1} for all j. For consistency, we use the notation θ⁻_{N+1} = π. We may write:

    F(t_i) = Σ_{j≥i} ∫_{θ_j⁺}^{θ⁻_{j+1}} dθ / [(1 − cos θ) + βI₀(1 + cos θ)].    (7)

The partial derivative of the spiking time t_s can be expressed as:

    ∂t_s/∂w_i = (∂F/∂θ_i⁺)(∂θ_i⁺/∂w_i) + Σ_{j>i} [ (∂F/∂θ_j⁺)(∂θ_j⁺/∂w_i) + (∂F/∂θ_j⁻)(∂θ_j⁻/∂w_i) ].    (8)

In this expression, the sum expresses how a change of the weight w_i will modify the effect of the other spikes, for j > i. The j-th term of this sum depends on the time elapsed between t_j and t_i. Since we have no a priori information on the distribution of t_j given t_i, we shall consider that this term is not correlated with ∂E/∂w_i. For that reason, we neglect this sum in our stochastic learning rule:

    ∂t_s/∂w_i ≈ (∂F/∂θ_i⁺)(∂θ_i⁺/∂w_i),    (9)

which yields:

    ∂t_s/∂w_i ≈ −β(1 + cos θ_i⁻) / [(1 − cos θ_i⁺) + βI₀(1 + cos θ_i⁺)].    (10)

Note that this expression is not bounded when θ_i⁺ is close to the unstable point θ₀⁺. In that case, θ is in a region where it changes very slowly, and the timing of the other action potentials, for j > i, will mostly determine the firing time t_s. This means that approximation (9) will not hold. In addition, it is necessary to extend the learning rule to the case θ_i⁺ ∈ [θ₀⁻, θ₀⁺), where the above expression is negative. For these reasons, we introduce a credit bound, C, and we modify the learning rule as follows:

    if 0 < −∂t_s/∂w_i < C then: Δw_i = −2ε (t_s − t*_s) ∂t_s/∂w_i,    (11)
    else: Δw_i = 2ε (t_s − t*_s) C.    (12)

2.4 Algorithm

The algorithm updates the weights in the direction of the gradient. The learning rule takes effect at the end of a trial of fixed duration. If a neuron does not fire at all during the trial, then its firing time is considered to be equal to the duration of the trial. For each synapse, it is necessary to compute the credit from Equation (10) every time a current is transmitted. We may relax the assumption that each synapse receives one single action potential; if a presynaptic neuron fires several times before the postsynaptic neuron fires, then the credits corresponding to all spikes are summed. Theta neurons were simulated using Euler integration of Equation (1). The time step must be carefully chosen; if the temporal resolution is too coarse, then the credit assignment problem becomes too difficult, which increases the number of trials necessary for learning. On the other hand, small values of the time step mean that simulations take more time.
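In code, the per-spike credit of Eq. (10) and the bounded update of Eqs. (11)-(12) might be implemented as follows (a sketch under our own conventions; the trial bookkeeping is left out).

```python
import numpy as np

def credit(theta_minus, beta, w, I0):
    """dts/dwi from Eq. (10), evaluated at the arrival of a presynaptic spike.
    theta_minus is the postsynaptic potential just before the spike."""
    theta_plus = theta_minus + beta * w * (1.0 + np.cos(theta_minus))   # Eq. (6)
    num = -beta * (1.0 + np.cos(theta_minus))
    den = (1.0 - np.cos(theta_plus)) + beta * I0 * (1.0 + np.cos(theta_plus))
    return num / den

def update_weights(w, credits, t_s, t_target, eps, C):
    """Eqs. (11)-(12). credits[i] sums dts/dwi over all spikes received on
    synapse i during the trial; w is modified in place."""
    for i, g in enumerate(credits):
        if 0.0 < -g < C:
            w[i] -= 2.0 * eps * (t_s - t_target) * g     # Eq. (11)
        else:
            w[i] += 2.0 * eps * (t_s - t_target) * C     # Eq. (12)
    return w
```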
3 Auto-encoder network

Predicting neural activities has been proposed as a possible role for spike-timing dependent learning rules [7]. Here we train a network to predict its own activities using the learning rule derived above. For this, a time-delayed version of the input (echo) is used as the desired output (see Figure 4). The network has to find a representation of the input that minimizes the mean squared reconstruction error. The network has three populations of neurons: (i) an input population X of size n neurons, where an input vector is represented using spike times. We call the interval between the spikes encoding the input and the echo the Inter Stimulus Interval (ISI). After the ISI, population X fires a second burst of spikes, which is a time-delayed version of the initial burst. (ii) An output population Y, of size m neurons, that is activated by neurons in X. (iii) A population X′ of size n neurons, where the input is reconstructed. Neurons in X′ are activated by Y.

Figure 4: Auto-encoder network. An input vector is translated into firing times of the input population X. Output neurons Y are activated by input neurons through feed-forward connections. A reconstruction of the input burst is generated in X′ through feedback connections. Target firing times are provided by a delayed version of the input burst (echo).

The learning rule updates the feedback connections (w_ij)_{i≤n, j≤m} from Y to X′, comparing spike times in X and in X′. We use I₀ < 0, so the response to positive transient currents is approximately linear (see Figure 2). We thus expect neurons to perform a linear summation of spike times. For the feed-forward connections from X to Y, we use the transpose of the feedback weight matrix. This is inspired by Oja's Principal Subspace Network [8]. If spike times are within the linear part of the response curve, then we expect this network to perform Principal Component Analysis (PCA) in the time domain. However, one difference is that the PRC we use is always positive (type I neurons). This means that spike times can only code for positive values (even though synaptic weights can be of both signs). In order to code for values of both signs, one would need a transfer function that changes its sign around a time that would code for zero, so that the effect of a current is reversed when its arrival time crosses zero. Here we may view the neural code as a positive code: early spikes code for high values, and late spikes code for values close to zero.

In this architecture, it is necessary to ensure that each neuron in Y fires a single spike on each trial. In order to do this, we impose that neurons in Y have the same average firing time. For this, we add a centering term to the learning rule:

    Δw_ij = −ε ∂E/∂w_ij − γ ψ_j,    (13)

where γ ∈ ℝ and ψ_j is the average phase of neuron j. ψ_j is a leaky average of the difference between the firing time t_j and the average firing time of all neurons in population Y. It is updated after each trial:

    ψ_j ← τ ψ_j + (1 − τ) ( t_j − (1/m) Σ_{k=1}^{m} t_k ).    (14)

This modification of the learning rule results in neurons that have no preferred firing order.

4 Experiments

We used I₀ = −0.01 for all neurons. This ensures that neurons have no spontaneous activity. At the beginning of a trial, all neurons were initialized to their stable fixed point. In order to balance the effect of the different sizes of populations X and Y, different values of β were used for X and Y neurons: we used β_X = 0.1 and β_Y = (m/n) β_X. In the leaky average we used τ = 0.1. In each experiment, the input vector was encoded in spike times. When doing so, one must make sure that the values taken by the input are within the coding interval of the neurons, i.e., the range of values where the PRC is not zero. In practice, spikes that arrive too late in the firing cycle are not taken into account by the learning rule. In that case, the weights corresponding to the other synapses become overly increased, which eventually causes some postsynaptic neurons in X′ to fire before presynaptic neurons in Y ("anticausal spikes"). If this occurs, one possibility is to reduce the variance of the input.
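A sketch of the feedback update with the centering term, in matrix form (our layout, not the paper's: W[i, j] connects output neuron j to reconstruction neuron i, and the clipping mirrors Eqs. (11)-(12)).

```python
import numpy as np

def update_centering(psi, t_fire, tau=0.1):
    # Eq. (14): leaky average of each Y neuron's firing time relative to
    # the population mean; psi and t_fire are length-m arrays.
    return tau * psi + (1.0 - tau) * (t_fire - t_fire.mean())

def update_feedback(W, credits, t_rec, t_echo, psi, eps, gamma, C):
    # Eq. (13) applied element-wise. credits[i, j] holds dts_i/dw_ij from
    # Eq. (10); t_rec are spike times in X', t_echo the target echo times.
    # Credits outside (-C, 0) are replaced by the bound, as in Eq. (12).
    g = np.where((-credits > 0.0) & (-credits < C), credits, -C)
    W -= 2.0 * eps * (t_rec - t_echo)[:, None] * g   # gradient term
    W -= gamma * psi[None, :]                        # centering term
    return W
```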
4.1 Principal Component Analysis of a Gaussian distribution

A two-dimensional Gaussian random variable was encoded in the spike times of three input neurons. The ellipsoid had a long axis of standard deviation 1 ms and a short axis of standard deviation 0.5 ms, and it was rotated by π/3. Because the network does not have an absolute time reference, it is necessary to use three input neurons, in order to encode two degrees of freedom in relative spiking times. The output layer had two neurons (one degree of freedom). Therefore the network has to find a 1D representation of a 2D variable that minimizes the mean squared reconstruction error. The input was encoded as follows:

    t₀ = 3,
    t₁ = 3 + ξ₁ cos(π/3) + 0.5 ξ₂ sin(π/3),    (15)
    t₂ = 3 + 0.5 ξ₂ cos(π/3) + ξ₁ sin(π/3),

where ξ₁ and ξ₂ are two independent random variables picked from a Gaussian distribution of variance 1. Input spike times were centered around t = 3 ms, where t = 0 denotes the beginning of a trial. We used a time step of 0.05 ms. Each trial lasted for 400 iterations, which corresponds to 20 ms of simulated time. The ISI was 5 ms. The credit bound was C = 1000. Other parameters were ε = 0.0001 and γ = 0.001. Weights were initialized with random values between 0.5 and 1.5.

Figure 5: Principal Component Analysis of a 2D Gaussian distribution. The input vector was encoded in the relative spike times of three input neurons. Top: evolution of the weights over 20,000 learning iterations. Bottom: final synaptic weights represented as bars. Note the complementary shapes of the weight vectors. Right: the input (white dots) and its reconstruction (dark dots) from the network's activities. Each branch corresponds to a firing order of the two output neurons.

Figure 5 shows that the network has learned to extract the principal direction of the distribution. Two branches are visible in the distribution of dots corresponding to the reconstruction. They correspond to the two firing orders of the output neurons. The direction of the branches results from the synaptic weights of the neurons. Note that the lower branch has a slight curvature. This suggests that the response function of the neurons is not perfectly linear in the interval where spike times are coded. The fact that the branches do not have exactly the same orientation might result from non-linearities, or from the approximation made in deriving the learning rule. There are six synaptic weights in the network. One degree of freedom per neuron in X′ is used to adapt its mean firing time to the value imposed by the ISI; the smaller the ISI, the larger the weights. This "normalization" removes three degrees of freedom. One additional constraint is imposed by the centering term that was added to the learning rule in (13). Thus the network had two degrees of freedom. It used them to find the directions of the two branches shown in Figure 5 (left). These two branches can be viewed as the base vectors used in the compressed representation in Y. The network uses two base vectors in order to represent one single principal direction; each codes for one half of the Gaussian. This is because the network uses a positive code, where negative values are not allowed.

4.2 Encoding natural images

An encoder network was trained on the set of raw natural images used in [9].¹ The encoder had 64 output neurons and 256 input neurons. On each trial, a random patch of size 16 × 16 was extracted from a random image of the dataset, and encoded in the network. Raw grey values from the dataset were encoded as milliseconds. The standard deviation per pixel was 1.00 ms. The time step of the simulation was 0.1 ms, and each trial lasted for 200 time steps (20 ms). The ISI was 9 ms, and the parameters of the learning rule were ε = 0.0001, C = 50, and γ = 0.001. Weights were initialized with random values between 0 and 0.3.

¹ Images were retrieved from http://redwood.berkeley.edu/bruno/sparsenet/
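For illustration, the input encoding of Eq. (15) in Section 4.1 can be written as the following sketch (ours):

```python
import numpy as np

def encode_gaussian_sample(rng=None):
    """Spike times of the three input neurons for one sample of the 2D
    Gaussian of Section 4.1 (Eq. (15)); times are in milliseconds."""
    rng = rng or np.random.default_rng()
    xi1, xi2 = rng.standard_normal(2)   # unit-variance coordinates
    t0 = 3.0
    t1 = 3.0 + xi1 * np.cos(np.pi / 3) + 0.5 * xi2 * np.sin(np.pi / 3)
    t2 = 3.0 + 0.5 * xi2 * np.cos(np.pi / 3) + xi1 * np.sin(np.pi / 3)
    return np.array([t0, t1, t2])
```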
Figure 6: Synaptic weights learned by the network. 64 neurons were trained to represent natural image patches of size 16 × 16. Different grey scales are used in order to display positive and negative weights (black is negative, white is positive). Left: grey scale between −1 and 1. Only positive weights are visible at this scale, because they are much larger than the negative weights. Right: grey scale between −0.1 and 0.1. Negative weights are visible; positive weights are beyond scale.

Synaptic weights after 100,000 trials are shown in Figure 6. There is a strong difference of amplitude between positive and negative weights; positive weights typically have values between 0 and 1, while negative weights are one order of magnitude smaller. For that reason, the weights are displayed twice, with two different grey scales. An image reconstructed from spike times is shown in Figure 7. After training, the mean reconstruction error on the entire dataset was 0.25 ms/pixel. For comparison, the mean error made by Oja's principal subspace network [8] trained on the same image patches was 0.11 ms/pixel.

Figure 7: Natural image and reconstruction from spike times. The 512 × 512 image from the training set (left) was divided into 16 × 16 patches, and encoded using 64 neurons. The reconstruction (right) is derived from spike times in X′. The standard deviation of the encoded images was 1.00 ms/pixel. The mean reconstruction error on the entire dataset was 0.25 ms/pixel, about 2.5 times the error made by PCA.

The difference of amplitude between positive and negative weights results from the higher sensitivity of the response curves to negative weights, as shown in Figure 2. Synaptic weights with negative values have the ability to strongly delay the output spike, and even to cancel it. The synaptic weights have the shape of local filters, with antagonistic center-surround structures. This contrasts with the base vectors typically obtained from PCA of natural images, which are not local. One possible explanation lies in the response properties of the theta neurons. The response function is not linear, especially in the case of negative weights (Figure 2). This will disfavor solutions involving linear combinations of both positive and negative weights, and favor sparse representations. Hence, the network could be performing something similar to Nonlinear PCA [10].

5 Conclusions

We have shown that the dynamic response properties of spiking neurons can be effectively used as transfer functions, in order to perform computations (in this paper, PCA and Nonlinear PCA). A similar proposal was made in [11], where the PRC of neurons has been adapted to a biologically realistic STDP rule. Here we took a complementary approach, adapting the learning rule to the neuronal dynamics. We used theta neurons, which are of type I, and equivalent to quadratic integrate-and-fire neurons. Type I neurons have a PRC that is always positive. This means that spike times can encode only positive values. In order to encode values of both signs, one would need the transfer function to change its sign around a time that codes for zero. This will be possible with more complex type II neurons, where the sign of the PRC is not constant.

Acknowledgments

The author thanks Samuel McKennoch and Dominique Martinez for helpful comments.
References

[1] W. Maass. Lower bounds for the computational power of networks of spiking neurons. Neural Computation, 8(1):1–40, 1996.
[2] S. M. Bohte, J. N. Kok, and H. La Poutré. Spike-prop: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48:17–37, 2002.
[3] A. J. Bell and L. C. Parra. Maximising sensitivity in a spiking network. In Advances in Neural Information Processing Systems, volume 17, pages 121–128, 2005.
[4] R. F. Galán, G. B. Ermentrout, and N. N. Urban. Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Physical Review Letters, 94:158101, 2005.
[5] G. B. Ermentrout. Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8:979–1001, 1996.
[6] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[7] R. P. N. Rao and T. J. Sejnowski. Predictive sequence learning in recurrent neocortical circuits. In Advances in Neural Information Processing Systems, volume 12, pages 164–170, 2000.
[8] E. Oja. Neural networks, principal components and subspaces. International Journal of Neural Systems, 1(1):61–68, 1989.
[9] B. Olshausen and D. Field. Sparse coding of natural images produces localized, oriented, bandpass receptive fields. Nature, 381:607–609, 1996.
[10] E. Oja. The nonlinear PCA learning rule in independent component analysis. Neurocomputing, 17(1):25–46, 1997.
[11] M. Lengyel, J. Kwag, O. Paulsen, and P. Dayan. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience, 8:1677–1683, 2006.
Multiple Instance Learning for Computer Aided Diagnosis

Glenn Fung, Murat Dundar, Balaji Krishnapuram, R. Bharat Rao
CAD & Knowledge Solutions, Siemens Medical Solutions USA, Malvern, PA 19355
{glenn.fung, murat.dundar, balaji.krishnapuram, bharat.rao}@siemens.com

Abstract

Many computer aided diagnosis (CAD) problems can be best modelled as a multiple-instance learning (MIL) problem with unbalanced data: i.e., the training data typically consists of a few positive bags and a very large number of negative instances. Existing MIL algorithms are much too computationally expensive for these datasets. We describe CH, a framework for learning a Convex Hull representation of multiple instances that is significantly faster than existing MIL algorithms. Our CH framework applies to any standard hyperplane-based learning algorithm, and for some algorithms is guaranteed to find the global optimal solution. Experimental studies on two different CAD applications further demonstrate that the proposed algorithm significantly improves diagnostic accuracy when compared to both MIL and traditional classifiers. Although not designed for standard MIL problems (which have both positive and negative bags and relatively balanced datasets), comparisons against other MIL methods on benchmark problems also indicate that the proposed method is competitive with the state-of-the-art.

1 Introduction

In many computer aided diagnosis applications, the goal is to detect potentially malignant tumors and lesions in medical images (CT scans, X-ray, MRI, etc.). In an almost universal paradigm for CAD algorithms, this problem is addressed by a three-stage system: identification of potentially unhealthy regions of interest (ROI) by a candidate generator, computation of descriptive features for each candidate, and labeling of each candidate (e.g., as normal or diseased) by a classifier. The training dataset for the classifier is generated as follows: expert radiologists examine a set of images to mark out tumors. Then, candidate ROIs (with associated computed features) are marked positive if they are sufficiently close to a radiologist mark, and negative otherwise. Many CAD datasets have fewer than 1-10% positive candidates. In the CAD literature, standard machine learning algorithms, such as support vector machines (SVM) and Fisher's linear discriminant, have been employed to train the classifier.

In Section 2 we show that CAD data is better modeled in the multiple instance learning (MIL) framework, and subsequently present a novel convex-hull-based MIL algorithm. In Section 3 we provide experimental evidence from two different CAD problems to show that the proposed algorithm is significantly faster than other MIL algorithms, and more accurate when compared to other MIL algorithms and to traditional classifiers. Further, although this is not the main focus of our paper, on traditional benchmarks for MIL our algorithm is again shown to be competitive with the current state-of-the-art. We conclude with a description of the relationship to previous work, a review of our contributions, and directions for future research in Section 4.

2 A Novel Convex Hull MIL algorithm

Almost all the standard classification methods explicitly assume that the training samples (i.e., candidates) are drawn identically and independently from an underlying (though unknown) distribution.
This property is clearly violated in a CAD dataset: due to spatial adjacency of the regions identified by a candidate generator, both the features and the class labels of several adjacent candidates (training instances) are highly correlated. First, because the candidate generators for CAD problems are trying to identify potentially suspicious regions, they tend to produce many candidates that are spatially close to each other; since these often refer to regions that are physically adjacent in an image, the class labels for these candidates are also highly correlated. Second, because candidates are labelled positive if they are within some pre-determined distance from a radiologist mark, multiple positive candidates could correspond with the same (positive) radiologist mark on the image. Note that some of the positively labelled candidates may actually refer to healthy structures that just happen to be near a mark, thereby introducing an asymmetric labeling error in the training data.

In MIL terminology from previous literature, a "bag" may contain many observation instances of the same underlying entity, and every training bag is provided a class label (e.g., positive or negative). The objective in MIL is to learn a classifier that correctly classifies at least one instance from every bag. This corresponds perfectly with the appropriate measure of accuracy for evaluating the classifier in a CAD system. In particular, even if one of the candidates that refers to the underlying malignant structure (radiologist mark) is correctly highlighted to the radiologist, the malignant structure is detected; i.e., the correct classification of every candidate instance is not as important as the ability to detect at least one candidate that points to a malignant region. Furthermore, we would like to classify every sample that is distant from a radiologist mark as negative; this is easily accomplished by considering each negative candidate as a bag. Therefore, it would appear that MIL algorithms should outperform traditional classifiers on CAD datasets. Unfortunately, in practice, most of the conventional MIL algorithms are computationally quite inefficient, and some of them have problems with local minima. In CAD we typically have several thousand mostly negative candidates (instances) and a few hundred positive bags; existing MIL algorithms are simply unable to handle such large datasets due to time or memory requirements.

Notation: Let the $i$-th bag of class $j$ be represented by the matrix $B_j^i \in \mathbb{R}^{m_j^i \times n}$, $i = 1, \ldots, r_j$, $j \in \{\pm 1\}$, where $n$ is the number of features. Row $l$ of $B_j^i$, denoted by $B_{jl}^i$, represents datapoint $l$ of bag $i$ in class $j$, with $l = 1, \ldots, m_j^i$. The binary bag labels are specified by a vector $d \in \{\pm 1\}^{r_j}$. The vector $e$ represents a vector with all its elements one.

2.1 Key idea: Relaxation of MIL via Convex-Hulls

The original MIL problem requires at least one of the samples in a bag to be correctly labeled by the classifier: this corresponds to a set of discrete constraints on the classifier. By contrast, we shall relax this and require that at least one point in the convex hull of a bag of samples (including, possibly, one of the original samples) has to be correctly classified. Figure 1 illustrates the idea using a graphical toy example. This relaxation (first introduced in [1]) eliminates the combinatorial nature of the MIL problem, allowing algorithms that are more computationally efficient.
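To make the relaxation concrete, the following minimal NumPy sketch (ours, not the authors' code; the bag matrix and coefficients are toy values) builds the representative point of a bag as a convex combination of its instances, which is exactly the quantity the relaxed constraints act on:

```python
import numpy as np

def bag_representative(B, lam):
    """Convex combination of the rows of a bag matrix B (m x n).

    lam must lie on the simplex (lam >= 0, sum(lam) == 1), so the
    result is a point inside the convex hull of the bag's instances.
    """
    lam = np.asarray(lam, dtype=float)
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    return lam @ B

# Under the relaxation, a positive bag counts as correctly classified
# if some representative point falls on the positive side of (w, eta).
B = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]])  # toy 3-instance bag
lam = np.array([0.1, 0.6, 0.3])
w, eta = np.array([1.0, -1.0]), 0.0
print(bag_representative(B, lam) @ w - eta > 0)
```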
As mentioned above, we will consider that a bag $B_j^i$ is correctly classified if any point inside the convex hull of the bag $B_j^i$ (i.e., any convex combination of points of $B_j^i$) is correctly classified. Let $\lambda^{ij}$, with $0 \le \lambda^{ij}$ and $e^\top \lambda^{ij} = 1$, be the vector containing the coefficients of the convex combination that defines the representative point of bag $i$ in class $j$. Let $r$ be the total number of representative points, i.e., $r = r_+ + r_-$. Let $\Gamma_j$ be the total number of convex hull coefficients corresponding to the representative points in class $j$, i.e., $\Gamma_j = \sum_{i=1}^{r_j} m_j^i$, $\Gamma = \Gamma_+ + \Gamma_-$. Then, we can formulate the MIL problem as

$$\min_{(\xi, w, \eta, \lambda) \in \mathbb{R}^{r+n+1+\Gamma}} \quad \nu E(\xi) + \Phi(w, \eta) + \Psi(\lambda) \quad\quad (1)$$
$$\text{s.t.} \quad \xi^i = d_i - (\lambda^{ij\top} B_j^i w - e\eta), \quad e^\top \lambda^{ij} = 1, \quad 0 \le \lambda^{ij} \quad \forall i, j,$$

where $\xi = \{\xi^1, \ldots, \xi^r\}$ are slack terms (errors), $\eta$ is the bias (offset from origin) term, and $\lambda$ is a vector containing all the $\lambda^{ij}$ for $i = 1, \ldots, r_j$, $j \in \{\pm\}$. $E : \mathbb{R}^r \to \mathbb{R}$ represents the loss function, $\Phi : \mathbb{R}^{n+1} \to \mathbb{R}$ is a regularization function on the hyperplane coefficients [2], and $\Psi$ is a regularization function on the convex combination coefficients $\lambda^{ij}$.

[Figure 1: A toy example illustrating the proposed approach. Positive and negative classes are represented by blue circles and red diamonds respectively. Cyan polyhedrons represent the convex hulls for the three positive bags; the points chosen by our algorithm to represent each bag are shown by blue stars. The magenta line represents the linear hyperplane obtained by our algorithm and the black line represents the hyperplane for the SVM.]

Depending on the choice of $E$, $\Phi$, $\Psi$ and the feasible set $\Lambda$ of the slack variables, (1) will lead to MIL versions of several well-known classification algorithms:

1. $E(\xi) = \|(\xi)_+\|_2^2$, $\Phi(w, \eta) = \|(w, \eta)\|_2^2$ and $\Lambda = \mathbb{R}_+^r$ leads to MIL versions of the Quadratic-Programming-SVM [3].
2. $E(\xi) = \|\xi\|_2^2$, $\Phi(w, \eta) = \|(w, \eta)\|_2^2$ and $\Lambda = \mathbb{R}^r$ leads to MIL versions of the Least-Squares-SVM.
3. $\nu = 1$, $E(\xi) = \|\xi\|_2^2$, $\Lambda = \{\xi : e^\top \xi_j = 0, j \in \{\pm\}\}$ leads to MIL versions of the QP formulation for Fisher's linear discriminant (FD) [4].

As an example, we derive a special case of the algorithm for the Fisher's Discriminant, because this choice (FD) brings us some algorithmic as well as computational advantages.

2.2 Convex-Hull MIL for Fisher's Linear Discriminant

Setting $\nu = 1$, $E(\xi) = \|\xi\|_2^2$, $\Lambda = \{\xi : e^\top \xi_j = 0, j \in \{\pm\}\}$ in (1), we obtain the following MIL version of the quadratic programming algorithm for Fisher's Linear Discriminant [4]:

$$\min_{(\xi, w, \eta, \lambda) \in \mathbb{R}^{r+n+1+\Gamma}} \quad \|\xi\|_2^2 + \Phi(w, \eta) + \Psi(\lambda) \quad\quad (2)$$
$$\text{s.t.} \quad \xi^i = d_i - (\lambda^{ij\top} B_j^i w - e\eta), \quad e^\top \xi_j = 0, \quad e^\top \lambda^{ij} = 1, \quad 0 \le \lambda^{ij} \quad \forall i, j.$$

The number of variables to be optimized in (2) is $r+n+1+\Gamma$: this is computationally infeasible when the number of bags is large ($r > 10^4$). To alleviate the situation, we (a) replace $\xi^i$ by $d_i - (\lambda^{ij\top} B_j^i w - e\eta)$ in the objective function, and (b) replace the equality constraints $e^\top \xi_j = 0$ by $w^\top(\mu_+ - \mu_-) = 2$. This substitution eliminates the variables $\xi$, $\eta$ from the problem and also the corresponding $r$ equality constraints in (2). Effectively, this results in the MIL version of the traditional FD algorithm. As discussed later in the paper, in addition to the obvious computational gains, this manipulation results in some algorithmic advantages as well (for more information on the equivalence between the single instance learning versions of (2) and (3), see [4]). Thus, the optimization problem reduces to:

$$\min_{(w, \lambda) \in \mathbb{R}^{n+\Gamma}} \quad w^\top S_W w + \Phi(w) + \Psi(\lambda) \quad\quad (3)$$
$$\text{s.t.} \quad w^\top(\mu_+ - \mu_-) = b, \quad e^\top \lambda^{ij} = 1, \quad 0 \le \lambda^{ij} \quad \forall i, j,$$
where $S_W = \sum_{j \in \{\pm\}} \frac{1}{r_j} (X_j - e\mu_j^\top)^\top (X_j - e\mu_j^\top)$ is the within-class scatter matrix and $\mu_j = \frac{1}{r_j} X_j^\top e$ is the mean for class $j$. $X_j \in \mathbb{R}^{r_j \times n}$ is a matrix containing the $r_j$ representative points in $n$-dimensional space, such that the row of $X_j$ denoted by $b_j^i = \lambda^{ij\top} B_j^i$ is the representative point of bag $i$ in class $j$, where $i \in \{1, \ldots, r_j\}$ and $j \in \{\pm\}$.

2.3 Alternate Optimization for Convex-Hull MIL Fisher's Discriminant

The proposed mathematical program (3) can be solved using an efficient Alternate Optimization (AO) algorithm [5]. In the AO setting, the main optimization problem is subdivided into two smaller or easier subproblems that depend on disjoint subsets of the original variables. When $\Phi(w)$ and $\Psi(\lambda)$ are strongly convex functions, both the original objective function and the two subproblems (for optimizing $\lambda$ and $w$) in (3) are strongly convex, meaning that the algorithm converges to a global minimizer [6]. For computational efficiency, in the remainder of the paper we will use the regularizers $\Phi(w) = \nu \|w\|_2^2$ and $\Psi(\lambda) = \nu \|\lambda\|_2^2$, where $\nu$ is a positive regularization parameter. An efficient AO algorithm for solving the mathematical program (3) is described below.

Sub Problem 1: Fix $\lambda = \lambda^*$. When we fix $\lambda = \lambda^*$, the problem becomes

$$\min_{w \in \mathbb{R}^n} \quad w^\top S_W w + \Phi(w) \quad \text{s.t.} \quad w^\top(\mu_+ - \mu_-) = b, \quad\quad (4)$$

which is the formulation for the Fisher's Discriminant. Since $S_W$ is the sum of two covariance matrices, it is guaranteed to be at least positive semidefinite, and thus the problem in (4) is convex. For datasets with $r \gg n$, i.e., where the number of bags is much greater than the dimensionality, $S_W$ is positive definite and thus the problem in (4) is strictly convex. Unlike (1), where the number of constraints is proportional to the number of bags, eliminating $\xi$ and $\eta$ leaves us with only one constraint. This changes the order of complexity from $O(nr^2)$ to $O(n^2 r)$ and brings some computational advantages when dealing with datasets with $r \gg n$.

Sub Problem 2: Fix $w = w^*$. When we fix $w = w^*$, the problem becomes

$$\min_{\lambda \in \mathbb{R}^\Gamma} \quad \lambda^\top \bar{S}_W \lambda + \Psi(\lambda) \quad \text{s.t.} \quad \lambda^\top(\bar{\mu}_+ - \bar{\mu}_-) = b, \quad e^\top \lambda^{ij} = 1, \quad 0 \le \lambda^{ij} \quad \forall i, j, \quad\quad (5)$$

where $\bar{S}_W$ and $\bar{\mu}_j$ are defined as in (4) with $X_j$ replaced by $\bar{X}_j \in \mathbb{R}^{r_j \times \Gamma}$, a matrix containing the $r_j$ new points in $\Gamma$-dimensional space, such that the row of $\bar{X}_j$ denoted by $\bar{b}_j^i$ is a vector whose nonzero elements are set to $B_j^i w^*$. For the positive class, elements $\sum_{k=1}^{i-1} m_+^k + 1$ through $\sum_{k=1}^{i} m_+^k$ of $\bar{b}_j^i$ are nonzero; for the negative class, the nonzero elements are located at positions $\sum_{k=1}^{r_+} m_+^k + \sum_{k=1}^{i-1} m_-^k + 1$ through $\sum_{k=1}^{r_+} m_+^k + \sum_{k=1}^{i} m_-^k$. Note that $\bar{S}_W$ is also a sum of two covariance matrices; it is positive semidefinite and thus the problem in (5) is convex. Unlike sub problem 1, the positive definiteness of $\bar{S}_W$ does not depend on the data, since it is always true that $r \le \Gamma$. The complexity of (5) is $O(n\Gamma^2)$.

As mentioned before, in CAD applications a bag is defined as a set of candidates that are spatially close to the radiologist-marked ground truth. Any candidate that is spatially far from this location is considered negative in the training data; therefore the concept of a bag for negative examples does not make any practical sense in this scenario. Moreover, since ground truth is only available on the training set, there is no concept of a bag on the test set for either positive or negative examples. The learned classifier labels (i.e., classifies) individual instances; the bag information for positive examples is only used to help learn a better classifier from the training data.
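The two subproblems alternate naturally in code. The sketch below is a hedged illustration, not the paper's solver: the $w$-step solves the ridge-regularized Fisher problem (4) in closed form through its Lagrangian, while the $\lambda$-step replaces the exact QP (5) with a few projected-gradient updates on the simplex, and negative candidates are kept as singleton bags as suggested for CAD. All function and parameter names are ours.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fd_step(X_pos, X_neg, nu=1e-2, b=2.0):
    """Solve (4): min w'S_W w + nu*||w||^2  s.t.  w'(mu+ - mu-) = b."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S = np.zeros((X_pos.shape[1], X_pos.shape[1]))
    for X, mu in ((X_pos, mu_p), (X_neg, mu_n)):
        C = X - mu
        S += C.T @ C / len(X)
    w = np.linalg.solve(S + nu * np.eye(len(mu_p)), mu_p - mu_n)
    return b * w / (w @ (mu_p - mu_n))  # rescale to meet the constraint

def ch_fd(pos_bags, X_neg, iters=20, step=0.5):
    """Alternate the Fisher step (i) and the convex-hull step (ii)."""
    lams = [np.ones(len(B)) / len(B) for B in pos_bags]  # lambda_0 = e/m
    w = None
    for _ in range(iters):
        X_pos = np.array([lam @ B for lam, B in zip(lams, pos_bags)])
        w = fd_step(X_pos, X_neg)  # sub problem 1
        for i, B in enumerate(pos_bags):  # crude stand-in for sub problem 2:
            # nudge each representative toward the positive side of w
            lams[i] = project_simplex(lams[i] + step * (B @ w))
    return w, lams
```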
Hence, the problem in (5) can be simplified to account for these practical observations, resulting in an optimization problem with $O(n\Gamma_+^2)$ complexity. The entire algorithm is summarized below for clarity.

2.4 CH-FD: An Algorithm for Learning Convex Hull Representation of Multiple Instances

(0) Choose as initial guess $\lambda_0^i = e/m^i$, $\forall i = 1, \ldots, r$; set counter $c = 0$.
(i) For fixed $\lambda_c^i$, $\forall i = 1, \ldots, r$, solve for $w_c$ in (4).
(ii) Fixing $w = w_c$, solve for $\lambda_{c+1}^i$, $\forall i = 1, \ldots, r$, in (5).
(iii) Stop if $\|(\lambda_{c+1}^1 - \lambda_c^1, \ldots, \lambda_{c+1}^r - \lambda_c^r)\|_2$ is less than some desired tolerance. Else replace $\lambda_c^i$ by $\lambda_{c+1}^i$ and $c$ by $c + 1$, and go to (i).

The nonlinear version of the proposed algorithm can be obtained by first transforming the original datapoints to a kernel space spanned by all datapoints through a kernel operator $K : \mathbb{R}^n \to \mathbb{R}^{\bar{\Gamma}}$, and then by optimizing (4) and (5) in this new space. Ideally $\bar{\Gamma}$ is set to $\Gamma$. However, when $\Gamma$ is large, for computational reasons we can use the technique presented in [7] to limit the number of datapoints spanning this new space. This corresponds to constraining $w$ to lie in a subspace of the kernel space.

3 Experimental Results and Discussion

For the experiments in Section 3.1 we compare four techniques: naive Fisher's Discriminant (FD), CH-FD, EM-DD [8], and IAPR [9]. For IAPR and EM-DD we used the Matlab implementation of these algorithms also used in [10]. In both experiments we used the linear version of our algorithm. Hence the only parameter that requires tuning is $\nu$, which is tuned to optimize 10-fold patient cross validation on the training data. All algorithms are trained on the training data and then tested on the sequestered test data. The resulting Receiver Operating Characteristic (ROC) plots are obtained by trying different values of the two IAPR parameters, and by thresholding the corresponding output for each of EM-DD, FD and CH-FD.

3.1 Two CAD Datasets: Pulmonary Embolism & Colon Cancer Detection

Next, we present the problems that mainly motivated this work. Pulmonary embolism (PE), a potentially life-threatening condition, is a result of underlying venous thromboembolic disease. An early and accurate diagnosis is the key to survival. Computed tomography angiography (CTA) has emerged as an accurate diagnostic tool for PE. However, there are hundreds of CT slices in each CTA study, and manual reading is laborious, time consuming and complicated by various PE look-alikes. Several CAD systems are being developed to assist radiologists to detect and characterize emboli [11], [12]. At four different hospitals (two North American sites and two European sites), we collected 72 cases with 242 PE bags comprised of 1069 positive candidates marked by expert chest radiologists. The cases were randomly divided into two sets: training (48 cases with 173 PE bags and 3655 candidates) and testing (24 cases with 69 PE bags and 1857 candidates). The test group was sequestered and only used to evaluate the performance of the final system. A combined total of 70 features are extracted for each candidate.

Colorectal cancer is the third most common cancer in both men and women. It is estimated that in 2004, nearly 147,000 cases of colon and rectal cancer will be diagnosed in the US, and more than 56,730 people would die from colon cancer [13]. CT colonography is emerging as a new procedure to help in early detection of colon polyps.
However, reading through a large CT dataset, which typically consists of two CT series of the patient in prone and supine positions, each with several hundred slices, is time-consuming. Colon CAD [14] can play a critical role in helping the radiologist avoid missing colon polyps. Most polyps are therefore represented by two candidates: one obtained from the prone view and the other from the supine view. Moreover, for large polyps, a typical candidate generation algorithm generates several candidates across the polyp surface. The database of high-resolution CT images used in this study was obtained from seven different sites across the US, Europe and Asia. The 188 patients were randomly partitioned into two groups: training, comprising 65 cases with 127 volumes, in which 50 polyp bags (179 positive candidates) were identified along with a total of 6569 negative candidates; and testing, comprising 123 patients with 237 volumes, in which 103 polyp bags (232 positive candidates) were identified along with a total of 12752 negative candidates. The test group was sequestered and only used to evaluate the performance of the final system. A total of 75 features are extracted for each candidate.

The resulting Receiver Operating Characteristic (ROC) curves are displayed in Figure 2. Although for the PE dataset (Figure 2, left) IAPR crosses over CH-FD and is more sensitive than CH-FD for extremely high numbers of false positives, Table 1 shows that CH-FD is more accurate than all other methods over the entire space (AUC). Note that CAD performance is only valid in the clinically acceptable range: < 10 FP/patient for PE, < 5 FP/volume for Colon (generally there are 2 volumes per patient). In the region of clinical interest (AUC-RCI), Table 1 shows that CH-FD significantly outperforms all other methods.

Table 1: Comparison of 3 MIL and one traditional algorithm: computation time, AUC, and normalized AUC in the region of clinical interest for PE and Colon test data

Algorithm   Time PE   Time Colon   AUC PE   AUC Colon   AUC-RCI PE   AUC-RCI Colon
IAPR        184.6     689.0        0.83     0.70        0.34         0.26
EMDD        903.5     16614.0      0.67     0.80        0.17         0.42
CH-FD       97.2      7.9          0.86     0.90        0.50         0.69
FD          0.19      0.4          0.74     0.88        0.44         0.57

Execution times for all the methods tested are shown in Table 1. As expected, the computational cost is cheapest for the traditional non-MIL-based FD. Among MIL algorithms, for the PE data, CH-FD was roughly 2 times and 9 times as fast as IAPR and EMDD respectively, and for the much larger colon dataset it was roughly 85 times and 2000 times faster, respectively (see Table 1).

[Figure 2: ROC curves (sensitivity vs. FP/patient, left; sensitivity vs. false positives per volume, right) for CH-FD, EM-DD, IAPR and FD, obtained for the PE testing data (left) and the Colon testing data (right).]

3.2 Experiments on Benchmark Datasets

We compare CH-FD with several state-of-the-art MIL algorithms on 5 benchmark MIL datasets: 2 Musk datasets [9] and 3 Image Annotation datasets [15]. Each of these datasets contains both positive and negative bags. CH-FD (and MICA) use just the positive bag information and ignore the negative bag information, in effect treating each negative instance as a separate bag. All the other MIL algorithms use both the positive and negative bag information. The Musk datasets contain feature vectors describing the surfaces of low-energy shapes from molecules. Each feature vector has 166 features.
The goal is to differentiate molecules that smell "musky" from the rest of the molecules. Approximately half of the molecules are known to smell musky. There are two musk datasets: MUSK1 contains 92 molecules with a total of 476 instances; MUSK2 contains 102 molecules with a total of 6598 instances. 72 of the molecules are shared between the two datasets, but the MUSK2 dataset contains more instances for the shared molecules. The Image Annotation data is composed of three different categories, namely Tiger, Elephant and Fox. Each dataset has 100 positive bags and 100 negative bags.

We set $\Phi(w) = \nu |w|$. For the musk datasets our results are based on a Radial Basis Function (RBF) kernel $K(x_i, x_j) = \exp(-\sigma \|x_i - x_j\|^2)$. The kernel space is assumed to be spanned by all the datapoints in the MUSK1 dataset and a subset of the datapoints in the MUSK2 dataset (one tenth of the original training set is randomly selected for this purpose). The width of the kernel function and $\nu$ are tuned over a discrete set of five values each to optimize the 10-fold cross validation performance. For the Image Annotation data we use the linear version of our algorithm. We follow the benchmark experiment design and report the average accuracy of 10 runs of 10-fold cross validation in Table 2. Results for other MIL algorithms from the literature are also reported in the same table. Iterated Discriminant APR (IAPR), Diverse Density (DD) [16], Expectation-Maximization Diverse Density (EM-DD) [8], the Maximum Bag Margin Formulation of SVM (mi-SVM, MI-SVM) [15], and Multi-Instance Neural Networks (MI-NN) [17] are the techniques considered in this experiment for comparison purposes. Results for mi-SVM, MI-SVM and EM-DD are taken from [15].

Table 2: Average accuracy on benchmark datasets. The number in parentheses represents the relative rank of each algorithm (performance-wise) on the corresponding dataset.

Datasets       CH-FD      IAPR       DD         EMDD       mi-SVM     MI-SVM     MI-NN      MICA
MUSK1          88.8 (2)   87.2 (5)   88.0 (3)   84.8 (6)   87.4 (4)   77.9 (8)   88.9 (1)   84.4 (7)
MUSK2          85.7 (2)   83.6 (6)   84.0 (5)   84.9 (3)   83.6 (6)   84.3 (4)   82.5 (7)   90.5 (1)
Elephant       82.4 (2)   - (-)      - (-)      78.3 (5)   82.2 (3)   81.4 (4)   - (-)      82.5 (1)
Tiger          82.2 (2)   - (-)      - (-)      72.1 (5)   78.4 (4)   84.0 (1)   - (-)      82.0 (3)
Fox            60.4 (2)   - (-)      - (-)      56.1 (5)   58.2 (3)   57.8 (4)   - (-)      62.0 (1)
Average Rank   2          5.5        4          4.8        4          4.2        4          3.25

Table 2 shows that CH-FD is comparable to other techniques on all datasets, even though it ignores the negative bag information. Furthermore, CH-FD appears to be the most stable of the algorithms, at least on these 5 datasets, achieving the most consistent performance as indicated by the "Average Rank" column. We believe that this stable behavior of our algorithm is due in part to the fact that it converges to global solutions, avoiding the local minima problem.

4 Discussions

Relationship to previous literature on MIL: The multiple instance learning problem described in this paper has been studied widely in the literature [9, 15, 16, 17, 8]. The convex-hull idea presented in this paper to represent each bag is similar in nature to the one presented in [1]. However, in contrast with [1] and many other approaches in the literature [9, 15, 17], our formulation leads to a strongly convex minimization problem that converges to a unique minimizer. Since our algorithm considers each negative instance as an individual bag, its complexity scales with the square of the number of positive instances only, which makes it scalable to large datasets with a large number of negative examples.
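As a concrete illustration of this bag construction (a hedged sketch; the distance threshold, array names, and matching rule are invented for the example and are not specified at this level of detail in the paper), candidates within a fixed radius of the same radiologist mark form one positive bag, and every remaining candidate becomes its own negative "bag":

```python
import numpy as np

def make_bags(cand_xyz, cand_feats, marks, radius=5.0):
    """Group CAD candidates into MIL bags.

    Candidates within `radius` of a radiologist mark share that mark's
    positive bag; all other candidates are treated as singleton negatives.
    """
    pos_bags, used = [], np.zeros(len(cand_xyz), dtype=bool)
    for m in marks:
        near = np.linalg.norm(cand_xyz - m, axis=1) <= radius
        if near.any():
            pos_bags.append(cand_feats[near])
            used |= near
    negatives = cand_feats[~used]  # one instance per negative "bag"
    return pos_bags, negatives
```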
Principal contributions of the paper: This paper makes three principal contributions. First, we have identified the need for multiple-instance learning in CAD applications and described the spatial-proximity-based inter-sample correlations in the label noise for classifier design in this setting. Second, building on an intuitive convex relaxation of the original MIL problem, this paper presents a new approach to multiple-instance learning. In particular, we dramatically improve the run time by replacing a large set of discrete constraints (at least one instance in each bag has to be correctly classified) with infinite but continuous sets of constraints (at least one convex combination of the original instances in every bag has to be correctly classified). Further, the proposed idea for achieving convexity in the objective function of the training algorithm alleviates the problems of local maxima that occur in some of the previous MIL algorithms, and often improves the classification accuracy on many practical datasets. Third, we present a practical implementation of this idea in the form of a simple but efficient alternate-optimization algorithm for Convex Hull based Fisher's Discriminant. In our benchmark experiments, the resulting algorithm achieves accuracy that is comparable to the current state of the art, but at a significantly lower run time (typically speed-ups of several orders of magnitude were observed).

Related work: Note that as the distance between candidate ROIs increases, the correlations between their features and labels decrease. In another study, we model the spatial correlation among neighboring samples. Thus we jointly classify entire batches of correlated samples both during training and testing. Instead of classifying each sample independently, we use this spatial information along with the features of each candidate to simultaneously classify all the candidate ROIs for a single patient/volume in a joint operation [18].

References

[1] O. L. Mangasarian and E. W. Wild. Multiple instance classification via successive linear programming. Technical Report 05-02, Data Mining Institute, Univ of Wisconsin, Madison, 2005.
[2] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
[3] O. L. Mangasarian. Generalized support vector machines. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 135-146, Cambridge, MA, 2000. MIT Press. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-14.ps.
[4] Sebastian Mika, Gunnar Rätsch, and Klaus-Robert Müller. A mathematical programming approach to the kernel fisher algorithm. In NIPS, pages 591-597, 2000.
[5] J. Bezdek and R. Hathaway. Convergence of alternating optimization. Neural, Parallel Sci. Comput., 11(4):351-368, 2003.
[6] J. Warga. Minimizing certain convex functions. Journal of SIAM on Applied Mathematics, 11:588-593, 1963.
[7] Y.-J. Lee and O. L. Mangasarian. RSVM: Reduced support vector machines. Technical Report 00-07, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, July 2000. Proceedings of the First SIAM International Conference on Data Mining, Chicago, April 5-7, 2001, CD-ROM Proceedings. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-07.ps.
[8] Q. Zhang and S. Goldman. EM-DD: An improved multiple-instance learning technique. In Advances in Neural Information Processing Systems, volume 13. The MIT Press, 2001.
[9] Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez.
Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997.
[10] Z. Zhou and M. Zhang. Ensembles of multi-instance learners. In Proceedings of the 14th European Conference on Machine Learning, LNAI 2837, pages 492-502, Cavtat-Dubrovnik, Croatia, 2003. Springer.
[11] M. Quist, H. Bouma, C. Van Kuijk, O. Van Delden, and F. Gerritsen. Computer aided detection of pulmonary embolism on multi-detector CT, 2004.
[12] C. Zhou, L. M. Hadjiiski, B. Sahiner, H.-P. Chan, S. Patel, P. Cascade, E. A. Kazerooni, and J. Wei. Computerized detection of pulmonary embolism in 3D computed tomographic (CT) images: vessel tracking and segmentation techniques. In Medical Imaging 2003: Image Processing. Edited by Sonka, Milan; Fitzpatrick, J. Michael. Proceedings of the SPIE, Volume 5032, pages 1613-1620, May 2003.
[13] D. Jemal, R. Tiwari, T. Murray, A. Ghafoor, A. Saumuels, E. Ward, E. Feuer, and M. Thun. Cancer statistics, 2004.
[14] L. Bogoni, P. Cathier, M. Dundar, A. Jerebko, S. Lakare, J. Liang, S. Periaswamy, M. Baker, and M. Macari. CAD for colonography: A tool to address a growing need. British Journal of Radiology, 78:57-62, 2005.
[15] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In S. Thrun, S. Becker and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 561-568. MIT Press, Cambridge, MA, 2003.
[16] Oded Maron and Tomás Lozano-Pérez. A framework for multiple-instance learning. In Michael I. Jordan, Michael J. Kearns, and Sara A. Solla, editors, Advances in Neural Information Processing Systems, volume 10. The MIT Press, 1998.
[17] J. Ramon and L. De Raedt. Multi instance neural networks, 2000.
[18] V. Vural, G. Fung, B. Krishnapuram, J. G. Dy, and R. B. Rao. Batch classification with applications in computer aided diagnosis. In Proceedings of the ECML'06, Berlin, Germany, 2006.
Ordinal Regression by Extended Binary Classification Hsuan-Tien Lin Learning Systems Group California Institute of Technology htlin@caltech.edu Ling Li Learning Systems Group California Institute of Technology ling@caltech.edu Abstract We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0/1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. Our framework allows not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework. 1 Introduction We work on a type of supervised learning problems called ranking or ordinal regression, where examples are labeled by an ordinal scale called the rank. For instance, the rating that a customer gives on a movie might be one of do-not-bother, only-if-you-must, good, very-good, and run-to-see. The ratings have a natural order, which distinguishes ordinal regression from general multiclass classification. Recently, many algorithms for ordinal regression have been proposed from a machine learning perspective. For instance, Crammer and Singer [1] generalized the online perceptron algorithm with multiple thresholds to do ordinal regression. In their approach, a perceptron maps an input vector to a latent potential value, which is then thresholded to obtain a rank. Shashua and Levin [2] proposed new support vector machine (SVM) formulations to handle multiple thresholds. Some other formulations were studied by Rajaram et al. [3] and Chu and Keerthi [4]. All these algorithms share a common property: they are modified from well-known binary classification approaches. Since binary classification is much better studied than ordinal regression, a general framework to systematically reduce the latter to the former can introduce two immediate benefits. First, well-tuned binary classification approaches can be readily transformed into good ordinal regression algorithms, which saves immense efforts in design and implementation. Second, new generalization bounds for ordinal regression can be easily derived from known bounds for binary classification, which saves tremendous efforts in theoretical analysis. In this paper, we propose such a reduction framework. The framework is based on extended examples, which are extracted from the original examples and a given mislabeling cost matrix. The binary classifier trained from the extended examples can then be used to construct a ranking rule. We prove that the mislabeling cost of the ranking rule is bounded by a weighted 0/1 loss of the binary classifier. Hence, binary classifiers that generalize well could introduce ranking rules that generalize well. 
The advantages of the framework in algorithmic design and in theoretical analysis are both demonstrated in the paper. In addition, we show that our framework provides a unified view for many existing ordinal regression algorithms. The experiments on some benchmark data sets validate the usefulness of our framework in practice.

The paper is organized as follows. In Section 2, we introduce our reduction framework. A unified view of some existing algorithms based on the framework is discussed in Section 3. Theoretical guarantees on the reduction, including derivations of new generalization bounds for ordinal regression, are provided in Section 4. We present experimental results of several new algorithms in Section 5, and conclude in Section 6.

2 The reduction framework

In an ordinal regression problem, an example $(x, y)$ is composed of an input vector $x \in \mathcal{X}$ and an ordinal label (i.e., rank) $y \in \mathcal{Y} = \{1, 2, \ldots, K\}$. Each example is assumed to be drawn i.i.d. from some unknown distribution $P(x, y)$ on $\mathcal{X} \times \mathcal{Y}$. The generalization error of a ranking rule $r : \mathcal{X} \to \mathcal{Y}$ is then defined as

$$C(r, P) \stackrel{\mathrm{def}}{=} \mathop{E}_{(x,y) \sim P} C_{y, r(x)},$$

where $C$ is a $K \times K$ cost matrix with $C_{y,k}$ being the cost of predicting an example $(x, y)$ as rank $k$. Naturally we assume $C_{y,y} = 0$ and $C_{y,k} > 0$ for $k \neq y$. Given a training set $S = \{(x_n, y_n)\}_{n=1}^{N}$ containing $N$ examples, the goal is to find a ranking rule $r$ that generalizes well, i.e., associates with a small $C(r, P)$.

The setting above looks similar to that of a multiclass classification problem, except that the ranks are ordered. The ordinal information can be interpreted in several ways. In statistics, the information is assumed to reflect a stochastic ordering on the conditional distributions $P(y \le k \mid x)$ [5]. Another interpretation is that the mislabeling cost depends on the "closeness" of the prediction. Consider an example $(x, 4)$ with $r_1(x) = 3$ and $r_2(x) = 1$. The rule $r_2$ should pay more for the erroneous prediction than the rule $r_1$. Thus, we generally want each row of $C$ to be V-shaped. That is, $C_{y,k-1} \ge C_{y,k}$ if $k \le y$ and $C_{y,k} \le C_{y,k+1}$ if $k \ge y$. A simple $C$ with V-shaped rows is the classification cost matrix, with entries $C_{y,k} = \llbracket y \neq k \rrbracket$.¹ The classification cost is widely used in multiclass classification. However, because the cost is invariant for all kinds of mislabelings, the ordinal information is not taken into account. The absolute cost matrix, which is defined by $C_{y,k} = |y - k|$, is a popular choice that better reflects the ordering preference. Its rows are not only V-shaped, but also convex. That is, $C_{y,k+1} - C_{y,k} \ge C_{y,k} - C_{y,k-1}$ for $1 < k < K$. The convex rows encode a stronger preference in making the prediction "close." In this paper, we shall always assume that the ordinal regression problem under study comes with a cost matrix of V-shaped rows, and discuss how to reduce the ordinal regression problem to a binary classification problem. Some of the results may require the rows to be convex.

¹ The Boolean test $\llbracket \cdot \rrbracket$ is 1 if the inner condition is true, and 0 otherwise.

2.1 Reducing ordinal regression to binary classification

The ordinal information allows ranks to be compared. Consider, for instance, that we want to know how good a movie $x$ is. An associated question would be: "is the rank of $x$ greater than $k$?" For a fixed $k$, such a question is exactly a binary classification problem, and the rank of $x$ can be determined by asking multiple questions for $k = 1, 2$, until $(K - 1)$. Frank and Hall [6] proposed to solve each binary classification problem independently and combine the binary outputs to a rank.
Although their approach is simple, the generalization performance using the combination step cannot be easily analyzed. Our framework works differently. First, all the binary classification problems are solved jointly to obtain a single binary classifier. Second, a simpler step is used to convert the binary outputs to a rank, and generalization analysis can immediately follow.

Assume that $f_b(x, k)$ is a binary classifier for all the associated questions above. Consistent answers would be $f_b(x, k) = 1$ ("yes") for $k = 1$ until $(y' - 1)$ for some $y'$, and 0 ("no") afterwards. Then, a reasonable ranking rule based on the binary answers is $r(x) = y' = 1 + \min\{k : f_b(x, k) = 1\}$. Equivalently,

$$r(x) \stackrel{\mathrm{def}}{=} 1 + \sum_{k=1}^{K-1} f_b(x, k).$$

Although the definition can be flexibly applied even when $f_b$ is not consistent, a consistent $f_b$ is usually desired in order to introduce a good ranking rule $r$. Furthermore, the ordinal information can help to model the relative confidence in the binary outputs. That is, when $k$ is farther from the rank of $x$, the answer $f_b(x, k)$ should be more confident. The confidence can be modeled by a real-valued function $f : \mathcal{X} \times \{1, 2, \ldots, K-1\} \to \mathbb{R}$, with $f_b(x, k) = \llbracket f(x, k) > 0 \rrbracket$ and the confidence encoded in the magnitude of $f$. Accordingly,

$$r(x) \stackrel{\mathrm{def}}{=} 1 + \sum_{k=1}^{K-1} \llbracket f(x, k) > 0 \rrbracket. \quad\quad (1)$$

The ordinal information would naturally require $f$ to be rank-monotonic, i.e., $f(x, 1) \ge f(x, 2) \ge \cdots \ge f(x, K-1)$ for every $x$. Note that a rank-monotonic function $f$ introduces consistent answers $f_b$. Again, although the construction (1) can be applied to cases where $f$ is not rank-monotonic, a rank-monotonic $f$ is usually desired.

When $f$ is rank-monotonic, we have $f(x, k) > 0$ for $k < r(x)$, and $f(x, k) \le 0$ for $k \ge r(x)$. Thus the cost of the ranking rule $r$ on an example $(x, y)$ is

$$C_{y,r(x)} = \sum_{k=r(x)}^{K-1} (C_{y,k} - C_{y,k+1}) + C_{y,K} = \sum_{k=1}^{K-1} (C_{y,k} - C_{y,k+1}) \llbracket f(x, k) \le 0 \rrbracket + C_{y,K}. \quad\quad (2)$$

Define the extended examples $(x^{(k)}, y^{(k)})$ with weights $w_{y,k}$ as

$$x^{(k)} = (x, k), \quad y^{(k)} = 2\llbracket k < y \rrbracket - 1, \quad w_{y,k} = |C_{y,k} - C_{y,k+1}|. \quad\quad (3)$$

Because row $y$ in $C$ is V-shaped, the binary variable $y^{(k)}$ equals the sign of $(C_{y,k} - C_{y,k+1})$ if the latter is not zero. Continuing from (2),

$$C_{y,r(x)} = \sum_{k=1}^{y-1} w_{y,k} \, y^{(k)} \left(1 - \llbracket f(x^{(k)}) > 0 \rrbracket\right) + \sum_{k=y}^{K-1} w_{y,k} \, y^{(k)} \llbracket f(x^{(k)}) \le 0 \rrbracket + C_{y,K}$$
$$= \sum_{k=1}^{y-1} w_{y,k} \llbracket y^{(k)} f(x^{(k)}) \le 0 \rrbracket + C_{y,y} + \sum_{k=y}^{K-1} w_{y,k} \llbracket y^{(k)} f(x^{(k)}) < 0 \rrbracket \le \sum_{k=1}^{K-1} w_{y,k} \llbracket y^{(k)} f(x^{(k)}) \le 0 \rrbracket. \quad (4)$$

Inequality (4) shows that the cost of $r$ on example $(x, y)$ is bounded by a weighted 0/1 loss of $f$ on the extended examples. It becomes an equality if the degenerate case $f(x^{(k)}) = 0$ does not happen. When $f$ is not rank-monotonic but row $y$ of $C$ is convex, the inequality (4) can alternatively be proved from

$$\sum_{k=r(x)}^{K-1} (C_{y,k} - C_{y,k+1}) \le \sum_{k=1}^{K-1} (C_{y,k} - C_{y,k+1}) \llbracket f(x^{(k)}) \le 0 \rrbracket.$$

The inequality above holds because $(C_{y,k} - C_{y,k+1})$ is decreasing due to the convexity, and there are exactly $(r(x) - 1)$ zeros and $(K - r(x))$ ones in the values of $\llbracket f(x^{(k)}) \le 0 \rrbracket$ in (1).

Altogether, our reduction framework consists of the following steps: we first use (3) to transform all training examples $(x_n, y_n)$ to extended examples $(x_n^{(k)}, y_n^{(k)})$ with weights $w_{y_n,k}$ (also denoted as $w_n^{(k)}$). All the extended examples are then jointly learned by a binary classifier $f$ with confidence outputs, aiming at a low weighted 0/1 loss. Finally, a ranking rule $r$ is constructed from $f$ using (1).
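The three steps of the framework translate directly into code. The sketch below is a minimal illustration, not the authors' implementation: it assumes the absolute cost (so every weight $w_{y,k}$ equals 1), encodes $k$ with an identity coding matrix as used later in Section 3, and lets scikit-learn's logistic regression stand in for an arbitrary binary learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extend(X, y, K):
    """Extended examples of (3) under the absolute cost C[y,k] = |y - k|,
    for which |C[y,k] - C[y,k+1]| = 1, i.e., all weights are 1."""
    Xe, ye, we = [], [], []
    for xn, yn in zip(X, y):
        for k in range(1, K):
            xk = np.append(xn, np.eye(K - 1)[k - 1])  # encode k after x
            Xe.append(xk)
            ye.append(1 if k < yn else -1)  # y^(k) = 2[k < y] - 1
            we.append(1.0)
    return np.array(Xe), np.array(ye), np.array(we)

def fit_ranker(X, y, K):
    Xe, ye, we = extend(X, y, K)
    clf = LogisticRegression(max_iter=1000).fit(Xe, ye, sample_weight=we)
    def rank(x):  # ranking rule (1): r(x) = 1 + sum_k [f(x, k) > 0]
        xs = np.array([np.append(x, np.eye(K - 1)[k - 1])
                       for k in range(1, K)])
        return 1 + int((clf.decision_function(xs) > 0).sum())
    return rank
```

Any binary learner that accepts example weights and produces confidence scores can be dropped in for the logistic regression here.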
The cost bound in (4) leads to the following theorem.

Theorem 1 (reduction) An ordinal regression problem with a V-shaped cost matrix $C$ can be reduced to a binary classification problem with the extended examples in (3) and the ranking rule $r$ in (1). If $f$ is rank-monotonic or every row of $C$ is convex, for any example $(x, y)$ and its extended examples $(x^{(k)}, y^{(k)})$, the weighted sum of the 0/1 loss of $f(x^{(k)})$ bounds the cost of $r(x)$.

2.2 Thresholded model

From Theorem 1 and the illustrations above, a rank-monotonic $f$ is preferred for our framework. A popular approach to obtain such a function $f$ is to use a thresholded model [1, 4, 5, 7]: $f(x, k) = g(x) - \theta_k$. As long as the threshold vector $\theta$ is ordered, i.e., $\theta_1 \le \theta_2 \le \cdots \le \theta_{K-1}$, the function $f$ is rank-monotonic. The question is then, "when can a binary classification algorithm return ordered thresholds?" A mild but sufficient condition is shown as follows.

Theorem 2 (ordered thresholds) If every row of the cost matrix is convex, and the binary classification algorithm minimizes the loss

$$\Omega(g) + \sum_{n=1}^{N} \sum_{k=1}^{K-1} w_n^{(k)} \cdot \ell\!\left(y_n^{(k)} (g(x_n) - \theta_k)\right), \quad\quad (5)$$

where $\ell(\rho)$ is non-increasing in $\rho$, there exists an optimal solution $(g^*, \theta^*)$ such that $\theta^*$ is ordered.

Proof. For an optimal solution $(g, \theta)$, assume that $\theta_k > \theta_{k+1}$ for some $k$. We shall prove that switching $\theta_k$ and $\theta_{k+1}$ would not increase the objective value of (5). First, consider an example with $y_n = k + 1$. Since $y_n^{(k)} = 1$ and $y_n^{(k+1)} = -1$, switching the thresholds changes the objective value by

$$w_n^{(k)} \left[\ell(g(x_n) - \theta_{k+1}) - \ell(g(x_n) - \theta_k)\right] + w_n^{(k+1)} \left[\ell(\theta_k - g(x_n)) - \ell(\theta_{k+1} - g(x_n))\right]. \quad\quad (6)$$

Because $\ell(\rho)$ is non-increasing, the change is non-positive.

For an example with $y_n < k + 1$, we have $y_n^{(k)} = y_n^{(k+1)} = -1$. The change in the objective is

$$(w_n^{(k)} - w_n^{(k+1)}) \left[\ell(\theta_{k+1} - g(x_n)) - \ell(\theta_k - g(x_n))\right].$$

Note that row $y_n$ of the cost matrix being convex leads to $w_n^{(k)} \le w_n^{(k+1)}$ if $y_n < k + 1$. Since $\ell(\rho)$ is non-increasing, the change above is also non-positive. The case for examples with $y_n > k + 1$ is similar, and the change there is also non-positive. Thus, by switching adjacent pairs of strictly decreasing thresholds, we can actually obtain a solution $(g^*, \theta^*)$ with a smaller or equal objective value in (5), and $g^* = g$. The optimality of $(g, \theta)$ shows that $(g^*, \theta^*)$ is also optimal. □

Note that if $\ell(\rho)$ is strictly decreasing for $\rho < 0$, and there are training examples for every rank, the change (6) is strictly negative. Thus, the optimal $\theta^*$ for any $g^*$ is always ordered.

3 Algorithms based on the framework

So far the reduction works only by assuming that $x^{(k)} = (x, k)$ is a pair understandable by $f$. Actually, any lossless encoding from $(x, k)$ to a vector can be used to encode the pair. With proper choices of the cost matrix, the encoding scheme of $(x, k)$, and the binary learning algorithm, many existing ordinal regression algorithms can be unified in our framework. In this section, we will briefly discuss some of them. It happens that a simple encoding scheme for $(x, k)$ via a coding matrix $E$ of $(K-1)$ rows works for all these algorithms. To form $x^{(k)}$, the vector $e_k$, which denotes the $k$-th row of $E$, is appended after $x$. We will mostly work with $E = \gamma I_{K-1}$, where $\gamma$ is a positive scalar and $I_{K-1}$ is the $(K-1) \times (K-1)$ identity matrix.

3.1 Perceptron-based algorithms

The perceptron ranking (PRank) algorithm proposed by Crammer and Singer [1] is an online ordinal regression algorithm that employs the thresholded model with $f(x, k) = \langle u, x \rangle - \theta_k$. Whenever a training example is not predicted correctly, the current $u$ and $\theta$
are updated in a way similar to the perceptron learning rule [8]. The algorithm was proved to keep an ordered $\theta$, and a mistake bound was also proposed [1]. With the simple encoding scheme $E = I_{K-1}$, we can see that $f(x, k) = \langle (u, -\theta), x^{(k)} \rangle$. Thus, when the absolute cost matrix is taken and a modified perceptron learning rule² is used as the underlying binary classification algorithm, the PRank algorithm is a specific instance of our framework. The orderliness of the thresholds is guaranteed by Theorem 2, and the mistake bound is a direct application of the well-known perceptron mistake bound (see for example Freund and Schapire [8]). Our framework not only simplifies the derivation of the mistake bound, but also allows the use of other perceptron algorithms, such as a batch-mode algorithm rather than an online one.

² To precisely replicate the PRank algorithm, the $(K-1)$ extended examples sprouted from a same example should be considered altogether in updating the perceptron weight vector.

3.2 SVM-based algorithms

SVM [9] can be thought of as a generalized perceptron with a kernel that computes the inner product on transformed input vectors $\phi(x)$. For the extended examples $(x, k)$, we can suitably define the extended kernel as the original kernel plus the inner product between the extensions,

$$\mathcal{K}((x, k), (x', k')) = \langle \phi(x), \phi(x') \rangle + \langle e_k, e_{k'} \rangle.$$

Then, several SVM-based approaches for ordinal regression are special instances of our framework. For example, the approach of Rajaram et al. [3] is equivalent to using the classification cost matrix, the coding matrix $E$ defined with $e_{k,i} = \gamma \cdot \llbracket k \ge i \rrbracket$ for some $\gamma > 0$, and the hard-margin SVM.

When $E = \gamma I_{K-1}$ and the traditional soft-margin SVM are used in our framework, the binary classifier $f(x, k)$ has the form $\langle u, \phi(x) \rangle - \theta_k - b$, and can be obtained by solving

$$\min_{u, \theta, b} \ \|u\|^2 + \|\theta\|^2 / \gamma^2 + \kappa \sum_{n=1}^{N} \sum_{k=1}^{K-1} w_n^{(k)} \max\left\{0, 1 - y_n^{(k)} (\langle u, \phi(x_n) \rangle - \theta_k - b)\right\}. \quad\quad (7)$$

The explicit (SVOR-EXP) and implicit (SVOR-IMC) approaches of Chu and Keerthi [4] can be regarded as instances of our framework with a modified soft-margin SVM formulation (since they excluded the term $\|\theta\|^2 / \gamma^2$ and added some constraints on $\theta$). Thus, many of their results can be alternatively explained with our reduction framework. For example, their proof for ordered $\theta$ of SVOR-IMC is implied by Theorem 2. In addition, they found that SVOR-EXP performed better in terms of the classification cost, and SVOR-IMC preceded in terms of the absolute cost. This finding can also be explained by reduction: SVOR-EXP is an instance of our framework using the classification cost, and SVOR-IMC comes from using the absolute cost.

Note that Chu and Keerthi paid much effort in designing and implementing suitable optimizers for their modified formulation. If the unmodified soft-margin SVM (7) is directly used in our framework with the absolute cost, we obtain a new support vector ordinal regression formulation.³ From Theorem 2, the thresholds $\theta$ would be ordered. The dual of (7) can be easily solved with state-of-the-art SVM optimizers, and the formulations of Chu and Keerthi can be approximated by setting $\gamma$ to a large value. As we shall see in Section 5, even a simple setting of $\gamma = 1$ performs similarly to the approaches of Chu and Keerthi in practice.

³ The formulation was only briefly mentioned in a footnote, but not studied, by Chu and Keerthi [4].
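Realizing the extended kernel on top of any base kernel is a one-liner; in the sketch below (an illustration, with the RBF base kernel as an arbitrary choice), $E = \gamma I_{K-1}$ gives $\langle e_k, e_{k'} \rangle = \gamma^2 \llbracket k = k' \rrbracket$:

```python
import numpy as np

def rbf(x, xp, sigma=1.0):
    """An example base kernel <phi(x), phi(x')>."""
    x, xp = np.asarray(x), np.asarray(xp)
    return float(np.exp(-sigma * np.sum((x - xp) ** 2)))

def extended_kernel(x, k, xp, kp, gamma=1.0, base=rbf):
    """K((x,k), (x',k')) = <phi(x), phi(x')> + <e_k, e_k'>, E = gamma*I."""
    return base(x, xp) + (gamma ** 2) * float(k == kp)
```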
4 Generalization bounds

With the extended examples, new generalization bounds can be derived for ordinal regression problems with any cost matrix. A simple result that comes immediately from (4) is:

Theorem 3 (reduction of generalization error) Let c_y = C_{y,1} + C_{y,K} and c = max_y c_y. If f is rank-monotonic or every row of C is convex, there exists a distribution P̄ on (X, Y), where X contains the encoding of (x, k) and Y is a binary label, such that

    E_{(x,y)∼P} C_{y,r(x)} ≤ c · E_{(X,Y)∼P̄} ⟦Y f(X) ≤ 0⟧.

Proof. We prove by constructing P̄. Given the conditions, following (4), we have

    C_{y,r(x)} ≤ Σ_{k=1}^{K−1} w_{y,k} ⟦y^(k) f(x^(k)) ≤ 0⟧ = c_y · E_{k∼P_k} ⟦y^(k) f(x^(k)) ≤ 0⟧,

where P_k(k | y) = w_{y,k}/c_y is a probability distribution because c_y = Σ_{k=1}^{K−1} w_{y,k}. Equivalently, we can define a distribution P̄(x^(k), y^(k)) that generates (x^(k), y^(k)) by drawing the tuple (x, y, k) from P(x, y) and P_k(k | y). Then, the generalization error of r is

    E_{(x,y)∼P} C_{y,r(x)} ≤ E_{(x,y)∼P} [ c_y · E_{k∼P_k} ⟦y^(k) f(x^(k)) ≤ 0⟧ ] ≤ c · E_{(x^(k),y^(k))∼P̄} ⟦y^(k) f(x^(k)) ≤ 0⟧.    (8)  ∎

Theorem 3 shows that, if the binary classifier f generalizes well when examples are sampled from P̄, the constructed ranking rule would also generalize well. The terms y^(k) f(x^(k)), which are exactly the margins of the associated binary classifier f_b(x, k), would be analogously called the margins for ordinal regression, and are expected to be positive and large for correct and confident predictions.

Herbrich et al. [5] derived a large-margin bound for an SVM-based thresholded model using pairwise comparisons between examples. However, the bound is complicated because O(N²) pairs are taken into consideration, and the bound is restricted because it is only applicable to hard-margin cases, i.e., for all n, the margins y_n^(k) f(x_n^(k)) ≥ Δ > 0. Another large-margin bound was derived by Shashua and Levin [2]. However, the bound is not data-dependent, and hence does not fully explain the generalization performance of large-margin ranking rules in reality (for more discussions on data-dependent bounds, see the work of, for example, Bartlett and Shawe-Taylor [10]). Next, we show how a novel data-dependent bound for SVM-based ordinal regression approaches can be derived from our reduction framework. Our bound includes only O(KN) extended examples, and applies to both hard-margin and soft-margin cases, i.e., the margins y^(k) f(x^(k)) can be negative. Similar techniques can be used to derive generalization bounds when AdaBoost is the underlying classifier (see the work of Lin and Li [7] for one such bound).

Theorem 4 (data-dependent bound for support vector ordinal regression) Assume that

    f ∈ { f : (x, k) ↦ ⟨u, φ(x)⟩ − θ_k,  ‖u‖² + ‖θ‖² ≤ 1,  ‖φ(x)‖² + 1 ≤ R² }.

If θ is ordered or every row of C is convex, for any margin criterion Δ, with probability at least 1 − δ, every rank rule r based on f has generalization error no more than

    (β/N) · Σ_{n=1}^{N} Σ_{k=1}^{K−1} w_n^(k) ⟦y_n^(k) f(x_n^(k)) ≤ Δ⟧ + O( (R/Δ)·√((log N)/N), √((1/N)·log(1/δ)) ),  where β = max_y c_y / min_y c_y.

Proof. Consider the extended training set S̄ = {(x_n^(k), y_n^(k))}, which contains N(K − 1) elements. Each element is a possible outcome from the distribution P̄ constructed in Theorem 3. Note, however, that these elements are not all independent. Thus, we cannot directly use the whole extended set as i.i.d. outcomes from P̄. Nevertheless, some subsets of S̄ do contain i.i.d. outcomes from P̄. One way to extract such a subset is to choose an independent k_n from P_k(k | y_n) for each (x_n, y_n). The subset would be named T = {(x_n^(k_n), y_n^(k_n))}_{n=1}^{N}.
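The subset T is easy to realize programmatically. The following minimal sketch (our own names and shapes, not the paper's) draws one level per example according to P_k(k | y_n) = w_{y_n,k}/c_{y_n}:

    import numpy as np

    def sample_iid_subset(X, y, w, seed=0):
        """Draw k_n ~ P_k(k | y_n) per example, yielding the i.i.d. subset T
        from the proof of Theorem 4. w[y - 1, k - 1] holds w_{y,k} for
        k = 1..K-1, so each row sums to c_y."""
        rng = np.random.default_rng(seed)
        T = []
        for x_n, y_n in zip(X, y):
            row = w[y_n - 1]
            probs = row / row.sum()      # P_k(k | y_n); the row sum is c_{y_n}
            k_n = int(rng.choice(len(probs), p=probs)) + 1
            T.append((x_n, k_n))
        return T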
Bartlett and Shawe-Taylor [10] showed that with probability at least 1 − δ/2 over the choice of N i.i.d. outcomes from P̄, which is the case of T,

    E_{(x^(k),y^(k))∼P̄} ⟦y^(k) f(x^(k)) ≤ 0⟧ ≤ (1/N) · Σ_{n=1}^{N} ⟦y_n^(k_n) f(x_n^(k_n)) ≤ Δ⟧ + O( (R/Δ)·√((log N)/N), √((1/N)·log(1/δ)) ).    (9)

Let b_n = ⟦y_n^(k_n) f(x_n^(k_n)) ≤ Δ⟧ be the Boolean random variable introduced by k_n ∼ P_k(k | y_n). The variable has mean c_{y_n}^{−1} · Σ_{k=1}^{K−1} w_n^(k) ⟦y_n^(k) f(x_n^(k)) ≤ Δ⟧. An extended Chernoff bound shows that when each b_n is chosen independently, with probability at least 1 − δ/2 over the choice of the b_n,

    (1/N) · Σ_{n=1}^{N} b_n ≤ (1/N) · Σ_{n=1}^{N} (1/c_{y_n}) · Σ_{k=1}^{K−1} w_n^(k) ⟦y_n^(k) f(x_n^(k)) ≤ Δ⟧ + O( √((1/N)·log(1/δ)) ).    (10)

The desired result can be obtained by combining (8), (9), and (10) with a union bound. ∎

5 Experiments

We performed experiments with eight benchmark data sets that were used by Chu and Keerthi [4]. The data sets were produced by quantizing some metric regression data sets with K = 10. We used the same training/test ratio and also averaged the results over 20 trials. Thus, with the absolute cost matrix, we can fairly compare our results with those of SVOR-IMC [4].

We tested our framework with E = γ·I_{K−1} and three different binary classification algorithms. The first binary algorithm is Quinlan's C4.5 [11]. The second is AdaBoost-stump, which uses AdaBoost to aggregate 500 decision stumps. The third one is SVM with the perceptron kernel [12], with a simple setting of γ = 1. Note that the Gaussian kernel was used by Chu and Keerthi [4]; we used the perceptron kernel instead to gain the advantage of faster parameter selection. The parameter κ of the soft-margin SVM was determined by a 5-fold cross-validation procedure with log₂ κ = −17, −15, ..., 3, and LIBSVM [13] was adopted as the solver. For a fair comparison, we also implemented SVOR-IMC with the perceptron kernel and the same parameter selection procedure in LIBSVM.
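A sketch of this parameter-selection protocol, using scikit-learn's LIBSVM-backed SVC as a stand-in solver (the RBF kernel here is purely a placeholder for the perceptron kernel of [12], and the data below is random stand-in data; whether your scikit-learn version slices sample_weight per fold should be verified):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X_ext = rng.normal(size=(200, 10))                 # stand-in extended examples
    y_ext = np.where(rng.normal(size=200) > 0, 1, -1)  # stand-in binary labels
    w_ext = rng.uniform(0.5, 1.5, size=200)            # stand-in reduction weights

    # log2(kappa) = -17, -15, ..., 3, the grid used in the paper.
    grid = {"C": [2.0 ** p for p in range(-17, 4, 2)]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X_ext, y_ext, sample_weight=w_ext)      # weights from the reduction
    best_kappa = search.best_params_["C"]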
Table 1: Test error with absolute cost

    data set     | reduction:    | reduction:    | reduction:    | SVOR-IMC:     | SVOR-IMC:
                 | C4.5          | boost-stump   | SVM-perceptr. | perceptron    | Gaussian [4]
    pyrimidines  | 1.565 ± 0.072 | 1.360 ± 0.054 | 1.304 ± 0.040 | 1.315 ± 0.039 | 1.294 ± 0.046
    machine      | 0.987 ± 0.024 | 0.875 ± 0.017 | 0.842 ± 0.022 | 0.814 ± 0.019 | 0.990 ± 0.026
    boston       | 0.950 ± 0.016 | 0.846 ± 0.015 | 0.732 ± 0.013 | 0.729 ± 0.013 | 0.747 ± 0.011
    abalone      | 1.560 ± 0.006 | 1.458 ± 0.005 | 1.383 ± 0.004 | 1.386 ± 0.005 | 1.361 ± 0.003
    bank         | 1.700 ± 0.005 | 1.481 ± 0.002 | 1.404 ± 0.002 | 1.404 ± 0.002 | 1.393 ± 0.002
    computer     | 0.701 ± 0.003 | 0.604 ± 0.002 | 0.565 ± 0.002 | 0.565 ± 0.002 | 0.596 ± 0.002
    california   | 0.974 ± 0.004 | 0.991 ± 0.003 | 0.940 ± 0.001 | 0.939 ± 0.001 | 1.008 ± 0.001
    census       | 1.263 ± 0.003 | 1.210 ± 0.001 | 1.143 ± 0.002 | 1.143 ± 0.002 | 1.205 ± 0.002

We list the mean and the standard error of all test results in Table 1, with entries within one standard error of the lowest one marked in bold. Within the three SVM-based approaches, the two with the perceptron kernel are better than SVOR-IMC with the Gaussian kernel in test performance. Our direct reduction to the standard SVM performs similarly to SVOR-IMC with the same perceptron kernel, but is much easier to implement. With our reduction framework, all three binary learning algorithms could be better than SVOR-IMC with the Gaussian kernel on some of the data sets, which demonstrates that they achieve decent out-of-sample performance. Among the three algorithms, SVM-perceptron is significantly better than the other two.

In addition, our direct reduction is significantly faster than SVOR-IMC in training, which is illustrated in Figure 1 using the four largest data sets.

[Figure 1: Average training time in hours (including automatic parameter selection) of the SVM-based approaches with the perceptron kernel, on the bank, computer, california, and census data sets. The results are averaged CPU times gathered on a 1.7 GHz dual Intel Xeon machine with 1 GB of memory.]

The main cause of the time difference is the speedup heuristics: while, to the best of our knowledge, not much has been done to improve the original SVOR-IMC algorithm, plenty of heuristics, such as shrinking and advanced working-set selection in LIBSVM, can be seamlessly adopted by our direct reduction. This difference demonstrates another advantage of our reduction framework: improvements to binary classification approaches can be immediately inherited by reduction-based ordinal regression algorithms.

6 Conclusion

We presented a reduction framework from ordinal regression to binary classification based on extended examples. The framework has the flexibility to work with any reasonable cost matrix and any binary classifier. We demonstrated the algorithmic advantages of the framework in designing new ordinal regression algorithms and explaining existing algorithms. We also showed that the framework can be used to derive new generalization bounds for ordinal regression. Furthermore, the usefulness of the framework was empirically validated by comparing three new algorithms constructed from our framework with the state-of-the-art SVOR-IMC algorithm.

Acknowledgments

We wish to thank Yaser S. Abu-Mostafa, Amrit Pratap, John Langford, and the anonymous reviewers for valuable discussions and comments. Ling Li was supported by the Caltech SISL Graduate Fellowship, and Hsuan-Tien Lin was supported by the Caltech EAS Division Fellowship.

References

[1] K. Crammer and Y. Singer. Pranking with ranking. In T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Information Processing Systems 14, vol. 1, pp. 641–647. MIT Press, 2002.
[2] A. Shashua and A. Levin. Ranking with large margin principle: Two approaches. In S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Information Processing Systems 15, pp. 961–968. MIT Press, 2003.
[3] S. Rajaram, A. Garg, X. S. Zhou, and T. S. Huang. Classification approach towards ranking and sorting problems. In N. Lavrač, D. Gamberger, H. Blockeel, and L. Todorovski, eds., Machine Learning: ECML 2003, vol. 2837 of Lecture Notes in Artificial Intelligence, pp. 301–312. Springer-Verlag, 2003.
[4] W. Chu and S. S. Keerthi. New approaches to support vector ordinal regression. In L. D. Raedt and S. Wrobel, eds., ICML 2005: Proceedings of the 22nd International Conference on Machine Learning, pp. 145–152. Omnipress, 2005.
[5] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, eds., Advances in Large Margin Classifiers, chapter 7, pp. 115–132. MIT Press, 2000.
[6] E. Frank and M. Hall. A simple approach to ordinal classification. In L. D. Raedt and P. Flach, eds., Machine Learning: ECML 2001, vol. 2167 of Lecture Notes in Artificial Intelligence, pp. 145–156. Springer-Verlag, 2001.
[7] H.-T. Lin and L. Li. Large-margin thresholded ensembles for ordinal regression: Theory and practice. In J. L. Balcázar, P. M. Long, and F.
Stephan, eds., Algorithmic Learning Theory: ALT 2006, vol. 4264 of Lecture Notes in Artificial Intelligence, pp. 319–333. Springer-Verlag, 2006.
[8] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[9] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 2nd edition, 1999.
[10] P. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, eds., Advances in Kernel Methods: Support Vector Learning, chapter 4, pp. 43–54. MIT Press, 1998.
[11] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
[12] H.-T. Lin and L. Li. Novel distance-based SVM kernels for infinite ensemble learning. In Proceedings of the 12th International Conference on Neural Information Processing, pp. 761–766, 2005.
[13] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Image Retrieval and Classification Using Local Distance Functions

Andrea Frome
Department of Computer Science
UC Berkeley
Berkeley, CA 94720
andrea.frome@gmail.com

Yoram Singer
Google, Inc.
Mountain View, CA 94043
singer@google.com

Jitendra Malik
Department of Computer Science
UC Berkeley
malik@cs.berkeley.edu

Abstract

In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al.

1 Introduction

Visual categorization is a difficult task in large part due to the large variation seen between images belonging to the same class. Within one semantic class, there can be large differences in shape, color, and texture, and objects can be scaled or translated within an image. For some rigid-body objects, appearance changes greatly with viewing angle, and for articulated objects, such as animals, the number of possible configurations can grow exponentially with the degrees of freedom. Furthermore, there is a large number of categories in the world between which humans are able to distinguish. One oft-cited, conservative estimate puts the total at about 30,000 categories [1], and this does not consider the identification problem (e.g. telling faces apart).

One of the more successful tools used in visual classification is a class of patch-based shape and texture features that are invariant or robust to changes in scale, translation, and affine deformations. These include the Gaussian-derivative jet descriptors of [2], SIFT descriptors [3], shape contexts [4], and geometric blur [5]. The basic outline of most discriminative approaches which use these types of features is as follows: (1) given a training image, select a subset of locations or "interest points"; (2) for each location, select a patch surrounding it, often elliptical or rectangular in shape; (3) compute a fixed-length feature vector from each patch, usually a summary of edge responses or image gradients. This gives a set of fixed-length feature vectors for each training image. (4) Define a function which, given the two sets from two images, returns a value for the distance (or similarity) between the images. Then, (5) use distances between pairs of images as input to a learning algorithm, for example an SVM or nearest neighbor classifier. When given a test image, patches and features are extracted, distances between the test image and training images are computed, and a classification is made.

Figure 1: These exemplars are all drawn from the cougar face category of the Caltech 101 dataset, but we can see a great deal of variation. The image on the left is a clear, color image of a cougar face. As with most cougar face exemplars, the locations and appearances of the eyes and ears are a strong signal for class membership, as well as the color pattern of the face. Now consider the grayscale center image, where the appearance of the eyes has changed, the ears are no longer visible, and hue is useless. For this image, the markings around the mouth and the texture of the fur become a better signal.
The image on the right shows the ears, eyes, and mouth, but due to articulation, the appearance of all three has changed again, perhaps representing a common visual subcategory. If we were to limit ourselves to learning one model of relative importance across these features for all images, or even for each category, it could reduce our ability to determine similarity to these exemplars.

In most approaches, machine learning only comes into play in step (5), after the distances or similarities between training images are computed. In this work, we learn the function in step (4) from the training data. This is similar in spirit to the recent body of metric learning work in the machine learning community [6][7][8][9][10]. While these methods have been successfully applied to recognizing digits, there are a couple of drawbacks in applying these methods to the general image classification problem. First, they would require representing each image as a fixed-length feature vector. We prefer to use sets of patch-based features, considering both the strong empirical evidence in their favor and the difficulties in capturing invariances in fixed-length feature vectors. Second, these metric-learning algorithms learn one deformation for the entire space of exemplars. To gain an intuition as to why this is a problem, consider Figure 1. The goal of this paper is to demonstrate that in the setting of visual categorization, it can be useful to determine the relative importance of visual features on a finer scale.

In this work, we attack the problem from the other extreme, choosing to learn a distance function for each exemplar, where each function gives a distance value between its training image, or focal image, and any other image. These functions can be learned from either multi-way class labels or relative similarity information in the training data. The distance functions are built on top of elementary distance measures between patch-based features, and our problem is formulated such that we are learning a weighting over the features in each of our training images. This approach has two nice properties: (1) the output of the learning is a quantitative measure of the relative importance of the parts of an image; and (2) the framework allows us to naturally combine and select features of different types. We learn the weights using a generalization of the constrained optimization formulation proposed by Schultz and Joachims [7] for relative comparison data.

Using these local distance functions, we address applications in image browsing, retrieval and classification. In order to perform retrieval and classification, we use an additional learning step that allows us to compare focal images to one another, and an inference procedure based on error-correcting output codes to make a class choice. We show classification results on the Caltech 101 object recognition benchmark, which for some time has been a de facto standard for multi-category classification. Our mean recognition rate on this benchmark is 60.3% using only fifteen exemplar images per category, which is an improvement over the best previously published recognition rate in [11].

2 Distance Functions and Learning Procedure

In this section we will describe the distance functions and the learning procedure in terms of abstract patch-based image features. Any patch-based features could be used with the framework we present, and we will wait to address our choice of features in Section 3.
If we have N training images, we will be solving N separate learning problems. The training image for which a given learning problem is being solved will be referred to as its focal image. Each problem is trained with a subset of the remaining training images, which we will refer to as the learning set for that problem. In the rest of this section we will discuss one such learning problem and focal image, but keep in mind that in the full framework there are N of these.

We define the distance function we are learning to be a combination of elementary patch-based distances, each of which is computed between a single patch-based feature in the focal image F and a set of features in a candidate image I, essentially giving us a patch-to-image distance. Any function between a patch feature and a set of features could be used to compute these elementary distances; we will discuss our choice in Section 3. If there are M patches in the focal image, we have M patch-to-image distances to compute between F and I, and we notate each distance in that set as d_j^F(I), where j ∈ [1, M], and refer to the vector of these as d^F(I). The image-to-image distance function D that we learn is a linear combination of these elementary distances, where w^F is a vector of weights with a weight corresponding to each patch feature:

    D(F, I) = Σ_{j=1}^{M} w_j^F · d_j^F(I) = w^F · d^F(I)    (1)

Our goal is to learn this weighting over the features in the focal image. We set up our algorithm to learn from "triplets" of images, each composed of (1) the focal image F, (2) an image labeled "less similar" to F, and (3) an image labeled "more similar" to F. This formulation has been used in other work for its flexibility [7]; it makes it possible to use a relative ranking over images as training input, but also works naturally with multi-class labels by considering exemplars of the same class as F to be "more similar" than those of another class.

To set up the learning algorithm, we consider one such triplet: (F, I^d, I^s), where I^d and I^s refer to the dissimilar and similar images, respectively. If we could use our learned distance function for F to rank these two images relative to one another, we ideally would want I^d to have a larger value than I^s, i.e., D(F, I^d) > D(F, I^s). Using the formula above, this is equivalent to w^F · d^F(I^d) > w^F · d^F(I^s). Let x_i = d^F(I^d) − d^F(I^s), the difference of the two elementary distance vectors for this triplet, now indexed by i. Now we can write the condition as w^F · x_i > 0.

For a given focal image, we will construct T of these triplets from our training data (we will discuss how we choose triplets in Section 5.1). Since we will not be able to find one set of weights that meets this condition for all triplets, we use a maximal-margin formulation where we allow slack for triplets that do not meet the condition and try to minimize the total amount of slack allowed. We also increase the desired margin from zero to one, and constrain w^F to have non-negative elements, which we denote using ⪰ (this is based on the intuition that negative weights would mean that larger differences between features could make two images more similar, which is arguably an undesirable effect):

    argmin_{w^F, ξ}  (1/2)·‖w^F‖² + C · Σ_{i=1}^{T} ξ_i    (2)
    s.t.  ∀ i ∈ [1, T]:  w^F · x_i ≥ 1 − ξ_i,  ξ_i ≥ 0,  w^F ⪰ 0

We chose the L2 regularization in order to be more robust to outliers and noise. Sparsity is also desirable, and an L1 norm could give more sparse solutions; we do not yet have a direct comparison between the two within this framework.
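As a concrete illustration of (1) and (2), here is a minimal numpy sketch (variable names are ours); it evaluates the learned distance and the primal objective, with the slack for each triplet taken at its optimal value ξ_i = max(0, 1 − w^F · x_i).

    import numpy as np

    def image_distance(w_F, d_F):
        """Equation (1): D(F, I) = w^F . d^F(I), the weighted sum of the M
        elementary patch-to-image distances from the focal image F to I."""
        return float(np.dot(w_F, d_F))

    def primal_objective(w_F, X, C):
        """Value of (2) for a non-negative weight vector w_F. X is a (T, M)
        array whose i-th row is the difference x_i = d^F(I^d) - d^F(I^s)."""
        slacks = np.maximum(0.0, 1.0 - X @ w_F)   # optimal slack per triplet
        return 0.5 * float(w_F @ w_F) + C * float(slacks.sum())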
This optimization is a generalization of that proposed by Schultz and Joachims in [7] for distance metric learning. However, our setting is different from theirs in two ways. First, their triplets do not share the same focal image, as they apply their method to learning one metric for all classes and instances. Second, they arrive at their formulation by assuming that (1) each exemplar is represented by a single fixed-length vector, and (2) a squared-L2 distance between these vectors is used. This would appear to preclude our use of patch features and more interesting distance measures, but as we show, this is an unnecessary restriction for the optimization. Thus, a contribution of this paper is to show that the algorithm in [7] is more widely applicable than originally presented.

We used a custom solver to find w^F, which runs on the order of one to two seconds for about 2,000 triplets. While it closely resembles the form for support vector machines, it differs in two important ways: (1) we have a primal positivity constraint on w^F, and (2) we do not have a bias term, because we are using the relative relationship between our data vectors. The missing bias term means that, in the dual optimization problem, we do not have a constraint that ties together the dual variables for the margin constraints. Instead, they can be updated separately using an approach similar to the row action method described in [12], followed by a projection of the new w^F to make it positive. Denoting the dual variables for the margin constraints by α_i, we first initialize all α_i to zero, then cycle through the triplets, performing these two steps for the i-th triplet:

    w^F ← max( Σ_{i=1}^{T} α_i x_i, 0 ),    α_i ← min( max( (1 − w^F · x_i)/‖x_i‖² + α_i, 0 ), C ),

where the first max is element-wise, and the min and max in the second step force 0 ≤ α_i ≤ C. We stop iterating when all KKT conditions are met, within some precision.
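A minimal numpy sketch of this solver (our own; a fixed pass budget stands in for the KKT-based stopping test, and we assume w is refreshed from the current duals before each α_i update):

    import numpy as np

    def learn_weights(X, C, n_passes=100):
        """Dual coordinate solver for (2). X: (T, M) array of triplet
        difference vectors x_i; returns the weight vector w^F (length M)."""
        T, M = X.shape
        alpha = np.zeros(T)
        sq_norms = (X ** 2).sum(axis=1) + 1e-12   # guard against zero rows
        for _ in range(n_passes):
            for i in range(T):
                w = np.maximum(X.T @ alpha, 0.0)  # element-wise projection, w >= 0
                step = (1.0 - w @ X[i]) / sq_norms[i]
                alpha[i] = np.clip(alpha[i] + step, 0.0, C)
        return np.maximum(X.T @ alpha, 0.0)

A production version would maintain w incrementally rather than recomputing the full sum inside the inner loop.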
3 Visual Features and Elementary Distances

The framework described above allows us to naturally combine different kinds of patch-based features, and we will make use of shape features at two different scales and a rudimentary color feature. Many papers have shown the benefits of using filter-based patch features such as SIFT [3] and geometric blur [13] for shape- or texture-based object matching and recognition [14][15][13]. We chose to use geometric blur descriptors, which were used by Zhang et al. in [11] in combination with their KNN-SVM method to give the best previously published results on the Caltech 101 image recognition benchmark. Like SIFT, geometric blur features summarize oriented edges within a patch of the image, but are designed to be more robust to affine transformation and differences in the periphery of the patch. In previous work using geometric blur descriptors on the Caltech 101 dataset [13][11], the patches used are centered at 400 or fewer edge points sampled from the image, and features are computed on patches of a fixed scale and orientation. We follow this methodology as well, though one could use an interest point operator to determine location, scale, and orientation from low-level information, as is typically done with SIFT features. We use two different scales of geometric blur features, the same used in separate experiments in [11]. The larger has a patch radius of 70 pixels, and the smaller a patch radius of 42 pixels. Both use four oriented channels and 51 sample points, for a total of 204 dimensions. As is done in [13], we default to normalizing the feature vector so that the L2 norm is equal to one.

Our color features are histograms of eight-pixel-radius patches, also centered at edge pixels in the image. Any "pixels" in a patch off the edge of the image are counted in an "undefined" bin, and we convert the HSV coordinates of the remaining points to a Cartesian space where the z direction is value and (x, y) is the Cartesian projection of the hue/saturation dimensions. We divide the (x, y) space into an 11 × 11 grid, and make three divisions in the z direction. These were the only parameters that we tested with the color features, choosing not to tune the features to the Caltech 101 dataset. We normalize the bins by the total number of pixels in the patch.

Using these features, we can compute elementary patch-to-image distances. If we are computing the distance between the j-th patch in the focal image and a candidate image I, we find the closest feature of the same type in I using the L2 distance, and use that L2 distance as the j-th elementary patch-to-image distance. We only compare features of the same type, so large geometric blur features are not compared to small geometric blur features. In our experiments we have not made use of geometric relationships between features, but this could be incorporated in a manner similar to that in [11] or [16].
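The color feature is simple enough to sketch directly. The numpy version below is our own reading of the description above, assuming hue is given in radians and saturation/value lie in [0, 1]; it bins the hue/saturation projection on an 11 × 11 grid with three value divisions and one "undefined" bin:

    import numpy as np

    def color_histogram(hsv, valid):
        """hsv: (P, 3) array of (hue, sat, val) for the P pixels of a patch;
        valid: boolean mask, False for pixels that fall off the image.
        Returns an (11*11*3 + 1)-dim histogram normalized by patch size."""
        hist = np.zeros(11 * 11 * 3 + 1)
        hist[-1] = np.count_nonzero(~valid)        # the 'undefined' bin
        h, s, v = hsv[valid].T
        x, y = s * np.cos(h), s * np.sin(h)        # hue/sat -> (x, y) in [-1, 1]^2
        xi = np.clip(((x + 1.0) / 2.0 * 11).astype(int), 0, 10)
        yi = np.clip(((y + 1.0) / 2.0 * 11).astype(int), 0, 10)
        zi = np.clip((v * 3).astype(int), 0, 2)    # three divisions in value
        np.add.at(hist, zi * 121 + yi * 11 + xi, 1.0)
        return hist / len(valid)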
For each class, we sum the probabilities for all training images from that class, and the query is assigned to the class with the largest total. Formally, if pj is the P probability for the jth training image Ij , and C is the set of classes, the chosen class is arg maxC j:Ij ?C pj . This can be shown to be a relaxation of the Hamming decoding scheme for the error-correcting output codes in [17] in which the number of focal images is the same for each class. 5 Caltech101 Experiments We test our approach on the Caltech101 dataset [18]5 . This dataset has artifacts that make a few classes easy, but many are quite difficult, and due to the important challenges it poses for scalable object recognition, it has up to this point been one of the de facto standard benchmarks for multi-class image categorization/object recognition. The dataset contains images from 101 different categories, with the number of images per category ranging from 31 to 800, with a median of about 50 images. We ignore the background class and work in a forced-choice scenario with the 101 object categories, where a query image must be assigned to one of the 101 categories. We use the same testing methodology and mean recognition reporting described in Grauman et al. [15]: we use varying numbers of training set sizes (given in number of examples per class), and in each training scenario, test with all other images in the Caltech101 dataset, except the BACKGROUND Google class. Recognition rate per class is computed, then averaged across classes. This normalizes the overall recognition rate so that the performance for categories with a larger number of test images does not skew the mean recognition rate. 5.1 Training data The images are first resized to speed feature computation. The aspect ratio is maintained, but all images are scaled down to be around 200 ? 300. We computed features for each of these images as described in Section 3. We used up to 400 of each type of feature (two sizes of geometric blur and one color), for a maximum total of 1,200 features per image. For images with few edge points, we computed fewer features so that the features were not overly redundant. After computing elementary distances, we rescale the distances for each focal image and feature to have a standard deviation of 0.1. For each focal image we choose a set of triplets for training, and since we are learning similarity for the purposes image classification, we use the category labels on the images in the training set: images that have the same label as the focal image are considered ?more similar? than all images that are out of class. Note that the training algorithm allows for a more nuanced training set where an image could be more similar with respect to one image and less similar with respect to another, but 3 You can also see retrieval rankings with probabilities at the web page. We experimented with abandoning the max-margin optimization and just training a logistic for each focal image; the results were far worse, perhaps because the logistic was fitting noise in the tails. 5 Information about the data set, images, and published results can be found at http://www.vision. 
5 Caltech101 Experiments

We test our approach on the Caltech101 dataset [18] (information about the data set, images, and published results can be found at http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html). This dataset has artifacts that make a few classes easy, but many are quite difficult, and due to the important challenges it poses for scalable object recognition, it has up to this point been one of the de facto standard benchmarks for multi-class image categorization/object recognition. The dataset contains images from 101 different categories, with the number of images per category ranging from 31 to 800, with a median of about 50 images. We ignore the background class and work in a forced-choice scenario with the 101 object categories, where a query image must be assigned to one of the 101 categories. We use the same testing methodology and mean recognition reporting described in Grauman et al. [15]: we use varying training set sizes (given in number of examples per class), and in each training scenario, test with all other images in the Caltech101 dataset, except the BACKGROUND_Google class. The recognition rate per class is computed, then averaged across classes. This normalizes the overall recognition rate so that the performance for categories with a larger number of test images does not skew the mean recognition rate.

5.1 Training data

The images are first resized to speed feature computation. The aspect ratio is maintained, but all images are scaled down to be around 200 × 300. We computed features for each of these images as described in Section 3. We used up to 400 of each type of feature (two sizes of geometric blur and one color), for a maximum total of 1,200 features per image. For images with few edge points, we computed fewer features so that the features were not overly redundant. After computing elementary distances, we rescale the distances for each focal image and feature to have a standard deviation of 0.1. For each focal image we choose a set of triplets for training, and since we are learning similarity for the purposes of image classification, we use the category labels on the images in the training set: images that have the same label as the focal image are considered "more similar" than all images that are out of class. Note that the training algorithm allows for a more nuanced training set, where an image could be more similar with respect to one image and less similar with respect to another, but we are not fully exploiting that in these experiments.

[Figure 2: The first 15 images from a ranking induced for a water lilly focal image, trained with 15 images/category; the closest matches are mostly water lilly, lotus, and sunflower exemplars (plus one stegosaurus), with distances from 12.37 to 13.28. Each image is shown with its raw distance, and only those marked with (pos) or (neg) were in the learning set for this focal image. Full rankings for all experimental runs can be browsed at http://www.cs.berkeley.edu/~afrome/caltech101/nips2006.]

Instead of using the full pairwise combination of all in- and out-of-class images, we select triplets using elementary feature distances. Thus, we refer to all the images available for training as the training set, and the set of images used to train with respect to a given focal image as its learning set. We want in our learning set those images that are similar to the focal image according to at least one elementary distance measure. For each of the M elementary patch distance measures, we find the top K closest images. If that group contains both in- and out-of-class images, then we make triplets out of the full bipartite match. If all K images are in-class, then we find the closest out-of-class image according to that distance measure and make K triplets with the one out-of-class image and the K similar images. We do the converse if all K images are out of class. In our experiments, we used K = 5, and we have not yet performed experiments to determine the effect of the choice of K. The final set of triplets for F is the union of the triplets chosen by the M measures. On average, we used 2,210 triplets per focal image, and the mean training time was 1–2 seconds (not including the time to compute the features, elementary distances, or choose the triplets). While we have to solve N of these learning problems, each can be run completely independently, so that for a training set of 1,515 images, we can complete this optimization on a cluster of 50 1GHz computers in about one minute.
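A sketch of this triplet-selection heuristic (our own variable names; it returns (similar, dissimilar) index pairs, the focal image being implicit):

    import numpy as np

    def choose_triplets(elem_dists, in_class, K=5):
        """elem_dists: (M, N) array; entry (j, i) is the j-th elementary
        distance from the focal image to training image i. in_class: boolean
        numpy array of length N. Returns a set of (similar, dissimilar)
        index pairs."""
        triplets = set()
        for dists in elem_dists:
            top = np.argsort(dists)[:K]                  # K closest images
            pos = [i for i in top if in_class[i]]
            neg = [i for i in top if not in_class[i]]
            if pos and neg:                              # full bipartite match
                triplets.update((s, d) for s in pos for d in neg)
            elif pos:                                    # all K in-class
                d = min(np.flatnonzero(~in_class), key=lambda i: dists[i])
                triplets.update((s, d) for s in pos)
            elif neg:                                    # all K out-of-class
                s = min(np.flatnonzero(in_class), key=lambda i: dists[i])
                triplets.update((s, d) for d in neg)
        return triplets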
5.2 Results

We ran a series of experiments using all features, each with a different number of training images per category (either 5, 15, or 30), where we generated 10 independent random splits of the 8,677 images from the 101 categories into training and test sets. We report the average of the mean recognition rates across these splits, as well as the standard deviations. We determined the C parameter of the training algorithm using leave-one-out cross-validation on a small random subset of 15 images per category, and our final results are reported using the best value of C found (0.1). In general, however, the method was robust to the choice of C, with only changes of about 1% in recognition with an order of magnitude change in C near the maximum. Figure 3 graphs these results with most of the published results for the Caltech 101 dataset.

In the 15 training images per category setting, we also performed recognition experiments on each of our features separately, the combination of the two shape features, and the combination of the two shape features with the color features, for a total of five different feature combinations. We performed another round of cross-validation to determine the C value for each feature combination (for big geometric blur, small geometric blur, both together, and color alone, the values were C = 5, 1, 0.5, and 50, respectively). Recognition in the color-only experiment was the poorest, at 6% (0.8% standard deviation); only seven categories did better than 33% recognition using only color, namely Faces easy, Leopards, car side, garfield, pizza, snoopy, and sunflower (note that all car side exemplars are in black and white). The next best performance was from the bigger geometric blur features with 49.6% (±1.9%), followed by the smaller geometric blur features with 52.1% (±0.8%). Combining the two shape features together, we achieved 58.8% (±0.8%), and with color and shape, reached 60.3% (±0.7%), which is better than the best previously published performance for 15 training images on the Caltech 101 dataset [11].

[Figure 3: Number of training exemplars versus average recognition rate across classes (based on the graph in [11]). Also shows results from [11], [14], [16], [15], [13], [19], [20], [21], and [18].]

[Figure 4: Average confusion matrix for 15 training examples per class, across 10 independent runs. Shown in color using Matlab's jet scale.]

Combining shape and color performed better than using the two shape features alone for 52 of the categories, while it degraded performance for 46 of the categories, and did not change performance in the remaining 3. In Figure 4 we show the confusion matrix for combined shape and color using 15 training images per category. The ten worst categories, starting with the worst, were cougar body, beaver, crocodile, ibis, bass, cannon, crayfish, sea horse, crab, and crocodile head, nine of which are animal categories.

Almost all the processing at test time is the computation of the elementary distances between the focal images and the test image. In practice the weight vectors that we learn for our focal images are fairly sparse, with a median of 69% of the elements set to zero after learning, which greatly reduces the number of feature comparisons performed at test time. We measured that our unoptimized code takes about 300 seconds per test image (to further speed up comparisons, in place of an exact nearest neighbor computation, we could use approximate nearest neighbor algorithms such as locality-sensitive hashing or spill trees). After comparisons are computed, we only need to compute linear combinations and compare scores across focal images, which amounts to negligible processing time. This is a benefit of our method compared to the KNN-SVM method of Zhang et al. [11], which requires the training of a multiclass SVM for every test image, and must perform all feature comparisons.

Acknowledgements

We would like to thank Hao Zhang and Alex Berg for use of their precomputed geometric blur features, and Hao, Alex, Mike Maire, Adam Kirk, Mark Paskin, and Chuck Rosenberg for many helpful discussions.

References

[1] I. Biederman, "Recognition-by-components: A theory of human image understanding," Psychological Review, vol. 94, no. 2, pp. 115–147, 1987.
[2] C. Schmid and R. Mohr, "Combining greyvalue invariants with local constraints for object recognition," in CVPR, 1996.
[3] D. Lowe, "Object recognition from local scale-invariant features," in ICCV, pp. 1000–1015, Sep 1999.
[4] S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," PAMI, vol. 24, pp. 509–522, April 2002.
[5] A. Berg and J. Malik, "Geometric blur for template matching," in CVPR, pp. 607–614, 2001.
[6] E. Xing, A. Ng, and M. Jordan, "Distance metric learning with application to clustering with side-information," in NIPS, 2002.
[7] Schultz and Joachims, "Learning a distance metric from relative comparisons,"
in NIPS, 2003.
[8] S. Shalev-Shwartz, Y. Singer, and A. Ng, "Online and batch learning of pseudo-metrics," in ICML, 2004.
[9] K. Q. Weinberger, J. Blitzer, and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," in NIPS, 2005.
[10] A. Globerson and S. Roweis, "Metric learning by collapsing classes," in NIPS, 2005.
[11] H. Zhang, A. Berg, M. Maire, and J. Malik, "SVM-KNN: Discriminative nearest neighbor classification for visual category recognition," in CVPR, 2006.
[12] Y. Censor and S. A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, 1998.
[13] A. Berg, T. Berg, and J. Malik, "Shape matching and object recognition using low distortion correspondence," in CVPR, 2005.
[14] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR, 2006.
[15] K. Grauman and T. Darrell, "Pyramid match kernels: Discriminative classification with sets of image features (version 2)," Tech. Rep. MIT CSAIL TR 2006-020, MIT, March 2006.
[16] J. Mutch and D. G. Lowe, "Multiclass object recognition with sparse, localized features," in CVPR, 2006.
[17] E. L. Allwein, R. E. Schapire, and Y. Singer, "Reducing multiclass to binary: A unifying approach for margin classifiers," JMLR, vol. 1, pp. 113–141, 2000.
[18] L. Fei-Fei, R. Fergus, and P. Perona, "Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories," in Workshop on Generative-Model Based Vision, CVPR, 2004.
[19] G. Wang, Y. Zhang, and L. Fei-Fei, "Using dependent regions for object categorization in a generative framework," in CVPR, 2006.
[20] A. D. Holub, M. Welling, and P. Perona, "Combining generative models and Fisher kernels for object recognition," in ICCV, 2005.
[21] T. Serre, L. Wolf, and T. Poggio, "Object recognition with features inspired by visual cortex," in CVPR, 2005.
Similarity by Composition

Oren Boiman
Michal Irani
Dept. of Computer Science and Applied Math
The Weizmann Institute of Science
76100 Rehovot, Israel

Abstract

We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is "similar" to a signal S2 if it is "easy" to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. "Similarity by Composition" can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.). We show a few such examples.

1 Introduction

A good measure for similarity between signals is necessary in many machine learning problems. However, the notion of "similarity" between signals can be quite complex. For example, observing Fig. 1, one would probably agree that Image-B is more "similar" to Image-A than Image-C is. But why...? The configurations appearing in Image-B are different than the ones observed in Image-A. What is it that makes those two images more similar than Image-C? Commonly used similarity measures would not be able to detect this type of similarity. For example, standard global similarity measures (e.g., Mutual Information [12], Correlation, SSD, etc.) require prior alignment or prior knowledge of dense correspondences between signals, and are therefore not applicable here. Distance measures that are based on comparing empirical distributions of local features, such as "bags of features" (e.g., [11]), will not suffice either, since all three images contain similar types of local features (and therefore Image-C will also be determined similar to Image-A).

In this paper we present a new notion of similarity between signals, and demonstrate its applicability to several machine learning problems and to several signal types. Observing the right side of Fig. 1, it is evident that Image-B can be composed relatively easily from few large chunks of Image-A (see color-coded regions). Obviously, if we use small enough pieces, then any signal can be composed of any other (including Image-C from Image-A). We would like to employ this idea to indicate high similarity of Image-B to Image-A, and lower similarity of Image-C to Image-A. In other words, regions in one signal (the "query" signal) which can be composed using large contiguous chunks of data from the other signal (the "reference" signal) are considered to have high local similarity. On the other hand, regions in the query signal which can be composed only by using small fragmented pieces are considered locally dissimilar. This induces a similarity score at every point in the signal based on the size of its largest surrounding region which can be found in the other signal (allowing for some distortions). This approach provides the ability to generalize and infer about new configurations in the query signal that were never observed in the reference signal, while preserving structural information.
This approach provides the ability to generalize and infer about new configurations in the query signal that were never observed in the reference signal, while preserving structural information.

Figure 1: Inference by Composition - Basic concept. Panels: Image-A (the "reference" signal), Image-B (the "query" signal), and Image-C. Left: What makes "Image-B" look more similar to "Image-A" than "Image-C" does? (None of the ballet configurations in "Image-B" appear in "Image-A"!) Right: Image-B (the "query") can be composed using few large contiguous chunks from Image-A (the "reference"), whereas it is more difficult to compose Image-C this way. The large shared regions between B and A (indicated by colors) provide high evidence to their similarity.

For instance, even though the two ballet configurations observed in Image-B (the "query" signal) were never observed in Image-A (the "reference" signal), they can be inferred from Image-A via composition (see Fig. 1), whereas the configurations in Image-C are much harder to compose. Note that the shared regions between similar signals are typically irregularly shaped, and therefore cannot be restricted to a predefined, regularly shaped partitioning of the signal. The shapes of those regions are data dependent, and cannot be predefined. Our notion of signal composition is "geometric" and data-driven. In that sense it is very different from standard decomposition methods (e.g., PCA, ICA, wavelets, etc.) which seek linear decomposition of the signal, but not geometric decomposition. Other attempts to maintain the benefits of local similarity while maintaining global structural information have recently been proposed [8]. These have been shown to improve upon simple "bags of features", but are restricted to preselected partitioning of the image into rectangular sub-regions. In our previous work [5] we presented an approach for detecting irregularities in images/video as regions that cannot be composed from large pieces of data from other images/video. That approach was restricted to detecting local irregularities only. In this paper we extend it to a general principled theory of "Similarity by Composition", from which we derive local and global similarity and dissimilarity measures between signals. We further show that this framework extends to a wider range of machine learning problems and to a wider variety of signals (1D, 2D, 3D, ... signals). More formally, we present a statistical (generative) model for composing one signal from another. Using this model we derive information-theoretic measures for local and global similarities induced by shared regions. The local similarities of shared regions ("local evidence scores") are accumulated into a global similarity score ("global evidence score") of the entire query signal relative to the reference signal. We further prove upper and lower bounds on the global evidence score, which are computationally tractable. We present both a theoretical and an algorithmic framework to compute, accumulate and weight those gathered "pieces of evidence". Similarity-by-Composition is not restricted to pairs of signals. It can also be applied to compute similarity of a signal to a group of signals (i.e., compose a query signal from pieces extracted from multiple reference signals). Similarly, it can be applied to measure similarity between two different groups of signals. Thus, Similarity-by-Composition is suitable for detection, retrieval, classification, and clustering.
Moreover, it can also be used for measuring similarity or dissimilarity between different portions of the same signal. Intra-signal dissimilarities can be used for detecting irregularities or saliency, while intra-signal similarities can be used as affinity measures for sophisticated intra-signal clustering and segmentation. The importance of large shared regions between signals has been recognized by biologists for determining similarities between DNA sequences, amino acid chains, etc. Tools for finding large repetitions in biological data have been developed (e.g., "BLAST" [1]). In principle, the results of such tools can be fed into our theoretical framework, to obtain similarity scores between biological data sequences in a principled information-theoretic way. The rest of the paper is organized as follows: In Sec. 2 we derive information-theoretic measures for local and global "evidence" (similarity) induced by shared regions. Sec. 3 describes an algorithmic framework for computing those measures. Sec. 4 demonstrates the applicability of the derived local and global similarity measures for various machine learning tasks and several types of signal.

2 Similarity by Composition - Theoretical Framework

We derive principled information-theoretic measures for local and global similarity between a "query" Q (one or more signals) and a "reference" ref (one or more signals). Large shared regions between Q and ref provide high statistical evidence to their similarity. In this section we show how to quantify this statistical evidence. We first formulate the notion of "local evidence" for local regions within Q (Sec. 2.1). We then show how these pieces of local evidence can be integrated to provide "global evidence" for the entire query Q (Sec. 2.2).

2.1 Local Evidence

Let R ⊆ Q be a connected region within Q. Assume that a similar region exists in ref. We would like to quantify the statistical significance of this region co-occurrence, and show that it increases with the size of R. To do so, we will compare the likelihood that R was generated by ref, versus the likelihood that it was generated by some random process. More formally, we denote by H_ref the hypothesis that R was "generated" by ref, and by H_0 the hypothesis that R was generated by a random process, or by any other application-dependent PDF (referred to as the "null hypothesis"). H_ref assumes the following model for the "generation" of R: a region was taken from somewhere in ref, was globally transformed by some global transformation T, followed by some small possible local distortions, and then put into Q to generate R. T can account for shifts, scaling, rotations, etc. In the simplest case (only shifts), T is the corresponding location in ref. We can compute the likelihood ratio:

LR(R) = \frac{P(R|H_{ref})}{P(R|H_0)} = \frac{\sum_T P(R|T,H_{ref})\,P(T|H_{ref})}{P(R|H_0)}   (1)

where P(T|H_ref) is the prior probability on the global transformations T (shifts, scaling, rotations), and P(R|T,H_ref) is the likelihood that R was generated from ref at that location, scale, etc. (up to some local distortions, which are also modelled by P(R|T,H_ref) - see algorithmic details in Sec. 3). If there are multiple corresponding regions in ref (i.e., multiple T's), all of them contribute to the estimation of LR(R). We define the Local Evidence Score of R to be the log-likelihood ratio: LES(R|H_ref) = log2(LR(R)). LES is referred to as a "local evidence score", because the higher LES is, the smaller the probability that R was generated at random (H_0).
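For concreteness, the following minimal numpy sketch (not part of the original implementation) evaluates LES for the shift-only case of Eq. (1), assuming per-location log-likelihoods log P(R|T,H_ref) and a background log-likelihood log P(R|H_0) are already available; all names and numbers are illustrative.

import numpy as np

def local_evidence_score(log_p_given_T, log_p_T, log_p_null):
    """LES(R|H_ref) = log2 LR(R), Eq. (1), for the shift-only case.

    log_p_given_T : array of log P(R | T, H_ref), one entry per candidate shift T
    log_p_T       : array of log P(T | H_ref) priors (e.g., uniform over shifts)
    log_p_null    : scalar log P(R | H_0)
    """
    # Marginalize over transformations T in a numerically stable way:
    # log sum_T P(R|T,H_ref) P(T|H_ref)
    log_p_ref = np.logaddexp.reduce(log_p_given_T + log_p_T)
    return (log_p_ref - log_p_null) / np.log(2.0)  # convert nats to bits

# Toy example: a region with one good match among 1000 possible shifts.
lp_T = np.full(1000, -np.log(1000.0))            # uniform prior over shifts
lp_given_T = np.full(1000, -300.0)
lp_given_T[42] = -20.0                           # the one well-matching location
print(local_evidence_score(lp_given_T, lp_T, log_p_null=-60.0))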
In fact, P(LES(R|H_ref) > l | H_0) < 2^{-l}, i.e., the probability of getting a score LES(R) > l for a randomly generated region R is smaller than 2^{-l} (this is due to LES being a log-likelihood ratio [3]). High LES therefore provides higher statistical evidence that R was generated from ref. Note that the larger the region R ⊆ Q is, the higher its evidence score LES(R|H_ref) (and therefore it will also provide higher statistical evidence to the hypothesis that Q was composed from ref). For example, assume for simplicity that R has a single identical copy in ref, and that T is restricted to shifts with uniform probability (i.e., P(T|H_ref) = const); then P(R|H_ref) is constant, regardless of the size of R. On the other hand, P(R|H_0) decreases exponentially with the size of R. Therefore, the likelihood ratio of R increases, and so does its evidence score LES. LES can also be interpreted as the number of bits saved by describing the region R using ref, instead of describing it using H_0: Recall that the optimal average code length of a random variable y with probability function P(y) is length(y) = -log(P(y)). Therefore we can write the evidence score as LES(R|H_ref) = length(R|H_0) - length(R|H_ref). Larger regions thus provide higher savings (in bits) in the description length of R. A region R induces "average savings per point" for every point q ∈ R, namely LES(R|H_ref)/|R| (where |R| is the number of points in R). However, a point q ∈ R may also be contained in other regions generated by ref, each with its own local evidence score. We can therefore define the maximal possible savings per point (which we will refer to in short as PES = "Point Evidence Score"):

PES(q|H_{ref}) = \max_{R \subseteq Q \text{ s.t. } q \in R} \frac{LES(R|H_{ref})}{|R|}   (2)

For any point q ∈ Q we define R[q] to be the region which provides this maximal score for q. Fig. 1 shows such maximal regions found in Image-B (the query Q) given Image-A (the reference ref). In practice, many points share the same maximal region. Computing an approximation of LES(R|H_ref), PES(q|H_ref), and R[q] can be done efficiently (see Sec. 3).

2.2 Global Evidence

We now proceed to accumulate multiple local pieces of evidence. Let R_1, ..., R_k ⊆ Q be k disjoint regions in Q, which have been generated independently from the examples in ref. Let R_0 = Q \ ∪_{i=1}^k R_i denote the remainder of Q. Namely, S = {R_0, R_1, ..., R_k} is a segmentation/division of Q. Assuming that the remainder R_0 was generated i.i.d. by the null hypothesis H_0, we can derive a global evidence score for the hypothesis that Q was generated from ref via the segmentation S (for simplicity of notation we use the symbol H_ref also to denote the global hypothesis):

GES(Q|H_{ref},S) = \log \frac{P(Q|H_{ref},S)}{P(Q|H_0)} = \log \frac{P(R_0|H_0) \prod_{i=1}^{k} P(R_i|H_{ref})}{\prod_{i=0}^{k} P(R_i|H_0)} = \sum_{i=1}^{k} LES(R_i|H_{ref})

Namely, the global evidence induced by S is the accumulated sum of the local evidences provided by the individual segments of S. The statistical significance of such accumulated evidence is expressed by: P(GES(Q|H_ref,S) > l | H_0) = P(\sum_{i=1}^{k} LES(R_i|H_{ref}) > l | H_0) < 2^{-l}. Consequently, we can accumulate local evidence of non-overlapping regions within Q which have similar regions in ref for obtaining global evidence on the hypothesis that Q was generated from ref.
Thus, for example, if we found 5 regions within Q with similar copies in ref, each resulting with probability less than 10% of being generated by random, then the probability that Q was generated by random is less than (10%)^5 = 0.001% (and this is despite the unfavorable assumption we made that the rest of Q was generated by random). So far the segmentation S was assumed to be given, and we estimated GES(Q|H_ref,S). In order to obtain the global evidence score of Q, we marginalize over all possible segmentations S of Q:

GES(Q|H_{ref}) = \log \frac{P(Q|H_{ref})}{P(Q|H_0)} = \log \sum_S P(S|H_{ref}) \frac{P(Q|H_{ref},S)}{P(Q|H_0)}   (3)

Namely, the likelihood P(S|H_ref) of a segmentation S can be interpreted as a weight for the likelihood-ratio score of Q induced by S. Thus, we would like P(S|H_ref) to reflect the complexity of the segmentation S (e.g., its description length). From a practical point of view, in most cases it would be intractable to compute GES(Q|H_ref), as Eq. (3) involves summation over all possible segmentations of the query Q. However, we can derive upper and lower bounds on GES(Q|H_ref) which are easy to compute:

Claim 1. Upper and lower bounds on GES:

\max_S \left\{ \log P(S|H_{ref}) + \sum_{R_i \in S} LES(R_i|H_{ref}) \right\} \le GES(Q|H_{ref}) \le \sum_{q \in Q} PES(q|H_{ref})   (4)

Proof: See the appendix at www.wisdom.weizmann.ac.il/~vision/Composition.html.

Practically, this claim implies that we do not need to scan all possible segmentations. The lower bound (left-hand side of Eq. (4)) is achieved by the segmentation of Q with the best accumulated evidence score, \sum_{R_i \in S} LES(R_i|H_{ref}) = GES(Q|H_{ref},S), penalized by the length of the segmentation description, \log P(S|H_{ref}) = -length(S). Obviously, every segmentation provides such a lower (albeit less tight) bound on the total evidence score. Thus, if we find large enough contiguous regions in Q with supporting regions in ref (i.e., high enough local evidence scores), and define R_0 to be the remainder of Q, then S = {R_0, R_1, ..., R_k} can provide a reasonable lower bound on GES(Q|H_ref). As to the upper bound on GES(Q|H_ref), it can be computed by summing up the maximal point-wise evidence scores PES(q|H_ref) (see Eq. (2)) over all the points in Q (right-hand side of Eq. (4)). Note that the upper bound is computed by finding the maximal evidence regions that pass through every point in the query, regardless of the region complexity length(R). Both bounds can be estimated quite efficiently (see Sec. 3).
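As a concrete illustration of Claim 1, the following numpy sketch evaluates both sides of Eq. (4) given precomputed evidence quantities; the inputs (per-point PES values, a candidate segmentation with its LES scores and description length) are assumed to come from the procedure of Sec. 3, and the toy numbers are illustrative only.

import numpy as np

def ges_bounds(pes, seg_les, log_p_seg):
    """Bounds on GES(Q|H_ref) from Claim 1, Eq. (4).

    pes       : array of PES(q|H_ref), one value per query point q
    seg_les   : LES(R_i|H_ref) for the disjoint regions of one candidate
                segmentation S (any S yields a valid, possibly loose, lower bound)
    log_p_seg : log P(S|H_ref), e.g. minus the description length of S in bits
    """
    lower = log_p_seg + np.sum(seg_les)   # left-hand side of Eq. (4)
    upper = np.sum(pes)                   # right-hand side of Eq. (4)
    return lower, upper

# Toy example: 5 regions, each carrying log2(10) ~ 3.3 bits of evidence (p < 10%).
lo, hi = ges_bounds(pes=np.full(500, 0.05),
                    seg_les=np.full(5, np.log2(10.0)),
                    log_p_seg=-8.0)
print(lo, hi)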
3 Algorithmic Framework

The local and global evidence scores presented in Sec. 2 provide new local and global similarity measures for signal data, which can be used for various learning and inference problems (see Sec. 4). In this section we briefly describe the algorithmic framework used for computing PES, LES, and GES to obtain the local and global compositional similarity measures. Assume we are given a large region R ⊆ Q and would like to estimate its evidence score LES(R|H_ref). We would like to find regions similar to R in ref that would provide large local evidence for R. However, (i) we cannot expect R to appear as is, and would therefore like to allow for global and local deformations of R, and (ii) we would like to perform this search efficiently. Both requirements can be achieved by breaking R into lots of small (partially overlapping) data patches, each with its own patch descriptor. This information is maintained via a geometric "ensemble" of local patch descriptors. The search for a similar ensemble in ref is done using efficient inference on a star graphical model, while allowing for small local displacement of each local patch [5]. For example, in images these would be small spatial patches around each pixel contained in the larger image region R, and the displacements would be small shifts in x and y. In video data the region R would be a space-time volumetric region, and it would be broken into lots of small overlapping space-time volumetric patches. The local displacements would be in x, y, and t (time). In audio these patches would be short time-frame windows, etc. In general, for any n-dimensional signal representation, the region R would be a large n-dimensional region within the signal, and the patches would be small n-dimensional overlapping regions within R. The local patch descriptors are signal and application dependent, but can be very simple. (For example, in images we used a SIFT-like [9] patch descriptor computed in each image patch. See more details in Sec. 4.) It is the simultaneous matching of all these simple local patch descriptors with their relative positions that provides the strong overall evidence score for the entire region R. The likelihood of R, given a global transformation T (e.g., location in ref) and local patch displacements \Delta l_i for each patch i in R (i = 1, 2, ..., |R|), is captured by the following expression:

P(R|T, \{\Delta l_i\}, H_{ref}) = \frac{1}{Z} \prod_i e^{-\frac{|\Delta d_i|^2}{2\sigma_1^2}} \, e^{-\frac{|\Delta l_i|^2}{2\sigma_2^2}},

where {\Delta d_i} are the descriptor distortions of each patch, and Z is a normalization factor. To estimate P(R|T,H_ref) we marginalize over all possible local displacements {\Delta l_i} within a predefined limited radius. In order to compute LES(R|H_ref) in Eq. (1), we need to marginalize over all possible global transformations T. In our current implementation we used only global shifts, and assumed a uniform distribution over all shifts, i.e., P(T|H_ref) = 1/|ref|. However, the algorithm can accommodate more complex global transformations. To compute P(R|H_ref), we used our inference algorithm described in [5], modified to compute likelihood (sum-product) instead of MAP (max-product). In a nutshell, the algorithm uses a few patches in R (e.g., 2-3), exhaustively searching ref for those patches. These patches restrict the possible locations of R in ref, i.e., the possible candidate transformations T for estimating P(R|T,H_ref). The search for each new patch is restricted to locations induced by the current list of candidate transformations T. Each new patch further reduces this list of candidate positions of R in ref. This computation of P(R|H_ref) is efficient: O(|db|) + O(|R|) ≈ O(|db|), i.e., approximately linear in the size of ref.
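A minimal sketch of the per-patch likelihood above, assuming descriptor distortions and displacement magnitudes have already been measured for each candidate transformation T; the sigma values and names are illustrative assumptions, not the authors' settings.

import numpy as np

def log_region_likelihood(desc_dist, patch_disp, sigma_d=1.0, sigma_l=2.0):
    """log P(R | T, {dl_i}, H_ref) up to the constant log(1/Z).

    desc_dist  : (n_patches,) descriptor distortions |delta d_i| of matched patches
    patch_disp : (n_patches,) local displacement magnitudes |delta l_i|
    """
    log_terms = -(desc_dist**2) / (2 * sigma_d**2) \
                - (patch_disp**2) / (2 * sigma_l**2)
    return np.sum(log_terms)

def log_p_region(candidates, n_ref_locations):
    """Marginalize over candidate global shifts T with uniform prior 1/|ref|.

    candidates : list of (desc_dist, patch_disp) pairs, one per candidate T
    """
    logs = [log_region_likelihood(d, l) for d, l in candidates]
    return np.logaddexp.reduce(logs) - np.log(n_ref_locations)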
In practice, we are not given a specific region R ⊆ Q in advance. For each point q ∈ Q we want to estimate its maximal region R[q] and its corresponding evidence score LES(R[q]|H_ref) (Sec. 2.1). In order to perform this step efficiently, we start with a small surrounding region around q, break it into patches, and search only for that region in ref (using the same efficient search method described above). Locations in ref where good initial matches were found are treated as candidates, and are gradually "grown" to their maximal possible matching regions (allowing for local distortions in patch position and descriptor, as before). The evidence score LES of each such maximally grown region is computed. Using all these maximally grown regions we approximate PES(q|H_ref) and R[q] (for all q ∈ Q). In practice, a region found maximal for one point is likely to be the maximal region for many other points in Q. Thus the number of different maximal regions in Q will tend to be significantly smaller than the number of points in Q. Having computed PES(q|H_ref) for all q ∈ Q, it is straightforward to obtain an upper bound on GES(Q|H_ref) (right-hand side of Eq. (4)). In principle, in order to obtain a lower bound on GES(Q|H_ref) we need to perform an optimization over all possible segmentations S of Q. However, any good segmentation can be used to provide a reasonable (although less tight) lower bound. Having extracted a list of disjoint maximal regions R_1, ..., R_k, we can use these to induce a reasonable (although not optimal) segmentation using the following heuristic (see the sketch below): We choose the first segment to be the maximal region with the largest evidence score, R̃_1 = argmax_{R_i} LES(R_i|H_ref). The second segment is chosen to be the largest of all the remaining regions after having removed their overlap with R̃_1, etc. This process yields a segmentation of Q: S = {R̃_1, ..., R̃_l} (l ≤ k). Re-evaluating the evidence scores LES(R̃_i|H_ref) of these regions, we can obtain a reasonable lower bound on GES(Q|H_ref) using the left-hand side of Eq. (4). For evaluating the lower bound, we also need to estimate log P(S|H_ref) = -length(S|H_ref). This is done by summing the description lengths of the boundaries of the individual regions within S. For more details see the appendix at www.wisdom.weizmann.ac.il/~vision/Composition.html.
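A sketch of the greedy segment-selection heuristic just described, assuming maximal regions are represented as boolean masks over the points of Q with precomputed LES scores. Ordering candidates by evidence score is one reasonable reading of "largest"; this is an illustration, not the authors' exact implementation.

import numpy as np

def greedy_segmentation(masks, les_scores, min_size=1):
    """Greedily pick disjoint segments from overlapping maximal regions.

    masks      : (k, n_points) boolean masks of the maximal regions R_1..R_k
    les_scores : (k,) LES(R_i|H_ref) of each region
    Returns the list of chosen (mask, original region index) segments.
    """
    chosen = []
    covered = np.zeros(masks.shape[1], dtype=bool)
    order = np.argsort(les_scores)[::-1]        # best evidence first
    for i in order:
        remaining = masks[i] & ~covered          # remove overlap with earlier picks
        if remaining.sum() >= min_size:
            chosen.append((remaining, i))        # LES should be re-evaluated on this
            covered |= remaining
    return chosen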
4 Applications and Results

The global similarity measure GES(Q|H_ref) can be applied between individual signals, and/or between groups of signals (by setting Q and ref accordingly). As such it can be employed in machine learning tasks like retrieval, classification, recognition, and clustering. The local similarity measure LES(R|H_ref) can be used for local inference problems, such as local classification, saliency, segmentation, etc. For example, the local similarity measure can also be applied between different portions of the same signal (e.g., by setting Q to be one part of the signal, and ref to be the rest of the signal). Such intra-signal evidence can be used for inference tasks like segmentation, while the absence of intra-signal evidence (local dissimilarity) can be used for detecting saliency/irregularities.

In this section we demonstrate the applicability of our measures to several of these problems, and apply them to three different types of signals: audio, images, video. For additional results as well as video sequences see www.wisdom.weizmann.ac.il/~vision/Composition.html

1. Detection of Saliency/Irregularities (in Images): Using our statistical framework, we define a point q ∈ Q to be irregular if its best local evidence score LES(R[q]|H_ref) is below some threshold. Irregularities can be inferred either relative to a database of examples, or relative to the signal itself. In Fig. 2 we show an example of applying this approach for detecting defects in fruit. Using a single image as a "reference" of good-quality grapefruits (Fig. 2.a, used as ref), we can detect defects (irregularities) in an image of different grapefruits at different arrangements (Fig. 2.b, used as the query Q). The algorithm tried to compose Q from as large as possible pieces of ref. Points in Q with low LES (i.e., points whose maximal regions were small) were determined to be irregular. These are highlighted in red in Fig. 2.c, and correspond to defects in the fruit.

Figure 2: Detection of defects in grapefruit images. Using the single image (a) as a "reference" of good-quality grapefruits, we can detect defects (irregularities) in an image (b) of different grapefruits at different arrangements. Detected defects are highlighted in red (c).

Alternatively, local saliency within a query signal Q can also be measured relative to other portions of Q, e.g., by trying to compose each region in Q using pieces from the rest of Q. For each point q ∈ Q we compute its intra-signal evidence score LES(R[q]) relative to the other (non-neighboring) parts of the image. Points with low intra-signal evidence are detected as salient. Examples of using intra-signal saliency to detect defects in fabric can be found in Fig. 3. Another example of using the same algorithm, but for a completely different scenario (a ballet scene), can be found in Fig. 4.b. We used a SIFT-like [9] patch descriptor, but computed densely for all local patches in the image. Points with low gradients were excluded from the inference (e.g., floor).

Figure 3: Detecting defects in fabric images (no prior examples). The left sides of (a) and (b) show fabrics with defects. The right sides of (a) and (b) show detected defects in red (points with small intra-image evidence LES). Irregularities are measured relative to other parts of each image.

2. Signal Segmentation (Images): For each point q ∈ Q we compute its maximal evidence region R[q]. This can be done either relative to a different reference signal, or relative to Q itself (as in the case of saliency). Every maximal region provides evidence that all points within the region should be clustered/segmented together. Therefore, the value LES(R[q]|H_ref) is added to all entries (i,j) of an affinity matrix, for all q_i, q_j ∈ R[q]. Spectral clustering can then be applied to the affinity matrix (see the sketch below). Thus, large regions which appear also in ref (in the case of a single image, other regions in Q) are likely to be clustered together in Q. This way we foster the generation of segments based on high evidential co-occurrence in the examples, rather than based on low-level similarity as in [10]. An example of using this algorithm for image segmentation is shown in Fig. 4.c. Note that we have not explicitly used low-level similarity between neighboring points, as is customary in most image segmentation algorithms. Such additional information would improve the segmentation results.

Figure 4: Image Saliency and Segmentation. (a) Input image. (b) Detected salient points, i.e., points with low intra-image evidence scores LES (when measured relative to the rest of the image). (c) Image segmentation - results of clustering all the non-salient points into 4 clusters using normalized cuts. Each maximal region R[q] provides high evidence (translated into high affinity scores) that all the points within it should be grouped together (see text for more details).
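A minimal sketch of the evidence-based affinity construction used above, with scikit-learn's spectral clustering standing in for normalized cuts; the region masks, scores, and toy sizes are assumed inputs for illustration.

import numpy as np
from sklearn.cluster import SpectralClustering

def evidence_affinity(masks, les_scores, n_points):
    """Add LES(R|H_ref) to A[i, j] for every pair of points inside region R."""
    A = np.zeros((n_points, n_points))
    for mask, les in zip(masks, les_scores):
        idx = np.flatnonzero(mask)
        A[np.ix_(idx, idx)] += les        # evidential co-occurrence
    np.fill_diagonal(A, 0.0)
    return A

# Toy example: two overlapping maximal regions over 6 points.
masks = np.zeros((2, 6), dtype=bool)
masks[0, :4] = True
masks[1, 3:] = True
A = evidence_affinity(masks, np.array([8.0, 5.0]), n_points=6)
labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(A)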
3. Signal Classification (Video - Action Classification): We used the action video database of [4], which contains different types of actions ("run", "walk", "jumping-jack", "jump-forward-on-two-legs", "jump-in-place-on-two-legs", "gallop-sideways", "wave-hand(s)", "bend") performed by nine different people (altogether 81 video sequences). We used a leave-one-out procedure for action classification. The number of correct classifications was 79/81 = 97.5%. These sequences contain a single person in the field of view (e.g., see Fig. 5.a). Our method can handle much more complex scenarios. To illustrate its capabilities we further added a few more sequences (e.g., see Fig. 5.b and 5.c), where several people appear simultaneously in the field of view, with partial occlusions, some differences in scale, and more complex backgrounds. The complex sequences were all correctly classified (increasing the classification rate to 98%).

Figure 5: Action Classification in Video. (a) A sample "walk" sequence from the action database of [4]. (b),(c) Other more complex sequences with several walking people. Despite partial occlusions, differences in scale, and complex backgrounds, these sequences were all classified correctly as "walk" sequences. For video sequences see www.wisdom.weizmann.ac.il/~vision/Composition.html

In our implementation, 3D space-time video regions were broken into small spatio-temporal video patches (7 × 7 × 4). The descriptor for each patch was a vector containing the absolute values of the temporal derivatives in all pixels of the patch, normalized to unit length. Since stationary backgrounds have zero temporal derivatives, our method is not sensitive to the background, nor does it require foreground/background separation. Image patches and fragments have been employed in the task of class-based object recognition (e.g., [7, 2, 6]), where a sparse set of informative fragments is learned for a large class of objects (the training set). These approaches are useful for recognition, but are not applicable to non-class-based inference problems (such as similarity between pairs of signals with no prior data, clustering, etc.).

4. Signal Retrieval (Audio - Speaker Recognition): We used a database of 31 speakers (male and female). All the speakers repeated three times a five-word sentence (2-3 seconds long) in a foreign language, recorded over a phone line. Different repetitions by the same person varied slightly from one another. Altogether the database contained 93 samples of the sentence. Such short speech signals are likely to pose a problem for learning-based (e.g., HMM, GMM) recognition systems. We applied our global measure GES for retrieving the closest database elements. The highest GES recognized the right speaker in 90 out of 93 cases (i.e., 97% correct recognition). Moreover, the second-best GES was correct in 82 out of 93 cases (88%). We used standard mel-frequency cepstrum frame descriptors for time-frames of 25 msec, with overlaps of 50%.

Acknowledgments

Thanks to Y. Caspi, A. Rav-Acha, B. Nadler and R. Basri for their helpful remarks. This work was supported by the Israeli Science Foundation (Grant 281/06) and by the Alberto Moscona Fund. The research was conducted at the Moross Laboratory for Vision & Motor Control at the Weizmann Inst.

References
[1] S. Altschul, W. Gish, W. Miller, E. Myers, and D. Lipman. Basic local alignment search tool. J. Mol. Biol., 215:403-410, 1990.
[2] E. Bart and S. Ullman. Class-based matching of object parts. In VideoRegister04, page 173, 2004.
[3] A. Birnbaum. On the foundations of statistical inference. J. Amer. Statist. Assoc., 1962.
[4] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In ICCV05.
[5] O. Boiman and M. Irani. Detecting irregularities in images and in video. In ICCV05, pages I: 462-469.
[6] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61, 2005.
[7] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR03.
[8] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR06.
[9] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[10] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 22(8):888-905, August 2000.
[11] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman. Discovering objects and their localization in images. In ICCV05, pages I: 370-377.
[12] P. Viola and W. Wells, III. Alignment by maximization of mutual information. In ICCV95, pages 16-23.
Learning with Hypergraphs: Clustering, Classification, and Embedding

Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf
NEC Laboratories America, Inc., 4 Independence Way, Suite 200, Princeton, NJ 08540, USA
School of Computer Science, University of Waterloo, Waterloo ON, N2L 3G1, Canada
Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
{dengyong.zhou, jiayuan.huang, bernhard.schoelkopf}@tuebingen.mpg.de

Abstract

We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pairwise. Naively squeezing the complex relationships into pairwise ones will inevitably lead to loss of information which can be expected to be valuable for our learning tasks. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering, which originally operates on undirected graphs, to hypergraphs, and further to develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs.

1 Introduction

In machine learning problem settings, we generally assume pairwise relationships among the objects of our interest. An object set endowed with pairwise relationships can be naturally illustrated as a graph, in which the vertices represent the objects, and any two vertices that have some kind of relationship are joined together by an edge. The graph can be undirected or directed, depending on whether the pairwise relationships among objects are symmetric or not. A finite set of points in Euclidean space associated with a kernel matrix is a typical example of undirected graphs. As to directed graphs, a well-known instance is the World Wide Web. A hyperlink can be thought of as a directed edge, because given an arbitrary hyperlink we cannot expect that there certainly exists an inverse one; that is, the hyperlink-based relationships are asymmetric [20]. However, in many real-world problems, representing a set of complex relational objects as undirected or directed graphs is not complete. To illustrate this point, let us consider the problem of grouping a collection of articles into different topics. Given an article, assume the only information that we have is who wrote it. One may construct an undirected graph in which two vertices are joined together by an edge if there is at least one common author of their corresponding articles (Figure 1), and then an undirected graph based clustering approach is applied, e.g. spectral graph techniques [7, 11, 16]. The undirected graph may be further embellished by assigning to each edge a weight equal to the number of authors in common.

Figure 1: Hypergraph vs. simple graph. Left: an author set E = {e1, e2, e3} and an article set V = {v1, v2, v3, v4, v5, v6, v7}. The entry (vi, ej) is set to 1 if ej is an author of article vi, and 0 otherwise. Middle: an undirected graph in which two articles are joined together by an edge if there is at least one author in common. This graph cannot tell us whether the same person is the author of three or more articles or not. Right: a hypergraph which completely illustrates the complex relationships among authors and articles.
The above method may sound natural, but within its graph representation we obviously miss the information on whether the same person joined in writing three or more articles or not. Such information loss is unexpected, because articles by the same person likely belong to the same topic, and hence the information is useful for our grouping task. A natural way of remedying the information loss occurring in the above methodology is to represent the data as a hypergraph instead. A hypergraph is a graph in which an edge can connect more than two vertices [2]. In other words, an edge is a subset of vertices. In what follows, we shall uniformly refer to the usual undirected or directed graphs as simple graphs. Moreover, without special mention, the referred simple graphs are undirected. It is obvious that a simple graph is a special kind of hypergraph, with each edge containing two vertices only. In the problem of clustering articles stated before, it is quite straightforward to construct a hypergraph with the vertices representing the articles, and the edges the authors (Figure 1). Each edge contains all articles by its corresponding author. Even more than that, we can consider putting positive weights on the edges to encode prior knowledge on the authors' work, if we have any. For instance, for a person working on a broad range of fields, we may assign a relatively small value to their corresponding edge. Now we can completely represent the complex relationships among objects by using hypergraphs. However, a new problem arises: how to partition a hypergraph? This is the main problem that we want to solve in this paper. A powerful technique for partitioning simple graphs is spectral clustering. Therefore, we generalize spectral clustering techniques to hypergraphs, more specifically, the normalized cut approach of [16]. Moreover, as in the case of simple graphs, a real-valued relaxation of the hypergraph normalized cut criterion leads to the eigendecomposition of a positive semidefinite matrix, which can be regarded as an analogue of the so-called Laplacian for simple graphs (cf. [5]), and hence we suggestively call it the hypergraph Laplacian. Consequently, we develop algorithms for hypergraph embedding and transductive inference based on the hypergraph Laplacian. There actually exists a large amount of literature on hypergraph partitioning, which arises from a variety of practical problems, such as partitioning circuit netlists [11], clustering categorical data [9], and image segmentation [1]. Unlike the present work, however, those approaches generally transformed hypergraphs into simple ones, either by using the heuristics we discussed in the beginning or other domain-specific heuristics, and then applied simple graph based spectral clustering techniques. [9] proposed an iterative approach which was indeed designed for hypergraphs; nevertheless, it is not a spectral method. In addition, [6] and [17] considered propagating label distributions on hypergraphs. The structure of the paper is as follows. We first introduce some basic notions on hypergraphs in Section 2. In Section 3, we generalize the simple graph normalized cut to hypergraphs. As shown in Section 4, the hypergraph normalized cut has an elegant probabilistic interpretation based on a random walk naturally associated with a hypergraph.
In Section 5, we introduce the real-valued relaxation used to approximately obtain hypergraph normalized cuts, and also the hypergraph Laplacian derived from this relaxation. In Section 6, we develop a spectral hypergraph embedding technique based on the hypergraph Laplacian. In Section 7, we address transductive inference on hypergraphs, that is, classifying the vertices of a hypergraph provided that some of its vertices have been labeled. Experimental results are shown in Section 8, and we conclude this paper in Section 9.

2 Preliminaries

Let V denote a finite set of objects, and let E be a family of subsets e of V such that \cup_{e \in E} e = V. Then we call G = (V, E) a hypergraph with the vertex set V and the hyperedge set E. A hyperedge containing just two vertices is a simple graph edge. A weighted hypergraph is a hypergraph that has a positive number w(e) associated with each hyperedge e, called the weight of hyperedge e. Denote a weighted hypergraph by G = (V, E, w). A hyperedge e is said to be incident with a vertex v when v ∈ e. For a vertex v ∈ V, its degree is defined by d(v) = \sum_{\{e \in E \mid v \in e\}} w(e). Given an arbitrary set S, let |S| denote the cardinality of S. For a hyperedge e ∈ E, its degree is defined to be \delta(e) = |e|. We say that there is a hyperpath between vertices v1 and vk when there is an alternating sequence of distinct vertices and hyperedges v1, e1, v2, e2, ..., e_{k-1}, vk such that {vi, vi+1} ⊆ ei for 1 ≤ i ≤ k-1. A hypergraph is connected if there is a hyperpath for every pair of vertices. In what follows, the hypergraphs we mention are always assumed to be connected. A hypergraph G can be represented by a |V| × |E| matrix H with entries h(v, e) = 1 if v ∈ e and 0 otherwise, called the incidence matrix of G. Then d(v) = \sum_{e \in E} w(e) h(v, e) and \delta(e) = \sum_{v \in V} h(v, e). Let D_v and D_e denote the diagonal matrices containing the vertex and hyperedge degrees respectively, and let W denote the diagonal matrix containing the weights of hyperedges. Then the adjacency matrix A of hypergraph G is defined as A = H W H^T - D_v, where H^T is the transpose of H.
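For concreteness, a minimal numpy sketch of these quantities follows, assuming the hypergraph is given by its incidence matrix H and hyperedge weights w; the incidence pattern in the example is illustrative, in the spirit of Figure 1, not the figure's exact matrix.

import numpy as np

def hypergraph_matrices(H, w):
    """Degrees and adjacency of a weighted hypergraph G = (V, E, w).

    H : (|V|, |E|) incidence matrix, h(v, e) = 1 iff v is in hyperedge e
    w : (|E|,) positive hyperedge weights
    """
    W = np.diag(w)
    d_v = H @ w                      # vertex degrees d(v) = sum_e w(e) h(v, e)
    d_e = H.sum(axis=0)              # hyperedge degrees delta(e) = |e|
    Dv, De = np.diag(d_v), np.diag(d_e)
    A = H @ W @ H.T - Dv             # adjacency matrix A = H W H^T - D_v
    return Dv, De, W, A

# Illustrative example: 7 articles (vertices), 3 authors (hyperedges), unit weights.
H = np.array([[1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 1, 1], [0, 0, 1], [0, 0, 1]], dtype=float)
Dv, De, W, A = hypergraph_matrices(H, np.ones(3))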
3 Normalized hypergraph cut

For a vertex subset S ⊆ V, let S^c denote the complement of S. A cut of a hypergraph G = (V, E, w) is a partition of V into two parts S and S^c. We say that a hyperedge e is cut if it is incident with the vertices in S and S^c simultaneously. Given a vertex subset S ⊆ V, define the hyperedge boundary \partial S of S to be the set of hyperedges which are cut, i.e. \partial S := \{e \in E \mid e \cap S \ne \emptyset, e \cap S^c \ne \emptyset\}, and define the volume vol S of S to be the sum of the degrees of the vertices in S, that is, vol S := \sum_{v \in S} d(v). Moreover, define the volume of \partial S by

vol \partial S := \sum_{e \in \partial S} w(e) \frac{|e \cap S|\,|e \cap S^c|}{\delta(e)}.   (1)

Clearly, vol \partial S = vol \partial S^c. The definition given by Equation (1) can be understood as follows. Let us imagine each hyperedge e as a clique, i.e. a fully connected subgraph. To avoid unnecessary confusion, we call the edges in such an imaginary subgraph the subedges. Moreover, we assign the same weight w(e)/\delta(e) to all subedges. Then, when a hyperedge e is cut, there are |e \cap S|\,|e \cap S^c| subedges that are cut, and hence a single summand in Equation (1) is the sum of the weights over the subedges which are cut. Naturally, we try to obtain a partition in which the connection among the vertices in the same cluster is dense while the connection between two clusters is sparse. Using the definitions introduced above, we may formalize this natural partition as

\mathrm{argmin}_{\emptyset \ne S \subset V}\; c(S) := vol\, \partial S \left( \frac{1}{vol\, S} + \frac{1}{vol\, S^c} \right).   (2)

For a simple graph, |e \cap S| = |e \cap S^c| = 1, and \delta(e) = 2. Thus the right-hand side of Equation (2) reduces to the simple graph normalized cut [16] up to a factor 1/2. In what follows, we explain the hypergraph normalized cut in terms of random walks.

4 Random walk explanation

We associate each hypergraph with a natural random walk which has the following transition rule. Given the current position u ∈ V, first choose a hyperedge e over all hyperedges incident with u, with probability proportional to w(e), and then choose a vertex v ∈ e uniformly at random. Obviously, this generalizes the natural random walk defined on simple graphs. Let P denote the transition probability matrix of this hypergraph random walk. Then each entry of P is

p(u, v) = \sum_{e \in E} w(e) \frac{h(u,e)}{d(u)} \frac{h(v,e)}{\delta(e)}.   (3)

In matrix notation, P = D_v^{-1} H W D_e^{-1} H^T. The stationary distribution \pi of the random walk is

\pi(v) = \frac{d(v)}{vol\, V},   (4)

which follows from

\sum_{u \in V} \pi(u) p(u,v) = \sum_{u \in V} \frac{d(u)}{vol\, V} \sum_{e \in E} \frac{w(e) h(u,e) h(v,e)}{d(u)\delta(e)} = \frac{1}{vol\, V} \sum_{e \in E} \frac{w(e) h(v,e)}{\delta(e)} \sum_{u \in V} h(u,e) = \frac{1}{vol\, V} \sum_{e \in E} w(e) h(v,e) = \frac{d(v)}{vol\, V}.

We can rewrite c(S) as

c(S) = \frac{vol\, \partial S}{vol\, V} \left( \frac{1}{vol\, S / vol\, V} + \frac{1}{vol\, S^c / vol\, V} \right).

From Equation (4), we have

\frac{vol\, S}{vol\, V} = \sum_{v \in S} \frac{d(v)}{vol\, V} = \sum_{v \in S} \pi(v),   (5)

that is, the ratio vol S / vol V is the probability with which the random walk occupies some vertex in S. Moreover, from Equations (3) and (4), we have

\frac{vol\, \partial S}{vol\, V} = \sum_{e \in \partial S} \frac{w(e)}{vol\, V} \frac{|e \cap S|\,|e \cap S^c|}{\delta(e)} = \sum_{e \in \partial S} \sum_{u \in e \cap S} \sum_{v \in e \cap S^c} \frac{d(u)}{vol\, V}\, w(e) \frac{h(u,e)}{d(u)} \frac{h(v,e)}{\delta(e)} = \sum_{u \in S} \sum_{v \in S^c} \pi(u) p(u,v),   (6)

that is, the ratio vol \partial S / vol V is the probability with which one sees a jump of the random walk from S to S^c under the stationary distribution. From Equations (5) and (6), we can understand the hypergraph normalized cut criterion as follows: we look for a cut such that the probability with which the random walk crosses between different clusters is as small as possible, while the probability with which the random walk stays in the same cluster is as large as possible. It is worth pointing out that this random walk view is consistent with that for the simple graph normalized cut [13]. The consistency means that our generalization of the normalized cut approach from simple graphs to hypergraphs is reasonable.
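A sketch of the transition matrix and stationary distribution, reusing the incidence matrix H assumed in the earlier construction sketch; the asserts numerically check row-stochasticity and the stationarity derivation above.

import numpy as np

def hypergraph_random_walk(H, w):
    """P = Dv^{-1} H W De^{-1} H^T and its stationary distribution pi, Eqs. (3)-(4)."""
    W = np.diag(w)
    Dv_inv = np.diag(1.0 / (H @ w))
    De_inv = np.diag(1.0 / H.sum(axis=0))
    P = Dv_inv @ H @ W @ De_inv @ H.T
    pi = (H @ w) / (H @ w).sum()     # pi(v) = d(v) / vol V
    return P, pi

# H from the construction sketch above (assumed in scope).
P, pi = hypergraph_random_walk(H, np.ones(H.shape[1]))
assert np.allclose(P.sum(axis=1), 1.0)   # rows of P sum to 1
assert np.allclose(pi @ P, pi)           # pi is stationary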
5 Spectral hypergraph partitioning

As in [16], the combinatorial optimization problem given by Equation (2) is NP-complete, and it can be relaxed into the real-valued optimization problem

\mathrm{argmin}_{f \in \mathbb{R}^{|V|}}\; \frac{1}{2} \sum_{e \in E} \sum_{\{u,v\} \subseteq e} \frac{w(e)}{\delta(e)} \left( \frac{f(u)}{\sqrt{d(u)}} - \frac{f(v)}{\sqrt{d(v)}} \right)^2
\quad \text{subject to} \quad \sum_{v \in V} f^2(v) = 1, \quad \sum_{v \in V} f(v)\sqrt{d(v)} = 0.

We define the matrices \Theta = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} and \Delta = I - \Theta, where I denotes the identity matrix. Then it can be verified that

\sum_{e \in E} \sum_{\{u,v\} \subseteq e} \frac{w(e)}{\delta(e)} \left( \frac{f(u)}{\sqrt{d(u)}} - \frac{f(v)}{\sqrt{d(v)}} \right)^2 = 2 f^T \Delta f.

Note that this also shows that \Delta is positive semi-definite. We can check that the smallest eigenvalue of \Delta is 0, and its corresponding eigenvector is just \sqrt{d}, the vector of square roots of the vertex degrees. Therefore, from standard results in linear algebra, we know that the solution to the optimization problem is an eigenvector \Phi of \Delta associated with its smallest nonzero eigenvalue. Hence, the vertex set is clustered into the two parts S = \{v \in V \mid \Phi(v) \ge 0\} and S^c = \{v \in V \mid \Phi(v) < 0\}. For a simple graph, the edge degree matrix D_e reduces to 2I. Thus

\Delta = I - \frac{1}{2} D_v^{-1/2} H W H^T D_v^{-1/2} = I - \frac{1}{2} D_v^{-1/2} (D_v + A) D_v^{-1/2} = \frac{1}{2}\left( I - D_v^{-1/2} A D_v^{-1/2} \right),

which coincides with the simple graph Laplacian up to a factor of 1/2. So we suggestively call \Delta the hypergraph Laplacian. As in [20], where the spectral clustering methodology is generalized from undirected to directed simple graphs, we may consider generalizing the present approach to directed hypergraphs [8]. A directed hypergraph is a hypergraph in which each hyperedge e is an ordered pair (X, Y), where X ⊆ V is the tail of e and Y ⊆ V \ X is the head. Directed hypergraphs have been used to model various practical problems, from biochemical networks [15] to natural language parsing [12].

6 Spectral hypergraph embedding

As in the simple graph case [4, 10], it is straightforward to extend the spectral hypergraph clustering approach to k-way partitioning. Denote a k-way partition by (V_1, ..., V_k), where V_1 \cup V_2 \cup \cdots \cup V_k = V, and V_i \cap V_j = \emptyset for all 1 ≤ i, j ≤ k, i ≠ j. We may obtain a k-way partition by minimizing c(V_1, ..., V_k) = \sum_{i=1}^{k} vol\, \partial V_i / vol\, V_i over all k-way partitions. Similarly, the combinatorial optimization problem can be relaxed into a real-valued one, whose solution can be any orthogonal basis of the linear space spanned by the eigenvectors of \Delta associated with the k smallest eigenvalues.

Theorem 1. Assume a hypergraph G = (V, E, w) with |V| = n. Denote the eigenvalues of the Laplacian \Delta of G by \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n. Define c_k(G) = \min c(V_1, ..., V_k), where the minimization is over all k-way partitions. Then \sum_{i=1}^{k} \lambda_i \le c_k(G).

Proof. Let r_i be an n-dimensional vector defined by r_i(v) = 1 if v \in V_i, and 0 otherwise. Then

c(V_1, ..., V_k) = \sum_{i=1}^{k} \frac{r_i^T (D_v - H W D_e^{-1} H^T) r_i}{r_i^T D_v r_i}.

Define s_i = D_v^{1/2} r_i, and f_i = s_i / \|s_i\|, where \|\cdot\| denotes the usual Euclidean norm. Thus

c(V_1, ..., V_k) = \sum_{i=1}^{k} f_i^T \Delta f_i = \mathrm{tr}\, F^T \Delta F,

where F = [f_1 \cdots f_k]. Clearly, F^T F = I. If the elements of r_i are allowed to take arbitrary continuous values rather than Boolean ones only, we have

c_k(G) = \min c(V_1, ..., V_k) \ge \min_{F^T F = I} \mathrm{tr}\, F^T \Delta F = \sum_{i=1}^{k} \lambda_i.

The last equality follows from standard results in linear algebra. This completes the proof.

The above result also shows that the real-valued optimization problem derived from the relaxation is actually a lower bound of the original combinatorial optimization problem. Unlike 2-way partitioning, however, it is unclear how to utilize multiple eigenvectors simultaneously to obtain a k-way partition. Many heuristics have been proposed in the situation of simple graphs, and they can be applied here as well. Perhaps the most popular one among them is as follows [14]. First form a matrix X = [\Phi_1 \cdots \Phi_k], where the \Phi_i's are the eigenvectors of \Delta associated with the k smallest eigenvalues. Then the row vectors of X are regarded as the representations of the graph vertices in k-dimensional Euclidean space. Those vectors corresponding to the vertices are generally expected to be well separated, and consequently we can obtain a good partition simply by running k-means on them once (see the sketch below). [18] has resorted to a semidefinite relaxation model for the k-way normalized cut, instead of the relatively loose spectral relaxation, and then obtained a more accurate solution. It sounds reasonable to expect that the improved solution will lead to improved clustering. As reported in [18], however, the expected improvement does not occur in practice.
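A sketch of the k-way heuristic of [14] applied to the hypergraph Laplacian, assuming the incidence matrix H and weights w from the earlier sketches; scipy and scikit-learn are used for the eigendecomposition and k-means, which is an implementation choice, not the paper's.

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def hypergraph_laplacian(H, w):
    """Delta = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    W = np.diag(w)
    dv, de = H @ w, H.sum(axis=0)
    Dv_isqrt = np.diag(dv ** -0.5)
    De_inv = np.diag(1.0 / de)
    Theta = Dv_isqrt @ H @ W @ De_inv @ H.T @ Dv_isqrt
    return np.eye(H.shape[0]) - Theta

def spectral_hypergraph_clustering(H, w, k):
    Delta = hypergraph_laplacian(H, w)
    _, Phi = eigh(Delta, subset_by_index=[0, k - 1])  # k smallest eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(Phi)

# e.g. labels = spectral_hypergraph_clustering(H, np.ones(H.shape[1]), k=2)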
7 Transductive inference

We have established algorithms for spectral hypergraph clustering and embedding. Now we consider transductive inference on hypergraphs. Specifically, given a hypergraph G = (V, E, w) in which the vertices in a subset S ⊆ V have labels in L = {1, -1}, our task is to predict the labels of the remaining unlabeled vertices. Basically, we should try to assign the same label to all vertices contained in the same hyperedge. It is actually straightforward to derive a transductive inference approach from a clustering scheme. Let f : V \to \mathbb{R} denote a classification function, which assigns a label \mathrm{sign}\, f(v) to a vertex v ∈ V. Given an objective functional \Omega(\cdot) from some clustering approach, one may choose a classification function by

\mathrm{argmin}_{f \in \mathbb{R}^{|V|}} \{ R_{emp}(f) + \mu\, \Omega(f) \},

where R_{emp}(f) denotes a chosen empirical loss, such as the least-squares loss or the hinge loss, and the number \mu > 0 is the regularization parameter. Since in general normalized cuts are thought to be superior to mincuts, the transductive inference approach that we used in the later experiments is built on the above spectral hypergraph clustering method. Consequently, as shown in [20], with the least-squares loss function the classification function is finally given by f = (I - \alpha \Theta)^{-1} y, where the elements of y denote the initial labels, and \alpha is a parameter in (0, 1). For a survey on transductive inference, we refer the readers to [21].
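A sketch of this closed-form transduction; the label-vector convention (+1/-1 on labeled vertices, 0 on unlabeled ones) follows the style of [20] and is an assumption here, as is the default alpha.

import numpy as np

def hypergraph_transduction(H, w, y, alpha=0.1):
    """f = (I - alpha * Theta)^{-1} y; predict sign f(v) for unlabeled v.

    y : (|V|,) vector with +1/-1 on labeled vertices and 0 elsewhere (assumed)
    """
    W = np.diag(w)
    dv, de = H @ w, H.sum(axis=0)
    Dv_isqrt = np.diag(dv ** -0.5)
    Theta = Dv_isqrt @ H @ W @ np.diag(1.0 / de) @ H.T @ Dv_isqrt
    f = np.linalg.solve(np.eye(len(y)) - alpha * Theta, y)
    return np.sign(f)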
8 Experiments

All datasets except a particular version of the 20-newsgroup one are from the UCI Machine Learning Repository. They are usually referred to as so-called categorical data. Specifically, each instance in those datasets is described by one or more attributes, and each attribute takes only a small number of values, each corresponding to a specific category. Attribute values cannot be naturally ordered linearly as numerical values can [9]. In our experiments, we constructed a hypergraph for each dataset, where attribute values were regarded as hyperedges (sketched below). The weights for all hyperedges were simply set to 1. How to choose suitable weights is definitely an important problem requiring additional exploration, however. We also constructed a simple graph for each dataset, and the simple graph spectral clustering based approach [19] was then used as the baseline. Those simple graphs were constructed in the way discussed in the beginning of Section 1, which is essentially to define pairwise relationships among the objects by the adjacency matrices of hypergraphs.

The first task we addressed is to embed the animals in the zoo dataset into Euclidean space. This dataset contains 100 animals with 17 attributes, which include hair, feathers, eggs, milk, legs, tail, etc. The animals have been manually classified into 7 different categories. We embedded those animals into Euclidean space by using the eigenvectors of the hypergraph Laplacian associated with the smallest eigenvalues (Figure 2). For the animals having the same attributes, we randomly chose one as their representative to put in the figures. It is apparent that those animals are well separated in their Euclidean representations. Moreover, it deserves a further look that seal and dolphin are significantly mapped to the positions between class 1, consisting of the animals having milk and living on land, and class 4, consisting of the animals living in sea. A similar observation also holds for seasnake.

Figure 2: Embedding the zoo dataset. Left panel: the eigenvectors with the 2nd and 3rd smallest eigenvalues; right panel: the eigenvectors with the 3rd and 4th smallest eigenvalues. Note that dolphin is between class 1, containing the animals having milk and living on land, and class 4, containing the animals living in sea.

Figure 3: Classification on complex relational data. (a) mushroom, (b) 20-newsgroup, (c) letter: test error versus the number of labeled points (20-200) for both the hypergraph based approach and the simple graph based approach. (d) The influence of the \alpha in letter recognition with 100 labeled instances.

The second task is classification on the mushroom dataset, which contains 8124 instances described by 22 categorical attributes, such as shape, color, etc. We removed the 11th attribute, which has missing values. Each instance is labeled as edible or poisonous; the two classes contain 4208 and 3916 instances respectively. The third task is text categorization on a modified 20-newsgroup dataset with binary occurrence values for 100 words across 16242 articles (see http://www.cs.toronto.edu/~roweis). The articles belong to 4 different topics corresponding to the highest level of the original 20 newsgroups, with sizes 4605, 3519, 2657 and 5461 respectively. The final task is to guess the letter categories with the letter dataset, in which each instance is described by 16 primitive numerical attributes (statistical moments and edge counts). We used a subset containing the instances of the letters from A to E, with sizes 789, 766, 736, 805 and 768 respectively.
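A sketch of the hypergraph construction used in these experiments: one hyperedge per (attribute, value) pair with unit weight; the toy data matrix is illustrative.

import numpy as np

def categorical_hypergraph(X):
    """Incidence matrix for categorical data: one hyperedge per attribute value.

    X : (n_instances, n_attributes) array of categorical codes
    """
    edges = []
    for j in range(X.shape[1]):
        for val in np.unique(X[:, j]):
            edges.append(X[:, j] == val)   # instances sharing value val of attr j
    return np.array(edges, dtype=float).T  # (n_instances, n_hyperedges)

# Toy zoo-like data: rows = animals, columns = {hair, legs, aquatic}.
X = np.array([[1, 4, 0], [1, 4, 0], [0, 0, 1], [0, 2, 0]])
H = categorical_hypergraph(X)              # can be fed into the earlier sketches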
9 Conclusion

We generalized spectral clustering techniques to hypergraphs, and developed algorithms for hypergraph embedding and transductive inference. It is interesting to consider applying the present methodology to a broader range of practical problems. We are particularly interested in the following problems. One is biological network analysis [17]. Biological networks have mainly been modeled as simple graphs so far. It might be more sensible to model them as hypergraphs instead, so that complex interactions are completely taken into account. The other is social network analysis. As recently pointed out by [3], many social transactions are supra-dyadic; they either involve more than two actors or they involve numerous aspects of the setting of interaction. Standard network techniques are therefore not adequate for analyzing these networks. Consequently, [3] resorted to the concept of a hypergraph, and showed how the concept of network centrality can be adapted to hypergraphs.

References
[1] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie. Beyond pairwise clustering. In IEEE Conf. on Computer Vision and Pattern Recognition, 2005.
[2] C. Berge. Hypergraphs. North-Holland, Amsterdam, 1989.
[3] P. Bonacich, A.C. Holdren, and M. Johnston. Hyper-edges and multi-dimensional centrality. Social Networks, 26(3):189–203, 2004.
[4] P.K. Chan, M.D.F. Schlag, and J. Zien. Spectral k-way ratio cut partitioning and clustering. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13(9):1088–1096, 1994.
[5] F. Chung. Spectral Graph Theory. Number 92 in CBMS Regional Conference Series in Mathematics. American Mathematical Society, Providence, RI, 1997.
[6] A. Corduneanu and T. Jaakkola. Distributed information regularization on graphs. In Advances in Neural Information Processing Systems 17, Cambridge, MA, 2005. MIT Press.
[7] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(98):298–305, 1973.
[8] G. Gallo, G. Longo, and S. Pallottino. Directed hypergraphs and applications. Discrete Applied Mathematics, 42(2):177–201, 1993.
[9] D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. VLDB Journal, 8(3-4):222–236, 2000.
[10] M. Gu, H. Zha, C. Ding, X. He, and H. Simon. Spectral relaxation models and structure analysis for k-way graph clustering and bi-clustering. Technical Report CSE-01-007, Department of Computer Science and Engineering, Pennsylvania State University, 2001.
[11] L. Hagen and A.B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 11(9):1074–1085, 1992.
[12] D. Klein and C. Manning. Parsing and hypergraphs. In Proc. 7th Intl. Workshop on Parsing Technologies, 2001.
[13] M. Meila and J. Shi. A random walks view of spectral segmentation. In Proc. 8th Intl. Workshop on Artificial Intelligence and Statistics, 2001.
[14] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[15] J.S. Oliveira, J.B. Jones-Oliveira, D.A. Dixon, C.G. Bailey, and D.W. Gull. Hyperdigraph-theoretic analysis of the EGFR signaling network: Initial steps leading to GTP:Ras complex formation. Journal of Computational Biology, 11(5):812–842, 2004.
[16] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[17] K. Tsuda. Propagating distributions on a hypergraph by dual information regularization. In Proc. 22nd Intl. Conf. on Machine Learning, 2005.
[18] E.P. Xing and M.I. Jordan. On semidefinite relaxation for normalized k-cut and connections to spectral clustering. Technical Report CSD-03-1265, Division of Computer Science, University of California, Berkeley, 2003.
[19] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004. MIT Press.
[20] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In Proc. 22nd Intl. Conf. on Machine Learning, 2005.
[21] X. Zhu. Semi-supervised learning literature survey. Technical Report Computer Sciences 1530, University of Wisconsin–Madison, 2005.
Speakers optimize information density through syntactic reduction

Roger Levy
Department of Linguistics, UC San Diego
9500 Gilman Drive, La Jolla, CA 92093-0108, USA
rlevy@ling.ucsd.edu

T. Florian Jaeger
Department of Linguistics, Stanford University & Department of Psychology, UC San Diego
9500 Gilman Drive, La Jolla, CA 92093-0109, USA
tiflo@csli.stanford.edu

Abstract

If language users are rational, they might choose to structure their utterances so as to optimize communicative properties. In particular, information-theoretic and psycholinguistic considerations suggest that this may include maximizing the uniformity of information density in an utterance. We investigate this possibility in the context of syntactic reduction, where the speaker has the option of either marking a higher-order unit (a phrase) with an extra word, or leaving it unmarked. We demonstrate that speakers are more likely to reduce less information-dense phrases. In a second step, we combine a stochastic model of structured utterance production with a logistic-regression model of syntactic reduction to study which types of cues speakers employ when estimating the predictability of upcoming elements. We demonstrate that the trend toward predictability-sensitive syntactic reduction (Jaeger, 2006) is robust in the face of a wide variety of control variables, and present evidence that speakers use both surface and structural cues for predictability estimation.

1 Introduction

One consequence of the expressive richness of natural languages is that usually more than one means exists of expressing the same (or approximately the same) message. As a result, speakers are often confronted with choices as to how to structure their intended message into an utterance. At the same time, linguistic communication takes place under a host of cognitive and environmental constraints: speakers and addressees have limited cognitive resources to bring to bear, speaker and addressee have incomplete knowledge of the world and of each other's state of knowledge, the environment of communication is noisy, and so forth. Under these circumstances, if speakers are rational then we can expect them to attempt to optimize the communicative properties of their utterances. But what are the communicative properties that speakers choose to optimize? The prevalence of ambiguity in natural language – the fact that many structural analyses are typically available for a given utterance – might lead one to expect that speakers seek to minimize structural ambiguity, but both experimental (Arnold et al., 2004, inter alia) and corpus-based (Roland et al., 2006, inter alia) investigations have found little evidence for active use of ambiguity-avoidance strategies. In this paper we argue for a different locus of optimization: that speakers structure utterances so as to optimize information density. Here we use the term "information" in its most basic information-theoretic sense – the negative log-probability of an event – and by "information density" we mean the amount of information per unit comprising the utterance. If speakers behave optimally, they should structure their utterances so as to avoid peaks and troughs in information density (see also Aylett and Turk, 2004; Genzel and Charniak, 2002). For example, this principle of uniform information density (UID) as an aspect of rational language production predicts that speakers should modulate phonetic duration in accordance with the predictability of the unit expressed. This has been shown by Bell et al.
(2003, inter alia) for words and by Aylett and Turk (2004) for syllables. If UID is a general principle of communicative optimality, however, its effects should be apparent at higher levels of linguistic production as well. In line with this prediction are the results of Genzel and Charniak (2002) and Keller (2004), who found that sentences taken out of context have more information the later they occur in a discourse.

For phonetic reduction, choices about word duration can directly modulate information density. However, it is less clear how the effects of UID at higher levels of language production observed by Genzel and Charniak (2002) and Keller (2004) come about. Genzel and Charniak (2002) show that at least part of their result is driven by the repetition of open-class words, but it is unclear how this effect relates to a broader range of choice points within language production. In particular, it is unclear whether any choices above the lexical level are affected by information density (as expected if UID is general). In this paper we present the first evidence that speakers' choice during syntactic planning is affected by information density optimization. This evidence comes from syntactic reduction – a phenomenon in which speakers have the choice of either marking a phrase with an optional word, or leaving it unmarked (Section 3). We show that in cases where the phrase is marked, the marking reduces the phrase's information density, and that the phrases that get marked are the ones that would otherwise be the most information-dense (Section 4). This provides crucial support for UID as a general principle of language production.

The possibility that speakers' use of syntactic reduction optimizes information density leads to questions as to how speakers estimate the probability of an upcoming syntactic event. In particular, one can ask what types of cues language users employ when estimating these probabilities. For example, speakers could compute information density using only surface cues (such as the words immediately preceding a phrase). On the other hand, they might also take structural features of the utterance into account. We investigate these issues in Section 5 using an incremental model of structured utterance production. In this model, the predictability of the upcoming phrase markable by the optional word is taken as a measure of the phrase's information density. The resulting predictability estimate, in turn, becomes a covariate in a separate model of syntactic reduction. Through this two-step modeling approach we show that predictability is able to explain a significant part of the variability in syntactic reduction, and that evidence exists for speakers using both structural and surface cues in estimating phrasal predictability.

2 Optimal information density in linguistic utterances

We begin with the information-theoretic definition that the information conveyed by a complete utterance $u$ is $u$'s Shannon information content (also called its surprisal), or $\log_2 \frac{1}{P(u)}$. If the complete utterance $u$ is realized in $n$ units (for example, words $w_i$), then the information conveyed by $u$ is the sum of the information conveyed by each unit of $u$:

$\log \frac{1}{P(u)} = \log \frac{1}{P(w_1)} + \log \frac{1}{P(w_2 \mid w_1)} + \cdots + \log \frac{1}{P(w_n \mid w_1 \cdots w_{n-1})}$   (1)

For simplicity we assume that each $w_i$ occupies an equal amount of time (for spoken language) or space (written language).
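To make the decomposition in Equation (1) concrete, here is a toy sketch (the conditional probabilities are invented placeholders): the per-word surprisals form the information density profile of the utterance, and they sum to the surprisal of the whole.

```python
import math

def surprisal(p):
    """Shannon information content in bits: log2(1/p)."""
    return -math.log2(p)

# Invented conditional probabilities P(w_i | w_1 ... w_{i-1}) for a 3-word utterance.
cond_probs = [0.1, 0.5, 0.05]

per_word = [surprisal(p) for p in cond_probs]
total = surprisal(math.prod(cond_probs))  # log2(1/P(u)), with P(u) the product of the terms

print(per_word)                            # information density profile across the utterance
assert abs(total - sum(per_word)) < 1e-9   # Equation (1): the profile sums to the total
```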
Optimization of information density entails that the information conveyed by each $w_i$ should be as uniform and close to an ideal value as possible. There are at least two ways in which UID may be optimal. First, the transmission of a message via spoken or written language can be viewed as a noisy channel. From this assumption it follows that information density is optimized near the channel capacity, where speakers maximize the rate of information transmission while minimizing the danger of a mistransmitted message (see also Aylett, 2000; Aylett and Turk, 2004; Genzel and Charniak, 2002). That is, UID is an optimal solution to the problem of rapid yet error-free communication in a noisy environment. Second, and independently of whether linguistic communication is viewed as a noisy channel, UID can be seen as minimizing comprehension difficulty. The difficulty incurred by a comprehender in processing a word $w_i$ is positively correlated with its surprisal (Hale, 2001; Levy, 2006). If the effect of surprisal on difficulty is superlinear, then the total difficulty of the utterance $u$, namely $\sum_{i=1}^{n} \left[ \log \frac{1}{P(w_i \mid w_1 \cdots w_{i-1})} \right]^k$ with $k > 1$, is minimized when information density is uniform (for proof see appendix; see also Levy 2005, ch. 2).[1] That is, UID is also an optimal solution to the problem of low-effort comprehension.

Footnote 1: Superlinearity would be a natural consequence of limited cognitive resources, although the issue awaits further empirical investigation.
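The appendix proof is not reproduced here, but the two-unit case already shows the convexity argument; the following is a sketch under the stated assumption that per-unit processing cost grows as the $k$-th power of surprisal with $k > 1$.

```latex
% Two-unit utterance with per-word surprisals s_1, s_2 and fixed total s_1 + s_2 = c.
% Superlinear cost: s_1^k + s_2^k with k > 1. By convexity of x -> x^k (Jensen's inequality),
\[
  s_1^k + s_2^k \;\ge\; 2\left(\frac{s_1 + s_2}{2}\right)^k \;=\; 2\left(\frac{c}{2}\right)^k,
  \qquad \text{with equality iff } s_1 = s_2 = \tfrac{c}{2},
\]
% so for fixed total information the uniform profile minimizes total difficulty.
```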
3 Syntactic reduction

UID would be optimal in several ways, but do speakers actually consider UID as a factor when making choices during online syntactic production? We address this question by directly linking a syntactic choice point to UID. If information density optimization is general, i.e. if it applies to all aspects of language production, we should find its effects even in structural choices. We use variation in the form of certain types of English relative clauses (henceforth RCs) to test this hypothesis. At the onset of an RC speakers can, but do not have to, utter the relativizer that.[2] We refer to the omission of that as syntactic REDUCTION.

(1) How big is [NP the family_i [RC (that) you cook for _i]]?

Our dataset consists of a set of 3,452 RCs compatible with the above variation, extracted from the Switchboard corpus of spontaneous American English speech. All RCs were automatically annotated for a variety of control factors that are known to influence syntactic reduction of RCs, including RC size, distance of the RC from the noun it modifies, data about the speaker including gender and speech rate, local measures of speech disfluency, and formal and animacy properties of the RC subject (a full list is given in the appendix; see also Jaeger, 2006). These control factors are used in the logistic regression models presented in Section 5.

Footnote 2: To be precise, standard American English restricts omission of that to finite, restrictive, non-pied-piped, non-extraposed, non-subject-extracted relative clauses. Only such RCs are considered here.

4 Reduction as a means of information density modulation

From a syntactic perspective, the choice to omit a relativizer means that the first word of an RC conveys two pieces of information simultaneously: the onset of a relative clause and part of its internal contents (usually part of its subject, as you in Example (1)). Using the notation $w_{\cdots-1}$ for the context preceding the RC and $w_1$ for the RC's first word (excluding the relativizer, if any), these two pieces of information can be expressed as a Markov decomposition of $w_1$'s surprisal:

$\log \frac{1}{P(w_1 \mid w_{\cdots-1})} = \log \frac{1}{P(\mathrm{RC} \mid w_{\cdots-1})} + \log \frac{1}{P(w_1 \mid \mathrm{RC}, w_{\cdots-1})}$   (2)

Conversely, the choice to use a relativizer separates out these two pieces of information, so that the only information carried by $w_1$ is measured as

$\log \frac{1}{P(w_1 \mid \mathrm{RC}, that, w_{\cdots-1})}$   (3)

If the overall distribution of syntactic reduction is in accordance with principles of information-density optimization, we should expect that full forms (overt relativizers) are used more often when the information density of the RC would be high if the relativizer were omitted. The information density of the RC and subsequent parts of the sentence can be quantified by their Shannon information content. As a first test of this prediction, we use n-gram language models to measure the relationship between the Shannon information content of the first word of an RC and the tendency toward syntactic reduction.

We examined the relationship between the rate of syntactic reduction and the surprisal that $w_1$ would have if no relativizer had been used – that is, $\log \frac{1}{P(w_1 \mid w_{\cdots-1})}$ – as estimated by a trigram language model.[3] To eliminate circularity from this test (the problem that for an unreduced RC token, $P(w_1 \mid w_{\cdots-1})$ may be low precisely because that is normally inserted between $w_{\cdots-1}$ and $w_1$), we estimated $P(w_1 \mid w_{\cdots-1})$ from a version of the Switchboard corpus in which all optional relativizers were omitted. That is, if we compare actual English with a hypothetical pseudo-English differing only in the absence of optional relativizers, are the overt relativizers in actual English distributed in a way such that they occur more in the contexts that would be of highest information density in the pseudo-English?[4] For every actual instance of an RC onset $\cdots w_{-2}\, w_{-1}\, (that)\, w_1 \cdots$ we calculated the trigram probability $P(w_1 \mid w_{-2}\, w_{-1})$: that is, an estimate of the probability that $w_1$ would have if no relativizer had been used, regardless of whether a relativizer was actually used or not. We then examined the relationship between this probability and the outcome event: whether or not a relativizer was actually used.

Figure 1: RC n-gram-estimated information density and syntactic reduction. The likelihood of the full form is plotted against $\log P(w_1 \mid w_{-2}\, w_{-1})$ (N = 1674); the dotted green line indicates a lowess fit.

Figure 1 shows the relationship between the different quantiles of the log-probability of $w_1$ and the likelihood of syntactic reduction. As can be seen, reduction is more common when the probability $P(w_1 \mid w_{-n} \cdots w_{-1})$ is high. This inverse relationship between $w_1$ surprisal and relativizer use matches the predictions of UID.[5]

Footnote 3: In cases where the conditioning bigram was not found, we backed off to a conditioning unigram, and omitted cases where the conditioning unigram could not be found; no other smoothing was applied. We used hold-one-out estimation of n-gram probabilities to prevent bias.

Footnote 4: Omitting optional relativizers in the language model can alternatively be interpreted as assuming that speakers equate (3) with the second term of (2) – that is, the presence or absence of the relativizer is ignored in estimating the probability of the first word of a relative clause.

Footnote 5: We also calculated the relationship for estimates of RC information density using a trigram model of the Switchboard corpus as-is. By this method, there is a priori reason to expect a correlation, and indeed reduction is (more strongly than in Figure 1) negatively correlated with this measure.
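A minimal sketch of the estimation procedure in footnote 3 follows; the corpus and counting scheme are toy placeholders, and a real run would add the hold-one-out correction described there.

```python
from collections import Counter

def train_counts(corpus):
    """Collect unigram/bigram/trigram counts from a relativizer-omitted corpus,
    given as a list of token lists."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            uni[w] += 1
            if i >= 1:
                bi[(sent[i - 1], w)] += 1
            if i >= 2:
                tri[(sent[i - 2], sent[i - 1], w)] += 1
    return uni, bi, tri

def p_next(w2, w1, w, uni, bi, tri):
    """P(w | w2 w1) with the paper's backoff: trigram if the conditioning bigram
    was seen, else bigram if the conditioning unigram was seen, else drop the case."""
    ctx2 = sum(c for (a, b, _), c in tri.items() if (a, b) == (w2, w1))
    if ctx2 > 0:
        return tri[(w2, w1, w)] / ctx2
    if uni[w1] > 0:
        return bi[(w1, w)] / uni[w1]
    return None  # conditioning unigram unseen: case omitted from the analysis

corpus = [["how", "big", "is", "the", "family", "you", "cook", "for"]]
uni, bi, tri = train_counts(corpus)
print(p_next("the", "family", "you", uni, bi, tri))  # 1.0 in this toy corpus
```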
5 Structural predictability and speaker choice

Section 4 provides evidence that speakers' choices about syntactic reduction are correlated with information density: RC onsets that would be more informationally dense in reduced form are less likely to be reduced. This observation does not, however, provide strong evidence that speakers are directly sensitive to information density in their choices about reduction. Furthermore, if speakers are sensitive to information density in their reduction choices, it raises a new question: what kind of information is taken into account in speakers' estimation of information density? This section addresses the questions of whether reduction is directly sensitive to information density, and what information might be used in estimates of information density, using a two-step modeling approach. The first step involves an incremental stochastic model of structured utterance production. This model is used to construct estimates of the first term in Equation (2) contributing to an RC onset's information density: the predictability (conditional probability) of an RC beginning at a given point in a sentence, given an incremental structural representation for the sentence up to that point. Because the target event space of this term is small, a wide variety of cues, or features, can be included in the model, and the reliability of the resulting predictability estimates is relatively high. This model is described in Section 5.1. The resulting predictability estimates serve as a crucial covariate in the second step: a logistic regression model including a number of control factors (see Section 3 and appendix). This model is used in Section 5.3 as a stringent test of the explanatory power of UID for speakers' reduction choices, and in Section 5.4 to determine whether evidence exists for speakers using structural as well as surface cues in their predictability estimates.

Figure 2: A flattened-tree representation of the sentence "it's one of the last few things in the world you'd ever want to do ...", containing an RC. The incremental parse through world consists of everything to the left of the dashed line; the NPs to which an RC could attach are marked (1)-(3).

5.1 A structural predictability model

In this section we present a method of estimating the predictability of a relative clause in its sentential context, contingent on the structural analysis of that context. For simplicity, we assume that structural analyses are context-free trees, and that the complete, correct incremental analysis of the sentential context is available for conditioning.[6] In general, the task is to estimate

$P(\mathrm{RC}_{n+1\cdots} \mid w_{1\cdots n}, T_{1\cdots n})$   (4)
To estimate these probabilities, we model production as a fully incremental, top-down stochastic tree generation process similar to that used for parsing in Roark (2001). Tree production begins by expanding the root node, and the expansion process for each non-terminal node N consists of the following steps: (a) choosing a leftmost daughter event D1 for N , and making it the active node; (b) recursively expanding the active node; and (c) choosing the next right-sister event Di+1 , and making it the active node. Steps (b) and (c) are repeated until a special right-sister event ?EN D? is chosen in step (c), at which point expansion of N is complete. As in Collins (2003) and Roark (2001), this type of directed generative process allows conditioning on arbitrary features of the incremental utterance. 6 If predictability from the perspective of the comprehender rather than the producer is taken to be of primary interest, this assumption may seem controversial. Nevertheless, there is little evidence that incremental structural misanalysis is a pervasive phenomenon in naturally occuring language (Roland et al., 2006), and the types of incremental utterances occurring immediately before relative clauses do not seem to be good candidates for local misanalysis. From a practical perspective, assuming access to the correct incremental analysis avoids the considerable difficulty involved in the incremental parsing of speech. After each word wn , the bottom-right preterminal of the incremental parse is taken as the currently active node N0 ; if its i-th ancestor is Ni then we have:7 P (RCn+1??? |w1???n , T1???n ) = k X i=0 " P (RC|Ni ) i?1 Y # P (?EN D ? |Nj ) j=0 (5) Figure 2 gives an example of an incremental utterance just before an RC, and illustrates how Equation (5) might be applied.8 At this point, NN would be the active node, and step (b) of expanding NP(3) would have just been completed. An RC beginning after wn (world in Figure 2) could conceivably modify any of the NPs marked (1)-(3), and all three of those attachments may contribute probability mass to P (RCn+1??? ), but an attachment at NP(2) can only do so if NP(1) and PP-LOC make no further expansion. 5.2 Model parameters and estimation What remains is to define the relevant event space and estimate the parameters of the tree-generation model. For RC predictability estimation, the only relevant category distinctions are between RC, ?EN D?, and any other non-null category, so we limit our event space to these three outcomes. Furthermore, because RCs are never leftmost daughters, we can ignore the parameters determining first-daughter event outcome probabilities (step (a) in Section 5.1). We estimate event probabilities using log-linear models (Berger et al., 1996; Della Pietra et al., 1997).9 We included five classes of features in our models, chosen by linguistic considerations of what is likely to help predict the next event given an active node in an incremental utterance (see Wasow et al. (ress)): ? NGRAM features: the last one, two, and three words in the incremental utterance; ? H EAD features: the head word and head part of speech (if yet seen), and animacy (for NPs) of the currently expanded node; ? H ISTory features: the incremental constituent structure of the currently expanded node N , and the number of words and sister nodes that have appeared to the right of N ?s head daughter; ? P RENOMinal features: when the currently expanded node is an NP, the prenominal adjectives, determiners, and possessors it contains; ? 
The complete set of features used is listed in a supplementary appendix.

Footnote 8: The phrase structures found in the Penn Treebank were flattened and canonicalized to ensure that the incremental parse structures do not contain implicit information about upcoming constituents. For example, RC structures are annotated with a nested NP structure, such as [NP [NP something else] [RC we could have done]]. Tree canonicalization consisted of ensuring that each phrasal node had a preterminal head daughter, and that each preterminal node headed a phrasal node, according to the head-finding algorithm of Collins (2003). VP and S nodes without a verbal head child were given special null-copula head daughters, so that the NP-internal constituency of predicative nouns without overt copulas was distinguished from sentence-level constituency.

Footnote 9: The predictability models were heavily overparameterized, and to prevent overfitting were regularized with a quadratic Bayesian prior. For each trained model the value of the regularization parameter (constant for all features) was chosen to optimize held-out data likelihood. RC probabilities were estimated using ten-fold cross-validation over the entire Switchboard corpus, so that a given RC was never contained in the training data of the model used to determine its probability.

5.3 Explanatory power of phrasal predictability

We use the same statistical procedures as in Jaeger (2006, Chapter 4) to put the predictions of the information-density hypothesis to a more stringent test. We evaluate the explanatory power of phrasal predictability in logistic regression models of syntactic reduction that include all the control variables otherwise known to influence relativizer omission (Section 3). To avoid confounds due to clusters of data points from the same speaker, the model was bootstrapped (10,000 iterations) with random replacement of speaker clusters.[10] Phrasal predictability of the RC (based on the full feature set listed in Section 5.2) was entered into this model as a covariate to test whether RC predictability co-determines syntactic reduction after other factors are controlled for. Phrasal predictability makes a significant contribution to the relativizer omission model ($\chi^2(1) = 54.3$; $p < 0.0001$). This demonstrates that phrasal predictability has explanatory power in this case of syntactic reduction.

5.4 Surface and structural conditioning of phrasal predictability

The structural predictability model puts us in a position to ask whether empirically observed patterns of syntactic reduction give evidence for speakers' use of some types of cues but not others. In particular, there is a question of whether predictability based on surface cues alone (the NGRAM features of Section 5.2) provides a complete description of information-density effects on speakers' choices in syntactic reduction. We tested this by building a syntactic-reduction model containing two predictability covariates: one using NGRAM features alone, and one using all other (i.e., structural, or all-but-NGRAM) feature types listed in Section 5.2. We can then test whether the parameter weight in the reduction model for each predictability measure differs significantly from zero.
It turns out that both predictability measures matter: all-but-NGRAM predictability is highly significant ($\chi^2(1) = 23.55$, $p < 0.0001$), but NGRAM predictability is also significant ($\chi^2(1) = 5.28$, $p < 0.025$). While NGRAM and all-but-NGRAM probabilities are strongly correlated ($r^2 = 0.70$), they evidently exhibit enough differences to contribute non-redundant information in the reduction model. We interpret this as evidence that speakers may be using both surface and structural cues for phrasal predictability estimation in utterance structuring.

6 Conclusion

Using a case study in syntactic reduction, we have argued that information-density optimization – the tendency to maximize the uniformity of upcoming-event probabilities at each part of a sentence – plays an important role in speakers' choices about structuring their utterances. This question has been previously addressed in the context of phonetic reduction of highly predictable words and syllables (Aylett and Turk, 2004; Bell et al., 2003), but not in the case of word omission. Using a stochastic tree-based model of incremental utterance production combined with a logistic regression model of syntactic reduction, we have found evidence that when speakers have the choice between using or omitting an optional function word that marks the onset of a phrase, they use the function word more often when the phrase it introduces is less predictable. We have found evidence that speakers may be using both phrasal and structural information to calculate upcoming-event predictabilities. The overall distribution of syntactic reduction has the effect of smoothing the information profile of the sentence: when the function word is not omitted, the information density of the immediately following words is reduced. The fact that our case study involves the omission of a single word with little to no impact on utterance meaning made the data particularly amenable to analysis, but we believe that this method is potentially applicable to a wider range of variable linguistic phenomena, such as word ordering and lexical choice. More generally, we believe that the ensuing view of constraints on situated linguistic communication as limits on the information-transmission capacity of the environment, or on the information-processing capacity of human language processing faculties, can serve as a useful framework for the study of language use. On this view, syntactic reduction is available to the speaker as a pressure valve to regulate information density when it is dangerously high. Equivalently, the presence of a function word can be interpreted as a signal to the comprehender to expect the unexpected, a rational exchange of time for reduced information density, or a meaningful delay (Jaeger, 2005).

Footnote 10: Our data comes from approximately 350 speakers contributing 1 to 40 RCs (mean = 10, median = 8, SD = 8.5) to the data set. Ignoring such clusters in the modeling process would cause the models to be overly optimistic. Post-hoc tests conducted on the models presented here revealed no signs of over-fitting, which means that the model is likely to generalize beyond the corpus to the population of American English speakers. The significance levels reported in this paper are based on a normal-theory interpretation of the unbootstrapped model parameter estimate, using a bootstrapped estimate of the parameter's standard error.
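A minimal sketch of the speaker-cluster bootstrap from Section 5.3 and footnote 10 follows; it assumes a design matrix X (predictability plus controls), binary reduction outcomes y, and one speaker id per RC, and the data below are synthetic placeholders rather than the Switchboard annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cluster_bootstrap_se(X, y, speakers, n_boot=1000, seed=0):
    """Resample whole speaker clusters with replacement, refit the reduction
    model each time, and return the bootstrap SE of the first coefficient
    (here standing in for the phrasal-predictability covariate)."""
    rng = np.random.default_rng(seed)
    ids = np.unique(speakers)
    coefs = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=len(ids), replace=True)
        rows = np.concatenate([np.flatnonzero(speakers == s) for s in chosen])
        model = LogisticRegression(max_iter=1000).fit(X[rows], y[rows])
        coefs.append(model.coef_[0, 0])
    return np.std(coefs, ddof=1)

# Synthetic placeholder data: 300 RCs from 30 speakers, 3 covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (rng.random(300) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
speakers = rng.integers(0, 30, size=300)
print(cluster_bootstrap_se(X, y, speakers, n_boot=200))
```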
More generally, reduction at different levels of linguistic form (phonetic detail, detail of referring expressions, as well as omission of words, as in the case examined here) provides a means for speakers to smooth the information-density profile of their utterances (Aylett and Turk, 2004; Genzel and Charniak, 2002). This raises important questions about the specific motivations of speakers' choices: are these choices made for the sake of facilitating production, or as part of audience design? Finally, this view emphasizes the connection between grammatical optionality and communicative optimality. The availability of more than one way to express a given meaning grants speakers the choice to select the optimal alternative for each communicative act.

Acknowledgments

This work has benefited from audience feedback at the Language Evolution and Computation research group at the University of Edinburgh, and at the Center for Research on Language at UC San Diego. The idea to derive estimates of RC predictability based on multiple cues originated in discussion with T. Wasow, P. Fontes, and D. Orr. RL's work on this paper was supported by an ESRC postdoctoral fellowship at the School of Informatics at the University of Edinburgh (award PTA-026-27-0944). FJ's work was supported by a research assistantship at the Linguistics Department, Stanford University (sponsored by T. Wasow and D. Jurafsky) and a post-doctoral fellowship at the Department of Psychology, UC San Diego (V. Ferreira's NICHD grant R01 HD051030).

References

Arnold, J. E., Wasow, T., Asudeh, A., and Alrenga, P. (2004). Avoiding attachment ambiguities: The role of constituent ordering. Journal of Memory and Language, 51:55–70.
Aylett, M. (2000). Stochastic Suprasegmentals: Relationships between Redundancy, Prosodic Structure and Care of Articulation in Spontaneous Speech. PhD thesis, University of Edinburgh.
Aylett, M. and Turk, A. (2004). The Smooth Signal Redundancy Hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47(1):31–56.
Bell, A., Jurafsky, D., Fosler-Lussier, E., Girand, C., Gregory, M., and Gildea, D. (2003). Effects of disfluencies, predictability, and utterance position on word form variation in English conversation. Journal of the Acoustical Society of America, 113(2):1001–1024.
Berger, A. L., Pietra, S. A. D., and Pietra, V. J. D. (1996). A Maximum Entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.
Collins, M. (2003). Head-driven statistical models for natural language parsing. Computational Linguistics, 29:589–637.
Della Pietra, S., Della Pietra, V., and Lafferty, J. (1997). Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393.
Genzel, D. and Charniak, E. (2002). Entropy rate constancy in text. In Proceedings of ACL.
Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL, volume 2, pages 159–166.
Jaeger, T. F. (2005). Optional that indicates production difficulty: evidence from disfluencies. In Proceedings of the Disfluency in Spontaneous Speech Workshop.
Jaeger, T. F. (2006). Redundancy and Syntactic Reduction in Spontaneous Speech. PhD thesis, Stanford University, Stanford, CA.
Keller, F. (2004). The entropy rate principle as a predictor of processing effort: An evaluation against eyetracking data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 317–324, Barcelona.
Levy, R. (2005). Probabilistic Models of Word Order and Syntactic Discontinuity. PhD thesis, Stanford University.
Levy, R. (2006). Expectation-based syntactic comprehension. Ms., University of Edinburgh.
Roark, B. (2001). Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276.
Roland, D., Elman, J. L., and Ferreira, V. S. (2006). Why is that? Structural prediction and ambiguity resolution in a very large corpus of English sentences. Cognition, 98:245–272.
Wasow, T., Jaeger, T. F., and Orr, D. (in press). Lexical variation in relativizer frequency. In Wiese, H. and Simon, H., editors, Proceedings of the Workshop on Expecting the Unexpected: Exceptions in Grammar at the 27th Annual Meeting of the German Linguistic Association, University of Cologne, Germany. DGfS.
Applications of Neural Networks in Video Signal Processing

John C. Pearson, Clay D. Spence and Ronald Sverdlove
David Sarnoff Research Center
CN5300, Princeton, NJ 08543-5300

Abstract

Although color TV is an established technology, there are a number of longstanding problems for which neural networks may be suited. Impulse noise is such a problem, and a modular neural network approach is presented in this paper. The training and analysis was done on conventional computers, while real-time simulations were performed on a massively parallel computer called the Princeton Engine. The network approach was compared to a conventional alternative, a median filter. Real-time simulations and quantitative analysis demonstrated the technical superiority of the neural system. Ongoing work is investigating the complexity and cost of implementing this system in hardware.

1 THE POTENTIAL FOR NEURAL NETWORKS IN CONSUMER ELECTRONICS

Neural networks are most often considered for application in emerging new technologies, such as speech recognition, machine vision, and robotics. The fundamental ideas behind these technologies are still being developed, and it will be some time before products containing neural networks are manufactured. As a result, research in these areas will not drive the development of inexpensive neural network hardware which could serve as a catalyst for the field of neural networks in general. In contrast, neural networks are rarely considered for application in mature technologies, such as consumer electronics. These technologies are based on established principles of information processing and communication, and they are used in millions of products per year. The embedding of neural networks within such mass-market products would certainly fuel the development of low-cost network hardware, as economics dictates rigorous cost-reduction in every component.

2 IMPULSE NOISE IN TV

The color television signaling standard used in the U.S. was adopted in 1953 (McIlwain and Dean, 1956; Pearson, 1975). The video information is first broadcast as an amplitude modulated (AM) radio-frequency (RF) signal, and is then demodulated in the receiver into what is called the composite video signal. The composite signal is comprised of the high-bandwidth (4.2 MHz) luminance (black and white) signal and two low-bandwidth color signals whose amplitudes are modulated in quadrature on a 3.58 MHz subcarrier. This signal is then further decoded into the red, green and blue signals that drive the display. One image "frame" is formed by interlacing two successive "fields" of 262.5 horizontal lines. Electric sparks create broad-band RF emissions which are transformed into oscillatory waveforms in the composite video signal, called AM impulses. See Figure 1. These impulses appear on a television screen as short, horizontal, multi-colored streaks which clearly stand out from the picture. Such sparks are commonly created by electric motors. There is little spatial (within a frame) or temporal (between frames) correlation between impulses. General considerations suggest a two-step approach for the removal of impulses from the video signal: detect which samples have been corrupted, and replace them with values derived from their spatio-temporal neighbors. Although impulses are quite visible, they form a small fraction of the data, so only those samples detected as corrupted should be altered.
An interpolated average of some sort will generally be a good estimate of impulse-corrupted samples, because images are generally smoothly varying in space and time. There are a number of difficulties associated with this detection/replacement approach to the problem. There are many impulse-like waveforms present in normal video, which can cause "false positives" or "false alarms". See Figure 2. The algorithms that decode the composite signal into RGB spread impulses onto neighboring lines, so it is desirable to remove the impulses in the composite signal. However, the color encoding within the composite signal complicates matters. The subcarrier frequency is near the ringing frequency of the impulses and tends to hide the impulses. Furthermore, the replacement function cannot simply average the nearest samples, because they represent different color components. The impulses also have a wide variety of waveforms (Figure 1), including some variation caused by clipping in the receiver.

Figure 1: Seven Representative AM Impulse Waveforms. They have been digitized and displayed at the intervals used in digital receivers (8 bits, .07 usec). The largest amplitude impulses are 20-30 samples wide, approximately 3% of the width of one line of active video (752 samples).

Figure 2: Corrupted Video Scan Line. (Top) Scan line of a composite video signal containing six impulse waveforms. (Bottom) The impulse waveforms, derived by subtracting the uncorrupted signal from the corrupted signal. Note the presence of many impulse-like features in the video signal.

3 MODULAR NEURAL NETWORK SYSTEM

The impulse removal system incorporates three small multi-layer perceptron networks (Rumelhart and McClelland, 1986), and all of the processing is confined to one field of data. See Figure 3. The replacement function is performed by one network, termed the i-net ("i" denotes interpolation). Its input is 5 consecutive samples each from the two lines above and the two lines below the current line. The network consists of 10 units in the first hidden layer, 5 in the second, and one output node trained to estimate the center sample of the current line. The detection function employs 2 networks in series. (A single-network detector has been tried, but it has never performed as well as this two-stage detector.) The inputs to the first network are 9 consecutive samples from the current line, centered on the sample of interest. It has 3 nodes in the first layer, and one output node trained to compute a moving average of the absolute difference between the clean and noisy signals of the current inputs. It is thus trained to function as a filter for impulse energy, and is termed the e-net. The output of the e-net is then low-pass filtered and sub-sampled to remove redundant information. The inputs to the second network are 3 lines of 5 consecutive samples each, drawn from the post-processed output of the e-net, centered on the sample of interest. This network, like the e-net, has 3 nodes in the first layer and one output node. It is trained to output 1 if the sample of interest is contaminated with impulse noise, and 0 otherwise. It is thus an impulse detector, and is termed the d-net.
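A minimal NumPy sketch of the three networks just described follows; the layer sizes come from the text, while the sigmoid activations and the random (untrained) weights are assumptions, and the threshold switch anticipates the selection logic described next.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) fully connected layers; the paper trained these with
    incremental backpropagation and conjugate gradient."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes, sizes[1:])]

def forward(net, x):
    for W, b in net:
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # sigmoid units (assumed)
    return x

i_net = mlp([20, 10, 5, 1])   # 5 samples x 4 neighboring lines -> center estimate
e_net = mlp([9, 3, 1])        # 9 consecutive samples -> impulse-energy estimate
d_net = mlp([15, 3, 1])       # 3 lines x 5 samples of e-net output -> detect 0/1

def restore_sample(neighbors_20, context_15, original, threshold=0.5):
    """Binary switch: output the i-net interpolation where the d-net flags an
    impulse, otherwise pass the original sample through."""
    detected = forward(d_net, context_15)[0] > threshold
    return forward(i_net, neighbors_20)[0] if detected else original

print(restore_sample(rng.normal(size=20), rng.normal(size=15), original=0.3))
```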
The output of the d-net is then fed to a binary switch, which passes through to the final system output either the output of the i-net or the original signal, depending on whether the input exceeds an adjustable threshold.

Figure 3: The Neural Network AM Impulse Removal System. The cartoon face is used to illustrate salient image processing characteristics of the system. The e-net correctly signals the presence of the large impulse (chin), misses the small impulse (forehead), and incorrectly identifies edges (nose) and points (eyes) as impulses. The d-net correctly disregards the vertically correlated impulse features (nose) and detects the large impulse (chin), but incorrectly misses the small impulse (forehead) and the non-correlated impulse-like features (eyes). The i-net produces a fuzzy (doubled) version of the original, which is used to replace segments identified as corrupted by the d-net.

Experience showed that the d-net tended to produce narrow spikes in response to impulse-like features of the image. To remove this source of false positives, the output of the d-net is averaged over a 19-sample region centered on the sample of interest. This reduces the peak amplitude of signals due to impulse-like features much more than the broad signals produced by true impulses. An impulse is considered to be present if this smoothed signal exceeds a threshold, the level of which is chosen so as to strike a balance between low false positive rates (high threshold) and high true positive rates (low threshold). Experience also showed that the fringes of the impulses were not being detected. To compensate for this, sub-threshold d-net output samples are set high if they are within 9 samples of a super-threshold d-net sample. Figure 4 shows the output of the resulting trained system for one scan line.

Figure 4: Input and Network Signals. (Panels: input, noise, e-net output, smoothed d-net output with threshold.)

The detection networks were trained on one frame of video containing impulses of 5 different amplitudes, with the largest twenty times the smallest. Visually, these ranged from non-objectionable to brightly colored. Standard incremental backpropagation and conjugate gradient (NAG, 1990) were the training procedures used. The complexity of the e-net and d-net were reduced in phases. These nets began as 3-layer nets. After a phase of training, redundant nodes were identified and removed, and training re-started. This process was repeated until there were no redundant nodes.

4 REAL-TIME SIMULATION ON THE PRINCETON ENGINE

The trained system was simulated in real-time on the Princeton Engine (Chin et al., 1988), and a video demonstration was presented at the conference. The Princeton Engine (PE) is a 29.3 GIPS image processing system consisting of up to 2048 processing elements in a SIMD configuration. Each processor is responsible for the output of one column of pixels, and contains a 16-bit arithmetic unit, multiplier, a 64-word triple-port register stack, and 16,000 words of local processor memory.
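The d-net post-processing of Section 3 (19-sample smoothing, thresholding, fringe recovery within 9 samples, and the binary switch) reduces to a few lines of array code. The numpy sketch below is illustrative only; the threshold value and the function boundaries are assumptions, not the paper's implementation.

    import numpy as np

    def postprocess_and_switch(d_out, i_out, original, threshold=0.5):
        # Average the raw d-net output over a 19-sample window centered on
        # each sample, suppressing narrow spikes from impulse-like features.
        smoothed = np.convolve(d_out, np.ones(19) / 19.0, mode="same")
        detected = smoothed > threshold
        # Fringe recovery: also mark sub-threshold samples lying within
        # 9 samples of a super-threshold sample.
        idx = np.flatnonzero(detected)
        for i in idx:
            detected[max(0, i - 9): i + 10] = True
        # Binary switch: pass the i-net interpolation where an impulse was
        # detected, and the original signal elsewhere.
        return np.where(detected, i_out, original)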
In addition, an interprocessor communication bus permits exchanges of data between neighboring processors during one instruction cycle.

While the i-net performs better than conventional interpolation methods, the difference is not significant for this problem because of the small amount of signal which is replaced. (If the whole image is replaced, the neural net interpolator gave about 1.5 dB better performance than a conventional method.) Thus it has not been implemented on the PE. The i-net may be of value in other video tasks, such as converting from an interlaced to a non-interlaced display.

16-bit fixed point arithmetic was used in these simulations, with 8 bits of fraction, and 10-bit sigmoid function look-up tables. Comparison with the double-precision arithmetic used on the conventional computers showed no significant reduction in performance. Current work is exploring the feasibility of implementing training on the PE.

5 PERFORMANCE ANALYSIS

The mean squared error (MSE) is well known to be a poor measure of subjective image quality (Roufs and Bouma, 1980). A better measure of detection performance is given by the receiver operating characteristic, or ROC (Green and Swets, 1966, 1974). The ROC is a parametric plot of the fraction of corrupted samples correctly detected versus the fraction of clean samples that were falsely detected. In this case, the decision threshold for the smoothed output of the d-net was the parameter varied.

Figure 5: ROC Analysis of Neural Network and Median Detectors.

Figure 5 (left) shows the neural network detector ROC for five different impulse amplitudes (tested on a video frame that it was not trained on). This quantifies the sharp breakdown in performance observed in real-time simulations at low impulse amplitude. This breakdown is not observed in analysis of the MSE.

Median filters are often suggested for impulse removal tasks, and have been applied to the removal of impulses from FM TV transmission systems (Perlman et al., 1987). In order to assess the relative merits of the neural network detector, a median detector was designed and analyzed. This detector computes the median of the current sample and its 4 nearest neighbors with the same color sub-carrier phase. A detection is registered if the difference between the median and the current sample is above threshold (the same additional measures were taken to ensure that impulse fringes were detected as were described above for the neural network detector). Figure 5 (right) shows both the neural network and median detector ROCs for two different video frames, each of which contained a mixture of all 5 impulse amplitudes. One frame was used in training the network (TRAIN), and the other was not (TEST). This verifies that the network was not overtrained, and quantifies the superior performance of the network detector observed in real-time simulations.

6 CONCLUSIONS

We have presented a system using neural network algorithms that outperforms a conventional method, median filtering, in removing AM impulses from television signals. Of course an additional essential criterion is the cost and complexity of hardware implementations. Median filter chips have been successfully fabricated (Christopher et al., 1988).
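The ROC comparison is also easy to reproduce schematically. In the sketch below, the "4 nearest neighbors with the same color sub-carrier phase" are taken to be the samples 4 and 8 positions away on either side, which assumes 4x sub-carrier sampling; that spacing and the threshold sweep are illustrative assumptions.

    import numpy as np

    def median_detector_roc(signal, corrupted_mask, thresholds):
        # signal: 1-D numpy array of samples; corrupted_mask: boolean array
        # marking which samples carry impulse noise.
        n = len(signal)
        # Median of the current sample and its 4 same-phase neighbors.
        med = np.array([np.median(signal[[max(i - 8, 0), max(i - 4, 0), i,
                                          min(i + 4, n - 1), min(i + 8, n - 1)]])
                        for i in range(n)])
        score = np.abs(signal - med)
        curve = []
        for t in thresholds:
            det = score > t
            tpr = det[corrupted_mask].mean()    # corrupted samples detected
            fpr = det[~corrupted_mask].mean()   # clean samples falsely flagged
            curve.append((fpr, tpr))
        return curve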
We are currently investigating the feasibility of casting small neural networks into special purpose chips. We are also applying neural nets to other television signal processing problems.

Acknowledgements

This work was supported by Thomson Consumer Electronics, under Erich Geiger and Dietrich Westerkamp. This work was part of a larger team effort, and we acknowledge their help, in particular: Nurit Binenbaum, Jim Gibson, Patrick Hsieh, and John Ju.

References

Chin, D., J. Passe, F. Bernard, H. Taylor and S. Knight, (1988). The Princeton Engine: A Real-Time Video System Simulator. IEEE Transactions on Consumer Electronics 34:2, pp. 285-297.

Christopher, L.A., W.T. Mayweather III, and S. Perlman, (1988). A VLSI Median Filter for Impulse Noise Elimination in Composite or Component TV Signals. IEEE Transactions on Consumer Electronics 34:1, p. 262.

Green, D.M., and J.A. Swets, (1966 and 1974). Signal Detection Theory and Psychophysics. New York, Wiley (1966). Reprinted with corrections, Huntington, N.Y., Krieger (1974).

McIlwain, K. and C.E. Dean (eds.); Hazeltine Corporation Staff, (1956). Principles of Color Television. New York, John Wiley and Sons.

NAG, (1990). The NAG Fortran Library Manual, Mark 14. Downers Grove, IL (The Numerical Algorithms Group Inc.).

Pearson, D.E., (1975). Transmission and Display of Pictorial Information. New York, John Wiley and Sons.

Perlman, S.S., S. Eisenhandler, P.W. Lyons, and M.J. Shumila, (1987). Adaptive Median Filtering for Impulse Noise Elimination in Real-Time TV Signals. IEEE Transactions on Communications COM-35:6, p. 646.

Roufs, J.A. and H. Bouma, (1980). Towards Linking Perception Research and Image Quality. Proceedings of the SID 21:3, pp. 247-270.

Rumelhart, D.E. and J.L. McClelland (eds.), (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass., MIT Press.
Sparse Representation for Signal Classification

Ke Huang and Selin Aviyente
Department of Electrical and Computer Engineering
Michigan State University, East Lansing, MI 48824
{kehuang, aviyente}@egr.msu.edu

Abstract

In this paper, application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) for signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminative analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals due to lacking crucial properties for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of the discriminative methods with the reconstruction property and the sparsity of the sparse representation that enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals.

1 Introduction

Sparse representations of signals have received a great deal of attention in recent years. The problem solved by the sparse representation is to search for the most compact representation of a signal in terms of a linear combination of atoms in an overcomplete dictionary. Recent developments in multi-scale and multi-orientation representation of signals, such as the wavelet, ridgelet, curvelet and contourlet transforms, are an important incentive for the research on the sparse representation. Compared to methods based on orthonormal transforms or direct time domain processing, sparse representation usually offers better performance with its capacity for efficient signal modelling. Research has focused on three aspects of the sparse representation: pursuit methods for solving the optimization problem, such as matching pursuit [1], orthogonal matching pursuit [2], basis pursuit [3], and LARS/homotopy methods [4]; design of the dictionary, such as the K-SVD method [5]; and the applications of the sparse representation for different tasks, such as signal separation, denoising, coding, and image inpainting [6, 7, 8, 9, 10]. For instance, in [6], sparse representation is used for image separation. The overcomplete dictionary is generated by combining multiple standard transforms, including the curvelet transform, ridgelet transform and discrete cosine transform. In [7], application of the sparse representation to blind source separation is discussed and experimental results on EEG data analysis are demonstrated. In [8], a sparse image coding method with the wavelet transform is presented. In [9], sparse representation with an adaptive dictionary is shown to have state-of-the-art performance in image denoising.
The widely used shrinkage method for image denoising is shown to be the first iteration of basis pursuit that solves the sparse representation problem [10].

In the standard framework of sparse representation, the objective is to reduce the signal reconstruction error with as few atoms as possible. On the other hand, discriminative analysis methods, such as LDA, are more suitable for the tasks of classification. However, discriminative methods are usually sensitive to corruption in signals due to lacking crucial properties for signal reconstruction. In this paper, we propose the method of sparse representation for signal classification (SRSC), which modifies the standard sparse representation framework for signal classification. We first show that replacing the reconstruction error with discrimination power in the objective function of the sparse representation is more suitable for the tasks of classification. When the signal is corrupted, the discriminative methods may fail because little information is contained in discriminative analysis to successfully deal with noise, missing data and outliers. To address this robustness problem, the proposed approach of SRSC combines discrimination power, signal reconstruction and sparsity in the objective function for classification. With the theoretical framework of SRSC, our objective is to achieve a sparse and robust representation of corrupted signals for effective classification.

The rest of this paper is organized as follows. Section 2 reviews the problem formulation and solution for the standard sparse representation. Section 3 discusses the motivations for proposing SRSC by analyzing the reconstructive methods and discriminative methods for signal classification. The formulation and solution of SRSC are presented in Section 4. Experimental results with synthetic and real data are shown in Section 5, and Section 6 concludes the paper with a summary of the proposed work and discussions.

2 Sparse Representation of Signal

The problem of finding the sparse representation of a signal in a given overcomplete dictionary can be formulated as follows. Given an N × M matrix A containing the elements of an overcomplete dictionary in its columns, with M > N and usually M >> N, and a signal y ∈ R^N, the problem of sparse representation is to find an M × 1 coefficient vector x such that y = Ax and ||x||_0 is minimized, i.e.,

x^{*} = \arg\min_x \|x\|_0 \quad \text{s.t.} \quad y = Ax \qquad (1)

where ||x||_0 is the l_0 norm and is equivalent to the number of non-zero components in the vector x. Finding the solution to equation (1) is NP-hard due to its nature of combinatorial optimization. Suboptimal solutions to this problem can be found by iterative methods like the matching pursuit and orthogonal matching pursuit. An approximate solution is obtained by replacing the l_0 norm in equation (1) with the l_1 norm, as follows:

x^{*} = \arg\min_x \|x\|_1 \quad \text{s.t.} \quad y = Ax \qquad (2)

where ||x||_1 is the l_1 norm. In [11], it is proved that if certain conditions on sparsity are satisfied, i.e., the solution is sparse enough, the solution of equation (1) is equivalent to the solution of equation (2), which can be efficiently solved by basis pursuit using linear programming. A generalized version of equation (2), which allows for a certain degree of noise, is to find x such that the following objective function is minimized:

J_1(x; \lambda) = \|y - Ax\|_2^2 + \lambda \|x\|_1 \qquad (3)

where the parameter \lambda > 0 is a scalar regularization parameter that balances the tradeoff between reconstruction error and sparsity.
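Equation (3) is a standard l1-regularized least-squares problem, so any generic solver applies. The following iterative soft-thresholding (ISTA) sketch is one such solver, shown only to make the objective concrete; it is not the pursuit machinery discussed in this paper, and the step size is the usual Lipschitz bound.

    import numpy as np

    def ista(A, y, lam, n_iter=200):
        # Minimizes J1(x) = ||y - A x||_2^2 + lam * ||x||_1  (Eq. 3).
        L = 2.0 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = 2.0 * A.T @ (A @ x - y)          # gradient of the data term
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x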
In [12], a Bayesian approach is proposed for learning the optimal value for \lambda. Besides the intuitive interpretation as obtaining a sparse factorization that minimizes signal reconstruction error, the problem formulated in equation (3) has an equivalent interpretation in the framework of Bayesian decision, as follows [13]. The signal y is assumed to be generated by the following model:

y = Ax + \epsilon \qquad (4)

where \epsilon is white Gaussian noise. Moreover, the prior distribution of x is assumed to be super-Gaussian:

p(x) \propto \exp\Big(-\lambda \sum_{i=1}^{M} |x_i|^p\Big) \qquad (5)

where p \in [0, 1]. This prior has been shown to encourage sparsity in many situations, due to its heavy tails and sharp peak. Given this prior, maximum a posteriori (MAP) estimation of x is formulated as

x_{MAP} = \arg\max_x p(x|y) = \arg\min_x \big[-\log p(y|x) - \log p(x)\big] = \arg\min_x \big(\|y - Ax\|_2^2 + \lambda \|x\|_p^p\big) \qquad (6)

When p = 0, equation (6) is equivalent to the generalized form of equation (1); when p = 1, equation (6) is equivalent to equation (2).

3 Reconstruction and Discrimination

Sparse representation works well in applications where the original signal y needs to be reconstructed as accurately as possible, such as denoising, image inpainting and coding. However, for applications like signal classification, it is more important that the representation is discriminative for the given signal classes than that it has a small reconstruction error. The difference between reconstruction and discrimination has been widely investigated in the literature. It is known that typical reconstructive methods, such as principal component analysis (PCA) and independent component analysis (ICA), aim at obtaining a representation that enables sufficient reconstruction, and thus are able to deal with signal corruption, i.e., noise, missing data and outliers. On the other hand, discriminative methods, such as LDA [14], generate a signal representation that maximizes the separation of distributions of signals from different classes. While both methods have broad applications in classification, the discriminative methods have often outperformed the reconstructive methods for the classification task [15, 16]. However, this comparison between the two types of method assumes that the signals being classified are ideal, i.e., noiseless, complete (without missing data) and without outliers. When this assumption does not hold, the classification may suffer from the non-robust nature of the discriminative methods, which contain insufficient information to successfully deal with signal corruptions. Specifically, the representation provided by the discriminative methods for optimal classification does not necessarily contain sufficient information for signal reconstruction, which is necessary for removing noise, recovering missing data and detecting outliers. This performance degradation of discriminative methods on corrupted signals is evident in the examples shown in [17]. On the other hand, reconstructive methods have shown successful performance in addressing these problems. In [9], the sparse representation is shown to achieve state-of-the-art performance in image denoising. In [18], missing pixels in images are successfully recovered by an inpainting method based on sparse representation. In [17, 19], the PCA method with subsampling effectively detects and excludes outliers for the following LDA analysis.
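The correspondence between Eq. (6) and Eq. (3) can be made explicit. Assuming Gaussian noise with variance \sigma^2 (a scaling that is otherwise absorbed into \lambda):

-\log p(y|x) = \frac{1}{2\sigma^2} \|y - Ax\|_2^2 + \text{const}, \qquad -\log p(x) = \lambda \sum_{i=1}^{M} |x_i|^p + \text{const}

Minimizing their sum over x therefore recovers the objective of Eq. (6), with the regularization weight rescaled by 2\sigma^2; for p = 1 this is exactly Eq. (3).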
All of these examples motivate the design of a new signal representation that combines the advantages of both reconstructive and discriminative methods to address the problem of robust classification when the obtained signals are corrupted. The proposed method should generate a representation that contains discriminative information for classification and crucial information for signal reconstruction, and preferably the representation should be sparse. Due to its evident reconstructive properties [9, 18], the available efficient pursuit methods and the sparsity of the representation, we choose the sparse representation as the basic framework for SRSC and incorporate a measure of discrimination power into the objective function. Therefore, the sparse representation obtained by the proposed SRSC contains both crucial information for reconstruction and discriminative information for classification, which enables a reasonable classification performance in the case of corrupted signals. The three objectives, sparsity, reconstruction and discrimination, may not always be consistent. Therefore, weighting factors are introduced to adjust the tradeoff among these objectives, like the weighting factor \lambda in equation (3). It should be noted that the aim of SRSC is not to improve on standard discriminative methods like LDA in the case of ideal signals, but to achieve comparable classification results when the signals are corrupted. A recent work [17] that aims at robust classification shares some common ideas with the proposed SRSC. In [17], PCA with subsampling proposed in [19] is applied to detect and exclude outliers in images, and the rest of the pixels are used for calculating LDA.

4 Sparse Representation for Signal Classification

In this section, the SRSC problem is formulated mathematically and a pursuit method is proposed to optimize the objective function. We first replace the term measuring reconstruction error with a term measuring discrimination power, to show the different effects of reconstruction and discrimination. Further, we incorporate a measure of discrimination power in the framework of standard sparse representation to effectively address the problem of classifying corrupted signals. The Fisher discrimination criterion [14] used in LDA is applied to quantify the discrimination power. Other well-known discrimination criteria can easily be substituted.

4.1 Problem Formulation

Given y = Ax as discussed in Section 2, we view x as the feature extracted from signal y for classification. The extracted feature should be as discriminative as possible between the different signal classes. Suppose that we have a set of K signals in a signal matrix Y = [y_1, y_2, ..., y_K] with the corresponding representation in the overcomplete dictionary as X = [x_1, x_2, ..., x_K], of which K_i samples are in the class C_i, for 1 <= i <= C. The mean m_i and variance s_i^2 for class C_i are computed in the feature space as follows:

m_i = \frac{1}{K_i} \sum_{x \in C_i} x, \qquad s_i^2 = \frac{1}{K_i} \sum_{x \in C_i} \|x - m_i\|_2^2 \qquad (7)
Fisher?s criterion is motivated by the intuitive idea that the discrimination power is maximized when the spatial distribution of different classes are as far away as possible and the spatial distribution of samples from the same class are as close as possible. Replacing the reconstruction error with the discrimination power, the objective function that focuses only on classification can be written as: J2 (X, ?) = F (X) ? ? K  i=1 xi 0 (9) where ? is a positive scalar weighting factor chosen to adjust the tradeoff between discrimination power and sparsity. Maximizing J2 (X, ?) generates a sparse representation that has a good discrimination power. When the discrimination power, reconstruction error and sparsity are combined, the objective function can be written as: J3 (X, ?1 , ?2 ) = F (X) ? ?1 K  i=1 xi 0 ? ?2 K  i=1 2 yi ? Axi 2 (10) where ?1 and ?2 are positive scalar weighting factors chosen to adjust the tradeoff between the discrimination power, sparsity and the reconstruction error. Maximizing J3 (X, ?1 , ?2 ) ensures that a representation with discrimination power, reconstruction property and sparsity is extracted for robust classification of corrupted signals. In the case that the signals are corrupted, the two terms K K   2 xi 0 and yi ? Axi 2 robustly recover the signal structure, as in [9, 18]. On the other hand, i=1 i=1 the inclusion of the term F (X) requires that the obtained representation contains discriminative information for classification. In the following discussions, we refer to the solution of the objective function J3 (X, ?1 , ?2 ) as the features for the proposed SRSC. 4.2 Problem Solution Both the objective function J2 (X, ?) defined in equation (9) and the objective function J3 (X, ?1 , ?2 ) defined in equation (10) have similar forms to the objective function defined in the standard sparse representation, as J1 (x; ?) in equation (3). However, the key difference is that the evaluation of F (X) in J2 (X, ?) and J3 (X, ?1 , ?2 ) involves not only a single sample, as in J1 (x; ?), but also all other samples. Therefore, not all the pursuit methods, such as basis pursuit and LARS/Homotopy methods, that are applicable to the standard sparse representation method can be directly applied to optimize J2 (X, ?) and J3 (X, ?1 , ?2 ). However, the iterative optimization methods employed in the matching pursuit and the orthogonal matching pursuit provide a direct reference to the optimization of J2 (X, ?) and J3 (X, ?1 , ?2 ). In this paper, we propose an algorithm similar to the orthogonal matching pursuit and inspired by the simultaneous sparse approximation algorithm described in [20, 21]. Taking the optimization of J3 (X, ?1 , ?2 ) as example, the pursuit algorithm can be summarized as follows: 1. Initialize the residue matrix R0 = Y and the iteration counter t = 0. 2. Choose the atom from the dictionary, A, that maximizes the objective function: g = argmaxg?A J3 (gT Rt , ?1 , ?2 ) (11) 3. Determine the orthogonal projection matrix Pt onto the span of the chosen atoms, and compute the new residue. Rt = Y ? Pt Y (12) 4. Increment t and return to Step 2 until t is less than or equal to a pre-determined number. The pursuit algorithm for optimizing J2 (X, ?) also follows the same steps. Detailed analysis of this pursuit algorithm can be found in [20, 21]. 5 Experiments Two sets of experiments are conducted. In Section 5.1, synthesized signals are generated to show the difference between the features extracted by J1 (X, ?) 
and J2 (X, ?), which reflects the properties of reconstruction and discrimination. In Section 5.2, classification on real data is shown. Random noise and occlusion are added to the original signals to test the robustness of SRSC. 5.1 Synthetic Example Two simple signal classes, f1 (t) and f2 (t), are constructed with the Fourier basis. The signals are constructed to show the difference between the reconstructive methods and discriminative methods. f1 (t) = g1 cos t + h1 sin t (13) f2 (t) = g2 cos t + h2 sin t (14) selected by J1 selected by J2 20 10 f f 1 1 f 18 8 17 7 16 15 14 6 5 4 13 3 12 2 11 1 10 0 20 40 60 sample index 80 f2 9 2 coefficient amplitude coefficient amplitude 19 100 0 0 20 40 60 sample index 80 100 Figure 1: Distributions of projection of signals from two classes with the first atom selected by: J1 (X, ?) (the left figure) and J2 (X, ?) (the right figure). The scalar g1 is uniformly distributed in the interval [0, 5], and the scalar g2 is uniformly distributed in the interval [5, 10]. The scalar h1 and h2 are uniformly distributed in the interval [10, 20]. Therefore, most of the energy of the signal can be described by the sine function and most of the discrimination power is in the cosine function. The signal component with most energy is not necessary the component with the most discrimination power. Construct a dictionary as {sin t, cos t}, optimizing the objective function J1 (X, ?) with the pursuit method described in Section 4.2 selects sin t as the first atom. On the other hand, optimizing the objective function J2 (X, ?) selects cos t as the first atom. In the simulation, 100 samples are generated for each class and the pursuit algorithm stops at the first run. The projection of the signals from both classes to the first atom selected by J1 (X, ?) and J2 (X, ?) are shown in Fig.1. The difference shown in the figures has direct impact on the classification. 5.2 Real Example Classification with J1 , J2 and J3 (SRSC) is also conducted on the database of USPS handwritten digits [22]. The database contains 8-bit grayscale images of ?0? through ?9? with a size of 16 ? 16 and there are 1100 examples of each digit. Following the conclusion of [23], 10-fold stratified cross validation is adopted. Classification is conducted with the decomposition coefficients (? X? in equation (10)) as feature and support vector machine (SVM) as classifier. In this implementation, the overcomplete dictionary is a combination of Haar wavelet basis and Gabor basis. Haar basis is good at modelling discontinuities in signal and on the other hand, Gabor basis is good at modelling continuous signal components. In this experiment, noise and occlusion are added to the signals to test the robustness of SRSC. First, white Gaussian noise with increasing level of energy, thus decreasing level of signal-to-noise ratio (SNR), are added to each image. Table 1 summarizes the classification error rates obtained with different SNR. Second, different sizes of black squares are overlayed on each image at a random location to generate occlusion (missing data). For the image size of 16 ? 16, black squares with size of 3 ? 3, 5 ? 5, 7 ? 7, 9 ? 9 and 11 ? 11 are overlayed as occlusion. Table 2 summarizes the classification error rates obtained with occlusion. Results in Table 1 and Table 2 show that in the case that signals are ideal (without missing data and noiseless) or nearly ideal, J2 (X, ?) is the best criterion for classification. 
This is consistent with the known conclusion that discriminative methods outperform reconstructive methods in classification. However, when the noise is increased or more data is missing (with larger area of occlusion), the accuracy based on J2 (X, ?) degrades faster than the accuracy base on J1 (X, ?). This indicates Table 1: Classification error rates with different levels of white Gaussian noise N oiseless 20db 15db 10db 5db J1 (Reconstruction) 0.0855 0.0975 0.1375 0.1895 0.2310 J2 (Discrimination) 0.0605 0.0816 0.1475 0.2065 0.2785 J3 (SRSC) 0.0727 0.0803 0.1025 0.1490 0.2060 Table 2: Classification error rates with different sizes of occlusion no occlusion 3?3 5?5 7?7 9?9 J1 (Reconstruction) 0.0855 0.0930 0.1270 0.1605 0.2020 J2 (Discrimination) 0.0605 0.0720 0.1095 0.1805 0.2405 J3 (SRSC) 0.0727 0.0775 0.1135 0.1465 0.1815 11 ? 11 0.2990 0.3305 0.2590 that the signal structures recovered by the standard sparse representation are more robust to noise and occlusion, thus yield less performance degradation. On the other hand, the SRSC demonstrates lower error rate by the combination of the reconstruction property and the discrimination power in the case that signals are noisy or with occlusions. 6 Discussions In summary, sparse representation for signal classification(SRSC) is proposed. SRSC is motivated by the ongoing researches in the area of sparse representation in the signal processing area. SRSC incorporates reconstruction properties, discrimination power and sparsity for robust classification. In current implementation of SRSC, the weight factors are empirically set to optimize the performance. Approaches to determine optimal values for the weighting factors are being conducted, following the methods similar to that introduced in [12]. It is interesting to compare SRSC with the relevance vector machine (RVM) [24]. RVM has shown comparable performance to the widely used support vector machine (SVM), but with a substantially less number of relevance/support vectors. Both SRSC and RVM incorporate sparsity and reconstruction error into consideration. For SRSC, the two terms are explicitly included into objective function. For RVM, the two terms are included in the Bayesian formula. In RVM, the ?dictionary? used for signal representation is the collection of values from the ?kernel function?. On the other hand, SRSC roots in the standard sparse representation and recent developments of harmonic analysis, such as curvelet, bandlet, contourlet transforms that show excellent properties in signal modelling. It would be interesting to see how RVM works by replacing the kernel functions with these harmonic transforms. Another difference between SRSC and RVM is how the discrimination power is incorporated. The nature of RVM is function regression. When used for classification, RVM simply changes the target function value to class membership. For SRSC, the discrimination power is explicitly incorporated by inclusion of a measure based on the Fisher?s discrimination. The adjustment of weighting factor in SRSC (in equation (10)) may give some flexibility for the algorithm when facing various noise levels in the signals. A thorough and systemic study of connections and difference between SRSC and RVM would be an interesting topic for the future research. References [1] S. Mallat and Z. Zhang, ?Matching pursuits with time-frequency dictionaries,? IEEE Trans. on Signal Processing, vol. 41, pp. 3397?3415, 1993. [2] Y. Pati, R. Rezaiifar, and P. 
Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in 27th Annual Asilomar Conference on Signals, Systems, and Computers, 1993.

[3] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Scientific Computing, vol. 20, no. 1, pp. 33-61, 1999.

[4] I. Drori and D. Donoho, "Solution of L1 minimization problems by LARS/homotopy methods," in ICASSP, 2006, vol. 3, pp. 636-639.

[5] M. Aharon, M. Elad, and A. Bruckstein, "The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation," IEEE Trans. on Signal Processing, to appear.

[6] J. Starck, M. Elad, and D. Donoho, "Image decomposition via the combination of sparse representation and a variational approach," IEEE Trans. on Image Processing, vol. 14, no. 10, pp. 1570-1582, 2005.

[7] Y. Li, A. Cichocki, and S. Amari, "Analysis of sparse representation and blind source separation," Neural Computation, vol. 16, no. 6, pp. 1193-1234, 2004.

[8] B. Olshausen, P. Sallee, and M. Lewicki, "Learning sparse image codes using a wavelet pyramid architecture," in NIPS, 2001, pp. 887-893.

[9] M. Elad and M. Aharon, "Image denoising via learned dictionaries and sparse representation," in CVPR, 2006.

[10] M. Elad, B. Matalon, and M. Zibulevsky, "Image denoising with shrinkage and redundant representation," in CVPR, 2006.

[11] D. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Trans. on Information Theory, vol. 47, no. 7, pp. 2845-2862, 2001.

[12] Y. Lin and D. Lee, "Bayesian L1-norm sparse learning," in ICASSP, 2006, vol. 5, pp. 605-608.

[13] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. on Signal Processing, vol. 52, no. 8, pp. 2153-2164, 2004.

[14] R. Duda, P. Hart, and D. Stork, Pattern Classification (2nd ed.), Wiley-Interscience, 2000.

[15] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.

[16] A. Martinez and A. Kak, "PCA versus LDA," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, 2001.

[17] S. Fidler, D. Skocaj, and A. Leonardis, "Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 337-350, 2006.

[18] M. Elad, J. Starck, P. Querre, and D.L. Donoho, "Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)," Journal on Applied and Computational Harmonic Analysis, vol. 19, pp. 340-358, 2005.

[19] A. Leonardis and H. Bischof, "Robust recognition using eigenimages," Computer Vision and Image Understanding, vol. 78, pp. 99-118, 2000.

[20] J. Tropp, A. Gilbert, and M. Strauss, "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," Signal Processing, special issue on sparse approximations in signal and image processing, vol. 86, no. 4, pp. 572-588, 2006.

[21] J. Tropp, A. Gilbert, and M. Strauss, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, special issue on sparse approximations in signal and image processing, vol. 86, no. 4, pp. 589-602, 2006.

[22] USPS Handwritten Digit Database, available at: http://www.cs.toronto.edu/~roweis/data.html.

[23] R.
Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in IJCAI, 1995, pp. 1137-1145.

[24] M. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
Subordinate class recognition using relational object models

Aharon Bar Hillel
Department of Computer Science
The Hebrew University of Jerusalem
aharonbh@cs.huji.ac.il

Daphna Weinshall
Department of Computer Science
The Hebrew University of Jerusalem
daphna@cs.huji.ac.il

Abstract

We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like "jagged wheels"). Accordingly, we suggest a two-stage algorithm: First, a relational part based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model's part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two stage algorithm where classification is based on a model of the sub-ordinate class.

1 Introduction

Human categorization is fundamentally hierarchical, where categories are organized in tree-like hierarchies. In this organization, higher nodes close to the root describe inclusive classes (like vehicles), intermediate nodes describe more specific categories (like motorcycles), and lower nodes close to the leaves capture fine distinctions between objects (e.g., cross vs. sport motorcycles). Intuitively one could expect such hierarchy to be learnt either bottom-up or top-down (or both), but surprisingly, this is not the case. In fact, there is a well defined intermediate level in the hierarchy, called basic level, which is learnt first [11]. In addition to learning, this level is more primary than both more specific and more inclusive levels, in terms of many other psychological, anthropological and linguistic measures.

The primary role of basic level categories seems related to the structure of objects in the world. In [13], Tversky & Hemenway promote the hypothesis that the explanation lies in the notion of parts. Their experiments show that basic level categories (like cars and flowers) are often described as a combination of distinctive parts (e.g., stem and petals), which are mostly unique. Higher (superordinate and more inclusive) levels are more often described by their function (e.g., "used for transportation"), while lower (sub-ordinate and more specific) levels are often described by part properties (e.g., red petals) and other fine details. These points are illustrated in Fig. 1.

This computational characterization of human categorization finds parallels in computer vision and machine learning. Specifically, traditional work in pattern recognition focused on discriminating vectors of features, where the features are shared by all objects, with different values. If we make the analogy between features and parts, this level of analysis is appropriate for sub-ordinate categories. In this level different objects share parts but differ in the parts' values (e.g., red petals vs. yellow petals); this is called "modified parts" in [13].
Figure 1: Left: Examples of sub-ordinate and basic level classification. Top row: Two motorcycle subordinate classes, sport (right) and cross (left). As members of the same basic level category, they share the same part structure. Bottom row: Objects from different basic level categories, like a chair and a face, lack such natural part correspondence. Right: Several parts from a learnt motorcycle model as detected in cross and sport motorcycle images. Based on the part correspondence we can build ordered vectors of part descriptions, and conduct the classification in this shared feature space. (Better seen in color)

This discrimination paradigm cannot easily generalize to the classification of basic level objects, mostly because these objects do not share common informative parts, and therefore cannot be efficiently compared using an ordered vector of fixed parts. This problem is partially addressed in a more recent line of work (e.g., [5, 6, 2, 7, 9]), where part-based generative models of objects are learned directly from images. In this paradigm objects are modeled as a set of parts with spatial relations between them. The models are learnt and applied to images, which are represented as unordered feature sets (usually image patches). Learning algorithms developed within this paradigm are typically more complex and less efficient than traditional classifiers learnt in some fixed vector space. However, given the characteristics of human categorization discussed above, this seems to be the correct paradigm to address the classification of basic level categories.

These considerations suggest that sub-ordinate classification should be solved using a two stage method: First we should learn a generative model for the basic category. Using such a model, the object parts should be identified in each image, and their descriptions can be concatenated into an ordered vector. In a second stage, the distinction between subordinate classes can be done by applying standard machine learning tools, like SVM, to the resulting ordered vectors. In this framework, the model learnt in stage 1 is used to solve the correspondence problem: features in the same entry in two different image vectors correspond since they implement the same part. Using this relatively high level representation, the distinction between subordinate categories may be expected to get easier.

Similar notions, of constructing discriminative classifiers on top of generative models, have been recently proposed in the context of object localization [10] and class recognition [7]. The main motivation in these papers was to provide discriminative power to a generative model, optimized by maximum likelihood. Thus the discriminative classifier for a class in [7, 10] uses a generative model of the same class as a representation scheme.1 In contrast, in this work we use a recent learning algorithm, which already learns a generative relational model of basic categories using a discriminative boosting technique [2]. The new element in our approach is in the learning of a model of one class (the more general basic level category) to allow the efficient discrimination of another class (the more specific sub-ordinates). Thus our main contribution lies in the use of object hierarchy, where we represent sub-ordinate classes using models of the more general, basic level class. The approach relies on a specific form of knowledge transfer between classes, and as such it is an instance of the "learning-to-learn" paradigm.
There are several potential benefits to this approach. First and most important is improved accuracy, especially when training data is scarce. For an under-sampled sub-ordinate class, the basic level model can be learnt from a larger sample, leading to a more stable representation for the second stage SVM and a lower error rate. A second advantage becomes apparent when scalability is considered: A system which needs to discriminate between many subordinate classes will have to learn and keep considerably fewer models (only one for each basic level class) if built according to our proposed approach. Such a system can better cope with new subordinate classes, since learning to identify a new class may rely on existing basic class models.

1 An exception to this rule is the Caltech 101 experiment of [7], but there the discriminative classifiers for all 101 classes rely on the same two arbitrary class models.

Figure 2: Left: A Bayesian network specifying the dependencies between the hidden variables C_l, C_s and the parts' scale and location X_l^k, X_s^k for k = 1, ..., P. The part appearance variables X_a^k are independent, and so they do not appear in this network. Middle: The spatial relations between 5 parts from a learnt chair model. The cyan cross indicates the position of the hidden object center c_l. Right: The implementations of the 5 parts in a chair image. (Better seen in color)

Typically the learning of generative models from unsegmented images is exponential in the number of parts and features [5, 6]. This significantly limits the richness of the generative model, to a point where it may not contain enough detail to distinguish between subclass instances. Alternatively, rich models can be learnt from images with part segmentations [4, 9], but obtaining such training data requires a lot of human labor. The algorithm we use in this work, presented in [2], learns from unsegmented images, and its complexity is linear in the number of model parts and image features. We can hence learn models with many parts, providing a rich object description. In Section 3 we discuss the importance of this property.

We briefly describe the model learning algorithm in Section 2.1. The details of the two-stage method are then described in Section 2.2. In Section 3 we describe experiments with sub-classes from six basic level categories. We compare our proposed approach, called BLP (Basic Level Primacy), to a one-stage approach. We also compare to another two-stage approach, called SLP (Subordinate Level Primacy), in which discrimination is done based on a model of the subordinate class. In most cases, the results support our claim and demonstrate the superiority of the BLP method.

2 Algorithms

To learn class models, we use an efficient learning method briefly reviewed in Section 2.1. Section 2.2 describes the techniques we use for subclass recognition.

2.1 Efficient learning of object class models

The learning method from [2] learns a generative relational object model, but the model parameters are discriminatively optimized using an extended boosting process. The class model is learnt from a set of object images and a set of background images. Image I is represented using an unordered feature set F(I) with N_f features extracted by the Kadir & Brady feature detector [8]. The feature set usually contains several hundred features in various scales, with considerable overlap. Features are normalized to uniform size, zero mean and unit variance.
They are then represented using their first 15 DCT coefficients, augmented by the image location of the feature and its scale. The object model is a generative part-based model with P parts (see example in Fig. 2b), where each part is implemented by a single image feature. For each part, its appearance, location and scale are modeled. The appearance of parts is assumed to be independent, while their location and scale are relative to the unknown object location and scale. This dependence is captured by a Bayesian network model, shown in Fig. 2a. It is a star-like model, where the center node is a 3-dimensional hidden node C = (Cl, Cs), with the vector Cl denoting the unknown object location and the scalar Cs denoting its unknown scale. All the components of the part model, including appearance, relative location and relative log-scale, are modeled using Gaussian distributions with a (scaled) identity covariance matrix. Based on this model and some simplifying assumptions, the likelihood ratio test classifier is approximated by

    f(I) = max_C Σ_{k=1}^{P} max_{x∈F(I)} log p(x | C, θ_k) − ν    (1)

This classifier compares the first term, which represents the approximated image likelihood, to a threshold ν. The likelihood term approximates the image likelihood using the MAP interpretation of the model in the image, i.e., it is determined by the single best implementation of model parts by image features. This MAP solution can be efficiently found using standard message passing in time linear in the number of parts P and the number of image features Nf. However, maximum likelihood (ML) parameter optimization cannot be used, since the approximation permits part repetition, and as a result the ML solution is vulnerable to repetitive choices of the same part. Instead, the model is optimized to minimize a discriminative loss function. Specifically, labeling object images by +1 and background images by −1, the learning algorithm tries to minimize the exp loss of the margin, L(f) = Σ_{i=1}^{N} exp(−y_i f(I_i)), which is the loss minimized by the Adaboost algorithm [12]. The optimization is done using an extended "relational" boosting scheme, which generalizes the boosting technique to classifiers of the form (1).

In the relational boosting algorithm, the weak hypotheses (summands in Eq. (1)) are not merely functions of the image I, but depend also on the hidden variable C, which captures the unknown location and scale of the object. In order to find good part hypotheses, the weak learner is given the best current estimate of C, and uses it to guide the search for a discriminative part hypothesis. After the new part hypothesis is added to the model, C is re-inferred and the new estimate is used in the next boosting round. Additional tweaks are added to improve class recognition results, including a gradient descent weak learner and a feedback loop between the optimization of a weak hypothesis and its weight.

2.2 Subclass recognition

As stated in the introduction, we approach subclass recognition using a two-stage algorithm. In the first stage a model of the basic level class is applied to the image, and descriptors of the identified parts are concatenated into an ordered vector. In the second stage the subclass label is determined by feeding this vector into a classifier trained to identify the subclass. We next present the implementation details of these two stages.
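To make the scoring rule in Eq. (1) concrete, here is a minimal NumPy sketch. It assumes the part log-likelihoods log p(x | C, θ_k) have already been evaluated into a dense array; the array layout and all names are our illustration, not the paper's implementation, which computes the same maximization by message passing.

```python
import numpy as np

def map_score(log_p, nu):
    """Approximate likelihood-ratio score of Eq. (1).

    log_p : array of shape (Nc, P, Nf); log_p[c, k, j] is the assumed
            precomputed value of log p(x_j | C=c, theta_k), i.e. the
            log-likelihood of assigning image feature j to part k under
            hidden center/scale hypothesis c.
    nu    : decision threshold.
    Returns f(I); the image is classified as an object if f(I) > 0.
    """
    best_feature = log_p.max(axis=2)        # per (C, part): best implementing feature
    score_per_c = best_feature.sum(axis=1)  # sum over the P parts
    return score_per_c.max() - nu           # maximize over the hidden variable C
```

Note the cost is O(Nc * P * Nf), i.e. linear in the number of parts and image features, matching the complexity claim above.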
Class model learning: Subclass recognition in the proposed framework depends on part consistency across images, and it is more sensitive to part identification failures than the original class recognition task. Producing an informative feature vector is only possible using a rich model with many stable parts. We therefore use a large number of features (Nf = 400) per image and a relatively fine grid of C values, with 10 x 10 locations over the entire image and 3 scales (a total of Nc = 300 possible values for the hidden variable C). We also learn large models with P = 60 parts.2 Note that such large values for Nf and P are not possible in a purely generative framework such as [5, 6], due to the prohibitive computational learning complexity of O(Nf^P). In [2], model parts are learnt using a gradient-based weak learner, which tends to produce exaggerated part location models to enhance its discriminative power. In such cases parts are modeled as being unrealistically far from the object center. Here we restrict the dynamics of the location model in order to produce more realistic and stable parts. In addition, we found experimentally that when the data contains object images with rich backgrounds, the performance of subclass recognition and localization improves when using models with increased relative location weight. Specifically, a part hypothesis in the model includes appearance, location and scale components with relative weights λi/(λ1 + λ2 + λ3), i = 1, 2, 3, learnt automatically by the algorithm. We multiply λ2 of all the parts in the learnt model by a constant factor of 10 when learning from images with rich backgrounds. Probabilistically, such an increase of λ2 amounts to a smaller location covariance, and hence to stricter demands on the accuracy of the relative locations of parts.

2 In comparison, class recognition in [2] was done with Nf = 200, Nc = 108 and P = 50.

Subclass discrimination: Given a learnt object model and a new image, we match for each model part the corresponding image feature which implements it in the MAP solution. We then build the feature vector representing the new image by concatenating the descriptors of all the features implementing parts 1, .., P. Each feature is described using a 21-dimensional descriptor including:

- The 15 DCT coefficients describing the feature.
- The relative (x, y) location and log-scale of the feature (relative to the computed MAP value of C).
- A normalized mean of the feature, (m − m̄)/std(m), where m is the feature's mean (over feature pixels) and m̄, std(m) are the empirical mean and std of m over the P parts in the image.
- A normalized logarithm of the feature variance, (v − v̄)/std(v), with v the logarithm of the feature's variance (over feature pixels) and v̄, std(v) the empirical mean and std of v over image parts.
- The log-likelihood of the feature (according to the part's model).

In the end, each image is represented by a vector of length 21·P. The training set is then normalized to have unit variance in all dimensions, and the standard deviations are stored in order to allow identical scaling of the test data. Vector representations are prepared in this manner for a training sample including objects from the subordinate class, objects from other subordinate classes of the same basic category, and background images. Finally, a linear SVM [3] is trained to discriminate the target subordinate class images from all other images.
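A minimal sketch of this second-stage representation and classifier follows. It assumes the MAP part-to-feature assignment and the raw per-feature statistics are already available; scikit-learn's LinearSVC stands in for the MATLAB toolbox of [3], and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC  # linear SVM, standing in for the toolbox of [3]

def part_descriptors(dct, xy, log_scale, means, log_vars, loglik):
    """Build the ordered 21*P vector for one image.

    dct       : (P, 15) first DCT coefficients of the feature implementing each part
    xy        : (P, 2)  feature locations relative to the MAP value of C
    log_scale : (P,)    feature log-scales relative to C
    means     : (P,)    per-feature pixel means
    log_vars  : (P,)    log of per-feature pixel variances
    loglik    : (P,)    part-model log-likelihood of each feature
    """
    norm_mean = (means - means.mean()) / means.std()        # normalized over the P parts
    norm_var = (log_vars - log_vars.mean()) / log_vars.std()
    per_part = np.column_stack([dct, xy, log_scale, norm_mean, norm_var, loglik])
    return per_part.ravel()  # 15 + 2 + 1 + 1 + 1 + 1 = 21 entries per part

# Training: rows of X are 21*P vectors; y is +1 for the target subclass, -1 otherwise.
# stds = X.std(axis=0); X = X / stds      # unit variance per dimension; reuse stds at test time
# clf = LinearSVC().fit(X, y)
```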
3 Experimental results

Methods: In our experiments we regard subclass recognition as a binary classification problem in a retrieval scenario. Specifically, the learning algorithm is given a sample of background images and a sample of unsegmented class images. Images are labeled by the subclass they represent, or as background if they do not contain any object from the inclusive class. The algorithm is trained to identify a specific subclass. In the test phase, the algorithm is given another sample from the same distribution of images and is asked to identify images from the specific subclass.

Several methodological problems arise in this scenario. First, subclasses are often not mutually exclusive [13], and in many cases there are borderline instances which are inherently ambiguous. This may lead to an ill-defined classification problem. We avoid this problem in the current study by filtering the data sets, leaving only instances with clear-cut subclass affiliation. The second problem concerns performance measurement. The common measure used in related work is the equal error rate of the ROC curve (denoted here EER), i.e., the error obtained when the rate of false positives and the rate of false negatives are equal. However, as discussed in [1], this measure is not well suited for a detection scenario, where the number of positive examples is much smaller than the number of negative examples. A better measure appears to be the equal error rate of the recall-precision curve (denoted here RPC). Subclass recognition has the same characteristics, and we therefore prefer the RPC measure; for completeness, and since the measures do not give qualitatively different results, the EER score is also provided. (A short sketch of both measures is given below, after Figure 3.)

The algorithms compared: We compare the performance of the following three algorithms:

- Basic Level Primacy (BLP): the two-stage method for subclass recognition described above, in which a model of the basic level category is used to form the vector representation.
- Subordinate Level Primacy (SLP): a two-stage method for subclass recognition, in which a model of the subordinate level category is used to form the vector representation.
- One-stage method: the classification is based on the likelihood obtained by a model of the subordinate class.

The three algorithms use the same training sample in all the experiments. The class models in all the methods were implemented using the algorithm described in Section 2.1, with exactly the same parameters (reported in Section 2.2). This algorithm is competitive with current state-of-the-art methods in object class recognition [2]. The third and the second methods learn a different model for each subordinate category, and use images from the other subordinate classes as part of the background class during model learning.

Figure 3: Object images from the subclasses learnt in our experiments. We used 12 subclasses of 6 basic classes; the number of images in each subclass is indicated in parentheses: Motorcycles: Cross (106), Sport (156); Tables: Dining (60), Coffee (60); Faces: Male (272), Female (173); Chairs: Dining (60), Living Room (60); Guitars: Classical (60), Electric (60); Pianos: Grand (60), Upright (60). Individual faces were also considered as subclasses, and the male and female subclasses above include a single example from 4 such individuals.
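For reference, the two evaluation measures can be computed from raw decision scores as follows. This is our illustration, not code from the paper; in particular, the RPC equal error rate is taken here under one common reading, namely the threshold where precision equals recall, reported as an error rate.

```python
import numpy as np

def equal_error_rates(scores, labels):
    """ROC and recall-precision equal error rates from decision scores.

    scores: higher means 'more positive'; labels: 1 for positive, 0 for negative.
    Sweeps all thresholds and returns (roc_eer, rpc_eer).
    """
    order = np.argsort(-scores)
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)                       # true positives at each threshold
    fp = np.cumsum(1.0 - labels)                 # false positives at each threshold
    n_pos, n_neg = labels.sum(), (1.0 - labels).sum()
    tpr = tp / n_pos                             # recall
    fpr = fp / n_neg
    precision = tp / (tp + fp)
    i = np.argmin(np.abs(fpr - (1.0 - tpr)))     # ROC EER: FP rate = FN rate
    roc_eer = (fpr[i] + (1.0 - tpr[i])) / 2.0
    j = np.argmin(np.abs(precision - tpr))       # RPC EER: precision = recall
    rpc_eer = 1.0 - (precision[j] + tpr[j]) / 2.0
    return roc_eer, rpc_eer
```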
The difference is that in the third method, classification is done based on the model score (as in [2]), while in the second the model is only used to build a representation, and classification is done with an SVM (as in [7]). The first and second methods both employ the distinction between representation and classification, but the first uses a model of the basic category, and so tries to take advantage of the structural similarity between different subordinate classes of the same basic category.

Datasets: We have considered 12 subordinate classes from 6 basic categories. The images were obtained from several sources. Specifically, we have re-labeled subsets of the Caltech Motorcycle and Faces databases3 to obtain the subordinates of sport and cross motorcycles, and male and female faces. For these data sets we have increased the weight of the location model, as mentioned in Section 2.2. We took the subordinate classes of grand piano and electric guitar from the Caltech 101 dataset4 and supplemented them with classes of upright piano and classical guitar collected using Google Images. Finally, we used subsets of the chairs and furniture background used in [2]5 to define classes of dining and living room chairs, and dining and coffee tables. Example images from the data sets can be seen in Fig. 3. In all the experiments, the Caltech office background data was used as the background class. In each experiment half of the data was used for training and the other half for testing.

In addition, we have experimented with individual faces from the Caltech faces data set. In this experiment each individual is treated as a subordinate class of the Faces basic class. We filtered the faces data to include only people who have at least 20 images. There were 19 such individuals, and we report the results of these experiments using the mean error.

3 Available at http://www.robots.ox.ac.uk/~vgg/data.html.
4 Available at http://www.vision.caltech.edu/feifeili/Datasets.htm
5 Available at http://www.cs.huji.ac.il/~aharonbh/#Data.

Figure 4: Left: RPC error rates as a function of the number of model parts P in the two-stage BLP method, for 5 ≤ P ≤ 60. The curves are presented for 6 representative subclasses, one from each basic level category presented in Fig. 3. Right: classification error of the first-stage classifier as a function of P. This graph reports errors for the 6 basic level models used in the experiments reported in the left graph. In general, while adding only a minor improvement to inclusive class recognition, adding parts beyond 30 significantly improves subclass recognition performance.

Classification results: Table 1 summarizes the classification results. We can see that both two-stage methods perform better than the one-stage method. This shows the advantage of the distinction between representation and classification, which allows the two-stage methods to use the more powerful SVM classifier. When comparing the two two-stage methods, BLP is a clear winner in 7 of the 13 experiments, while SLP has a clear advantage only in a single case. The representation based on the basic level model is hence usually preferable for the fine discriminations required.
Overall, the BLP method is clearly superior to the other two methods in most of the experiments, achieving results comparable or superior to the others in 11 of the 13 problems. It is interesting to note that SLP and BLP show comparable performance when given the individual face subclasses. Notice, however, that in this case BLP is far more economical, learning and storing a single face model instead of the 19 individual models used by SLP.

Subclass             One stage method    Subordinate level primacy    Basic level primacy
Cross motor.         14.5 (12.7)         9.9 (3.5)                    5.5 (1.7)
Sport motor.         10.5 (5.7)          6.6 (5.0)                    4.6 (2.6)
Males                20.6 (12.4)         24.7 (19.4)                  21.9 (16.7)
Females              10.6 (7.1)          10.6 (7.9)                   8.2 (5.9)
Dining chair         6.7 (3.6)           0 (0)                        0 (0)
Living room chair    6.7 (6.7)           0 (0)                        0 (0)
Coffee table         13.3 (6.2)          8.4 (6.7)                    3.3 (3.6)
Dining table         6.7 (3.6)           4.9 (3.6)                    0 (0)
Classic guitar       4.9 (3.1)           3.3 (0.5)                    6.7 (3.1)
Electric guitar      6.7 (3.6)           3.3 (3.6)                    3.3 (2.6)
Grand piano          10.0 (3.6)          10.0 (3.6)                   6.7 (4.0)
Upright piano        3.3 (3.6)           10.0 (6.7)                   3.3 (0.5)
Individuals          27.5* (24.8)*       17.9* (7.3)*                 19.2* (6.5)*

Table 1: Error rates (in percent) when separating subclass images from non-subclass and background images. The main numbers indicate the equal error rate of the recall-precision curve (RPC); equal error rates of the ROC (EER) are reported in parentheses. The best result in each row is shown in bold. For the Individuals subclasses, the mean over 19 people is reported (marked by *). Overall, the BLP method shows a clear advantage.

Performance as a function of the number of parts: Fig. 4 presents errors as a function of P, the number of class model parts. The graph on the left plots RPC errors of the two-stage BLP method on 6 representative data sets. The graph on the right describes the errors of the first-stage class models in the task of discriminating the basic level classes from background images. While the performance of inclusive class recognition stabilizes after roughly 30 parts, the error rates in subclass recognition continue to drop significantly for most subclasses well beyond 30 parts. It seems that while later boosting rounds have a minor contribution to class recognition in the first stage of the algorithm, the added parts enrich the class representation and allow better subclass recognition in the second stage.

4 Summary and Discussion

We have addressed in this paper the challenging problem of distinguishing between subordinate classes of the same basic level category. We showed that two augmentations contribute to performance when solving such problems: first, using a two-stage method where representation and classification are solved separately; second, using a larger sample from the more general basic level category to build a richer representation. We described a specific two-stage method and experimentally showed its advantage over two alternative variants.

The idea of separating representation from classification in such a way was already discussed in [7]. However, our method is different both in motivation and in some important technical details. Technically speaking, we use an efficient algorithm to learn the generative model, and are therefore able to use a rich representation with dozens of parts (in [7] the representation typically includes 3 parts). Our experiments show that the large number of model parts is critical for the success of the two-stage method.
The more important difference is that we use the hierarchy of natural objects, and learn the representation model for a more general class of objects: the basic level class (BLP). We show experimentally that this is preferable to using a model of the target subordinate (SLP). This distinction and its experimental support is our main contribution.

Compared with the more traditional SLP method, the BLP method suggested here enjoys two significant advantages. First and most importantly, its accuracy is usually superior, as demonstrated by our experiments. Second, the computational cost of learning is much lower, as multiple SVM training sessions are typically much shorter than multiple applications of relational model learning. In our experiments, learning a generative relational model per class (or subclass) required 12-24 hours, while SVM training was typically done in a few seconds. This advantage is more pronounced as the number of subclasses of the same class increases. As scalability becomes an issue, this advantage becomes more important.

References
[1] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In ECCV, 2002.
[2] A. Bar-Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. In ICCV, 2005.
[3] G.C. Cawley. MATLAB SVM Toolbox. http://theoval.sys.uea.ac.uk/~gcc/svm/toolbox
[4] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61:55-79, 2005.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale invariant learning. In CVPR, 2003.
[6] R. Fergus, P. Perona, and A. Zisserman. A sparse object category model for efficient learning and exhaustive recognition. In CVPR, 2005.
[7] A.D. Holub, M. Welling, and P. Perona. Combining generative models and Fisher kernels for object class recognition. In International Conference on Computer Vision (ICCV), 2005.
[8] T. Kadir and M. Brady. Scale, saliency and image description. IJCV, 45(2):83-105, November 2001.
[9] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model. In ECCV Workshop on Statistical Learning in Computer Vision, 2004.
[10] M. Fritz, B. Leibe, B. Caputo, and B. Schiele. Integrating representative and discriminative models for object category detection. In ICCV, pages 1363-1370, 2005.
[11] E. Rosch, C.B. Mervis, W.D. Gray, D.M. Johnson, and P. Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8:382-439, 1976.
[12] R.E. Schapire and Y. Singer. Improved boosting using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
[13] B. Tversky and K. Hemenway. Objects, parts, and categories. Journal of Experimental Psychology: General, 113(2):169-197, 1984.
MLLE: Modified Locally Linear Embedding Using Multiple Weights

Zhenyue Zhang
Department of Mathematics, Zhejiang University, Yuquan Campus, Hangzhou, 310027, P. R. China
zyzhang@zju.edu.cn

Jing Wang
College of Information Science and Engineering, Huaqiao University, Quanzhou, 362021, P. R. China
Dep. of Mathematics, Zhejiang University
wroaring@yahoo.com.cn

Abstract

The locally linear embedding (LLE) is improved by introducing multiple linearly independent local weight vectors for each neighborhood. We characterize the reconstruction weights and show the existence of the linearly independent weight vectors at each neighborhood. The modified locally linear embedding (MLLE) proposed in this paper is much more stable. It can retrieve the ideal embedding if MLLE is applied to data points sampled from an isometric manifold. MLLE is also compared with local tangent space alignment (LTSA). Numerical examples are given that show the improvement and efficiency of MLLE.

1 Introduction

The problem of nonlinear dimensionality reduction is to find the meaningful low-dimensional structure hidden in high-dimensional data. Recently, there have been advances in developing effective and efficient algorithms to perform nonlinear dimension reduction, including isometric mapping (Isomap) [7], locally linear embedding (LLE) [5] and its variations, manifold charting [2], Hessian LLE [1] and local tangent space alignment (LTSA) [9]. All these algorithms share two common steps: learn the local geometry around each data point, and nonlinearly map the high-dimensional data points into a lower dimensional space using the learned local information [3]. The performances of these algorithms, however, differ both in learning local information and in constructing the global embedding, though each of them eventually solves an eigenvalue problem. The effectiveness of the local geometry retrieved determines the efficiency of the methods. This paper focuses on the reconstruction weights that characterize the intrinsic geometric properties of each neighborhood in LLE [5].

LLE has many applications such as image classification, image recognition, spectra reconstruction and data visualization, because of its simple geometric intuitions, straightforward implementation, and global optimization [6, 11]. It is, however, also reported that LLE may be unstable and may produce distorted embeddings if the manifold dimension is larger than one. One of the curses that make LLE fail is that the local geometry exploited by the reconstruction weights is not well determined, since the constrained least squares (LS) problem involved in determining the local weights may be ill-conditioned. A Tikhonov regularization is generally used for the ill-conditioned LS problem. However, a regularized solution may not be a good approximation to the exact solution if the regularization parameter is not suitably selected.

The purpose of this paper is to improve LLE by making use of multiple local weight vectors. We will show the existence of linearly independent weight vectors that are approximately optimal. The local geometric structure determined by multiple weight vectors is much more stable and hence can be used to improve the standard LLE. The modified LLE, named MLLE, uses multiple weight vectors for each point in the reconstruction of the lower dimensional embedding. It can stably retrieve the ideal isometric embedding approximately for an isometric manifold. MLLE has properties similar to LTSA both in measuring the linear dependence of a neighborhood and in constructing the (sparse) matrix whose smallest eigenvectors form the wanted lower dimensional embedding. It exploits the tight relations between LLE/MLLE and LTSA. Numerical examples given in this paper show the improvement and efficiency of MLLE.

Figure 1: Examples of ||w(γ) − w*|| (solid line) and ||w(γ) − u|| (dotted line) for swiss-roll data; the three panels correspond to ||y0|| = 2.6706e−5, 8.5272e−4, and 1.6107.

2 The Local Combination Weights

Let {x1, . . . , xN} be a given data set of N points in R^m. LLE constructs locally linear structures at each point x_i by representing x_i using its selected neighbor set N_i = {x_j, j ∈ J_i}. The optimal combination weights are determined by solving the constrained least squares problem

    min || x_i − Σ_{j∈J_i} w_{ji} x_j ||,    s.t.  Σ_{j∈J_i} w_{ji} = 1.    (2.1)

Once all the reconstruction weights {w_{ji}, j ∈ J_i}, i = 1, · · · , N, are computed, LLE maps the set {x1, . . . , xN} to {t1, . . . , tN} in a lower dimensional space R^d (d < m) that preserves the local combination properties totally,

    min_{T=[t1,...,tN]} Σ_i || t_i − Σ_{j∈J_i} w_{ji} t_j ||²,    s.t.  T Tᵀ = I.

The low-dimensional embedding T constructed by LLE depends tightly on the local weights. To formulate the weight vector w_i consisting of the local weights w_{ji}, j ∈ J_i, denote the matrix G_i = [. . . , x_j − x_i, . . .]_{j∈J_i}. Using the constraint Σ_{j∈J_i} w_{ji} = 1, we can write the combination error as || x_i − Σ_{j∈J_i} w_{ji} x_j || = || G_i w_i ||, and hence (2.1) reads

    min || G w ||,    s.t.  wᵀ 1_{k_i} = 1,

where 1_{k_i} denotes the k_i-dimensional vector of all ones. Theoretically, a null vector of G_i that is not orthogonal to 1_{k_i} can be normalized to be a weight vector as required. Otherwise, a weight vector is given by w_i = y_i / (1ᵀ_{k_i} y_i) with y_i a solution to the linear system G_iᵀ G_i y = 1_{k_i} [6]. Indeed, one can formulate the solution using the singular value decomposition (SVD) of G_i.

Theorem 2.1. Let G be a given matrix of k column vectors. Denote by y0 the orthogonal projection of 1_k onto the null space of G and y1 = (Gᵀ G)⁺ 1_k, where (·)⁺ denotes the Moore-Penrose generalized inverse of a matrix. Then the vector

    w* = y / (1ᵀ_k y),    y = y0 if y0 ≠ 0, and y = y1 if y0 = 0,    (2.2)

is an optimal solution to min_{1ᵀ_k w = 1} || G w ||.

The problem of solving min_{1ᵀ_k w = 1} || G w || is not stable if Gᵀ G is singular (has zero eigenvalues) or nearly singular (has relatively small eigenvalues). To regularize the problem, it is suggested in [5] to solve the regularized linear system

    (Gᵀ G + γ ||G||²_F I) y = 1_k,    w = y / (1ᵀ_k y),    (2.3)

with a small positive γ. Let y(γ) be the unique solution to the regularized linear system. One can prove that w(γ) = y(γ) / (1ᵀ_k y(γ)) converges to w* as γ → 0. However, the convergence behavior of w(γ) is quite uncertain for small γ > 0. In fact, if y0 ≠ 0 is small, then w(γ) tends to u = y1 / (1ᵀ_k y1) at first and only eventually turns to the limit value w* = y0 / (1ᵀ_k y0). Note that u and w* are orthogonal to each other. In Figure 1 we plot three examples of the error curves ||w(γ) − w*|| (solid line) and ||w(γ) − u|| (dotted line) with different values of ||y0|| for the swiss-roll data.

Figure 2: A 2D data set and computed coordinates (dot points) by LLE using different sets of optimal weight vectors (left two panels) or regularization weight vectors (right panel); panel titles: ||X − Y(1)|| = 1.277, ||X − Y(2)|| = 0.24936, ||X − Y(3)|| = 0.39941.
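For concreteness, a minimal sketch of the regularized weight computation (2.3); variable names are ours.

```python
import numpy as np

def lle_weights(X, i, neighbors, gamma=1e-3):
    """Regularized LLE reconstruction weights of Eq. (2.3).

    X         : (N, m) data matrix, one point per row
    i         : index of the point to reconstruct
    neighbors : indices J_i of the k_i selected neighbors
    gamma     : small positive regularization parameter
    """
    G = (X[neighbors] - X[i]).T            # G_i = [..., x_j - x_i, ...], shape (m, k)
    k = G.shape[1]
    C = G.T @ G + gamma * np.linalg.norm(G, "fro") ** 2 * np.eye(k)
    y = np.linalg.solve(C, np.ones(k))     # (G^T G + gamma ||G||_F^2 I) y = 1_k
    return y / y.sum()                     # w = y / (1_k^T y)
```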
The left two panels show the metaphase phenomenon clearly, where y0 ≈ 0. Therefore, w* cannot be well approximated by w(γ) if γ is not small enough. This partially explains the instability of LLE. Another factor contributing to the instability of LLE is that the local linear structure learned from a single weight vector at each point is brittle. LLE may give a wrong embedding even if all weight vectors are approximated to high accuracy. This is plausible whenever G_i is rank deficient, since multiple optimal weight vectors exist in that case. Figure 2 shows a small example of N = 20 two-dimensional points for which LLE fails even if exact optimal weight vectors are used. We plot three sets of computed 2D embeddings T^(j) (within an optimal affine transformation to the ideal X) by LLE with k = 4, using two sets of exact optimal weight vectors and one set of weight vectors that solve the regularized equations, respectively. The errors ||X − Y^(j)|| = min_{c,L} ||X − (c 1ᵀ + L T^(j))|| between the ideal set X and the computed sets, within an optimal affine transformation, are large in this example.

The uncertainty of w(γ) for small γ occurs because of the existence of small singular values of G. Fortunately, this also implies the existence of multiple almost-optimal weight vectors simultaneously. Indeed, if G has s (≤ k) small singular values, then there are s approximately optimal weight vectors that are linearly independent of each other. The following theorem characterizes the construction of the approximately optimal weight vectors w^(ℓ) using the matrix V of right singular vectors corresponding to the s smallest singular values, and bounds the combination errors ||G w^(ℓ)|| in terms of the minimum of ||Gw|| and the largest of the s smallest singular values.

Theorem 2.2. Let G ∈ R^{m×k} and σ_1(G) ≥ . . . ≥ σ_k(G) be the singular values of G. Denote

    w^(ℓ) = (1 − α) w* + V H(:, ℓ),    ℓ = 1, · · · , s,

where V is the matrix of right singular vectors of G corresponding to the s smallest singular values, α = ||Vᵀ 1_k|| / √s, and H is a Householder matrix that satisfies H Vᵀ 1_k = α 1_s. Then

    || G w^(ℓ) || ≤ || G w* || + σ_{k−s+1}(G).    (2.4)

The Householder matrix is symmetric and orthogonal. It is given by H = I − 2hhᵀ with the vector h ∈ R^s defined as follows. Let h0 = α 1_s − Vᵀ 1_k. If h0 = 0, then h = 0. Otherwise, h = h0 / ||h0||.

Note that ||w*|| can be very large when G is approximately singular. In that case, (1 − α) w* dominates w^(ℓ), and hence w^(1), . . . , w^(s) are almost the same and numerically linearly dependent on each other. Equivalently, W = [w^(1), . . . , w^(s)] has a large condition number cond(W) = σ_max(W) / σ_min(W). For numerical stability, we replace w* by a regularized weight vector w(γ), as in LLE. This modification is quite practical in applications and, more importantly, it reinforces the numerical linear independence of the {w^(ℓ)}. In our experiments, the construction of the {w^(ℓ)} is stable with respect to the choice of γ. We give an estimate of the condition number cond(W) for the modified W below.

Theorem 2.3. Let W = (1 − α) w(γ) 1ᵀ_s + V H. Then cond(W) ≤ (1 + 3√k (1 − α) ||w(γ)||)².
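As a concrete illustration of the construction in Theorems 2.2 and 2.3, here is a short sketch (assuming s has already been chosen; see Section 3.1 below). Each returned column sums to one because H Vᵀ1_k = α 1_s.

```python
import numpy as np

def multi_weights(G, w_star, s):
    """The s approximately optimal weight vectors of Theorem 2.2.

    G      : (m, k) matrix [ ..., x_j - x_i, ... ]
    w_star : (k,)   a (possibly regularized) optimal weight vector
    s      : number of small singular values of G to exploit
    Returns W of shape (k, s); every column sums to one.
    """
    _, _, Vt = np.linalg.svd(G)            # singular values in descending order
    V = Vt[-s:].T                          # right singular vectors, s smallest sigma
    v = V.T @ np.ones(G.shape[1])
    alpha = np.linalg.norm(v) / np.sqrt(s)
    h = alpha * np.ones(s) - v             # Householder direction h0 = alpha*1_s - V^T 1_k
    nh = np.linalg.norm(h)
    H = np.eye(s) - 2.0 * np.outer(h, h) / nh**2 if nh > 1e-12 else np.eye(s)
    return (1 - alpha) * np.outer(w_star, np.ones(s)) + V @ H
```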
3 MLLE: Modified locally linear embedding

It is justifiable to learn the local structure from multiple optimal weight vectors at each point, rather than from a single one. Though the exact optimal weight vector may be unique, multiple approximately optimal weight vectors exist by Theorem 2.2. We will use these weight vectors to determine an improved and more stable embedding. Below we give the details of the modified locally linear embedding using multiple local weight vectors.

Consider the neighbor set of x_i with k_i neighbors. Assume that the first r_i singular values of G_i are large compared with the remaining s_i = k_i − r_i singular values. (We will discuss how to choose s_i later.) Let w_i^(1), . . . , w_i^(s_i) be the s_i linearly independent weight vectors

    w_i^(ℓ) = (1 − α_i) w_i(γ) + V_i H_i(:, ℓ),    ℓ = 1, · · · , s_i.

Here w_i(γ) is the regularized solution defined in (2.3) with G = G_i, V_i is the matrix of right singular vectors of G_i corresponding to the s_i smallest singular values, α_i = ||v_i|| / √s_i with v_i = V_iᵀ 1_{k_i}, and H_i is a Householder matrix that satisfies H_i V_iᵀ 1_{k_i} = α_i 1_{s_i}. We look for a d-dimensional embedding {t1, . . . , tN} that minimizes the embedding cost function

    E(T) = Σ_{i=1}^{N} Σ_{ℓ=1}^{s_i} || t_i − Σ_{j∈J_i} w_{ji}^(ℓ) t_j ||²    (3.5)

with the constraint T Tᵀ = I. Denote by W_i = (1 − α_i) w_i(γ) 1ᵀ_{s_i} + V_i H_i the local weight matrix, and let W̃_i ∈ R^{N×s_i} be the embedding of W_i into the N-dimensional space such that

    W̃_i(J_i, :) = W_i,    W̃_i(i, :) = −1ᵀ_{s_i},    W̃_i(j, :) = 0 for j ∉ I_i = J_i ∪ {i}.

The cost function (3.5) can then be rewritten as

    E(T) = Σ_i || T W̃_i ||²_F = Σ_i Tr(T W̃_i W̃_iᵀ Tᵀ) = Tr(T Φ Tᵀ),    (3.6)

where Φ = Σ_i W̃_i W̃_iᵀ. The minimizer of E(T) is given by the matrix T = [u_2, . . . , u_{d+1}]ᵀ of the d eigenvectors of Φ corresponding to the 2nd to (d+1)st smallest eigenvalues.

3.1 Determination of the number s_i of approximately optimal weight vectors

Obviously, s_i should be selected such that σ_{k_i−s_i+1}(G_i) is relatively small. In general, if the data points are sampled from a d-dimensional manifold and the neighbor set is well selected, then σ_d(G_i) ≫ σ_{d+1}(G_i). In this case s_i can be any integer satisfying s_i ≤ k_i − d, and s_i = k_i − d is the best choice. However, because of noise, and because the neighborhood may not be well selected, σ_{d+1}(G_i) may not be relatively small. It then makes sense to choose s_i as large as possible while the ratio

    ( λ^(i)_{k_i−s_i+1} + · · · + λ^(i)_{k_i} ) / ( λ^(i)_1 + · · · + λ^(i)_{k_i−s_i} )

remains small, where λ^(i)_j = σ_j²(G_i) are the eigenvalues of G_iᵀ G_i. There is a trade-off between the number of weight vectors and the quality of the approximation to ||G_i w_i*||. We suggest

    s_i = max{ ℓ :  ℓ ≤ k_i − d̄,  Σ_{j=k_i−ℓ+1}^{k_i} λ^(i)_j / Σ_{j=1}^{k_i−ℓ} λ^(i)_j < η }    (3.7)

for a given threshold η < 1. Here d̄ may be an overestimate of d, d̄ ≥ d. Obviously, s_i depends on the parameter η monotonically: the smaller η is, the smaller s_i is, and of course the smaller the combination errors of the weight vectors used. We use an adaptive strategy to set η as follows. Let ρ_i = Σ_{j=d+1}^{k_i} λ^(i)_j / Σ_{j=1}^{d} λ^(i)_j, i = 1, . . . , N, and reorder the {ρ_i} as ρ̃_1 ≤ . . . ≤ ρ̃_N. We then set η to be the middle term, η = ρ̃_{⌈N/2⌉}, where ⌈N/2⌉ is the nearest integer to N/2 towards infinity. In general, if the manifold near x_i is flat or has small curvature and the neighbors are well selected, ρ_i is smaller than η and s_i = k_i − d. For neighbor sets with large local curvature, ρ_i > η and s_i < k_i − d, so fewer weight vectors are used in constructing the local linear structure and the combination errors decrease.

We summarize the Modified Locally Linear Embedding (MLLE) algorithm as follows.

Algorithm MLLE (Modified Locally Linear Embedding).
1. For each i = 1, · · · , N:
   1.1 Determine a neighbor set N_i = {x_j, j ∈ J_i} of x_i, with i ∉ J_i.
   1.2 Compute the regularized solution w_i(γ) by (2.3) with a small γ > 0.
   1.3 Compute the eigenvalues λ^(i)_1, . . . , λ^(i)_{k_i} and eigenvectors v^(i)_1, . . . , v^(i)_{k_i} of G_iᵀ G_i, and set ρ_i = Σ_{j=d+1}^{k_i} λ^(i)_j / Σ_{j=1}^{d} λ^(i)_j.
2. Sort the {ρ_i} into increasing order {ρ̃_i} and set η = ρ̃_{⌈N/2⌉}.
3. For each i = 1, · · · , N:
   3.1 Set s_i by (3.7), and set V_i = [v^(i)_{k_i−s_i+1}, . . . , v^(i)_{k_i}] and v_i = V_iᵀ 1_{k_i}.
   3.2 Construct Φ using W_i = (1 − α_i) w_i(γ) 1ᵀ_{s_i} + V_i H_i.
4. Compute the d + 1 smallest eigenvectors of Φ, pick the eigenvector matrix [u_2, . . . , u_{d+1}] corresponding to the 2nd to (d+1)st smallest eigenvalues, and set T = [u_2, . . . , u_{d+1}]ᵀ.

The computational cost of MLLE is almost the same as that of LLE. The additional flops of MLLE for computing the eigendecompositions of the G_iᵀ G_i are O(k_i³), and in total O(k³ N) with k = max_i k_i. Note that the most computationally expensive steps in both LLE and MLLE are the neighborhood selection and the computation of the d + 1 eigenvectors of the alignment matrix Φ corresponding to small eigenvalues, which cost O(m N²) and O(d N²), respectively. Because k ≪ N, the additional cost of MLLE is negligible.
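A compact end-to-end sketch of Algorithm MLLE follows, reusing multi_weights from the sketch after Theorem 2.3. It assumes a single neighborhood size k for all points, takes η as the median of the ρ_i as suggested above, and uses brute-force neighbor search; these simplifications and all names are ours. The rows of the returned matrix are the embedded points t_i (the paper's T is its transpose).

```python
import numpy as np
from scipy.linalg import eigh

def mlle(X, k, d, gamma=1e-3):
    """Sketch of Algorithm MLLE, steps 1-4 (see assumptions in the text above)."""
    N = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # O(N^2 m); fine for a sketch
    J = np.argsort(D2, axis=1)[:, 1:k + 1]                # step 1.1: k nearest neighbors
    Gs, lams, ws, rho = [], [], [], np.empty(N)
    for i in range(N):
        G = (X[J[i]] - X[i]).T                            # G_i, shape (m, k)
        C = G.T @ G
        lam = np.linalg.eigvalsh(C)                       # ascending eigenvalues
        y = np.linalg.solve(C + gamma * np.trace(C) * np.eye(k), np.ones(k))
        Gs.append(G); lams.append(lam); ws.append(y / y.sum())  # note trace(C) = ||G||_F^2
        rho[i] = lam[:k - d].sum() / lam[k - d:].sum()    # step 1.3: tail over head
    eta = np.sort(rho)[(N - 1) // 2]                      # step 2: median threshold
    Phi = np.zeros((N, N))
    for i in range(N):
        lam = lams[i]
        s = k - d                                         # step 3.1: rule (3.7), at least 1
        while s > 1 and lam[:s].sum() / lam[s:].sum() >= eta:
            s -= 1
        W = multi_weights(Gs[i], ws[i], s)                # (k, s); columns sum to one
        idx = np.concatenate(([i], J[i]))
        Wt = np.vstack([-np.ones((1, s)), W])             # embedded local weight matrix
        Phi[np.ix_(idx, idx)] += Wt @ Wt.T                # step 3.2: accumulate Phi
    _, U = eigh(Phi)                                      # step 4
    return U[:, 1:d + 1]                                  # 2nd..(d+1)st smallest eigenvectors
```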
4 An analysis of MLLE for isometric manifolds

Consider the application of MLLE to an isometric manifold M = f(Ω), with Ω ⊂ R^d an open set and f a smooth function. Assume that the {x_i} are sampled from M, x_i = f(τ_i), i = 1, . . . , N. By the isometry of f,

    || x_i − Σ_{j∈J_i} w_{ji} x_j || = || τ_i − Σ_{j∈J_i} w_{ji} τ_j || + O(ε_i²),    (4.8)

where ε_i denotes the radius of the neighborhood of x_i. If k_i > d, the optimal reconstruction error of τ_i should be zero, so || x_i − Σ_{j∈J_i} w_{ji} x_j || = O(ε_i²). For the approximately optimal weight vectors w_i^(ℓ), we have || x_i − Σ_{j∈J_i} w_{ji}^(ℓ) x_j || ≤ σ_{k_i−s_i+1}(G_i) + O(ε_i²). Conversely, it follows from (4.8) that || τ_i − Σ_{j∈J_i} w_{ji}^(ℓ) τ_j || ≤ σ_{k_i−s_i+1}(G_i) + O(ε_i²). Therefore, denoting T* = [τ_1, . . . , τ_N], we have

    E(T*) = Σ_{i=1}^{N} Σ_{ℓ=1}^{s_i} || Σ_{j∈J_i} w_{ji}^(ℓ) τ_j − τ_i ||² ≤ Σ_{i=1}^{N} s_i σ²_{k_i−s_i+1}(G_i) + O(max_i ε_i²).

For the orthogonalization U of T*, i.e., T* = LU with U Uᵀ = I, since L = T* Uᵀ ∈ R^{d×d} we have σ_d(L) = σ_d(T*) and E(U) ≤ E(T*) / σ_d²(T*). Note that σ²_{k_i−s_i+1}(G_i) is generally very small, so E(U) is always small and approximately achieves the minimum. Roughly speaking, MLLE can retrieve the isometric embedding.

5 Comparison to LTSA

MLLE has properties similar to those of LTSA. In this section we compare MLLE and LTSA with respect to the linear dependence of neighbors and the alignment matrices. For simplicity, we assume that r_i = d, i.e., k_i − d weight vectors are used in MLLE for each neighbor set.

5.1 Linear dependence of neighbors. The total combination error

    ε^{MLLE}(N_i) = Σ_{ℓ=1}^{k_i−d} || Σ_{j∈J_i} w_{ji}^(ℓ) x_j − x_i ||² = || G_i W_i ||²_F

of x_i can serve as a measure of the linear dependence of the neighborhood N_i. To compare it with the measure of linear dependence defined by LTSA, denote by x̄_i = (1/|I_i|) Σ_{j∈I_i} x_j the mean of the members of the whole neighborhood of x_i, including x_i itself, and X̃_i = [. . . , x_j − x̄_i, . . .]_{j∈I_i}. It can be verified that G_i W_i = X̃_i W̄_i with W̄_i = W̃_i(I_i, :), so ε^{MLLE}(N_i) = || X̃_i W̄_i ||²_F. In LTSA, the linear dependence of N_i is measured by the total error

    ε^{LTSA}(N_i) = Σ_{j∈I_i} || x_j − x̄_i − Q_i θ_j^(i) ||² = || X̃_i V̄_i ||²_F,

where V̄_i consists of the right singular vectors of X̃_i corresponding to the k_i − d smallest singular values. The MLLE measure and the LTSA measure of neighborhood linear dependence are thus similar:

    ε^{MLLE}(N_i) = || X̃_i W̄_i ||²_F, with each || X̃_i w̄_i^(ℓ) || approximately minimal for ℓ ≤ k_i − d,
    ε^{LTSA}(N_i) = || X̃_i V̄_i ||²_F = min_{Zᵀ Z = I} || X̃_i Z ||²_F.

5.2 Alignment matrices. Both MLLE and LTSA minimize a trace function of an alignment matrix Φ to obtain an embedding, min_{T Tᵀ = I} trace(T Φ Tᵀ). The alignment matrix can be written in the same form

    Φ = Σ_{i=1}^{N} S_i Φ_i S_iᵀ,

where S_i is a selection matrix consisting of the columns j ∈ I_i of the identity matrix of order N. In LTSA, the local matrix Φ_i is given by the orthogonal projection, Φ_i^{LTSA} = V̄_i V̄_iᵀ; see [10]. For MLLE, Φ_i^{MLLE} = W̄_i W̄_iᵀ. It is interesting that the range space span(W̄_i) of W̄_i and the range space span(V̄_i) of V̄_i are tightly close to each other whenever the reconstruction error of x_i is small. The following theorem gives an upper bound on this closeness in terms of the distance dist(W̄_i, V̄_i) between span(W̄_i) and span(V̄_i), i.e., the largest angle between the two subspaces (see [4] for a discussion of distances between subspaces).

Theorem 5.1. Let G_i = [· · · , x_j − x_i, · · ·]_{j∈J_i}. Then

    dist(W̄_i, V̄_i) ≤ || G_i W_i || / ( σ_d(W̄_i) σ_d(X̃_i) ).

6 Experimental Results

In this section we present several numerical examples illustrating the performance of the MLLE algorithm. The test data include simulated data sets and real-world examples. First, we compare Isomap, LLE, LTSA, and MLLE on the Swiss roll with a hole. The data points are generated from a rectangle with a rectangular strip punched out of the center, so the resulting Swiss roll is not convex. We run the four algorithms with k = 10. In the top middle of Figure 3 we plot the coordinates computed by Isomap; there is a dilation of the missing region and a warp on the rest of the embedding. As seen in the top right of Figure 3, there is a strong distortion in the coordinates computed by LLE. As shown in the bottom of Figure 3, LTSA and MLLE perform well.

We now compare MLLE and LTSA on a 2D manifold with 3 peaks embedded in 3D space. We generate N = 1225 3D points x_i = [t_i, s_i, h(t_i, s_i)]ᵀ, where t_i and s_i are uniformly distributed in the interval [−1.5, 1.5] and h(t, s) is defined by

    h(t, s) = e^{−10((t−0.5)² + (s−0.5)²)} − e^{−10(t² + (s+1)²)} − e^{−10((1+t)² + s²)}.

Figure 3: Left column: Swiss-roll data and generating coordinates with a missing rectangle. Middle column: computed results of Isomap and LTSA. Right column: results of LLE and MLLE. (Panel titles: Original, ISOMAP, LLE, Generating Coordinates, LTSA, MLLE.)

Figure 4: Left column: plots of the 3-peak data and the generating coordinates. Right column: results of LTSA and MLLE.

See the left of Figure 4 for the data points and the generating parameters. It is easy to show that the manifold parameterized by f(t, s) = [t, s, h(t, s)]ᵀ is approximately isometric, since the Jacobian J_f(t, s) is approximately orthonormal. In the right of Figure 4 we plot the coordinates computed by LTSA and MLLE with k = 12. The deformations of the coordinates computed by LTSA near the peaks are prominent, because the curvature of the 3-peak manifold varies greatly. This bias can be reduced by the modified curvature model of LTSA proposed in [8]. MLLE recovers the generating parameters perfectly, up to an affine transformation. Next, we consider a data set containing N = 4400 handwritten digits ("2"-"5") with 1100 examples of each class. The grayscale images of handwritten numerals are at 16×16 resolution and converted to m = 256 dimensional vectors.2
The data points are mapped into a 2-dimensional space using LLE and MLLE, respectively. These experiments are shown in Figure 5. It is clear that MLLE performs much better than LLE: most of the digit classes (digits "2"-"5", marked by four different symbols) are well clustered in the resulting embedding of MLLE. Finally, we consider the application of MLLE and LLE on the real data set of 698 face images with variations of two pose parameters (left-right and up-down) and one lighting parameter. The image size is 64-by-64 pixels, and each image is converted to an m = 4096 dimensional vector. We apply MLLE with k = 14 and d = 3 on the data set. The first two coordinates of MLLE are plotted in the middle of Figure 6. We also extract four paths along the boundaries of the set of the first two coordinates and display the corresponding images along each path. These components appear to capture well the pose and lighting variations in a continuous way.

2 The data set can be downloaded at http://www.cs.toronto.edu/~roweis/data.html.

Figure 5: Embedding results of N = 4400 handwritten digits by LLE (left) and MLLE (right).

Figure 6: Images of faces mapped into the embedding described by the first two coordinates of MLLE, using the parameters k = 14 and d = 3.

References
[1] D. Donoho and C. Grimes. Hessian eigenmaps: new tools for nonlinear dimensionality reduction. Proceedings of the National Academy of Sciences, 100:5591-5596, 2003.
[2] M. Brand. Charting a manifold. Advances in Neural Information Processing Systems 15, MIT Press, 2003.
[3] J. Ham, D. D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds. International Conference on Machine Learning 21, 2004.
[4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, Maryland, 3rd edition, 1996.
[5] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[6] L. Saul and S. Roweis. Think globally, fit locally: unsupervised learning of nonlinear manifolds. Journal of Machine Learning Research, 4:119-155, 2003.
[7] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimension reduction. Science, 290:2319-2323, 2000.
[8] J. Wang, Z. Zhang, and H. Zha. Adaptive manifold learning. Advances in Neural Information Processing Systems 17, edited by L. K. Saul, Y. Weiss, and L. Bottou, MIT Press, Cambridge, MA, pp. 1473-1480, 2005.
[9] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Scientific Computing, 26(1):313-338, 2004.
[10] H. Zha and Z. Zhang. Spectral analysis of alignment in manifold learning. Submitted, 2006.
[11] M. Vlachos, C. Domeniconi, D. Gunopulos, G. Kollios, and N. Koudas. Non-linear dimensionality reduction techniques for classification and visualization. Proc. Eighth ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, July 2002.
Adaptive Spatial Filters with Predefined Region of Interest for EEG-based Brain-Computer Interfaces

Moritz Grosse-Wentrup
Institute of Automatic Control Engineering, Technische Universität München, 80333 München, Germany
moritz@tum.de

Klaus Gramann
Department Psychology, Ludwig-Maximilians-Universität München, 80802 München, Germany
gramann@psy.uni-muenchen.de

Martin Buss
Institute of Automatic Control Engineering, Technische Universität München, 80333 München, Germany
mb@tum.de

Abstract

The performance of EEG-based Brain-Computer Interfaces (BCIs) critically depends on the extraction of features from the EEG carrying information relevant for the classification of different mental states. For BCIs employing imaginary movements of different limbs, the method of Common Spatial Patterns (CSP) has been shown to achieve excellent classification results. The CSP algorithm, however, suffers from a lack of robustness, requiring training data without artifacts for good performance. To overcome this lack of robustness, we propose an adaptive spatial filter that replaces the training data in the CSP approach by a-priori information. More specifically, we design an adaptive spatial filter that maximizes the ratio of the variance of the electric field originating in a predefined region of interest (ROI) and the overall variance of the measured EEG. Since it is known that the component of the EEG used for discriminating imaginary movements originates in the motor cortex, we design two adaptive spatial filters with the ROIs centered in the hand areas of the left and right motor cortex. We then use these to classify EEG data recorded during imaginary movements of the right and left hand of three subjects, and show that the adaptive spatial filters outperform the CSP algorithm, enabling classification rates of up to 94.7 % without artifact rejection.

1 Introduction

Brain-Computer Interfaces (BCIs) allow communication without use of the peripheral nervous system by detecting intentional changes in the mental state of a user (see [1] for a review). For BCIs based on electroencephalography (EEG), different mental states are correlated with spatio-temporal pattern changes in the EEG. These can be detected and used for transmitting information by a suitable classification algorithm. While a variety of mental states can be used to induce pattern changes in the EEG, most BCIs utilize motor imagery of different limbs for this purpose. This is based on the observation that movement preparation of a certain limb leads to a power decrease (event related desynchronization, ERD) in the μ- (≈ 8-12 Hz) and β-spectrum (≈ 18-26 Hz) over the area of the contralateral motor cortex representing the specific limb [2]. This ERD can also be observed in motor imagery, which was first used in [3] to discriminate imaginary movements of the left vs. imaginary movements of the right hand. While the methods presented in this paper are also applicable to BCIs that are not based on motor imagery, we restrict the discussion to this class of BCIs for the sake of simplicity.

In general, a BCI has to accomplish two tasks. The first task is feature extraction, i.e., the extraction of information from the EEG relevant for discriminating different mental states. The second task is the actual classification of these feature vectors. For BCIs based on EEG, feature extraction is aggravated by the fact that, due to volume conduction, only the superimposed electric activity of a large area of cortex can be measured at every electrode.
While it is known that the ERD caused by motor imagery originates in the motor cortex [4], the EEG measured on the scalp over the motor cortex includes electrical activity of multiple neural sources that are not related to the imaginary movements. This in turn leads to a lower signal-to-noise ratio (SNR) and subsequently to a lower classification accuracy. For this reason, algorithms have been developed that use multiple recording sites (electrodes) to improve feature extraction. One of the most successful algorithms in this context, as evidenced by the 2003 BCI Competition [5], is the method of Common Spatial Patterns (CSP). It was introduced to EEG analysis in [6] and first utilized for BCIs in [7]. Given two EEG data sets recorded during motor imagery of the left and right hand, the CSP-algorithm finds two linear transformations that maximize the variance of the one while minimizing the variance of the other data set. With the CSP-algorithm used for feature extraction, it has been shown that a simple linear classification algorithm suffices to obtain classification rates above 90 % for trained subjects [7].

While these are impressive results, the CSP-algorithm suffers from a lack of robustness. The algorithm is trained to maximize the differences between two datasets, regardless of the cause of these differences. In the ideal case the spatio-temporal differences are caused only by the motor imagery, in which case the algorithm can be claimed to be optimal. In practice, however, the differences between two datasets will be due to multiple causes such as spontaneous EEG activity, other mental states, or any kind of artifacts. For example, if a strong artifact is present in only one of the data sets, the CSP is trained on the artifact and not on the differences caused by motor imagery [7]. Consequently, the CSP-algorithm requires artifact-free data, which is a serious impairment for its practical use.

In this paper, we propose to replace the information used for training in the CSP-algorithm by a-priori information. This is possible since it is known that the signal of interest for the classification of imaginary movements originates in the motor cortex [4]. More specifically, we design an adaptive spatial filter (ASF) that maximizes the ratio of the variance of the electric field originating in a predefined region of interest (ROI) in the cortex and the overall variance of the EEG measurements. In this way, we can design spatial filters that optimally suppress electric activity originating from areas other than the chosen ROI. By designing two spatial filters with the respective ROIs centered in the hand areas of the motor cortex in the left and right hemisphere, we achieve a robust feature extraction that enables better classification results than obtained with the CSP-algorithm.

The rest of this paper is organized as follows. In the methods section, we briefly review the CSP-algorithm, derive the ASF, and discuss its properties. In the results section, the ASF is applied to EEG data of three subjects, recorded during imaginary movements of the right and left hand. The results are compared with the CSP-algorithm, and it is shown that the ASF is superior to the CSP approach. This is evidenced by a significant increase in measured ERD and higher classification accuracy. We conclude the paper with a discussion of the results and future lines of research.
2 Methods

In this section we will first briefly review the CSP-algorithm, and then show how the information used for training the CSP-algorithm can be replaced by a-priori information. We then derive the ASF, and conclude the section with some remarks on the theoretical properties of the ASF.

2.1 The Common Spatial Patterns Algorithm

Given two EEG data sets x_1 ∈ R^{N×T} and x_2 ∈ R^{N×T}, with N the number of electrodes and T the number of samples, the CSP-algorithm finds a linear transformation w that maximizes the variance of the one while minimizing the variance of the other data set. This can be formulated as the following optimization problem [8]:

    max_w { (w^T R_1 w) / (w^T R_2 w) }    (1)

with R_1 and R_2 the spatial covariance matrices of x_1 and x_2. This optimization problem is in the form of the well-known Rayleigh quotient, which means that the solution to (1) is given by the eigenvector with the largest eigenvalue of the generalized eigenvalue problem

    R_1 w = λ R_2 w.    (2)

The two eigenvectors w_1 and w_2 with the largest and smallest respective eigenvalue represent two spatial filters that maximize the variance of the one while minimizing the variance of the other data set. If data set x_1 was recorded during imaginary movements of the left hand while data set x_2 was recorded during imaginary movements of the right hand, and the differences between the two data sets are only caused by motor imagery, the two spatial filters w_1 and w_2 optimally (in terms of the second moments) extract the component of a data set caused by the respective motor imagery. In practice, however, the differences between two datasets will have multiple causes such as spontaneous EEG activity, mental states unrelated to the motor imagery, or muscular artifacts. If the CSP-algorithm is applied to such data, the linear transformations w_{1/2} will be trained to extract the artifactual components of the EEG and not the spatio-temporal pattern changes caused by the motor imagery.

2.2 Derivation of the Adaptive Spatial Filter

To overcome the sensitivity of the CSP-algorithm to artifacts in the EEG, and to achieve a robust feature extraction, we replace the information used for training in the CSP-algorithm by a-priori information. In the case of motor imagery, the specific a-priori information is that the signal of interest for classification originates in the motor cortex. We will now show how the component of the EEG originating in a certain ROI, chosen to correspond to the motor cortex for our purpose, can be estimated in an optimal manner.

In general, it would be desirable to derive a spatial filter that eliminates all electric activity that does not originate in a chosen ROI. This however is not possible due to the ill-posed nature of the inverse problem of EEG (c.f. [9]). In EEG recordings, electric activity originating from an infinite dimensional space (the continuous current distribution within the brain) is mapped onto a finite number of measurement electrodes. Hence, the best one can do is to find a spatial filter that in some sense optimally suppresses all activity not originating in the chosen ROI. Towards this goal, note that the electric field generated by the brain at a position r outside the head is given by (c.f. [10])

    φ(r, t) = ∫_V L(r, r')^T P(r', t) dV(r'),    (3)

with V the volume of the brain, P : R^3 × R → R^3 the tissue dipole moment (source strength) at position r' and time t in x-, y-, and z-direction, and L : R^3 × R^3 → R^3
the so-called leadfield equation, describing the projection strength of a source with dipole moment in x-, y-, and z-direction at position r' to a measured electric field at position r. Note that the leadfield equation incorporates all geometric and conductive properties of the brain. In EEG recordings, the electric field of the brain is spatially sampled at i = 1, …, N electrodes on the scalp with positions r_i, resulting in a measurement vector x(t) with the elements

    x_i(t) = ∫_V L(r_i, r')^T P(r', t) dV(r'),    i = 1, …, N.    (4)

We now wish to find a linear transformation of the measured EEG

    y(t) = w^T x(t)    (5)

that maximizes the ratio of the variance of the electric field originating in a certain area of the cortex and the overall variance. For this we define the component of the EEG originating in a certain ROI as x̃(t), with the elements

    x̃_i(t) = ∫_ROI L(r_i, r')^T P(r', t) dV(r'),    i = 1, …, N.    (6)

The spatial filter w is then found by max_w {f(w)} with

    f(w) = (w^T x̃(t) x̃(t)^T w) / (w^T x(t) x(t)^T w) = (w^T R_x̃(t) w) / (w^T R_x(t) w),    (7)

with R_x̃(t) and R_x(t) the (spatial) covariance matrices of x̃(t) and x(t). Note that this optimization problem is of the same form as that of the CSP-algorithm in (1). As for (1), the solution to (7) is given by the eigenvector with the largest eigenvalue of the generalized eigenvalue problem

    R_x̃(t) w = λ R_x(t) w.    (8)

The crucial difference between the CSP- and the ASF-algorithm is that for the CSP-algorithm the covariance matrix R_1 in the numerator of (1), describing the signal subspace of the data, is given by the measured EEG of one condition. For the ASF-algorithm, the corresponding covariance matrix R_x̃(t) is replaced by a-priori knowledge independent of the measured EEG. We will now show how estimates of the two covariance matrices R_x̃(t) and R_x(t), necessary for solving (8), can be obtained.

Assuming stationarity of the EEG, i.e., a constant covariance matrix, R_x(t) can be replaced by the estimated sample covariance matrix

    R̂_x = (1/T) Σ_{t=1}^{T} x(t) x(t)^T,    (9)

with T the number of samples. The covariance matrix of the EEG originating in the ROI, however, is substantially harder to estimate. To obtain an estimate of R_x̃(t), we first derive an estimate of (6). This is done by placing an equally spaced grid with nodes at locations r'_i, i = 1, …, M in the ROI, and replacing the integral over the ROI by a sum over the M grid points,

    x̃_i(t) = Σ_{j=1}^{M} L(r_i, r'_j)^T P(r'_j, t).    (10)

The estimated component of the EEG originating in the ROI can then be written in matrix notation as

    x̃(t) = L p(t),    (11)

with L ∈ R^{N×3M} describing the projection strength of the M sources in x-, y-, and z-direction to each of the N electrodes, and p(t) ∈ R^{3M} representing the dipole moments of the M sources. The estimate of the covariance matrix is then given by

    R̂_x̃(t) = L p(t) p(t)^T L^T = L R_p(t) L^T.    (12)

In the absence of any prior knowledge, the covariance matrix of the sources in the ROI is assumed to be the identity matrix, i.e., R_p(t) = I_{3M}. The leadfield matrix L on the other hand can be estimated by a suitable model of EEG volume conduction. For the sake of simplicity, we only consider a four-shell spherical head model, i.e., each column l_i of the leadfield matrix L is found by placing a single current dipole with unit dipole moment at position r'_i in a four-shell spherical head model, and calculating its projection to each of the N electrodes [11].
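For concreteness, the two generalized eigenproblems can be solved with a few lines of standard numerical code. The sketch below is our illustration, not the authors' implementation; it assumes NumPy/SciPy, takes the data sets as (N, T) arrays, and presumes that a leadfield matrix L from some forward model is already available:

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(x1, x2):
        # CSP, Eq. (2): solve R1 w = lambda R2 w for two (N, T) data sets.
        R1 = x1 @ x1.T / x1.shape[1]
        R2 = x2 @ x2.T / x2.shape[1]
        evals, evecs = eigh(R1, R2)      # generalized symmetric problem, ascending order
        return evals, evecs              # first/last columns: the two CSP filters

    def asf_filter(x, L):
        # ASF, Eqs. (8), (9), (12): solve (L L^T) w = lambda R_x w.
        Rx = x @ x.T / x.shape[1]        # sample covariance, Eq. (9)
        R_roi = L @ L.T                  # a-priori ROI covariance, R_p = I in Eq. (12)
        evals, evecs = eigh(R_roi, Rx)
        return evecs[:, -1], evals[-1]   # filter w and achieved variance ratio f(w)

Note that both routines reduce to the same solver call; only the numerator matrix differs, which is exactly the point made above.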
In summary, the adaptive linear spatial filter w is given by the eigenvector with the largest eigenvalue of the generalized eigenvalue problem

    L L^T w = λ R̂_x w.    (13)

Note that the largest eigenvalue corresponds to the achieved ratio of the ASF, i.e., f(w) = λ. The largest eigenvalue of (13) is thus a measure for the quality of the obtained ASF. It is also important to point out that the covariance matrix of the component of the EEG originating in the ROI is assumed to be the identity matrix, implying that sources in the ROI are not correlated. This surely is an assumption that is not physiologically justified. We will address this issue in the discussion.

Finally, note that the quality of the obtained filter also depends on the rank of the covariance matrix of the electric activity originating in the ROI. The higher the rank of R_x̃, the more degrees of freedom of the spatial filter are required to pass activity from the signal subspace, i.e., activity originating in the ROI, and consequently fewer degrees of freedom are available for suppressing electric activity originating outside the ROI. The quality of the spatial filter thus decreases with the rank of R_x̃. For this reason, it is beneficial to only consider radially oriented dipole sources in the ROI, which leads to a covariance matrix with a lower rank than if dipole moments in x-, y-, and z-direction are considered. Furthermore, this is a physiologically justified assumption, since neurons in a cortical column are oriented radially to the surface of the cortex (c.f. [9]).

3 Results

In this section, we evaluate the effectiveness of the ASF by applying it to EEG data gathered from three subjects during motor imagery of the right and left hand, and compare its performance with the CSP-algorithm. In the following, motor imagery of the left/right hand will be termed condition IL/IR.

3.1 Experimental Setup

Three subjects (S1, S2, S3) participated in the experiment, all of whom were male, aged 26, 30, and 27 years, and had no known neurological disorders. Subjects S1 and S2 had no experience with motor imagery or BCIs, while subject S3 participated in a motor imagery experiment for the second time. The subjects were placed in a shielded room approximately two meters in front of a screen, and were asked to continually imagine opening and closing their right/left hand as long as an arrow pointing in the respective direction was displayed on the screen. The subjects were explicitly instructed to perform haptic motor imagery, i.e., to feel how they were opening and closing their hands, to ensure that actual motor imagery and not visual imagery was used. Each trial started with a fixation cross, which was superimposed by an arrow pointing either to the right or to the left after three seconds. The center of the arrow was placed in the middle of the screen to avoid lateralized visual evoked potentials. The arrow was removed again after a further seven seconds, indicating the end of one trial. A total of 300 trials were recorded for each subject, consisting of 150 trials for each condition in randomized order. During the experiment, EEG was recorded at 128 channels with a sampling rate of 500 Hz. Electrode Cz was used as a reference, and the data was re-referenced offline to common average. The spatial position of each electrode was measured with a tracking system. No trials were rejected and no artifact correction was employed.
3.2 Design of the Common Spatial Patterns

For each subject, the CSPs were found by first bandpass-filtering the data between 10-30 Hz (as suggested in [7]) using a sixth-order Butterworth filter, and then calculating the sample covariance matrices R_1 and R_2 for both conditions (IL and IR) over all trials in a time window ranging from 3.5 to 10 s (i.e., starting 500 ms after the instruction indicating which motor imagery to perform). Equation (2) was then solved, and only the two most discriminative eigenvectors w_{1,CSP} and w_{2,CSP}, i.e., those with the largest and smallest respective eigenvalue, were used to obtain estimates of the two most discriminative components of each data set.

3.3 Design of the Adaptive Spatial Filters

To obtain estimates of the electric field originating in the hand areas of the left and right motor cortex, two ASFs were designed. For the first ASF, the ROI was chosen as a sphere with a radius of 1 cm, located 1 cm inside the cortex radially below electrode C3 (centered above the hand area of the motor cortex of the left hemisphere). Radially oriented dipoles with unit moment were placed on an equally spaced grid 2 mm apart from each other inside the sphere, and their respective projections to each of the electrodes were calculated as in [11] to obtain an estimate of the leadfield matrix L in (11). For this purpose, the measured positions of the electrodes were radially projected onto the outermost sphere of the head model. The second ASF was designed in the same fashion, but with the center of the sphere located at the same depth radially below electrode C4 (centered above the hand area of the motor cortex of the right hemisphere).

For each of the 300 trials of each subject, the data covariance matrix of the recorded EEG was estimated according to (9), using EEG data in the same time window as for the computation of the CSPs (3.5 to 10 s of each trial). The two ASFs with the ROIs centered below electrodes C3 and C4 were then calculated by solving the generalized eigenvalue problem (13), and taking the eigenvector with the largest eigenvalue as the ASF. The estimated activity inside the ROIs was then obtained by multiplying the ASFs with the observed EEG of that trial according to (5). Note that this was done independently for each recorded trial.

[Figure 1 graphic: six time-frequency difference maps (columns: electrodes C3/C4, CSP 1/CSP 2, ASF 1/ASF 2; rows: IR - IL and IL - IR), frequency axis 5-45 Hz, time axis 1-9 s, color scale ±8 dB.]

Figure 1: Difference plots of ERS/ERD relative to pre-stimulus baseline (0-3 s) between conditions IL and IR for subject S3. Time of stimulus onset is marked by the dotted vertical line. See text for explanations.

3.4 Experimental Results

To obtain estimates of the frequency bands suitable for classification, event-related synchronization and desynchronization (ERS/ERD) was calculated for each subject relative to the pre-stimulus baseline (0-3 s) as implemented in [12]. This was done independently for the EEG measured at electrodes C3 and C4, the EEG components obtained by the CSP-algorithm, and the estimated EEG components originating in the motor cortex as obtained by the ASFs.
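The per-trial computation just described can be condensed as follows; this is again an illustrative sketch under the conventions of this section (500 Hz sampling, the 3.5-10 s window, precomputed leadfields for the ROIs below C3 and C4), not the code used for the experiments:

    import numpy as np
    from scipy.linalg import eigh

    def trial_roi_components(x_trial, leadfields, fs=500):
        # x_trial: (N, T) EEG of one trial; leadfields: list of (N, M) ROI
        # leadfield matrices (radial dipole grids below C3 and C4).
        x = x_trial[:, int(3.5 * fs):int(10 * fs)]   # 3.5-10 s window
        Rx = x @ x.T / x.shape[1]                    # per-trial covariance, Eq. (9)
        components = []
        for L in leadfields:
            evals, evecs = eigh(L @ L.T, Rx)         # generalized problem, Eq. (13)
            w = evecs[:, -1]                         # eigenvector, largest eigenvalue
            components.append(w @ x)                 # Eq. (5): y(t) = w^T x(t)
        return components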
Since motor imagery leads to a contralateral ERD, the ERS/ERD of condition IL was subtracted from the ERS/ERD of condition IR for measurements at electrode C3 and spatial filters focusing on the left hemisphere, and vice versa for measurements at electrode C4 and spatial filters focusing on the right hemisphere. The results for subject S3 are shown in Fig. 1. As can be seen in the first column, an ERD-difference of about 3 dB can be measured over the contralateral motor cortex at electrodes C3 and C4 in two frequency bands, starting briefly after the instruction to perform the motor imagery. The two reactive frequency bands are centered roughly around 12 and 25 Hz, agreeing well with the expected ERD in the μ- and β-band [3]. The components extracted by the CSPs, shown in the second column of Fig. 1, show a different picture. While the first CSP (top row) extracts the ERD in the μ- and β-band with roughly the same SNR as in the first row of Fig. 1, high-frequency noise is also mixed in. The second CSP (bottom row) on the other hand does not extract the ERD related to motor imagery, but focuses on a strong artifactual component above 15 Hz. The third column of Fig. 1 shows the results obtained with the ASFs centered radially below electrodes C3 and C4. The observed ERD is similar to the one measured directly at electrodes C3 and C4 (first column of Fig. 1), but shows a much stronger ERD of about 7 dB in the μ- and β-band.

These observations are also reflected in the actual CSPs (calculated for the whole data set) and ASFs (calculated for one representative trial) of subject S3 shown in Fig. 2. While CSP 1 focuses on electrodes in the vicinity of electrode C3, CSP 2 focuses on frontal areas of the recorded EEG that are not related to motor control. The ASFs on the other hand can be seen to focus on motor areas surrounding electrodes C3 and C4, with various minor patches distributed over the scalp that suppress electric activity originating outside the ROI. It is thus evident that the ASFs improve the SNR of the component of the EEG related to motor imagery relative to just measuring the ERD above the motor cortex, while the CSPs fail to extract the ERD related to motor imagery due to artifactual components. Similar ERS/ERD results were obtained for subject S2, while only a very weak ERS/ERD could be observed for subject S1 under all three evaluation schemes.

The ERD plots at electrodes C3 and C4 of each subject were then used to heuristically determine the time window and the two reactive frequency bands used for actual classification. These are summarized in Tab. 1.

[Figure 2 graphic: scalp maps of the spatial filters CSP 1, CSP 2, ASF 1 and ASF 2 for subject S3, with the positions of electrodes C3 and C4 marked.]

Figure 2: Spatial filters obtained by the CSP- and the ASF-algorithm for subject S3.

Table 1: Classification results

    Subject | Time window | Freq. bands       | C3/C4  | CSP    | ASF
    S1      | 3.5-10 s    | 17-18 & 26-28 Hz  | 58.0 % | 48.3 % | 63.0 %
    S2      | 3.5-10 s    | 9-12.5 & 23-26 Hz | 87.0 % | 49.0 % | 90.3 %
    S3      | 3.5-10 s    | 12-14 & 20-30 Hz  | 77.3 % | 60.7 % | 94.7 %

For classification, the reactive frequency bands of the data sets obtained by the CSP-algorithm, the ASFs, and the raw EEG data measured at electrodes C3 and C4 were extracted using a sixth-order Butterworth filter. For each of the three evaluation approaches, the feature vectors were formed by calculating the variance in each of the two frequency bands for each trial. This resulted in a four-dimensional feature vector for each trial and each evaluation approach.
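A minimal sketch of this feature construction, with the band edges of Tab. 1 passed in as parameters, might look as follows; the filter design details and the scikit-learn calls in the usage note are our illustrative choices, and the classification step they refer to is described next:

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def band_variance(sig, lo, hi, fs=500, order=3):
        # variance in one reactive band; a bandpass of design order 3 yields a
        # sixth-order Butterworth filter
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return np.var(filtfilt(b, a, sig))

    def trial_features(components, bands, fs=500):
        # components: the two filtered signals of one trial (e.g. ASF 1 and ASF 2);
        # bands: the two (lo, hi) tuples of Tab. 1 -> 4-dimensional feature vector
        return np.array([band_variance(c, lo, hi, fs)
                         for c in components for (lo, hi) in bands])

    # evaluation as described below, with F of shape (n_trials, 4) and labels y
    # (for CSP, the filters must be recomputed inside every fold, which this
    # one-liner does not do):
    # acc = cross_val_score(LinearDiscriminantAnalysis(), F, y,
    #                       cv=LeaveOneOut()).mean()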
The feature vectors were then classified using leave-one-out cross-validation with Fisher Linear Discriminant Analysis (c.f. [13]). Note that for the CSP-algorithm this required recalculation of the CSPs for each cross-validation fold. The classification results for each approach and all three subjects are shown in Tab. 1.

As can be seen in Tab. 1, the ASFs lead to an increase in classification accuracy of 3.3-17.4 % relative to measuring the ERS/ERD at electrodes C3 and C4. The CSP-algorithm on the other hand leads to worse classification results compared to only using the ERS/ERD measured at electrodes C3/C4. In fact, for S1 and S2 the classification accuracy was not above chance for the CSP-algorithm. In agreement with [7], the ERS/ERD as well as the CSP plots for subjects S1 and S2 (not shown here) indicate that this is due to the fact that the CSPs focus on artifactual components that are not related to motor imagery. Subjects S2 and S3 achieved a classification accuracy of 90.3 and 94.7 % using the ASFs, while subject S1 only achieved 63 %. This correlates with the personal reports of the subjects, with S2 and S3 reporting that they considered their motor imagery to be successful, while S1 reported difficulties in imagining opening and closing his hands.

4 Discussion

In this paper, we presented a new approach for feature extraction for EEG-based BCIs. We derived an adaptive spatial filter that maximizes the ratio of the variance of the electric field originating in a specified ROI of the cortex and the overall variance of the measured EEG. By designing two ASFs with the ROIs centered in the hand areas of the motor cortex, we showed that the classification accuracy of imaginary movements of the left and right hand increased by 3.3 to 17.4 % relative to using the EEG measured directly above motor areas at electrodes C3 and C4. This was achieved without any artifact correction or rejection of trials. In contrast, applying the CSP-algorithm to the same data sets led to a classification accuracy below that of only using recordings from electrodes C3 and C4 for feature extraction for one subject, and a classification accuracy that was not above chance for the other two subjects. This was due to the lack of robustness of the CSP-algorithm, which focused on artifactual components of the EEG. We thus conclude that the proposed ASF-method enables a significant increase in classification accuracy, and is very robust to artifactual components in the EEG.

While the presented results are already promising, several aspects of the ASF can be further optimized. These include the four-shell spherical head model used for estimating the leadfield matrix, which is the simplest and least accurate model available in the literature. Employing more realistic models of volume conduction, such as finite element or boundary element methods (FEM/BEM), is expected to further increase classification accuracy. Furthermore, the ROIs were heuristically chosen as spheres located radially below electrodes C3 and C4. Due to individual differences in physiology and/or misplacement of the electrode caps, the ROIs were unlikely to be centered in the hand areas of the motor cortex of each subject. Optimization of the center and extent of the ROI, either by a-priori knowledge gained from fMRI scans or by numerical optimization of the ERD in specific frequency bands, is expected to lead to higher SNRs and hence higher classification accuracy.
Another issue that can be addressed to improve the performance of the ASF is the physiologically unjustified assumption of uncorrelated sources in the ROI. Besides such parameter optimization issues, future lines of research include extending the algorithm to multi-class problems, e.g., BCIs using motor imagery of more than two limbs. Conceptually, this can be done by designing another ASF centered in the area of the motor cortex representing the specific limb. Further research has to show which body parts are best suited for this task. Finally, all work presented here has been done offline. Online versions of the ASF-algorithm are under development and will be presented in future work.

References

[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767-791, 2002.
[2] H.H. Jasper and W. Penfield. Electrocorticograms in man: effect of the voluntary movement upon the electrical activity of the precentral gyrus. Arch. Psychiat. Z. Neurol., 183:163-174, 1949.
[3] G. Pfurtscheller, Ch. Neuper, D. Flotzinger, and M. Pregenzer. EEG-based discrimination between imagination of right and left hand movement. Electroencephalography and Clinical Neurophysiology, 103:642-651, 1997.
[4] G. Pfurtscheller and F.H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110:1842-1857, 1999.
[5] G. Blanchard and B. Blankertz. BCI competition 2003 - data set IIa: Spatial patterns of self-controlled brain rhythm modulations. IEEE Transactions on Biomedical Engineering, 51(6):1062-1066, 2004.
[6] Z.J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79:440-447, 1991.
[7] H. Ramoser, J. Mueller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering, 8(4):441-446, 2000.
[8] L.C. Parra, C.D. Spence, A.D. Gerson, and P. Sajda. Recipes for linear analysis of EEG. Neuroimage, 28:326-341, 2005.
[9] S. Baillet, J.C. Mosher, and R.M. Leahy. Electromagnetic brain mapping. IEEE Signal Processing Magazine, 18(6):14-30, 2001.
[10] P.L. Nunez and R. Srinivasan. Electric Fields of the Brain: The Neurophysics of EEG. Oxford University Press, 2nd edition, 2006.
[11] B.N. Cuffin and D. Cohen. Comparison of the magnetoencephalogram and electroencephalogram. Electroencephalography and Clinical Neurophysiology, 47(2):132-146, 1979.
[12] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics. Journal of Neuroscience Methods, 134:9-21, 2004.
[13] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. Wiley, 2nd edition, 2000.
Logistic Regression for Single Trial EEG Classification

Ryota Tomioka, Kazuyuki Aihara
Dept. of Mathematical Informatics, IST, The University of Tokyo, 113-8656 Tokyo, Japan.
ryotat@first.fhg.de, aihara@sat.t.u-tokyo.ac.jp

Klaus-Robert Müller
Dept. of Computer Science, Technical University of Berlin, Franklinstr. 28/29, 10587 Berlin, Germany.
klaus@first.fhg.de

(Additional affiliations: Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany; ERATO Aihara Complexity Modeling Project, JST, 153-8505 Tokyo, Japan.)

Abstract

We propose a novel framework for the classification of single trial ElectroEncephaloGraphy (EEG), based on regularized logistic regression. Framed in this robust statistical framework, no prior feature extraction or outlier removal is required. We present two variations of parameterizing the regression function: (a) with a full-rank symmetric matrix coefficient and (b) as a difference of two rank-one matrices. In the first case, the problem is convex and the logistic regression is optimal under a generative model. The latter case is shown to be related to the Common Spatial Pattern (CSP) algorithm, which is a popular technique in Brain-Computer Interfacing. The regression coefficients can also be topographically mapped onto the scalp similarly to CSP projections, which allows neuro-physiological interpretation. Simulations on 162 BCI datasets demonstrate that classification accuracy and robustness compare favorably against conventional CSP based classifiers.

1 Introduction

The goal of Brain-Computer Interface (BCI) research [1, 2, 3, 4, 5, 6, 7] is to provide a direct control pathway from human intentions, reflected in brain signals, to computers. Such a system will not only provide disabled people more direct and natural control over a neuroprosthesis or over a computer application (e.g. [2]), but also opens up a further channel of man-machine interaction for healthy people to communicate solely by their intentions.

Machine learning approaches to BCI have proven to be effective by requiring less subject training and by compensating for the high inter-subject variability. In this field, a number of studies have focused on constructing better low-dimensional representations that combine various features of brain activities [3, 4], because the problem of classifying EEG signals is intrinsically high dimensional. In particular, efforts have been made to reduce the number of electrodes by eliminating electrodes recursively [8] or by decomposition techniques, e.g., ICA, which only uses the marginal distribution, or Common Spatial Patterns (CSP) [9], which additionally takes the labels into account.

In practice, a BCI system has often been constructed by combining a feature extraction step and a classification step. Our contribution is a logistic regression classifier that integrates both steps under the roof of a single minimization problem and uses well-controlled regularization. Moreover, the classifier output has a probabilistic interpretation. We study a BCI based on the motor imagination paradigm. Motor imagination can be captured through spatially localized band-power modulation in the μ- (10-15 Hz) or β- (20-30 Hz) band, characterized by the second-order statistics of the signal; the underlying neuro-physiology is well known as Event Related Desynchronization (ERD) [10].

1.1 Problem setting

Let us denote by X ∈ R^{d×T} the EEG signal of a single trial of an imaginary motor movement¹, where d is the number of electrodes and T is the number of sampled time-points in a trial.
We consider a binary classification problem where each class, e.g. right or left hand imaginary movement, is called the positive (+) or negative (−) class. Let y ∈ {+1, −1} be the class label. Given a set of trials and labels {X_i, y_i}_{i=1}^{n}, the task is to predict the class label y for an unobserved trial X.

1.2 Conventional method: classifying with CSP features

In motor-imagery EEG signal classification, Common Spatial Pattern (CSP) based classifiers have proven to be powerful [11, 3, 6]. CSP is a decomposition method proposed by Koles [9] that finds a set of projections that simultaneously diagonalize the covariance matrices corresponding to two brain states. Formally, the covariance matrices² are defined as

    Σ_c = (1/|I_c|) Σ_{i∈I_c} X_i X_i^T    (c ∈ {+, −}),    (1)

where I_c is the set of indices belonging to a class c ∈ {+, −}; thus I_+ ∪ I_− = {1, …, n}. Then, the simultaneous diagonalization is achieved by solving the following generalized eigenvalue problem:

    Σ_+ w = λ Σ_− w.    (2)

Note that for each pair of eigenvector and eigenvalue (w_j, λ_j), the equality λ_j = (w_j^T Σ_+ w_j) / (w_j^T Σ_− w_j) holds. Therefore, the eigenvector with the largest eigenvalue corresponds to the projection with the maximum ratio of power for the "+" class and the "−" class, and the other way around for the eigenvector with the smallest eigenvalue. In this paper, we call these eigenvectors filters³; we call the eigenvector of an eigenvalue smaller (or larger) than one a filter for the "+" class (or the "−" class), respectively, because the signal projected with them optimally (in the spirit of eigenvalues) captures the task-related de-synchronization in each class. It is common practice that only the n_of largest and the n_of smallest eigenvectors are used to construct a low-dimensional feature representation. The feature vector consists of logarithms of the projected signal powers, and a Linear Discriminant Analysis (LDA) classifier is trained on the resulting feature vector. To summarize, the conventional CSP based classifier can be constructed as follows:

How to build a CSP based classifier:
1. Solve the generalized eigenvalue problem Eq. (2).
2. Take the n_of largest and smallest eigenvectors {w_j}_{j=1}^{J} (J = 2 n_of).
3. x_i := [log w_j^T X_i X_i^T w_j]_{j=1}^{J}  (i = 1, …, n).
4. Train an LDA classifier on {x_i, y_i}_{i=1}^{n}.

¹ For simplicity, we assume that the signal is already band-pass filtered and each trial is centered and scaled as X = (1/√T) X_original (I_T − (1/T) 1 1^T).
² Although it is convenient to call Eq. (1) a covariance matrix, calling it an averaged cross power matrix gives better insight into the nature of the problem, because we are focusing on the task-related modulation of rhythmic activities.
³ According to the convention of [12].
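A compact sketch of this four-step recipe, using SciPy's generalized eigensolver and scikit-learn's LDA as illustrative stand-ins (this is not the authors' implementation), is:

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_csp_lda(Xs, ys, n_of=3):
        # Xs: list of (d, T) trials; ys: labels in {+1, -1}
        Sp = np.mean([X @ X.T for X, y in zip(Xs, ys) if y == +1], axis=0)  # Eq. (1)
        Sm = np.mean([X @ X.T for X, y in zip(Xs, ys) if y == -1], axis=0)
        evals, V = eigh(Sp, Sm)                        # Eq. (2), eigenvalues ascending
        W = np.hstack([V[:, :n_of], V[:, -n_of:]])     # step 2: n_of smallest/largest
        feats = np.log([[w @ X @ X.T @ w for w in W.T] for X in Xs])        # step 3
        clf = LinearDiscriminantAnalysis().fit(feats, ys)                   # step 4
        return W, clf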
2 Theory

2.1 The model

We consider the following discriminative model; we model the symmetric logit transform of the posterior class probability to be a linear function with respect to the second-order statistics of the EEG signal:

    log [ P(y = +1 | X) / P(y = −1 | X) ] = f(X; θ) := tr(W X X^T) + b,    (3)

where θ := (W, b) ∈ Sym(d) × R, W is a symmetric d × d matrix, and b is the bias term. The model (3) can be derived by assuming, for each class, a zero-mean Gaussian distribution with no temporal correlation and a covariance matrix Σ_±, as follows:

    log [ P(y = +1 | X) / P(y = −1 | X) ] = (1/2) tr( (Σ_−^{−1} − Σ_+^{−1}) X X^T ) + const.    (4)

However, training of a discriminative model is robust to misspecification of the marginal distribution P(X) [13]. In other words, the marginal distribution P(X) is a nuisance parameter; we maximize the joint log-likelihood, which is decomposed as log P(y, X | θ) = log P(y | X, θ) + log P(X), only with respect to θ [14]. Therefore, no assumption about the generative model is necessary. Note that from Eq. (4) the optimal W normally has both positive and negative eigenvalues.

2.2 Logistic regression

2.2.1 Linear logistic regression

We minimize the negative log-likelihood of Eq. (3) with an additional regularization term, which is written as follows:

    min_{W ∈ Sym(d), b ∈ R}  (1/n) Σ_{i=1}^{n} log(1 + e^{−y_i f(X_i; θ)}) + (C/2n) (tr Σ_P W Σ_P W + b²).    (5)

Here, the pooled covariance matrix Σ_P := (1/n) Σ_{i=1}^{n} X_i X_i^T is introduced in the regularization term in order to make the regularization invariant to linear transformations of the data; if we rewrite W as W := Σ_P^{−1/2} W̃ Σ_P^{−1/2}, one can easily see that the regularization term is simply the Frobenius norm of the symmetric matrix W̃; the transformation corresponds to the whitening of the signal X̃ = Σ_P^{−1/2} X. By simple calculation, one can see that the loss term is the negative logarithm of the conditional likelihood Π_{i=1}^{n} 1/(1 + e^{−y_i f(X_i; θ)}), in other words the probability of observing head (y_i = +1) or tail (y_i = −1) by tossing n coins with probability P(y = +1 | X = X_i, θ) (i = 1, …, n) for the head. From a general point of view, the loss term of Eq. (5) converges asymptotically to the true loss, where the empirical average is replaced by the expectation over X and y, whose minimum over functions in L²(P_X) is achieved by the symmetric logit transform of P(y = +1 | X) [15]. Note that the problem Eq. (5) is convex. The problem of classifying motor imagery EEG signals is now addressed under a single loss function. Based on the criterion (Eq. (5)) we can say how good a solution is, and we know how to properly regularize it.

2.2.2 Rank=2 approximation of the linear logistic regression

Here we present a rank=2 approximation of the regression function (3). Using this approximation we can greatly reduce the number of parameters to be estimated, from a symmetric matrix coefficient to a pair of projection coefficients, and additionally gain insight into the relevant feature the classifier has found. The rank=2 approximation of the regression function (3) is written as follows:

    f̃(X; θ̃) := (1/2) tr( (−w_1 w_1^T + w_2 w_2^T) X X^T ) + b,    (6)

where θ̃ := (w_1, w_2, b) ∈ R^d × R^d × R. The rationale for choosing this special form of function is that the Bayes-optimal regression coefficient in Eq. (4) is the difference of two positive definite matrices; therefore two bases with opposite signs are at least necessary to capture the nature of Eq. (4) (incorporating more bases goes beyond the scope of this contribution). The rank=2 parameterized logistic regression can be obtained by minimizing the sum of the logistic regression loss and regularization terms similarly to Eq. (5):

    min_{w_1, w_2 ∈ R^d, b ∈ R}  (1/n) Σ_{i=1}^{n} log(1 + e^{−y_i f̃(X_i; θ̃)}) + (C/2n) (w_1^T Σ_P w_1 + w_2^T Σ_P w_2 + b²).    (7)

Here, again, the pooled covariance matrix Σ_P is used as a metric in order to ensure invariance to linear transformations. Note that the bases {w_1, w_2} give projections of the signal into a two-dimensional feature space in a similar manner as CSP (see Sec. 1.2). We call w_1 and w_2 filters corresponding to the "+" and "−" classes, respectively, similarly to CSP. The filters can be topographically mapped onto the scalp, from which insight into the classifier can be obtained.
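For illustration, the objective of Eq. (5) on whitened data (where Σ_P = I and the regularizer reduces to the squared Frobenius norm) can be written as follows; this sketch only evaluates the criterion and leaves the choice of convex optimizer open:

    import numpy as np

    def regularized_nll(W, b, Xs, ys, C):
        # Eq. (5) with Sigma_P = I: logistic loss plus (C/2n)(||W||_F^2 + b^2);
        # W: symmetric (d, d); Xs: list of whitened (d, T) trials; ys in {+1, -1}
        z = np.array([y * (np.trace(W @ (X @ X.T)) + b) for X, y in zip(Xs, ys)])
        loss = np.mean(np.logaddexp(0.0, -z))             # log(1 + exp(-z)), stable
        reg = C / (2 * len(Xs)) * (np.sum(W * W) + b ** 2)
        return loss + reg

Since the criterion is convex in (W, b), any generic gradient-based or Newton-type solver can in principle be used to minimize it.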
However, the major difference between CSP and the rank=2 parameterized logistic regression (Eq. (7)) is that in our new approach there is no distinction between the feature extraction step and the classifier training step. The coefficient that linearly combines the features (i.e., the norm of w_1 and w_2) is optimized in the same optimization problem (Eq. (7)).

3 Results

3.1 Experimental settings

We compare the logistic regression classifiers (Eqs. (3) and (6)) against CSP based classifiers with n_of = 1 (2 filters in total) and n_of = 3 (6 filters in total). The comparison is a chronological validation: all methods are trained on the first half of the samples and applied to the second half. We use 60 BCI experiments [6] from 29 subjects in which the subjects performed three imaginary movements, namely "right hand" (R), "left hand" (L) and "foot" (F), according to the visual cue presented on the screen, except for 9 experiments where only two classes were performed. Since we focus on binary classification, all pairwise combinations of the performed classes produced 162 (= 51 × 3 + 9) datasets. Each dataset contains 70 to 600 trials (median 280) of imaginary movements. All recordings come from calibration measurements, i.e., no feedback was presented to the subjects. The signal was recorded from the scalp with multi-channel EEG amplifiers using 32, 64 or 128 channels. The signal was sampled at 1000 Hz and down-sampled to 100 Hz before processing. The signal is band-pass filtered at 7-30 Hz, and the interval 500-3500 ms after the appearance of the visual cue is cut out from the continuous EEG signal as a trial X. The training data is whitened before minimizing Eqs. (5) and (7), because both problems become considerably simpler when Σ_P is an identity matrix. For the prediction of test data, coefficients corrected for the whitening operation, W = Σ_P^{−1/2} W̃* Σ_P^{−1/2} for Eq. (3) and w_j = Σ_P^{−1/2} w̃_j* (j = 1, 2) for Eq. (6), are used, where W̃* and w̃_j* denote the minimizers of Eqs. (5) and (7) for the whitened data. Note that we did not whiten the training and test data jointly, which could have improved the performance. The regularization constant C for the proposed method is chosen by 5×10 cross-validation on the training set.

3.2 Classification performance

[Figure 1 graphic: four scatter plots of per-dataset bit-rates; vertical axes LR (full rank) and LR (rank=2), horizontal axes CSP (6 filters) and CSP (2 filters); the proportions of datasets above/below the diagonals range from 28 % to 64 %.]

Figure 1: Comparison of bit-rates achieved by the CSP based classifiers and the logistic regression (LR) classifiers. The bit-rates achieved by the conventional CSP based classifier and the proposed LR classifier are shown as a circle for each dataset. The proportion of datasets lying above/below the diagonal is shown at the top-left/bottom-right corner of each plot, respectively. Only the difference between CSP with 2 filters and rank=2 approximated LR (lower right) is significant based on the Fisher sign test at the 5 % level.

In Fig. 1, logistic regression (LR) classifiers with the full-rank parameterization (Eq. (3); left column) and the rank=2 parameterization (Eq. (6); right column) are compared against CSP based classifiers with 6 filters (top row) and 2 filters (bottom row). Each plot shows the bit-rates achieved by CSP (horizontal) and LR (vertical) for each dataset as a circle. Here the bit-rate (per decision) is defined based on the classification test error p_err as the capacity of a binary symmetric channel with the same error probability:

    1 + (1 − p_err) log₂(1 − p_err) + p_err log₂ p_err.
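This bit-rate translates directly into code; a small illustrative helper follows (the clipping guard is our addition to avoid log(0) at p_err ∈ {0, 1}):

    import numpy as np

    def bit_rate(p_err):
        # capacity of a binary symmetric channel with error probability p_err
        p = np.clip(p_err, 1e-12, 1.0 - 1e-12)
        return 1.0 + (1.0 - p) * np.log2(1.0 - p) + p * np.log2(p)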
The proposed method improves upon the conventional method for datasets lying above the diagonal. Note that our proposed logistic regression ansatz is significantly better only in the lower right plot.

Figure 2 shows examples of spatial filter coefficients obtained by CSP (6 filters) and the rank=2 parameterized logistic regression. The CSP filters for subject A (see Fig. 2(a)) include typical cases (the first filter for the "left hand" class and the first two filters for the "right hand" class) of filters corrupted by artifacts, e.g., muscle movements. The CSP filters for the "foot" class in subject B (see Fig. 2(b)) are corrupted by strong occipital α-activity, which might have been weakly correlated to the labels by chance. Note that CSP with 2 filters only uses the first filter for each class, which corresponds to the first row in Figs. 2(a) and 2(b). On the other hand, the filter coefficients obtained by the logistic regression are clearly focused on the areas physiologically corresponding to ERD in the motor cortex (see Figs. 2(c) and (d)).

4 Discussion

4.1 Relation to CSP

Here we show that at the optimum of Eq. (7) the regression coefficients w_1 and w_2 are generalized eigenvectors of two uncertainty-weighted covariance matrices corresponding to the two motor imagery classes, where each sample is weighted by the uncertainty of the decision 1 − P(y = y_i | X = X_i). Samples that are easily explained by the regression function are weighted low, whereas those lying close to the decision boundary or on the wrong side of it are weighted high. Although both CSP and the rank=2 approximated logistic regression can thus be understood as generalized eigenvalue decompositions, the classification-optimized weighting in the logistic regression yields filters that focus on the task-related modulation of rhythmic activities more clearly than CSP, as shown in Fig. 2.

Differentiating Eq. (7) with respect to either w_1 or w_2, we obtain the following equality, which holds at the optimum:

    Σ_{i=1}^{n} [e^{−z_i} / (1 + e^{−z_i})] (±y_i) X_i X_i^T w_j* + C Σ_P w_j* = 0    (j = 1, 2),    (8)

where we define the shorthand z_i := y_i f̃(X_i; θ̃*), and ± denotes + and − for j = 1 and j = 2, respectively. Moreover, Eq. (8) can be rewritten as follows:

    Σ̃_−(θ̃*, 0) w_1* = Σ̃_+(θ̃*, C) w_1*,    (9)
    Σ̃_+(θ̃*, 0) w_2* = Σ̃_−(θ̃*, C) w_2*,    (10)

where we define the uncertainty-weighted covariance matrix as

    Σ̃_±(θ̃, C) = Σ_{i∈I_±} [e^{−z_i} / (1 + e^{−z_i})] X_i X_i^T + (C/n) Σ_{i=1}^{n} X_i X_i^T.

Note that increasing the regularization constant C biases the uncertainty-weighted covariance matrix towards the pooled covariance matrix Σ_P; the regularization only affects the right-hand sides of Eqs. (9) and (10). If C > 0, the optimal filter coefficients w_j* (j = 1, 2) are the generalized eigenvectors of Eqs. (9) and (10), respectively.
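A sketch of the uncertainty-weighted covariance matrix defined above (our illustration; the weight e^{−z}/(1 + e^{−z}) = 1 − σ(z) is computed stably with SciPy's expit):

    import numpy as np
    from scipy.special import expit

    def weighted_cov(Xs, ys, zs, cls, C):
        # Sigma_tilde_cls(theta, C): trials of class cls, each weighted by its
        # decision uncertainty 1 - sigmoid(z_i) = 1 - P(y_i | X_i), plus the
        # C-scaled pooled term that biases the matrix towards Sigma_P.
        d = Xs[0].shape[0]
        S = np.zeros((d, d))
        for X, y, z in zip(Xs, ys, zs):               # z_i = y_i * f~(X_i; theta~)
            if y == cls:
                S += (1.0 - expit(z)) * (X @ X.T)
        S += (C / len(Xs)) * sum(X @ X.T for X in Xs)
        return S

With CSP, by contrast, every trial of a class would enter such a matrix with weight one, which is exactly the uniform-importance behavior criticized next.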
4.2 CSP is not optimal

When first proposed, CSP was a decomposition technique rather than a classification technique (see [9]). After being introduced to the BCI community by [11], it also proved powerful in classifying imaginary motor movements [3, 6]. However, since it is not optimized for the classification problem, it has two major drawbacks.

Firstly, the selection of "good" CSP components is usually done somewhat arbitrarily. A widely used heuristic is to choose several generalized eigenvectors from both ends of the eigenvalue spectrum. However, as for subject B in Fig. 2, it is often observed that filters corresponding to overwhelmingly strong power come to the top of the spectrum even though they are not strongly correlated with the label. In practice, an experienced investigator can choose good filters by looking at them, but the validity of the selection cannot be assessed because the manual selection cannot be done inside the cross-validation. Secondly, the simultaneous diagonalization of covariance matrices can suffer greatly from a few outlier trials, as seen for subject A in Fig. 2. Again, in practice one can inspect the EEG signals to detect outliers, but manual outlier detection is also a somewhat arbitrary, non-reproducible process, which cannot be validated.

5 Conclusion

In this paper, we have proposed a unified framework for single trial classification of motor-imagery EEG signals. The problem is addressed as a single minimization problem without any prior feature extraction or outlier removal steps. The task is to minimize a logistic regression loss with a regularization term. The regression function is a linear function with respect to the second-order statistics of the EEG signal. We have tested the proposed method on 162 BCI datasets. By parameterizing the whole regression coefficient directly, we have obtained classification accuracy comparable with CSP based classifiers. By parameterizing the regression coefficient as the difference of two rank-one matrices, an improvement over CSP based classifiers was obtained. We have shown that in the rank=2 parameterization of the logistic regression function, the optimal filter coefficients have an interpretation as a solution to a generalized eigenvalue problem similar to CSP. The difference, however, is that in the case of logistic regression every sample is weighted according to its importance for the overall classification problem, whereas in CSP all samples have uniform importance.

The proposed framework provides a basis for various future directions. For example, incorporating more than two filters will connect the two parameterizations of the regression function shown in this paper, and it may allow us to investigate how many filters are sufficient for good classification. Since the classifier output is the logit transform of the class probability, it is straightforward to generalize the method to multi-class problems. Also non-stationarities, e.g. caused by a covariate shift (see [16, 17]) in the density P(X) from one session to another, could be corrected by adapting the likelihood model.

Acknowledgments: This research was partially supported by MEXT, Grant-in-Aid for JSPS fellows, 17-11866 and Grant-in-Aid for Scientific Research on Priority Areas, 17022012, by BMBF-grant FKZ 01IBE01A, and by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767-791, 2002.
[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297-298, 1999.
[3] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current Trends in Graz Brain-Computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 216-219, 2000.
[4] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Diettrich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157-164, 2002.
[5] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis", IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127-131, 2003.
[6] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "The Berlin Brain-Computer Interface: EEG-based communication without subject training", IEEE Trans. Neural Sys. Rehab. Eng., 14(2): 147-152, 2006.
[7] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, eds., Towards Brain-Computer Interfacing, MIT Press, 2006, in press.
[8] T. N. Lal, M. Schröder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer, and B. Schölkopf, "Support Vector Channel Selection in BCI", IEEE Transactions on Biomedical Engineering, 51(6): 1003-1010, 2004.
[9] Z. J. Koles, "The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG", Electroencephalogr. Clin. Neurophysiol., 79: 440-447, 1991.
[10] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin. Neurophysiol., 110(11): 1842-1857, 1999.
[11] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441-446, 2000.
[12] N. J. Hill, J. Farquhar, T. N. Lal, and B. Schölkopf, "Time-dependent demixing of task-relevant EEG sources", in: Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course 2006, Verlag der Technischen Universität Graz, 2006.
[13] B. Efron, "The Efficiency of Logistic Regression Compared to Normal Discriminant Analysis", J. Am. Stat. Assoc., 70(352): 892-898, 1975.
[14] T. Minka, "Discriminative models, not discriminative training", Tech. Rep. TR-2005-144, Microsoft Research Cambridge, 2005.
[15] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer-Verlag, 2001.
[16] H. Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function", Journal of Statistical Planning and Inference, 90: 227-244, 2000.
[17] M. Sugiyama and K.-R. Müller, "Input-Dependent Estimation of Generalization Error under Covariate Shift", Statistics and Decisions, 23(4): 249-279, 2005.

[Figure 2 graphic: four panels of scalp maps. (a) Subject A, CSP filter coefficients for "left hand" and "right hand", eigenvalues [0.41], [0.33], [0.59], [1.88], [2.04], [2.40]; (b) Subject B, CSP filter coefficients for "left hand" and "foot", eigenvalues [0.61], [0.70], [0.67], [7.11], [4.74], [3.19]; (c) Subject A, logistic regression (rank=2) filter coefficients; (d) Subject B, logistic regression (rank=2) filter coefficients.]

Figure 2: Examples of spatial filter coefficients obtained by CSP and the rank=2 parameterized logistic regression. (a) Subject A. Some CSP filters are corrupted by artifacts. (b) Subject B. Some CSP filters are corrupted by strong occipital α-activity. (c) Subject A. Logistic regression coefficients are focusing on the physiologically expected "left hand" and "right hand" areas. (d) Subject B. Logistic regression coefficients are focusing on the "left hand" and "foot" areas. Electrode positions are marked with crosses in every plot. For CSP filters, the generalized eigenvalues (Eq. (2)) are shown inside brackets.
Branch and Bound for Semi-Supervised Support Vector Machines

Olivier Chapelle1, Max Planck Institute, Tübingen, Germany, chapelle@tuebingen.mpg.de
Vikas Sindhwani, University of Chicago, Chicago, USA, vikass@cs.uchicago.edu
S. Sathiya Keerthi, Yahoo! Research, Santa Clara, USA, selvarak@yahoo-inc.com

1 Now part of Yahoo! Research, chap@yahoo-inc.com

Abstract

Semi-supervised SVMs (S3VM) attempt to learn low-density separators by maximizing the margin over labeled and unlabeled examples. The associated optimization problem is non-convex. To examine the full potential of S3VMs modulo local minima problems in current implementations, we apply branch and bound techniques for obtaining exact, globally optimal solutions. Empirical evidence suggests that the globally optimal solution can return excellent generalization performance in situations where other implementations fail completely. While our current implementation is only applicable to small datasets, we discuss variants that can potentially lead to practically useful algorithms.

1 Introduction

A major line of research on extending SVMs to handle partially labeled datasets is based on the following idea: solve the standard SVM problem while treating the unknown labels as additional optimization variables. By maximizing the margin in the presence of unlabeled data, one learns a decision boundary that traverses through low data-density regions while respecting labels in the input space. In other words, this approach implements the cluster assumption for semi-supervised learning, namely that points in a data cluster have similar labels. This idea was first introduced in [14] under the name Transductive SVM, but since it learns an inductive rule defined over the entire input space, we refer to this approach as Semi-supervised SVM (S3VM). Since its first implementation in [9], a wide spectrum of techniques has been applied to solve the non-convex optimization problem associated with S3VMs, e.g., local combinatorial search [9], gradient descent [6], continuation techniques [3], convex-concave procedures [7], and deterministic annealing [12]. While non-convexity is partly responsible for this diversity of methods, it is also a departure from one of the nicest features of SVMs. Several experimental studies have established that S3VM implementations show varying degrees of empirical success. This is conjectured to be closely tied to their susceptibility to local minima problems. The following questions motivate this paper: How well do current S3VM implementations approximate the exact, globally optimal solution of the non-convex problem associated with S3VMs? Can one expect significant improvements in generalization performance by better approaching the global solution? We believe that these questions are of fundamental importance for S3VM research and are largely unresolved. This is partly due to the lack of simple implementations that practitioners can use to benchmark new algorithms against the global solution, even on small-sized problems.

Our contribution in this paper is to outline a class of branch and bound algorithms that are guaranteed to provide the globally optimal solution for S3VMs. Branch and bound techniques have previously been noted in the context of S3VM in [16], but no details were presented there. We implement and evaluate a branch and bound strategy that can serve as an upper baseline for S3VM algorithms. This strategy is not practical for typical semi-supervised settings where large amounts of unlabeled data are available.
But we believe it opens up new avenues of research that can potentially lead to more efficient variants. Empirical results on some semi-supervised tasks presented in Section 7 show that the exact solution found by branch and bound has excellent generalization performance, while other S3VM implementations perform poorly. These results also show that S3VM can compete with and even outperform graph-based techniques (e.g., [17, 13]) on problems where the latter class of methods has typically excelled.

2 Semi-Supervised Support Vector Machines

We consider the problem of binary classification. The training set consists of $l$ labeled examples $\{(x_i, y_i)\}_{i=1}^{l}$, $y_i = \pm 1$, and of $u$ unlabeled examples $\{x_i\}_{i=l+1}^{n}$, with $n = l + u$. In the linear case, the following objective function is minimized over both the hyperplane parameters $w$ and $b$, and the label vector $y_u := [y_{l+1} \ldots y_n]^\top$,

$$\min_{w, b, y_u, \xi_i \ge 0} \;\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{l} \xi_i^p + C^* \sum_{i=l+1}^{n} \xi_i^p \qquad (1)$$

under the constraints $y_i (w \cdot x_i + b) \ge 1 - \xi_i$, $1 \le i \le n$. Nonlinear decision boundaries can be constructed using the kernel trick [15]. While in general any convex loss function can be used, it is common to penalize the training errors either linearly ($p = 1$) or quadratically ($p = 2$). In the rest of the paper, we consider $p = 2$. The first two terms in (1) correspond to a standard SVM. The last one takes into account the unlabeled points and can be seen as an implementation of the cluster assumption [11] or low density separation assumption [6]; indeed, it drives the outputs of the unlabeled points away from 0 (see Figure 1).

[Figure 1: With $p = 2$ in (1), the loss of a point with label $y$ and signed output $t$ is $\max(0, 1 - yt)^2$. For an unlabeled point, this is $\min_y \max(0, 1 - yt)^2 = \max(0, 1 - |t|)^2$. The figure plots this unlabeled loss against the signed output on $[-1.5, 1.5]$.]

For simplicity, we take $C^* = C$. In practice, however, it is important to set these two values independently, because $C$ reflects our confidence in the labels of the training points, while $C^*$ corresponds to our belief in the low density separation assumption. In addition, we add the following balancing constraint to (1),

$$\frac{1}{u} \sum_{i=l+1}^{n} \max(y_i, 0) = r. \qquad (2)$$

This constraint is necessary to avoid unbalanced solutions and has also been used in the original implementation [9]. Ideally, the parameter $r$ should be set to the ratio of positive points in the unlabeled set. Since it is unknown, $r$ is usually estimated from the class ratio on the labeled set. In that case, one may wish to "soften" this constraint, as in [6]. For the sake of simplicity, in the rest of the paper we set $r$ to the true ratio of positive points in the unlabeled set. Let us call $I$ the objective function to be minimized:

$$I(w, b, y_u) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \max(0, 1 - y_i(w \cdot x_i + b))^2.$$

There are two main strategies to minimize $I$: (1) For fixed $w$ and $b$, the optimal $y_u$ is given by the signs of $w \cdot x_i + b$; a continuous optimization over $w$ and $b$ can then be carried out [6], but note that the constraint (2) is not straightforward to enforce in this setting. (2) For a given $y_u$, the optimization over $w$ and $b$ is a standard SVM training. Let us define

$$J(y_u) = \min_{w, b} I(w, b, y_u). \qquad (3)$$

The goal is then to minimize $J$ over a set of binary variables (each evaluation of $J$ being a standard SVM training). This is the approach followed in [9] and the one that we take in this paper. The constraint (2) is implemented by setting $J(y_u) = +\infty$ for all vectors $y_u$ not satisfying it.
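As a concrete illustration of this second strategy, the following sketch (our own code, not from the paper; function names are ours) evaluates $J(y_u)$ for a candidate labeling by minimizing the differentiable squared-hinge primal directly with scipy:

```python
import numpy as np
from scipy.optimize import minimize

def svm_objective(theta, X, y, C):
    # I(w, b) = 0.5 ||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i + b))^2
    w, b = theta[:-1], theta[-1]
    slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return 0.5 * w @ w + C * np.sum(slack ** 2)

def J(X, y_l, y_u, C, r):
    """Evaluate J(y_u) of Eq. (3): train a linear SVM with squared hinge
    loss on the labeled points plus the tentatively labeled ones."""
    # Balancing constraint (2): fraction of positives among y_u must be r.
    if not np.isclose(np.mean(y_u == 1), r):
        return np.inf
    y = np.concatenate([y_l, y_u])
    theta0 = np.zeros(X.shape[1] + 1)
    res = minimize(svm_objective, theta0, args=(X, y, C), method="L-BFGS-B")
    return res.fun
```

For the small problems considered here, minimizing the primal in this way is a reasonable stand-in for a dedicated SVM solver, since the squared hinge makes the objective smooth.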
3 Branch and bound

3.1 Branch and bound basics

Suppose we want to minimize a function $f$ over a space $\mathcal{X}$, where $\mathcal{X}$ is usually discrete. A branch and bound algorithm has two main ingredients:

Branching: the region $\mathcal{X}$ is recursively split into smaller subregions. This yields a tree structure where each node corresponds to a subregion.

Bounding: consider two (disjoint) subregions (i.e. nodes) $A$ and $B \subset \mathcal{X}$. Suppose that an upper bound (say $a$) on the best value of $f$ over $A$ is known, a lower bound (say $b$) on the best value of $f$ over $B$ is known, and $a < b$. Then we know there is an element of $A$ that is better than all elements of $B$. So, when searching for the global minimizer we can safely discard the elements of $B$ from the search: the subtree corresponding to $B$ is pruned.

3.2 Branch and bound for S3VM

The aim is to minimize (3) over all $2^u$ possible choices for the vector $y_u$ (there are actually only $\binom{u}{ur}$ effective choices because of the constraint (2)), which constitute the set $\mathcal{X}$ introduced above. The binary search tree has the following structure. Any node corresponds to a partial labeling of the data set, and its two children correspond to the labeling of some unlabeled point. One can thus associate with any node a labeled set $L$ containing both the original labeled examples and a subset $S$ of unlabeled examples $\{(x_j, y_j)\}_{j \in S \subset [l+1 \ldots n]}$ to which the labels $y_j$ have been assigned. One can also associate an unlabeled set $U = [l+1 \ldots n] \setminus S$ corresponding to the subset of unlabeled points which have not been assigned a label yet. The size of the subtree rooted at this node is thus $2^{|U|}$. The root of the tree has only the original set of labeled examples associated with it, i.e. $S$ is empty. The leaves in the tree correspond to a complete labeling of the dataset, i.e. $U$ is empty. All other nodes correspond to partial labelings. As for any branch and bound algorithm, we have to decide about the following choices:

Branching: For a given node in the tree (i.e. a partial labeling of the unlabeled set), what should its two children be (i.e. which unlabeled point should be labeled next)?

Bounding: Which upper and lower bounds should be used?

Exploration: In which order will the search tree be examined? In other words, which subtree should be explored next?

Note that the tree is not built explicitly but on the fly as we explore it.

Concerning the upper bound, we adopted the following simple strategy: for a leaf node, the upper bound is simply the value of the function; for a non-leaf node, there is no upper bound. In other words, the upper bound is the best objective function value found so far. Coming back to the notations of Section 3.1, the set $A$ is the leaf corresponding to the best solution found so far, and the set $B$ is the subtree that we are considering to explore. Because of this choice for the upper bound, a natural way to explore the tree is depth-first search. Indeed, it is important to reach the leaves as often as possible in order to have a tight upper bound and thus perform aggressive pruning. The choice of the lower bound and the branching strategy are presented next.

4 Lower bound

We consider a simple lower bound based on the following observation. The minimum of the objective function (1) is smaller when $C^* = 0$ than when $C^* > 0$. But $C^* = 0$ corresponds to a standard SVM, ignoring the unlabeled data. We can therefore compute a lower bound at a given node by optimizing a standard SVM on the labeled set associated with this node.
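A minimal sketch of this simple lower bound (our own code, assuming a linear kernel): train a squared-hinge SVM on the node's labeled set and report its primal objective value. Note that scikit-learn's LinearSVC also regularizes the intercept slightly, so the value is a close approximation rather than the exact optimum of Eq. (1) with $C^* = 0$.

```python
import numpy as np
from sklearn.svm import LinearSVC

def lower_bound(X_labeled, y_labeled, C):
    """Lower bound at a node: objective of a supervised SVM trained on
    the (partially) labeled set only, i.e. Eq. (1) with C* = 0."""
    clf = LinearSVC(C=C, loss="squared_hinge", fit_intercept=True)
    clf.fit(X_labeled, y_labeled)
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    slack = np.maximum(0.0, 1.0 - y_labeled * (X_labeled @ w + b))
    return 0.5 * w @ w + C * np.sum(slack ** 2)
```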
We now present a more general framework for computing lower bounds, based on the dual objective function of SVMs. Let $D(\alpha, y_U)$ be the dual objective function, where $y_U$ corresponds to the labels of the unlabeled points which have not been assigned a label yet,

$$D(\alpha, y_U) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j \left( K(x_i, x_j) + \frac{\delta_{ij}}{2C} \right). \qquad (4)$$

The dual feasibility conditions are

$$\alpha_i \ge 0 \quad \text{and} \quad \sum_i \alpha_i y_i = 0. \qquad (5)$$

Now suppose that we have a strategy that, given $y_U$, finds a vector $\alpha(y_U)$ satisfying (5). Since the dual is maximized,

$$D(\alpha(y_U), y_U) \le \max_{\alpha} D(\alpha, y_U) = J(y_U),$$

where $J$ has been defined in (3). Let $Q(y_U) := D(\alpha(y_U), y_U)$ and let lb be a lower bound on (or the value of) $\min Q(y_U)$, where the minimum is taken over all $y_U$ satisfying the balancing constraint (2). Then lb is also a lower bound for the value of the objective function corresponding to that node. The goal is thus to find a choice for $\alpha(y_U)$ such that a lower bound on $Q$ can be computed efficiently. The choice corresponding to the lower bound presented above is the following: train an SVM on the labeled points, obtain the vector $\alpha$, and complete it with zeros for the unlabeled points. Then $Q(y_U)$ is the same for all possible labelings of the unlabeled points, and the lower bound is the SVM objective function on the labeled points.

Here is a sketch of another possibility for $\alpha(y_U)$ that one can explore: instead of completing the vector $\alpha$ with zeros, we complete it with a constant $\bar{\alpha}$ which would typically be of the same order of magnitude as the $\alpha_i$. Then $Q(y_U) = \sum_i \alpha_i - \frac{1}{2} y^\top H y$, where $H_{ij} = \alpha_i \alpha_j K_{ij}$. To lower bound $Q$, one can use results from the quadratic zero-one programming literature [10] or solve a constrained eigenvalue problem [8]. Finally, note that unless $\sum_{i \in U} y_i = 0$, the constraint $\sum_i \alpha_i y_i = 0$ will not be satisfied. One remedy is to train the supervised SVM with the constraint $\sum_{i \in L} \alpha_i y_i = -\bar{\alpha} \sum_{i \in U} y_i = \bar{\alpha}\,(n - 2ru + \sum_{i \in L} y_i)$ (because of (2)). In the primal, this amounts to penalizing the bias term $b$.

5 Branching

At a given node, some unlabeled points have already been assigned a label. Which unlabeled point should be labeled next? Since our strategy is to reach a good solution as soon as possible (see the last paragraph of Section 3.2), it seems natural to assign the label that we are most confident about. A simple possibility would be to branch on the unlabeled point which is nearest to a labeled point under a reliable distance metric. We now present a more principled approach based on an analysis of the objective value. We say that we are "confident" about a particular label of an unlabeled point when assigning the opposite label results in a big increase of the objective value: such a partial solution would be unlikely to lead to the optimal one. Let us formalize this strategy. Remember from Section 3.2 that a node is associated with a set $L$ of currently labeled examples and a set $U$ of unlabeled examples. Let $s(L)$ be the objective value of an SVM trained on the labeled set,

$$s(L) = \min_{w, b} \; \frac{1}{2}\|w\|^2 + C \sum_{(x_i, y_i) \in L} \max(0, 1 - y_i(w \cdot x_i + b))^2. \qquad (6)$$

As discussed in the previous section, the lower bound is $s(L)$. Our branching strategy consists in selecting the following point in $U$,

$$\arg\max_{x \in U, \, y = \pm 1} \; s(L \cup \{x, y\}). \qquad (7)$$

In other words, we want to find the unlabeled point $x^*$ and label $y^*$ which would make the objective function increase as much as possible. We then branch on $x^*$, but start exploring the branch with the most likely label $-y^*$. This strategy has an intuitive link with the "label propagation" idea [17]: an unlabeled point which is near a labeled point is likely to have the same label; otherwise, the objective function would be large.
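A naive sketch of the branching rule (7) (our own illustration, not the paper's code): retrain the labeled-set SVM for every candidate point/label pair and pick the pair whose objective grows the most. The helper `svm_objective_value` is a hypothetical stand-in for $s(L)$, e.g. the `lower_bound` function sketched above.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_objective_value(X, y, C):
    # s(L) of Eq. (6), up to LinearSVC's slight intercept regularization.
    clf = LinearSVC(C=C, loss="squared_hinge").fit(X, y)
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    return 0.5 * w @ w + C * np.sum(np.maximum(0, 1 - y * (X @ w + b)) ** 2)

def select_branching_point(X_L, y_L, X_U, C):
    """Brute-force Eq. (7): return the index into X_U to branch on and
    the label to try first (the negation of the worst-case label)."""
    best = (-np.inf, None, None)
    for i, x in enumerate(X_U):
        for y in (-1, 1):
            obj = svm_objective_value(
                np.vstack([X_L, x]), np.append(y_L, y), C)
            if obj > best[0]:
                best = (obj, i, y)
    _, i_star, y_star = best
    return i_star, -y_star  # explore the most likely label first
```

This brute-force version retrains an SVM for every candidate; the paper's Proposition 1 below replaces the inner retraining with a closed-form approximation.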
A main disadvantage of this approach is that solving (7) requires a large number of SVM trainings. It is, however, possible to compute $s(L \cup \{x, y\})$ approximately. The idea is similar to the fast approximation of the leave-one-out solution [5]; here the situation is "add-one-in". If an SVM has been trained on the set $L$, it is possible to efficiently compute the solution when one point is added to the training set, under the assumption that the set of support vectors does not change when adding this point. In practice the set is likely to change, so the solution will only be approximate.

Proposition 1. Consider training an SVM on a labeled set $L$ with quadratic penalization of the errors (cf. (6) or (4)). Let $f$ be the learned function and sv the set of support vectors. Then, if sv does not change while adding a point $(x, y)$ to the training set,

$$s(L \cup \{x, y\}) = s(L) + \frac{\max(0, 1 - y f(x))^2}{2 S_x^2 + 1/C}, \qquad (8)$$

where $S_x^2 = K(x, x) - v^\top K_{sv}^{-1} v$,

$$K_{sv} = \begin{pmatrix} \left( K(x_i, x_j) + \frac{\delta_{ij}}{2C} \right)_{i,j \in sv} & \mathbf{1} \\ \mathbf{1}^\top & 0 \end{pmatrix}, \qquad v^\top = \big( (K(x_i, x))_{i \in sv}, \; 1 \big).$$

The proof is omitted for lack of space. It is based on the fact that $s(L) = \frac{1}{2} y_{sv}^\top K_{sv}^{-1} y_{sv}$ and relies on the block matrix inverse formula.

6 Algorithm

The algorithm is implemented recursively (see Algorithm 1). At the beginning, the upper bound can either be set to $+\infty$ or to a solution found by another algorithm. Note that the SVM trainings are incremental: whenever we go down the tree, one point is added to the labeled set. For this reason, the retraining can be done efficiently (also see [2]), since effectively we just need to update the inverse of a matrix.

Algorithm 1 Branch and bound for S3VM (BB).
Function: (Y*, v) <- S3VM(Y, ub)   % recursive implementation
Input: Y: a partially labeled vector (0 for unlabeled); ub: an upper bound on the optimal objective value.
Output: Y*: optimal fully labeled vector; v: corresponding objective function value.
  if sum_i max(0, Y_i) > ur or sum_i max(0, -Y_i) > n - ur then
      return   % constraint (2) cannot be satisfied => do not explore this subtree
  end if
  v <- SVM(Y)   % compute the SVM objective function on the labeled points
  if v > ub then
      return   % the lower bound exceeds the upper bound => do not explore this subtree
  end if
  if Y is fully labeled then
      Y* <- Y; return   % we are at a leaf
  end if
  Find index i and label y as in (7)   % find the next unlabeled point to label
  Y_i <- -y   % start with the most likely label
  (Y*, v) <- S3VM(Y, ub)   % find (recursively) the best solution
  Y_i <- -Y_i   % switch the label
  (Y2*, v2) <- S3VM(Y, min(ub, v))   % explore the other branch with an updated upper bound
  if v2 < v then
      Y* <- Y2*; v <- v2   % keep the best solution
  end if
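A compact Python rendering of Algorithm 1 (our own sketch; it assumes the hypothetical helpers `svm_objective_value` and `select_branching_point` from the earlier sketches are in scope, and simplifies the balance bookkeeping):

```python
import numpy as np

def branch_and_bound(Y, ub, best, X, C, n_pos):
    """Depth-first Algorithm 1. Y: labels in {-1, 0, +1} (0 = unassigned);
    ub: current upper bound; best: dict holding the incumbent solution;
    n_pos: total positives allowed (labeled positives + round(u * r))."""
    if np.sum(Y == 1) > n_pos or np.sum(Y == -1) > len(Y) - n_pos:
        return  # balance constraint (2) can no longer be satisfied
    labeled = Y != 0
    v = svm_objective_value(X[labeled], Y[labeled], C)  # lower bound
    if v > ub:
        return  # prune: lower bound exceeds the incumbent
    if labeled.all():
        if v < best["v"]:
            best["Y"], best["v"] = Y.copy(), v  # new incumbent at a leaf
        return
    j, y_first = select_branching_point(X[labeled], Y[labeled],
                                        X[~labeled], C)
    i = np.flatnonzero(~labeled)[j]  # map back to a global index
    for y in (y_first, -y_first):    # most likely label first
        Y[i] = y
        branch_and_bound(Y, min(ub, best["v"]), best, X, C, n_pos)
    Y[i] = 0  # undo the assignment when backtracking

# usage: best = {"Y": None, "v": np.inf}
#        branch_and_bound(Y0, np.inf, best, X, C, n_pos)
```

The incumbent stored in `best` plays the role of the upper bound that tightens as leaves are reached, which is what makes depth-first exploration effective here.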
7 Experiments

We consider here two datasets on which other S3VM implementations are unable to achieve satisfying test error rates. This naturally raises the following questions: is this weak performance due to the unsuitability of the S3VM objective function for these problems, or do these methods get stuck at highly sub-optimal local minima?

7.1 Two moons

The "two moons" dataset is now a standard benchmark for semi-supervised learning algorithms. Most graph-based methods such as [17] easily solve this problem, but so far all S3VM algorithms find it difficult to construct the right boundary (an exception is [12] using an L1 loss). We drew 100 random realizations of this dataset, fixed the bandwidth of an RBF kernel to $\sigma = 0.5$, and set $C = 10$. Each moon contained 50 unlabeled points. We compared ∇S3VM [6], cS3VM [3], CCCP [7], SVMlight [9] and DA [12]. For the first three methods, there is no direct way to enforce the constraint (2). However, these methods have a constraint that the mean output on the unlabeled points should equal some constant. This constant is normally fixed to the mean of the labels, but for the sake of consistency we did a dichotomy search on this constant in order to have (2) satisfied. Results are presented in Table 1. Note that the test errors of the other S3VM implementations are likely to improve with hyperparameter tuning, but they will still stay very high. For comparison, we have also included the results of a state-of-the-art graph-based method, LapSVM [13], whose hyperparameters were optimized for the test error and whose threshold was adjusted to satisfy the constraint (2). Matlab source code and a demo of our algorithm on the "two moons" dataset are accessible as supplementary material with this paper.

Table 1: Results on the two moons dataset (averaged over 100 random realizations)

                       ∇S3VM   cS3VM   CCCP    SVMlight   DA      BB     LapSVM
  Test error (%)        59.3    45.7    64      66.2      34.1     0      3.7
  Objective function   13.64   13.25   39.55   20.94     46.85    7.81    N/A

7.2 COIL

Extensive benchmark results reported in [4, benchmark chapter] show that on problems where classes are expected to reside on low-dimensional non-linear manifolds, e.g., handwritten digits, graph-based algorithms significantly outperform S3VM implementations. We consider here such a dataset by selecting three confusable classes from the COIL20 dataset [6] (see Figure 2). There are 72 images per class, corresponding to rotations of 5 degrees (and thus yielding a one-dimensional manifold). We randomly selected 2 images per class to be in the labeled set, the rest being unlabeled. Results are reported in Table 2. The hyperparameters were chosen to be $\sigma = 3000$ and $C = 100$.

[Figure 2: The 3 cars from the COIL dataset, subsampled to 32x32.]

Table 2: Results on the COIL dataset (averaged over 10 random realizations)

                       ∇S3VM   cS3VM   CCCP    SVMlight   DA      BB     LapSVM
  Test error (%)        60.6    60.6   47.5     55.3      48.2     0      7.5
  Objective function   267.4   235    588.3    341.6     611    110.7     N/A

From Tables 1 and 2, it appears clearly that (1) the S3VM objective function leads to excellent test errors; (2) other S3VM implementations fail completely in finding a good minimum of the objective function2; and (3) the global S3VM solution can actually outperform graph-based alternatives even when other S3VM implementations are not competitive. Concerning the running time, it is of the order of a minute for both datasets. We do not expect this algorithm to handle datasets much larger than a couple of hundred points.

2 The reported test errors are somewhat irrelevant and should not be used for ranking the different algorithms; they should just be interpreted as "failure".

8 Discussion and Conclusion

We implemented and evaluated one strategy amongst many in the class of branch and bound methods to find the globally optimal solution of S3VMs. The work of [1] is the most closely related to our methods; however, that paper presents an algorithm for linear S3VMs and relies on generic mixed integer programming, which does not exploit the problem structure as our methods can. This basic implementation can perhaps be made more efficient by choosing better bounding and branching schemes.
Also, by fixing the upper bound as the currently best objective value, we restricted our implementation to follow depth-first search. It is conceivable that breadth-first search is equally or more effective in conjunction with alternative upper bounding schemes. Pruning can be done more aggressively to speed up termination, at the expense of obtaining a solution that is suboptimal within some tolerance $\epsilon$ (i.e., prune $B$ if $a < b - \epsilon$). Finally, we note that a large family of well-tested branch and bound procedures from the zero-one quadratic programming literature can be immediately applied to the S3VM problem in the special case of squared loss. An interesting open question is whether one can provide a guarantee of polynomial time convergence under some assumptions on the data and the kernel. Concerning the running time of our current implementation, we have observed that it is most efficient whenever the global minimum is significantly smaller than most local minima: in that case, the tree can be pruned efficiently. This happens when the clusters are well separated and $C$ and $\sigma$ are not too small. For these reasons, we believe that this implementation does not scale to large datasets, but should instead be considered a proof of concept: the S3VM objective function is very well suited for semi-supervised learning, and more effort should be devoted to efficiently finding good local minima.

References
[1] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 12, 1998.
[2] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In Advances in Neural Information Processing Systems, pages 409-415, 2000.
[3] O. Chapelle, M. Chi, and A. Zien. A continuation method for semi-supervised SVMs. In International Conference on Machine Learning, 2006.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, 2006. In press. www.kyb.tuebingen.mpg.de/ssl-book/.
[5] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46:131-159, 2002.
[6] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[7] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687-1712, 2006.
[8] W. Gander, G. H. Golub, and U. von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114/115:815-839, 1989.
[9] T. Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, 1999.
[10] P. M. Pardalos and G. P. Rodgers. Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing, 45:131-144, 1990.
[11] M. Seeger. A taxonomy of semi-supervised learning methods. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, 2006.
[12] V. Sindhwani, S. Keerthi, and O. Chapelle. Deterministic annealing for semi-supervised kernel machines. In International Conference on Machine Learning, 2006.
[13] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: From transductive to semi-supervised learning. In International Conference on Machine Learning, 2005.
[14] V. Vapnik and A. Sterin. On structural risk minimization or overall risk in a problem of pattern recognition. Automation and Remote Control, 10(3):1495-1503, 1977.
[15] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New York, 1998.
[16] W. Wapnik and A. Tscherwonenkis. Theorie der Zeichenerkennung. Akademie Verlag, Berlin, 1979.
[17] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report 02-107, CMU-CALD, 2002.
A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments

Daniel J. Navarro, School of Psychology, University of Adelaide, Adelaide, SA 5005, Australia, daniel.navarro@adelaide.edu.au
Thomas L. Griffiths, Department of Psychology, UC Berkeley, Berkeley, CA 94720, USA, tom_griffiths@berkeley.edu

Abstract

The additive clustering model is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. This paper develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features used in producing similarity judgments and their importance.

1 Introduction

One of the central problems in cognitive science is determining the mental representations that underlie human inferences. A variety of solutions to this problem are based on the analysis of similarity judgments. By defining a probabilistic model that accounts for the similarity between stimuli based on their representation, statistical methods can be used to infer underlying representations from human similarity judgments. The particular methods used to infer representations from similarity judgments depend on the nature of the underlying representations. For stimuli that are assumed to be represented as points in some psychological space, multidimensional scaling algorithms [1] can be used to translate similarity judgments into stimulus locations. For stimuli that are assumed to be represented in terms of a set of latent features, additive clustering is the method of choice.

The original formulation of the additive clustering (ADCLUS) problem [2] is as follows. Assume that we have data in the form of an $n \times n$ similarity matrix $S = [s_{ij}]$, where $s_{ij}$ is the judged similarity between the $i$th and $j$th of $n$ objects. Similarities are assumed to be symmetric (with $s_{ij} = s_{ji}$) and non-negative, often constrained to lie in the interval $[0, 1]$. These empirical similarities are assumed to be well-approximated by a weighted linear function of common features. Under these assumptions, a representation that uses $m$ features to describe $n$ objects is given by an $n \times m$ matrix $F = [f_{ik}]$, where $f_{ik} = 1$ if the $i$th object possesses the $k$th feature, and $f_{ik} = 0$ if it does not. Each feature has an associated non-negative saliency weight $w = (w_1, \ldots, w_m)$. When written in matrix form, the ADCLUS model seeks to uncover a feature matrix $F$ and a weight vector $w$ such that $S \approx F W F^\top$, where $W = \mathrm{diag}(w)$ is a diagonal matrix whose nonzero elements are the saliency weights. In most applications it is assumed that there is a fixed "additive constant", a required feature possessed by all objects.
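For concreteness, a small sketch (our own code, not from the paper) of the ADCLUS prediction $S \approx F W F^\top$ on a toy problem:

```python
import numpy as np

def predicted_similarity(F, w):
    """ADCLUS prediction: mu_ij = sum_k w_k f_ik f_jk, i.e. F diag(w) F^T.
    F: (n, m) binary feature matrix; w: (m,) non-negative saliencies."""
    return F @ np.diag(w) @ F.T

# Toy example: 3 objects, 2 overlapping features plus an additive constant
F = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 1, 1]])          # last column: feature shared by everyone
w = np.array([0.5, 0.3, 0.1])      # saliency weights
print(predicted_similarity(F, w))  # off-diagonal entries are the mu_ij
```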
2 A Nonparametric Bayesian ADCLUS Model

To formalize additive clustering as a statistical model, it is standard practice to assume that the error terms are i.i.d. Gaussian [3], yielding the model

$$S = F W F^\top + E, \qquad (1)$$

where $E = [\epsilon_{ij}]$ is an $n \times n$ matrix with entries drawn from a Gaussian$(0, \sigma^2)$ distribution.

[Figure 1: Graphical model representation of the IBP-ADCLUS model. Panel (a) shows the hierarchical structure of the ADCLUS model, and panel (b) illustrates the method by which a feature matrix is generated using the Indian Buffet Process.]

Equation 1 reveals that the additive clustering model is structurally similar to the better-known factor analysis model [4], although there are several differences: most notably the constraints that $F$ is binary-valued, $W$ is necessarily diagonal, and $S$ is non-negative. In any case, if we define $\mu_{ij} = \sum_k w_k f_{ik} f_{jk}$ to be the similarity predicted by a particular choice of $F$ and $w$, then

$$s_{ij} \mid F, w, \sigma \sim \mathrm{Normal}(\mu_{ij}, \sigma^2), \qquad (2)$$

where $\sigma^2$ is the variance of the Gaussian error distribution. However, self-similarities $s_{ii}$ are not modeled in additive clustering, and are generally fixed to (the same) arbitrary values for both the model and the data. It is typical to treat $\sigma^2$ as a fixed parameter [5], and while this could perhaps be improved upon, we leave the issue open for future research. In our approach, additive clustering is framed as a form of nonparametric Bayesian inference, in which Equation 2 provides the likelihood function and the model is completed by placing priors over the weights $w$ and the feature matrix $F$. We assume a fixed Gamma prior over feature saliencies, though it is straightforward to extend this to other, more flexible priors. Setting a prior over binary feature matrices $F$ is more difficult, since there is generally no good reason to assume an upper bound on the number of features that might be relevant to a particular similarity matrix. For this reason we use the "nonparametric" Indian Buffet Process (IBP) [6], which provides a proper prior distribution over binary matrices with a fixed number of rows and an unbounded number of columns.

The IBP can be understood by imagining an Indian buffet containing an infinite number of dishes. Each customer entering the restaurant samples a number of dishes from the buffet, with a preference for those dishes that other diners have tried. For the $k$th dish sampled by at least one of the first $n - 1$ customers, the probability that the $n$th customer will also try that dish is

$$p(f_{nk} = 1 \mid F_{n-1}) = \frac{n_k}{n}, \qquad (3)$$

where $F_{n-1}$ records the choices of the previous customers, and $n_k$ denotes the number of previous customers that have sampled that dish. Being adventurous, the new customer may also try some hitherto untasted meals from the infinite buffet on offer; the number of new dishes taken by customer $n$ follows a Poisson$(\alpha/n)$ distribution. The complete IBP-ADCLUS model becomes

$$s_{ij} \mid F, w, \sigma \sim \mathrm{Normal}(\mu_{ij}, \sigma^2)$$
$$w_k \mid \lambda_1, \lambda_2 \sim \mathrm{Gamma}(\lambda_1, \lambda_2) \qquad (4)$$
$$F \mid \alpha \sim \mathrm{IBP}(\alpha).$$

The structure of this model is illustrated graphically in Figure 1(a), and an illustration of the IBP prior is shown in Figure 1(b).
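A small generative sketch of an IBP prior draw (our own code; it follows the restaurant construction above):

```python
import numpy as np

def sample_ibp(n, alpha, rng=np.random.default_rng()):
    """Draw a binary feature matrix F ~ IBP(alpha) for n objects."""
    dish_counts = []   # dish_counts[k] = number of customers who took dish k
    rows = []
    for i in range(1, n + 1):
        # existing dishes: customer i takes dish k with probability n_k / i
        row = [rng.random() < nk / i for nk in dish_counts]
        dish_counts = [nk + took for nk, took in zip(dish_counts, row)]
        # new dishes: Poisson(alpha / i) previously untasted dishes
        n_new = rng.poisson(alpha / i)
        dish_counts += [1] * n_new
        rows.append(row + [True] * n_new)
    F = np.zeros((n, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        F[i, :len(row)] = row
    return F

print(sample_ibp(10, alpha=2.0))  # around alpha * H_10 ~ 5.9 columns
```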
3 A Gibbs-Metropolis Sampling Scheme

As a Bayesian formulation of additive clustering, statistical inference in Equation 4 is based on the posterior distribution over feature matrices and saliency vectors, $p(F, w \mid S)$. Naturally, the ideal approach would be to calculate posterior quantities exactly. Unfortunately, this is generally quite difficult, so a natural alternative is to use Markov chain Monte Carlo (MCMC) methods to repeatedly sample from the posterior distribution: estimates of posterior quantities can then be made using these samples as proxies for the full distribution. We construct a simple MCMC scheme for the Bayesian ADCLUS model using a combination of Gibbs sampling [7] and more general Metropolis proposals [8].

Saliency weights. We use a Metropolis scheme to resample the saliency weights. If the current saliency is $w_k$, a candidate $w_k^*$ is first generated from a Gaussian$(w_k, 0.05)$ distribution. The value of $w_k$ is then reassigned using the Metropolis update rule. If $w_{-k}$ denotes the set of all saliencies except $w_k$, this rule is

$$w_k \leftarrow \begin{cases} w_k^* & \text{with probability } a \\ w_k & \text{with probability } 1 - a \end{cases}, \quad \text{where } a = \frac{p(S \mid F, w_{-k}, w_k^*) \, p(w_k^* \mid \lambda)}{p(S \mid F, w_{-k}, w_k) \, p(w_k \mid \lambda)}. \qquad (5)$$

With a Gamma prior, the Metropolis sampler automatically rejects all negative-valued $w_k^*$.

"Pre-existing" features. For features currently possessed by at least one object, assignments are updated using a standard Gibbs sampler: the value of $f_{ik}$ is drawn from the conditional posterior distribution over $f_{ik} \mid S, F_{-ik}, w$. Since feature assignments are discrete, it is easy to find this conditional probability by noting that

$$p(f_{ik} \mid S, F_{-ik}, w) \propto p(S \mid F, w) \, p(f_{ik} \mid F_{-ik}), \qquad (6)$$

where $F_{-ik}$ denotes the set of all feature assignments except $f_{ik}$. The first term in this expression is just the likelihood function for the ADCLUS model, and is simple to calculate. Moreover, since feature assignments in the IBP are exchangeable, we can treat the $k$th assignment as if it were the last. Given this, Equation 3 indicates that $p(f_{ik} = 1 \mid F_{-ik}) = n_{-ik}/n$, where $n_{-ik}$ counts the number of stimuli (besides the $i$th) that currently possess the $k$th feature. The Gibbs sampler deletes all single-stimulus features with probability 1, since $n_{-ik}$ will be zero for one of the stimuli.

"New" features. Since the IBP describes a prior over infinite feature matrices, the resampling procedure needs to accommodate the remaining (infinite) set of features that are not currently represented among the manifest features $F$. When resampling feature assignments, some finite number of those currently-latent features will become manifest. When sampling from the conditional prior over feature assignments for the $i$th stimulus, we hold the feature assignments fixed for all other stimuli, so this is equivalent to sampling some number of "singleton" features (i.e., features possessed only by stimulus $i$) from the conditional prior, which is Poisson$(\alpha/n)$ as noted previously.

When working with this algorithm, we typically run several chains, each initialized more or less arbitrarily. After a "burn-in" period is allowed for the sampler to converge to a sensible location (i.e., for the state to represent a sample from the posterior), we make a "draw" by recording the state of the sampler, leaving a "lag" of several iterations between successive draws to reduce the autocorrelation between samples. In doing so, it is important to check that the Markov chains have converged to the target distribution $p(F, w \mid S)$. We did so by inspecting the time series plot formed by graphing the log posterior probability of successive samples. To illustrate, one of the chains used in our simulations (see Section 5) is displayed in Figure 2, with nine parallel chains shown for comparison: the time series shows no long-term trends, and the different chains are visually indistinguishable from one another. Although elaborations and refinements are possible for both the sampler [9] and the convergence check [10], we have found this approach to be reasonably effective for the moderate-sized problems considered in our applications.
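A condensed sketch of one sampler sweep (our own illustration, not the authors' code): the Metropolis step follows Equation 5 and the Gibbs step follows Equation 6; the Gamma hyperparameters are placeholders, and the new-singleton-features step is omitted for brevity.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng()

def log_likelihood(S, F, w, sigma):
    mu = F @ np.diag(w) @ F.T
    iu = np.triu_indices_from(S, k=1)        # self-similarities are ignored
    return norm.logpdf(S[iu], mu[iu], sigma).sum()

def sweep(S, F, w, sigma, lam1=1.0, lam2=1.0):
    n = S.shape[0]
    # Metropolis step for each saliency weight, Eq. (5)
    for k in range(len(w)):
        w_star = w.copy(); w_star[k] = rng.normal(w[k], 0.05)
        log_a = (log_likelihood(S, F, w_star, sigma)
                 + gamma.logpdf(w_star[k], lam1, scale=1 / lam2)
                 - log_likelihood(S, F, w, sigma)
                 - gamma.logpdf(w[k], lam1, scale=1 / lam2))
        if np.log(rng.random()) < log_a:     # -inf prior rejects negative w
            w = w_star
    # Gibbs step for existing feature assignments, Eq. (6)
    for i in range(n):
        for k in range(F.shape[1]):
            n_ik = F[:, k].sum() - F[i, k]   # others owning feature k
            logp = np.empty(2)
            for val in (0, 1):
                F[i, k] = val
                prior = n_ik / n if val == 1 else 1 - n_ik / n
                logp[val] = (log_likelihood(S, F, w, sigma)
                             + np.log(prior + 1e-300))
            p1 = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))
            F[i, k] = int(rng.random() < p1)
    return F, w
```

Note that, as in the text, a feature owned only by stimulus i gets prior probability zero when resampled, so singleton features are deleted with probability 1.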
4 Four Estimators for the ADCLUS Model

Since the introduction of the additive clustering model, a range of algorithms has been used to infer features, including "subset selection" [2], expectation maximization [3], continuous approximations [11], and stochastic hillclimbing [5], among others. A review, as well as an effective combinatorial search algorithm, is given in [12]. Curiously, while the plethora of algorithms available for extracting estimates of $F$ and $w$ has been discussed in the literature, the variety in the choice of estimator has, to our knowledge, been largely overlooked. One advantage of the IBP-ADCLUS approach is that it allows us to discuss a range of different estimators within a single framework. We will explore estimators based on computing the posterior distribution over $F$ and $w$ given $S$. This includes estimators based on maximum a posteriori (MAP) estimation, corresponding to the value of a variable with highest posterior probability, and estimators based on taking expectations over the posterior distribution.

[Figure 2: Smoothed time series showing log-posterior probabilities (roughly $-195$ to $-170$) for 1000 successive draws from the Gibbs-Metropolis sampler, for simulated similarity data with $n = 16$. The bold line shows a single chain, while the dotted lines show the remaining nine chains.]

Conditional MAP estimation. Much of the literature defines an estimator conditional on the assumption that the number of features $m$ in the model is fixed [3][11][12]. These approaches seek to estimate the values of $F$ and $w$ that jointly maximize some utility function conditional on this known $m$. If we take the posterior probability as our measure of utility, the estimators become

$$\hat{F}_1, \hat{w}_1 = \arg\max_{F, w} \; p(F, w \mid S, m). \qquad (7)$$

Estimating the dimension is harder. The natural (MAP) estimate for $m$ is easy to state:

$$\hat{m}_1 = \arg\max_m \; p(m \mid S) = \arg\max_m \left[ \sum_{F \in \mathcal{F}_m} \int p(F, w \mid S) \, dw \right], \qquad (8)$$

where $\mathcal{F}_m$ denotes the set of feature matrices containing $m$ unique features. In practice, given the difficulty of working with Equation 8, it is typical to fix $m$ on the basis of intuition, or via some heuristic method.

MAP feature estimation. In the previous approach $m$ is given primacy, since $F$ and $w$ cannot be estimated until it is known, and no distinction is made between $F$ and $w$. In many practical situations [13], this does not reflect the priorities of the researcher. Often the feature matrix $F$ is the psychologically relevant variable, with $w$ and $m$ being nuisance parameters. In such cases, it is natural to marginalize over $w$ when estimating $F$, and to let the estimated feature matrix itself determine $m$. That is, we first select

$$\hat{F}_2 = \arg\max_F \; p(F \mid S) = \arg\max_F \left[ \int p(F, w \mid S) \, dw \right]. \qquad (9)$$

Notice that $\hat{F}_2$ provides an implicit estimate $\hat{m}_2$ of $m$, which may differ from $\hat{m}_1$. The saliencies are chosen after $\hat{F}_2$, via conditional MAP estimation:

$$\hat{w}_2 = \arg\max_w \; p(w \mid \hat{F}_2, S). \qquad (10)$$

This approach is typical of existing (parametric) Bayesian approaches to additive clustering [5][14], where analytic approximations to $p(F \mid S)$ are used for expediency.

Joint MAP estimation. Both approaches discussed so far require some aspects of the model to be estimated before others. While the rationales for this constraint differ, both approaches seem sensible. Another approach, less common in the literature, is to jointly estimate $F$ and $w$ without conditioning on $m$, yielding the MAP estimators

$$\hat{F}_3, \hat{w}_3 = \arg\max_{F, w} \; p(F, w \mid S). \qquad (11)$$

Early papers [2] recognized that this approach can be prone to overfitting, and thus requires that the prior place some emphasis on parsimony.
However, many theoretically-motivated priors (including the IBP) allow the researcher to emphasize parsimony, and some frequentist methods used in ADCLUS-like models apply penalty functions for this reason [15].

[Figure 3: (a) Posterior distributions over the number of features $p(m \mid S_o)$ in simulations containing $m_t = 6$, 8 and 10 latent features (for $n = 8$, 16 and 32 objects respectively). (b) Percentage of variance accounted for by the four similarity estimators $\hat{S}$, where the target is either the observed training data $S_o$, a new test data set $S_n$, or the true similarity matrix $S_t$:

            n = 8           n = 16          n = 32
        S_o  S_n  S_t   S_o  S_n  S_t   S_o  S_n  S_t
  S^1    79   78   87    89   90   96    91   91  100
  S^2    81   81   88    88   88   95    91   91  100
  S^3    79   78   87    89   90   96    91   91  100
  S^4    84   84   92    90   90   97    91   91  100 ]

Approximate expectations. A fourth approach aims to summarize the posterior distribution by looking at the marginal posterior probabilities associated with particular features. The probability that a particular feature $f_k$ belongs in the representation is given by

$$p(f_k \mid S) = \sum_{F : f_k \in F} p(F \mid S). \qquad (12)$$

Although this approach has never been applied in the ADCLUS literature, the concept is implicit in more general discussions of mental representation [16] that ask whether or not a specific predicate is likely to be represented. Letting $\hat{r}_k = p(f_k \mid S)$ denote the posterior probability that feature $f_k$ is manifest, we can construct a vector $\hat{r} = [\hat{r}_k]$ that contains these probabilities for all $2^n$ possible features. Although this vector discards the covariation between features across the posterior distribution, it is useful both theoretically (for testing hypotheses about specific features) and pragmatically, since the expected posterior similarities can be written as follows:

$$E[\hat{s}_{ij} \mid S] = \sum_{f_k} f_{ik} f_{jk} \hat{r}_k \hat{w}_k, \qquad (13)$$

where $\hat{w}_k = E[w_k \mid f_k, S]$ denotes the expected saliency for feature $f_k$ on those occasions when it is represented (Equation 13 relies on the fact that features combine linearly in the ADCLUS model, and is straightforward to derive). In practice, it is impossible to look at all $2^n$ features, so one would typically report only those features for which $\hat{r}_k$ is large. Since these tend to be the features that make the largest contributions to $E[\hat{s}_{ij} \mid S]$, there is a sense in which this approach approximates the expected posterior similarities.
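A sketch of this approximate-expectation estimator computed from MCMC output (our own code; `samples` is a list of (F, w) draws such as those produced by the sweep sketched earlier, and a feature is identified by its membership pattern):

```python
import numpy as np
from collections import defaultdict

def approximate_expectation(samples, n):
    """Estimate r_k = p(f_k | S) and w_k = E[w_k | f_k, S] (Eqs. 12-13)
    from posterior draws, then assemble E[s_ij | S]. Assumes feature
    columns are distinct within each draw."""
    counts, weight_sums = defaultdict(int), defaultdict(float)
    for F, w in samples:
        for k in range(F.shape[1]):
            key = tuple(F[:, k])             # a feature = its membership set
            counts[key] += 1
            weight_sums[key] += w[k]
    S_hat = np.zeros((n, n))
    report = []
    for key, c in counts.items():
        r_k = c / len(samples)               # marginal feature probability
        w_k = weight_sums[key] / c           # mean saliency when present
        f = np.array(key, dtype=float)
        S_hat += r_k * w_k * np.outer(f, f)  # Eq. (13)
        report.append((r_k, w_k, key))
    report.sort(reverse=True)                # most probable features first
    return S_hat, report
```

The sorted `report` is exactly the kind of summary used in the tables below: only the features with large marginal probability are listed.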
5 Recovering Noisy Feature Matrices

By using the IBP-ADCLUS framework, we can compare the performance of the four estimators in a reasonable fashion. Loosely following [12], we generated noisy similarity matrices with $n = 8$, 16 and 32 stimuli, based on "true" feature matrices $F_t$ with $m_t = 2\log_2(n)$ features, where each object possessed each feature with probability 0.5. Saliency weights $w_t$ were generated uniformly from the interval $[1, 3]$, but were subsequently rescaled to ensure that the "true" similarities $S_t$ had variance 1. Two sets of Gaussian noise were injected into the similarities with fixed $\sigma = 0.3$, ensuring that the noise accounted for approximately 10% of the variance in the "observed" data matrix $S_o$ and the "new" matrix $S_n$. We fixed $\alpha = 2$ for all simulations: since the number of manifest features in an IBP model follows a Poisson$(\alpha H_n)$ distribution (where $H_n$ is the $n$th harmonic number) [6], the prior has a strong bias toward parsimony. The prior expected number of features is approximately 5.4, 6.8 and 8.1 (as compared to the true values of 6, 8 and 10).

We approximated the posterior distribution $p(F, w \mid S_o)$ by drawing samples in the following manner. For a given similarity matrix, 10 Gibbs-Metropolis chains were run from different start points, and 1000 samples were drawn from each. The chains were burnt in for 1000 iterations, and a lag of 10 iterations was used between successive samples. Visual inspection suggested that five chains in the $n = 32$ condition did not converge: their log-posteriors were low, differed substantially from one another, and had a noticeable positive slope. In this case, the estimators were constructed from the five remaining chains.

[Figure 4: Posterior distributions over the number of features when the Bayesian ADCLUS model is applied to (a) the numbers data, (b) the countries data and (c) the letters data.]

Table 1: Two representations of the numbers data. (a) The representation reported in [3], extracted using an EM algorithm with the number of features fixed at eight. (b) The 10 most probable features extracted using the Bayesian ADCLUS model; the first column of numbers gives the posterior probability that a particular feature belongs in the representation, and the second displays the average saliency of the feature in the event that it is included.

  (a)  FEATURE             WEIGHT
       2 4 8               0.444
       0 1 2               0.345
       3 6 9               0.331
       6 7 8 9             0.291
       2 3 4 5 6           0.255
       1 3 5 7 9           0.216
       1 2 3 4             0.214
       4 5 6 7 8           0.172
       additive constant   0.148

  (b)  FEATURE             PROB.   WEIGHT
       3 6 9               0.79    0.326
       2 4 8               0.70    0.385
       0 1 2               0.69    0.266
       2 3 4 5 6           0.59    0.240
       6 7 8 9             0.57    0.262
       0 1 2 3 4           0.42    0.173
       2 4 6 8             0.41    0.387
       1 3 5 7 9           0.40    0.223
       4 5 6 7 8           0.34    0.181
       7 8 9               0.26    0.293
       additive constant   1.00    0.075

Figure 3(a) shows the posterior distributions over the number of features $m$ for each of the three simulation conditions. There is a tendency to underestimate the number of features when provided with small similarity matrices, the modal numbers being 3, 7 and 10. However, since the posterior estimate of $m$ is below the prior estimate when $n = 8$, it seems this effect is data-driven, as 79% of the variance in the data matrix $S_o$ can be accounted for using only three features. Since each approach allows the construction of an estimated similarity matrix $\hat{S}$, a natural comparison is to look at the proportion of variance this estimate accounts for in the observed data $S_o$, the novel data set $S_n$, and the true matrix $S_t$. In view of the noise model used to construct these matrices, the "ideal" answers for these three should be around 90%, 90% and 100% respectively. When $n = 32$, this profile is observed for all four estimators, suggesting that in this case all four estimators have converged appropriately. For the smaller matrices, the conditional MAP and joint MAP estimators ($\hat{S}_1$ and $\hat{S}_3$) agree closely. The MAP feature approach $\hat{S}_2$ appears to perform slightly better, though the difference is very small. The expectation method $\hat{S}_4$ provides the best estimate.

6 Modeling Empirical Similarities

We now turn to the analysis of empirical data. Since space constraints preclude detailed reporting of all four estimators on all data sets, we limit the discussion to the most novel IBP-ADCLUS estimators, namely the direct estimates of dimensionality provided through Equation 8, and the features extracted via "approximate expectation".

Featural representations of numbers.
6 Modeling Empirical Similarities

We now turn to the analysis of empirical data. Since space constraints preclude detailed reporting of all four estimators with respect to all data sets, we limit the discussion to the most novel IBP-ADCLUS estimators, namely the direct estimates of dimensionality provided through Equation 8, and the features extracted via "approximate expectation".

Featural representations of numbers. A standard data set used in evaluating additive clustering models measures the conceptual similarity of the numbers 0 through 9 [17]. This data set is often used as a benchmark due to the complex interrelationships between the numbers. Table 1(a) shows an eight-feature representation of these data, taken from [3] who applied a maximum likelihood approach. This representation explains 90.9% of the variance, with features corresponding to arithmetic concepts and to numerical magnitude. Fixing α = 0.05 and σ = 0.5, we drew 10,000 lagged samples to construct estimates. Although the posterior probability is spread over a large number of feature matrices, 92.6% of sampled matrices had between 9 and 13 features. The modal number of represented features was m = 11, with 27.2% of the posterior mass. The posterior distribution over the number of features is shown in Figure 4(a). Since none of the existing literature has used the "approximate expectation" approach to find highly probable features, it is useful to note the strong similarities between Table 1(a) and Table 1(b), which reports the ten highest-probability features across the entire posterior distribution. Applying this approach to obtain an estimate of the posterior predictive similarities Ŝ_4 revealed that this matrix accounts for 97.4% of the variance in the data.

Table 2: Featural representation of the similarity between 16 countries. The table shows the eight highest-probability features extracted by the Bayesian ADCLUS model. Each row corresponds to a single feature, with the associated probability and saliency shown alongside. The average weight associated with the additive constant is 0.035.

  FEATURE                                                   PROB.   WEIGHT
  Italy, Germany, Spain                                     1.00    0.593
  Vietnam, China, Japan                                     1.00    0.421
  Germany, Russia, USA, China, Japan                        0.99    0.267
  Zimbabwe, Nigeria, Cuba, Jamaica                          0.62    0.467
  Zimbabwe, Nigeria, Philippines, Indonesia, Iraq, Libya    0.52    0.209
  Iraq, Libya                                               0.36    0.373
  Zimbabwe, Nigeria, Iraq                                   0.33    0.299
  Philippines, Indonesia, Libya                             0.25    0.311

Table 3: Featural representation of the perceptual similarity between 26 capital letters. The table shows the ten highest-probability features extracted by the Bayesian ADCLUS model. Each row corresponds to a single feature, with the associated probability and saliency shown alongside. The average weight associated with the additive constant is 0.003.

  FEATURE   PROB.   WEIGHT
  M N W     1.00    0.686
  I L T     0.99    0.341
  C G       0.99    0.623
  D O Q     0.99    0.321
  P R       0.99    0.465
  E F       0.99    0.653
  E H       0.99    0.322
  K X R     0.99    0.427
  B G       0.98    0.226
  C J U     0.92    0.225

Featural representations of countries. A second application is to human forced-choice judgments of the similarities between 16 countries [18]. In this task, participants were shown lists of four countries and asked to pick out the two countries most similar to each other. Applying the Bayesian model to these data with α = 0.1 reveals that only eight features appear in the representation more than 25% of the time. Given this, it is not surprising that the posterior distribution over the number of features, shown in Figure 4(b), indicates that the modal number of features is eight. The eight most probable features are listed in Table 2. The "approximate expectation" method explains 85.4% of the variance, as compared to the 78.1% found by a MAP feature approach [18]. The features are interpretable, corresponding to a range of geographical, historical, and economic regularities.
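Variance-accounted-for figures like the 85.4% above compare an estimated similarity matrix against a target. A small sketch follows; the paper does not spell out its exact formula, so the squared-correlation definition used here is an assumption on our part:

```python
import numpy as np

def variance_accounted_for(S, S_hat):
    """Proportion of variance in the off-diagonal entries of the target
    similarity matrix S explained by the estimate S_hat (assumed to be
    the squared correlation between the two sets of similarities)."""
    iu = np.triu_indices_from(S, k=1)      # unique off-diagonal pairs
    r = np.corrcoef(S[iu], S_hat[iu])[0, 1]
    return r ** 2
```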
Featural representations of letters. As a third example, we analyzed a somewhat larger data set, consisting of kindergarten children's assessment of the perceptual similarity of the 26 capital letters [19]. In this case, we used α = 0.05, and the Bayesian model accounted for 89.2% of the variance in the children's similarity judgments. The posterior distribution over the number of represented features is shown in Figure 4(c). Table 3 shows the ten features that appeared in more than 90% of samples from the posterior. The model recovers an extremely intuitive set of overlapping features. For example, it picks out the long strokes in I, L, and T, and the elliptical forms of D, O, and Q.

7 Discussion

Learning how similarity relations are represented is a difficult modeling problem. Additive clustering provides a framework for learning featural representations of stimulus similarity, but remains underused due to the difficulties associated with the inference. By adopting a Bayesian approach to additive clustering, we are able to obtain a richer characterization of the structure behind human similarity judgments. Moreover, by using nonparametric Bayesian techniques to place a prior distribution over infinite binary feature matrices via the Indian Buffet Process, we can allow the data to determine the number of features that the algorithm recovers. This is theoretically important as well as pragmatically useful. As noted by [16], people are capable of recognizing that individual stimuli possess an arbitrarily large number of characteristics, but in any particular context will make judgments using only a finite, usually small number of properties that form part of our current mental representation. In other words, by moving to a Bayesian nonparametric form, we are able to bring the ADCLUS model closer to the kinds of assumptions that are made by psychological theories.

Acknowledgements. TLG was supported by NSF grant number 0631518, and DJN by ARC grants DP0451793 and DP-0773794. We thank Nancy Briggs, Simon Dennis and Michael Lee for helpful comments on this work.

References
[1] W. S. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958.
[2] R. N. Shepard and P. Arabie. Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psychological Review, 86:87–123, 1979.
[3] J. B. Tenenbaum. Learning the structure of similarity. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 3–9. MIT Press, Cambridge, MA, 1996.
[4] L. L. Thurstone. Multiple-Factor Analysis. University of Chicago Press, Chicago, 1947.
[5] M. D. Lee. Generating additive clustering models with limited stochastic complexity. Journal of Classification, 19:69–85, 2002.
[6] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005.
[7] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
[8] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1092, 1953.
[9] M.-H. Chen, Q.-M. Shao, and J. G. Ibrahim. Monte Carlo Methods in Bayesian Computation. Springer, New York, 2000.
[10] M. K. Cowles and B. P. Carlin. Markov chain Monte Carlo convergence diagnostics: A comparative review.
Journal of the American Statistical Association, 91:833–904, 1996.
[11] P. Arabie and J. Douglas Carroll. MAPCLUS: A mathematical programming approach to fitting the ADCLUS model. Psychometrika, 45:211–235, 1980.
[12] W. Ruml. Constructing distributed representations using additive clustering. In Advances in Neural Information Processing Systems 14, Cambridge, MA, 2001. MIT Press.
[13] M. D. Lee and D. J. Navarro. Extending the ALCOVE model of category learning to featural stimulus domains. Psychonomic Bulletin and Review, 9:43–58, 2002.
[14] D. J. Navarro. Representing Stimulus Similarity. Ph.D. Thesis, University of Adelaide, 2003.
[15] L. E. Frank and W. J. Heiser. Feature selection in Feature Network Models: Finding predictive subsets of features with the Positive Lasso. British Journal of Mathematical and Statistical Psychology, in press.
[16] D. L. Medin and A. Ortony. Psychological essentialism. In Similarity and Analogical Reasoning. Cambridge University Press, New York, 1989.
[17] R. N. Shepard, D. W. Kilpatric, and J. P. Cunningham. The internal representation of numbers. Cognitive Psychology, 7:82–138, 1975.
[18] D. J. Navarro and M. D. Lee. Commonalities and distinctions in featural stimulus representations. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, pages 685–690, Mahwah, NJ, 2002. Lawrence Erlbaum.
[19] E. Z. Rothkopf. A measure of stimulus similarity and errors in some paired-associate learning tasks. Journal of Experimental Psychology, 53:94–101, 1957.
Differential Entropic Clustering of Multivariate Gaussians

Jason V. Davis and Inderjit Dhillon
Dept. of Computer Science
University of Texas at Austin
Austin, TX 78712
{jdavis,inderjit}@cs.utexas.edu

Abstract

Gaussian data is pervasive and many learning algorithms (e.g., k-means) model their inputs as a single sample drawn from a multivariate Gaussian. However, in many real-life settings, each input object is best described by multiple samples drawn from a multivariate Gaussian. Such data can arise, for example, in a movie review database where each movie is rated by several users, or in time-series domains such as sensor networks. Here, each input can be naturally described by both a mean vector and covariance matrix which parameterize the Gaussian distribution. In this paper, we consider the problem of clustering such input objects, each represented as a multivariate Gaussian. We formulate the problem using an information theoretic approach and draw several interesting theoretical connections to Bregman divergences and also Bregman matrix divergences. We evaluate our method across several domains, including synthetic data, sensor network data, and a statistical debugging application.

1 Introduction

Gaussian data is pervasive in all walks of life, and many learning algorithms (e.g., k-means, principal components analysis, linear discriminant analysis) model each input object as a single sample drawn from a multivariate Gaussian. For example, the k-means algorithm assumes that each input is a single sample drawn from one of k (unknown) isotropic Gaussians. The goal of k-means can be viewed as the discovery of the mean of each Gaussian and recovery of the generating distribution of each input object. However, in many real-life settings, each input object is naturally represented by multiple samples drawn from an underlying distribution. For example, a student's scores in reading, writing, and arithmetic can be measured at each of four quarters throughout the school year. Alternately, consider a website where movies are rated on the basis of originality, plot, and acting. Here, several different users may rate the same movie. Multiple samples are also ubiquitous in time-series data such as sensor networks, where each sensor device continually monitors its environmental conditions (e.g. humidity, temperature, or light).

Clustering is an important data analysis task used in many of these applications. For example, clustering sensor network devices has been used for optimizing routing of the network and also for discovering trends between sensor nodes. If the k-means algorithm is employed, then only the means of the distributions will be clustered, ignoring all second order covariance information. Clearly, a better solution is needed.

In this paper, we consider the problem of clustering input objects, each of which can be represented by a multivariate Gaussian distribution. The "distance" between two Gaussians can be quantified in an information theoretic manner, in particular by their differential relative entropy. Interestingly, the differential relative entropy between two multivariate Gaussians can be expressed as the convex combination of two Bregman divergences: a Mahalanobis distance between mean vectors and a Burg matrix divergence between the covariance matrices. We develop an EM style clustering algorithm and show that the optimal cluster parameters can be cheaply determined via a simple, closed-form solution.
Our algorithm is a Bregman-like clustering method that clusters both means and covariances of the distributions in a unified framework. We evaluate our method across several domains. First, we present results from synthetic data experiments, and show that incorporating second order information can dramatically increase clustering accuracy. Next, we apply our algorithm to a real-world sensor network dataset comprised of 52 sensor devices that measure temperature, humidity, light, and voltage. Finally, we use our algorithm as a statistical debugging tool by clustering the behavior of functions in a program across a set of known software bugs.

2 Preliminaries

We first present some essential background material. The multivariate Gaussian distribution is the multivariate generalization of the standard univariate case. The probability density function (pdf) of a d-dimensional multivariate Gaussian is parameterized by mean vector \mu and positive definite covariance matrix \Sigma:

p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\Big( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \Big),

where |\Sigma| is the determinant of \Sigma.

The Bregman divergence [2] with respect to \varphi is defined as

D_\varphi(x, y) = \varphi(x) - \varphi(y) - (x - y)^T \nabla\varphi(y),

where \varphi is a real-valued, strictly convex function defined over a convex set Q = dom(\varphi) \subseteq R^d such that \varphi is differentiable on the relative interior of Q. For example, if \varphi(x) = x^T x, then the resulting Bregman divergence is the standard squared Euclidean distance. Similarly, if \varphi(x) = x^T A^T A x for some arbitrary non-singular matrix A, then the resulting divergence is the Mahalanobis distance M_{S^{-1}}(x, y) = (x - y)^T S^{-1} (x - y), parameterized by the covariance matrix S, with S^{-1} = A^T A. Alternately, if \varphi(x) = \sum_i (x_i \log x_i - x_i), then the resulting divergence is the (unnormalized) relative entropy. Bregman divergences generalize many properties of squared loss and relative entropy.

Bregman divergences can be naturally extended to matrices, as follows:

D_\varphi(X, Y) = \varphi(X) - \varphi(Y) - \mathrm{tr}\big( (\nabla\varphi(Y))^T (X - Y) \big),

where X and Y are matrices, \varphi is a real-valued, strictly convex function defined over matrices, and tr(A) denotes the trace of A. Consider the function \varphi(X) = \|X\|_F^2. Then the corresponding Bregman matrix divergence is the squared Frobenius norm, \|X - Y\|_F^2. The Burg matrix divergence is generated from a function of the eigenvalues \lambda_1, ..., \lambda_d of the positive definite matrix X: \varphi(X) = -\sum_i \log \lambda_i = -\log |X|, the Burg entropy of the eigenvalues. The resulting Burg matrix divergence is:

B(X, Y) = \mathrm{tr}(X Y^{-1}) - \log |X Y^{-1}| - d.   (1)

As we shall see later, the Burg matrix divergence will arise naturally in our application. Let \lambda_1, ..., \lambda_d be the eigenvalues of X and v_1, ..., v_d the corresponding eigenvectors, and let \theta_1, ..., \theta_d be the eigenvalues of Y with eigenvectors w_1, ..., w_d. The Burg matrix divergence can also be written as

B(X, Y) = \sum_i \sum_j \frac{\lambda_i}{\theta_j} (v_i^T w_j)^2 - \sum_i \log \frac{\lambda_i}{\theta_i} - d.

From the first term above, we see that the Burg matrix divergence is a function of the eigenvalues as well as of the eigenvectors of X and Y.

The differential entropy of a continuous random variable x with probability density function f is defined as

h(f) = -\int f(x) \log f(x) \, dx.

It can be shown [3] that an n-bit quantization of a continuous random variable with pdf f has Shannon entropy approximately equal to h(f) + n. The continuous analog of the discrete relative entropy is the differential relative entropy. Given a random variable x with pdfs f and g, the differential relative entropy is defined as

D(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)} \, dx.
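As a concrete reference point for Equation (1), a minimal sketch of the Burg matrix divergence between two positive definite matrices (NumPy; the function name is ours):

```python
import numpy as np

def burg_divergence(X, Y):
    """Burg matrix divergence B(X, Y) = tr(X Y^{-1}) - log|X Y^{-1}| - d
    between positive definite d x d matrices X and Y (Equation 1)."""
    d = X.shape[0]
    M = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(M)   # log|X Y^{-1}|, numerically stable
    return np.trace(M) - logdet - d
```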
3 Clustering Multivariate Gaussians via Differential Relative Entropy

Given a set of n multivariate Gaussians parameterized by mean vectors m_1, ..., m_n and covariances S_1, ..., S_n, we seek a disjoint and exhaustive partitioning of these Gaussians into k different clusters, \pi_1, ..., \pi_k. Each cluster j can be represented by a multivariate Gaussian parameterized by mean \mu_j and covariance \Sigma_j. Using differential relative entropy as the distance measure between Gaussians, the problem of clustering may be posed as the minimization (over all clusterings) of

\sum_{j=1}^k \sum_{\{i : \pi_i = j\}} D\big( p(x \mid m_i, S_i) \,\|\, p(x \mid \mu_j, \Sigma_j) \big).   (2)

3.1 Differential Relative Entropy and Multivariate Gaussians

We first show that the differential relative entropy between two multivariate Gaussians can be expressed as a convex combination of a Mahalanobis distance between means and the Burg matrix divergence between covariance matrices. Consider two multivariate Gaussians, parameterized by mean vectors m and \mu, and covariances S and \Sigma, respectively. We first note that the differential relative entropy can be expressed as D(f \| g) = \int f \log f - \int f \log g = -h(f) - \int f \log g. The first term is just the negative differential entropy of p(x \mid m, S), which can be shown [3] to be:

h(p(x \mid m, S)) = \frac{d}{2} + \frac{1}{2} \log (2\pi)^d |S|.   (3)

We now consider the second term:

\int p(x \mid m, S) \log p(x \mid \mu, \Sigma)
  = \int p(x \mid m, S) \Big[ -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) - \log (2\pi)^{d/2} |\Sigma|^{1/2} \Big]
  = -\frac{1}{2} \int p(x \mid m, S) \, \mathrm{tr}\big( \Sigma^{-1} (x - \mu)(x - \mu)^T \big) - \int p(x \mid m, S) \log (2\pi)^{d/2} |\Sigma|^{1/2}
  = -\frac{1}{2} \mathrm{tr}\big( \Sigma^{-1} E[(x - \mu)(x - \mu)^T] \big) - \frac{1}{2} \log (2\pi)^d |\Sigma|
  = -\frac{1}{2} \mathrm{tr}\big( \Sigma^{-1} E[((x - m) + (m - \mu))((x - m) + (m - \mu))^T] \big) - \frac{1}{2} \log (2\pi)^d |\Sigma|
  = -\frac{1}{2} \mathrm{tr}\big( \Sigma^{-1} S + \Sigma^{-1} (m - \mu)(m - \mu)^T \big) - \frac{1}{2} \log (2\pi)^d |\Sigma|
  = -\frac{1}{2} \mathrm{tr}(\Sigma^{-1} S) - \frac{1}{2} (m - \mu)^T \Sigma^{-1} (m - \mu) - \frac{1}{2} \log (2\pi)^d |\Sigma|.

The expectation above is taken over the distribution p(x \mid m, S). The second to last line above follows from the definition of S = E[(x - m)(x - m)^T] and also from the fact that E[(x - m)(m - \mu)^T] = E[x - m](m - \mu)^T = 0. Thus, we have

D\big( p(x \mid m, S) \,\|\, p(x \mid \mu, \Sigma) \big)
  = -\frac{d}{2} - \frac{1}{2} \log (2\pi)^d |S| + \frac{1}{2} \mathrm{tr}(\Sigma^{-1} S) + \frac{1}{2} \log (2\pi)^d |\Sigma| + \frac{1}{2} (m - \mu)^T \Sigma^{-1} (m - \mu)   (4)
  = \frac{1}{2} \big( \mathrm{tr}(S \Sigma^{-1}) - \log |S \Sigma^{-1}| - d \big) + \frac{1}{2} (m - \mu)^T \Sigma^{-1} (m - \mu)
  = \frac{1}{2} B(S, \Sigma) + \frac{1}{2} M_{\Sigma^{-1}}(m, \mu),   (5)

where B(S, \Sigma) is the Burg matrix divergence and M_{\Sigma^{-1}}(m, \mu) is the Mahalanobis distance, parameterized by the covariance matrix \Sigma.

We now consider the problem of finding the optimal representative Gaussian for a set of c Gaussians with means m_1, ..., m_c and covariances S_1, ..., S_c. For non-negative weights \alpha_1, ..., \alpha_c such that \sum_i \alpha_i = 1, the optimal representative minimizes the cumulative differential relative entropy:

p(x \mid \mu^*, \Sigma^*) = \arg\min_{p(x \mid \mu, \Sigma)} \sum_i \alpha_i D\big( p(x \mid m_i, S_i) \,\|\, p(x \mid \mu, \Sigma) \big)   (6)
  = \arg\min_{p(x \mid \mu, \Sigma)} \sum_i \alpha_i \Big[ \frac{1}{2} B(S_i, \Sigma) + \frac{1}{2} M_{\Sigma^{-1}}(m_i, \mu) \Big].   (7)

The second term can be viewed as minimizing the Bregman information with respect to some fixed (albeit unknown) Bregman divergence (i.e. the Mahalanobis distance parameterized by some covariance matrix \Sigma). Consequently, it has a unique minimizer [1] of the form

\mu^* = \sum_i \alpha_i m_i.   (8)

Next, we note that equation (7) is strictly convex in \Sigma^{-1}. Thus, we can derive the optimal \Sigma^* by setting the gradient of (7) with respect to \Sigma^{-1} to 0:

\frac{\partial}{\partial \Sigma^{-1}} \sum_{i=1}^n \alpha_i D\big( p(x \mid m_i, S_i) \,\|\, p(x \mid \mu, \Sigma) \big) = \sum_{i=1}^n \alpha_i \big[ S_i - \Sigma + (m_i - \mu^*)(m_i - \mu^*)^T \big].

Setting this to zero yields

\Sigma^* = \sum_i \alpha_i \big[ S_i + (m_i - \mu^*)(m_i - \mu^*)^T \big].   (9)
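Equations (5), (8) and (9) translate directly into code. A small sketch follows (NumPy; the function names are ours), computing the Gaussian-to-Gaussian divergence and the closed-form representative:

```python
import numpy as np

def gaussian_kl(m, S, mu, Sigma):
    """D(p(x|m,S) || p(x|mu,Sigma)) via Equation (5): half the Burg matrix
    divergence plus half the Mahalanobis distance parameterized by Sigma."""
    d = len(m)
    Sigma_inv = np.linalg.inv(Sigma)
    M = S @ Sigma_inv
    burg = np.trace(M) - np.linalg.slogdet(M)[1] - d
    diff = m - mu
    return 0.5 * burg + 0.5 * diff @ Sigma_inv @ diff

def optimal_representative(means, covs, alphas):
    """Closed-form optimal representative of Equations (8)-(9).
    means: (c, d), covs: (c, d, d), alphas: (c,) nonnegative, summing to 1."""
    mu = alphas @ means                                      # Equation (8)
    diffs = means - mu
    Sigma = np.einsum('i,ijk->jk', alphas, covs) \
          + np.einsum('i,ij,ik->jk', alphas, diffs, diffs)   # Equation (9)
    return mu, Sigma
```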
Figure 1 illustrates optimal representatives of two 2-dimensional Gaussians with means marked by points A and B, and covariances outlined with solid lines. The optimal Gaussian representatives are denoted with dotted covariances; the representative on the left uses weights (\alpha_A = 2/3, \alpha_B = 1/3), while the representative on the right uses weights (\alpha_A = 1/3, \alpha_B = 2/3). As we can see from equation (8), the optimal representative mean is the weighted average of the means of the constituent Gaussians. Interestingly, the optimal covariance turns out to be the average of the constituent covariances plus rank one updates. These rank-one changes account for the deviations from the individual means to the representative mean.

Figure 1: Optimal Gaussian representatives (shown with dotted lines) of two Gaussians centered at A and B (for two different sets of weights). While the optimal mean of each representative is the average of the individual means, the optimal covariance is the average of the individual covariances plus rank-one corrections.

3.2 Algorithm

Algorithm 1 presents our clustering algorithm for the case where each Gaussian has equal weight \alpha_i = 1/n. The method works in an EM-style framework. Initially, cluster assignments are chosen (these can be assigned randomly). The algorithm then proceeds iteratively, until convergence. First, the mean and covariance parameters for the cluster representative distributions are optimally computed given the cluster assignments. These parameters are updated as shown in (8) and (9). Next, the cluster assignments \pi are updated for each input Gaussian. This is done by assigning the i-th Gaussian to the cluster j with representative Gaussian that is closest in differential relative entropy.

Since both of these steps are locally optimal, convergence of the algorithm to a local optimum can be shown. Note that the problem is NP-hard, so convergence to a global optimum cannot be guaranteed.

We next consider the running time of Algorithm 1 when the input Gaussians are d-dimensional. Lines 6 and 9 compute the optimal means and covariances for each cluster, which requires O(nd) and O(nd^2) total work, respectively. Line 12 computes the differential relative entropy between each input Gaussian and each cluster representative Gaussian. As only the arg min over all \Sigma_j is needed, we can reduce the Burg matrix divergence computation (equation (1)) to tr(S_i \Sigma_j^{-1}) - \log |\Sigma_j^{-1}|. Once the inverse of each cluster covariance is computed (for a cost of O(kd^3)), the first term can be computed in O(d^2) time. The second term can similarly be computed once for each cluster for a total cost of O(kd^3). Computing the Mahalanobis distance is an O(d^2) operation. Thus, the total cost of line 12 is O(kd^3 + nkd^2), and the total running time of the algorithm, given \tau iterations, is O(\tau k d^2 (n + d)).

Algorithm 1: Differential Entropic Clustering of Multivariate Gaussians
 1: {m_1, ..., m_n} <- means of input Gaussians
 2: {S_1, ..., S_n} <- covariance matrices of input Gaussians
 3: \pi <- initial cluster assignments
 4: while not converged do
 5:   for j = 1 to k do {update cluster means}
 6:     \mu_j <- (1 / |{i : \pi_i = j}|) \sum_{i : \pi_i = j} m_i
 7:   end for
 8:   for j = 1 to k do {update cluster covariances}
 9:     \Sigma_j <- (1 / |{i : \pi_i = j}|) \sum_{i : \pi_i = j} [ S_i + (m_i - \mu_j)(m_i - \mu_j)^T ]
10:   end for
11:   for i = 1 to n do {assign each Gaussian to the closest cluster representative Gaussian}
12:     \pi_i <- argmin_{1 <= j <= k} B(S_i, \Sigma_j) + M_{\Sigma_j^{-1}}(m_i, \mu_j)  {B is the Burg matrix divergence and M_{\Sigma_j^{-1}} is the Mahalanobis distance parameterized by \Sigma_j}
13:   end for
14: end while
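A direct NumPy sketch of Algorithm 1 follows. The function and variable names are ours, and the re-seeding of empty clusters is an added practical detail the pseudocode does not address:

```python
import numpy as np

def entropic_cluster_gaussians(means, covs, k, max_iter=100, seed=0):
    """EM-style clustering of n input Gaussians into k clusters, following
    Algorithm 1: closed-form updates (8)-(9) for the representatives, then
    reassignment by differential relative entropy (Equation 5, with the
    constant 1/2 factors dropped, as in line 12)."""
    n, d = means.shape
    rng = np.random.default_rng(seed)
    assign = rng.integers(k, size=n)          # random initial assignments
    for _ in range(max_iter):
        reps = []
        for j in range(k):
            idx = np.flatnonzero(assign == j)
            if idx.size == 0:                 # re-seed an empty cluster
                idx = rng.integers(n, size=1)
            mu = means[idx].mean(axis=0)      # Equation (8)
            diffs = means[idx] - mu
            Sigma = covs[idx].mean(axis=0) + diffs.T @ diffs / idx.size  # Eq. (9)
            reps.append((mu, np.linalg.inv(Sigma)))
        new_assign = np.empty(n, dtype=int)
        for i in range(n):
            best, best_cost = 0, np.inf
            for j, (mu, Sig_inv) in enumerate(reps):
                M = covs[i] @ Sig_inv
                cost = (np.trace(M) - np.linalg.slogdet(M)[1] - d
                        + (means[i] - mu) @ Sig_inv @ (means[i] - mu))
                if cost < best_cost:
                    best, best_cost = j, cost
            new_assign[i] = best
        if np.array_equal(new_assign, assign):   # converged
            break
        assign = new_assign
    return assign
```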
4 Experiments

We now present experimental results for our algorithm across three different domains: a synthetic dataset, sensor network data, and a statistical debugging application.

4.1 Synthetic Data

Our synthetic datasets consist of a set of 200 objects, each of which consists of 30 samples drawn from one of k randomly generated d-dimensional multivariate Gaussians. The k Gaussians are generated by choosing a mean vector uniformly at random from the unit simplex and randomly selecting a covariance matrix from the set of matrices with eigenvalues 1, 2, ..., d.

Figure 2: Clustering quality of synthetic data. Traditional k-means clustering uses only first-order information (i.e. the mean), whereas our Gaussian clustering algorithm also incorporates second-order covariance information. Here, we see that our algorithm achieves higher clustering quality for datasets composed of four-dimensional Gaussians with varied number of clusters (left), as well as for varied dimensionality of the input Gaussians with k = 5 (right).

In Figure 2, we compare our algorithm to the k-means algorithm, which clusters each object solely on the mean of the samples. Accuracy is quantified in terms of normalized mutual information (NMI) between discovered clusters and the true clusters, a standard technique for determining the quality of clusters. Figure 2 (left) shows the clustering quality as a function of the number of clusters when the dimensionality of the input Gaussians is fixed (d = 4). Figure 2 (right) gives clustering quality for five clusters across a varying number of dimensions. All results represent averaged NMI values across 50 experiments. As can be seen in Figure 2, our multivariate Gaussian clustering algorithm yields significantly higher NMI values than k-means for all experiments.
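Under the stated construction (means uniform on the simplex, covariances with eigenvalues 1, ..., d), one plausible sampling routine is sketched below. The Dirichlet draw and the QR-based random orthonormal basis are standard choices of ours; the paper does not give its exact sampling code:

```python
import numpy as np

def sample_synthetic_gaussians(k, d, rng=None):
    """Generate the k random d-dimensional Gaussians of Section 4.1."""
    rng = rng or np.random.default_rng()
    params = []
    for _ in range(k):
        mu = rng.dirichlet(np.ones(d))                    # uniform over the unit simplex
        Q, _ = np.linalg.qr(rng.normal(size=(d, d)))      # random orthonormal basis
        Sigma = Q @ np.diag(np.arange(1.0, d + 1)) @ Q.T  # eigenvalues 1, 2, ..., d
        params.append((mu, Sigma))
    return params
```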
4.2 Sensor Networks

Sensor networks are wireless networks composed of small, low-cost sensors that monitor their surrounding environment. An open question in sensor networks research is how to minimize communication costs between the sensors and the base station: wireless communication requires a relatively large amount of power, a limited resource on current sensor devices (which are usually battery powered). A recently proposed sensor network system, BBQ [4], reduces communication costs by modelling sensor network data at each sensor device using a time-varying multivariate Gaussian and transmitting only model parameters to the base station.

We apply our multivariate Gaussian clustering algorithm to cluster sensor devices from the Intel Lab at Berkeley [8]. Clustering has been used in sensor network applications to determine efficient routing schemes, as well as for discovering trends between groups of sensor devices. The Intel sensor network consists of 52 working sensors, each of which monitors ambient temperature, humidity, light levels, and voltage every thirty seconds. Conditioned on time, the sensor readings can be fit quite well by a multivariate Gaussian. Figure 3 shows the results of our multivariate Gaussian clustering algorithm applied to this sensor network data. For each device, we compute the sample mean and covariance from sensor readings between noon and 2pm each day, for 36 total days. To account for varying scales of measurement, we normalize all variables to have unit variance.

The second cluster (denoted by "2" in Figure 3) has the largest variance among all clusters: many of the sensors for this cluster are located in high traffic areas, including the large conference room at the top of the lab, and the smaller tables in the bottom of the lab. Since the measurements were taken during lunchtime, we expect higher traffic in these areas. Interestingly, this cluster shows very high co-variation between humidity and voltage. Cluster one is characterized by high temperatures, which is not surprising, as there are several windows on the left side of the lab. This window faces west and has an unobstructed view of the ocean. Finally, cluster three has a moderate level of total variation, with relatively low light levels. The cluster is primarily located in the center and the right of the lab, away from outside windows.

Figure 3: To reduce communication costs in sensor networks, each sensor device may be modelled by a multivariate Gaussian. The above plot shows the results of applying our algorithm to cluster sensors into three groups, denoted by labels "1", "2", and "3".

4.3 Statistical Debugging

Leveraging program runtime statistics for the purpose of software debugging has received recent research attention [12]. Here we apply our algorithm to cluster functional behavior patterns over software bugs in the LaTeX document preparation program. The data is taken from the Navel system [7], a system that uses machine learning to provide better error messaging. The dataset contains four software bugs, each of which is caused by an unsuccessful LaTeX compilation (e.g. specifying an incorrect number of columns in an array environment) with ambiguous or unclear error messages provided. LaTeX has notoriously cryptic error messages for document compilation failures; for example, the message "LaTeX Error: There's no line here to end" can be caused by numerous problems in the source document.

Each function in the program's source is measured by the frequency with which it is called across each of the four software bugs. We model this distribution as a 4-dimensional multivariate Gaussian, one dimension for each bug. The distributions are estimated from a set of samples; each sample corresponds to a single LaTeX file drawn from a set of grant proposals and submitted computer science research papers. For each file and for each of the four bugs, the LaTeX compiler is executed over a slightly modified version of the file that has been changed to exhibit the bug. During program execution, function counts are measured and recorded. More details can be found in [7].

Clustering these function counts can yield important debugging insight to assist a software engineer in understanding error dependent program behavior. Figure 4 shows three covariance matrices from a sample clustering of eight clusters. To capture the dependencies between bugs, we normalize each input Gaussian to have zero mean and unit variance. Cluster (a) represents functions that are highly error independent, i.e. the matrix shows high levels of covariation among all pairs of error classes. Conversely, clusters (b) and (c) show that some functions are highly error dependent.
Cluster (b) shows a high dependency between bugs 1 and 4, while cluster (c) exhibits high covariation between bugs 1 and 3, and between bugs 2 and 4.

Figure 4: Covariance matrices for three clusters discovered by clustering functional behavior of the LaTeX document preparation program. Cluster (a) corresponds to functions which are error-independent, while clusters (b) and (c) represent two groups of functions that exhibit different types of error dependent behavior.

  (a)  1.00 0.94 0.94 0.94    (b)  1.00 0.58 0.58 0.91    (c)  1.00 0.58 0.95 0.58
       0.94 1.00 0.94 0.94         0.58 1.00 0.55 0.67         0.58 1.00 0.58 0.95
       0.94 0.94 1.00 0.94         0.58 0.55 1.00 0.68         0.95 0.58 1.00 0.58
       0.94 0.94 0.94 1.00         0.91 0.67 0.68 1.00         0.58 0.95 0.58 1.00

5 Related Work

In this work, we showed that the differential relative entropy between two multivariate Gaussian distributions can be expressed as a convex combination of the Mahalanobis distance between their mean vectors and the Burg matrix divergence between their covariances. This is in contrast to information theoretic clustering [5], where each input is taken to be a probability distribution over some finite set. In [5], no parametric form is assumed, and the Kullback-Leibler divergence (i.e. discrete relative entropy) can be computed directly from the distributions. The differential entropy between two multivariate Gaussians was considered in [10] in the context of solving Gaussian mixture models. Although an algebraic expression for this differential entropy was given in [10], no connection to the Burg matrix divergence was made there.

Our algorithm is based on the standard expectation-maximization style clustering algorithm [6]. Although the closed-form updates used by our algorithm are similar to those employed by a Bregman clustering algorithm [1], we note that the computation of the optimal covariance matrix (equation (9)) involves the optimal mean vector. In [9], the problem of clustering Gaussians with respect to the symmetric differential relative entropy, D(f \| g) + D(g \| f), is considered in the context of learning HMM parameters for speech recognition. The resulting algorithm, however, is much more computationally expensive than ours; whereas in our method, the optimal means and covariance parameters can be computed via a simple closed form solution, in [9] no such solution is presented and an iterative method must instead be employed. The problem of finding the optimal Gaussian with respect to the first argument (note that equation (6) is minimized with respect to the second argument) is considered in [11] for the problem of speaker interpolation. Here, only one source is assumed, and thus clustering is not needed.

6 Conclusions

We have presented a new algorithm for the problem of clustering multivariate Gaussian distributions. Our algorithm is derived in an information theoretic context, which leads to interesting connections with the differential entropy between multivariate Gaussians, and Bregman divergences. Unlike existing clustering algorithms, our algorithm optimizes both first and second order information in the data. We have demonstrated the use of our method on sensor network data and a statistical debugging application.

References
[1] A. Banerjee, S. Merugu, I. Dhillon, and S. Ghosh. Clustering with Bregman divergences. In SIAM International Conference on Data Mining, pages 234–245, 2004.
[2] L. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7:200–217, 1967.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications, 1991.
[4] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-based approximate querying in sensor networks. International Journal of Very Large Data Bases, 2005.
[5] I. Dhillon, S. Mallela, and R. Kumar. A divisive information-theoretic feature clustering algorithm for text classification. Journal of Machine Learning Research, 3:1265–1287, 2003.
[6] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley and Sons, Inc., 2001.
[7] J. Ha, H. Ramadan, J. Davis, C. Rossbach, I. Roy, and E. Witchel. Navel: Automating software support by classifying program behavior. Technical Report TR-06-11, University of Texas at Austin, 2006.
[8] S. Madden. Intel lab data. http://berkeley.intel-research.net/labdata, 2004.
[9] T. Myrvoll and F. Soong. On divergence based clustering of normal distributions and its application to HMM adaptation. In Eurospeech, pages 1517–1520, 2003.
[10] Y. Singer and M. Warmuth. Batch and on-line parameter estimation of Gaussian mixtures based on the joint entropy. In Neural Information Processing Systems, 1998.
[11] T. Yoshimura, T. Masuko, K. Tokuda, T. Kobayashi, and T. Kitamura. Speaker interpolation in HMM-based speech synthesis. In European Conference on Speech Communication and Technology, 1997.
[12] A. Zheng, M. Jordan, B. Liblit, and A. Aiken. Statistical debugging of sampled programs. In Neural Information Processing Systems, 2004.
High-Dimensional Graphical Model Selection Using ℓ1-Regularized Logistic Regression

Martin J. Wainwright
Department of Statistics, Department of EECS
Univ. of California, Berkeley
Berkeley, CA 94720

Pradeep Ravikumar
Machine Learning Dept.
Carnegie Mellon Univ.
Pittsburgh, PA 15213

John D. Lafferty
Computer Science Dept., Machine Learning Dept.
Carnegie Mellon Univ.
Pittsburgh, PA 15213

Abstract

We focus on the problem of estimating the graph structure associated with a discrete Markov random field. We describe a method based on ℓ1-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an ℓ1-constraint. Our framework applies to the high-dimensional setting, in which both the number of nodes p and maximum neighborhood sizes d are allowed to grow as a function of the number of observations n. Our main result is to establish sufficient conditions on the triple (n, p, d) for the method to succeed in consistently estimating the neighborhood of every node in the graph simultaneously. Under certain mutual incoherence conditions analogous to those imposed in previous work on linear regression, we prove that consistent neighborhood selection can be obtained as long as the number of observations n grows more quickly than 6d^6 \log d + 2d^5 \log p, thereby establishing that logarithmic growth in the number of samples n relative to graph size p is sufficient to achieve neighborhood consistency.

Keywords: Graphical models; Markov random fields; structure learning; ℓ1-regularization; model selection; convex risk minimization; high-dimensional asymptotics; concentration.

1 Introduction

Consider a p-dimensional discrete random variable X = (X_1, X_2, ..., X_p) where the distribution of X is governed by an unknown undirected graphical model. In this paper, we investigate the problem of estimating the graph structure from an i.i.d. sample of n data points \{x^{(i)} = (x_1^{(i)}, ..., x_p^{(i)})\}_{i=1}^n. This structure learning problem plays an important role in a broad range of applications where graphical models are used as a probabilistic representation tool, including image processing, document analysis and medical diagnosis.

Our approach is to perform an ℓ1-regularized logistic regression of each variable on the remaining variables, and to use the sparsity pattern of the regression vector to infer the underlying neighborhood structure. The main contribution of the paper is a theoretical analysis showing that, under suitable conditions, this procedure recovers the true graph structure with probability one, in the high-dimensional setting in which both the sample size n and graph size p = p(n) increase to infinity.

The problem of structure learning for discrete graphical models, due to both its importance and difficulty, has attracted considerable attention. Constraint based approaches use hypothesis testing to estimate the set of conditional independencies in the data, and then determine a graph that most closely represents those independencies [8]. An alternative approach is to view the problem as estimation of a stochastic model, combining a scoring metric on candidate graph structures with a goodness of fit measure to the data. The scoring metric approach must be used together with a search procedure that generates candidate graph structures to be scored. The combinatorial space of graph structures is super-exponential, however, and Chickering [1] shows that this problem is in general NP-hard.
The space of candidate structures in scoring based approaches is typically restricted to directed models (Bayesian networks), since the computation of typical score metrics involves computing the normalization constant of the graphical model distribution, which is intractable for general undirected models. Estimation of graph structures in undirected models has thus largely been restricted to simple graph classes such as trees [2], polytrees [3] and hypertrees [9].

The technique of ℓ1-regularization for estimation of sparse models or signals has a long history in many fields; we refer to Tropp [10] for a recent survey. A surge of recent work has shown that ℓ1-regularization can lead to practical algorithms with strong theoretical guarantees (e.g., [4, 5, 6, 10, 11, 12]). In this paper, we adapt the technique of ℓ1-regularized logistic regression to the problem of inferring graph structure. The technique is computationally efficient and thus well-suited to high dimensional problems, since it involves the solution only of standard convex programs. Our main result establishes conditions on the sample size n, graph size p and maximum neighborhood size d under which the true neighborhood structure can be inferred with probability one as (n, p, d) increase. Our analysis, though asymptotic in nature, leads to growth conditions that are sufficiently weak so as to require only that the number of observations n grow logarithmically in terms of the graph size. Consequently, our results establish that graphical structure can be learned from relatively sparse data. Our analysis and results are similar in spirit to the recent work of Meinshausen and Bühlmann [5] on covariance selection in Gaussian graphical models, but focusing rather on the case of discrete models.

The remainder of this paper is organized as follows. In Section 2, we formulate the problem and establish notation, before moving on to a precise statement of our main result, and a high-level proof outline in Section 3. Sections 4 and 5 detail the proof, with some technical details deferred to the full-length version. Finally, we provide experimental results and a concluding discussion in Section 6.

2 Problem Formulation and Notation

Let G = (V, E) denote a graph with vertex set V of size |V| = p and edge set E. We denote by N(s) the set of neighbors of a vertex s ∈ V; that is, N(s) = \{t \in V : (s, t) \in E\}. A pairwise graphical model with graph G is a family of probability distributions for a random variable X = (X_1, X_2, ..., X_p) given by p(x) \propto \prod_{(s,t) \in E} \psi_{st}(x_s, x_t). In this paper, we restrict our attention to the case where each x_s ∈ {0, 1} is binary, and the family of probability distributions is given by the Ising model

p(x; \theta) = \exp\Big( \sum_{s \in V} \theta_s x_s + \sum_{(s,t) \in E} \theta_{st} x_s x_t - \Psi(\theta) \Big).   (1)

Given such an exponential family in a minimal representation, the log partition function \Psi(\theta) is strictly convex, which ensures that the parameter matrix \theta is identifiable.
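To make Equation (1) concrete, a minimal sketch of the unnormalized Ising log-probability; the symmetric-matrix encoding of the edge parameters is our own convention:

```python
import numpy as np

def ising_logp_unnormalized(x, theta_node, theta_edge):
    """Unnormalized log-probability of the Ising model (Equation 1):
    sum_s theta_s x_s + sum_{(s,t) in E} theta_st x_s x_t.
    theta_edge is a symmetric p x p matrix with zero diagonal whose
    nonzero pattern encodes the edge set E; the factor 1/2 corrects
    for counting each edge twice in the quadratic form."""
    return theta_node @ x + 0.5 * x @ theta_edge @ x
```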
We address the following problem of graph learning. Given n samples x^{(i)} ∈ {0, 1}^p drawn from an unknown distribution p(x; \theta^*) of the form (1), let \hat{E}_n be an estimated set of edges. Our set-up includes the important situation in which the number of variables p may be large relative to the sample size n. In particular, we allow the graph G_n = (V_n, E_n) to vary with n, so that the number of variables p = |V_n| and the sizes of the neighborhoods d_s := |N(s)| may vary with sample size. (For notational clarity we will sometimes omit subscripts indicating a dependence on n.) The goal is to construct an estimator \hat{E}_n for which P[\hat{E}_n = E_n] \to 1 as n \to \infty. Equivalently, we consider the problem of estimating neighborhoods \hat{N}_n(s) \subseteq V_n so that P[\hat{N}_n(s) = N(s), \forall s \in V_n] \to 1.

For many problems of interest, the graphical model provides a compact representation where the sizes of the neighborhoods are typically small, say d_s \ll p for all s ∈ V. Our goal is to use ℓ1-regularized logistic regression to estimate these neighborhoods; for this paper, the actual values of the parameters \theta_{ij} are a secondary concern.

Given input data \{(z^{(i)}, y^{(i)})\}, where z^{(i)} is a p-dimensional covariate and y^{(i)} ∈ {0, 1} is a binary response, logistic regression involves minimizing the negative log likelihood

f_s(\theta; x) = \frac{1}{n} \sum_{i=1}^n \Big\{ \log(1 + \exp(\theta^T z^{(i)})) - y^{(i)} \theta^T z^{(i)} \Big\}.   (2)

We focus on a regularized version of this regression problem, involving an ℓ1-constraint on (a subset of) the parameter vector \theta. For convenience, we assume that z_1^{(i)} = 1 is a constant so that \theta_1 is a bias term, which is not regularized; we denote by \theta_{\setminus s} the vector of all coefficients of \theta except the one in position s. For the graph learning task, we regress each variable X_s onto the remaining variables, sharing the same data x^{(i)} across problems. This leads to the following collection of optimization problems (p in total, one for each graph node):

\hat{\theta}^{s,\lambda} = \arg\min_{\theta \in R^p} \Big\{ \frac{1}{n} \sum_{i=1}^n \big[ \log(1 + \exp(\theta^T z^{(i,s)})) - x_s^{(i)} \theta^T z^{(i,s)} \big] + \lambda_n \|\theta_{\setminus s}\|_1 \Big\},   (3)

where s ∈ V, and z^{(i,s)} ∈ {0, 1}^p denotes the vector where z_t^{(i,s)} = x_t^{(i)} for t ≠ s and z_s^{(i,s)} = 1. The parameter \theta_s acts as a bias term, and is not regularized. Thus, the quantity \hat{\theta}_t^{s,\lambda} can be thought of as a penalized conditional likelihood estimate of \theta_{s,t}. Our estimate of the neighborhood N(s) is then given by

\hat{N}_n(s) = \big\{ t \in V, \; t \neq s : \hat{\theta}_t^{s,\lambda} \neq 0 \big\}.

Our goal is to provide conditions on the graphical model, in particular, relations among the number of nodes p, number of observations n and maximum node degree d, that ensure that the collection of neighborhood estimates (2), one for each node s of the graph, is consistent with high probability.

We conclude this section with some additional notation that is used throughout the sequel. Defining the probability p(z^{(i,s)}; \theta) := [1 + \exp(-\theta^T z^{(i,s)})]^{-1}, straightforward calculations yield the gradient and Hessian, respectively, of the negative log likelihood (2):

\nabla_\theta f_s(\theta; x) = \frac{1}{n} \sum_{i=1}^n p(z^{(i,s)}; \theta) \, z^{(i,s)} - \frac{1}{n} \sum_{i=1}^n x_s^{(i)} z^{(i,s)},   (4a)

\nabla^2_\theta f_s(\theta; x) = \frac{1}{n} \sum_{i=1}^n p(z^{(i,s)}; \theta) \, [1 - p(z^{(i,s)}; \theta)] \, z^{(i,s)} (z^{(i,s)})^T.   (4b)

Finally, for ease of notation, we make frequent use of the shorthand Q_s(\theta) = \nabla^2 f_s(\theta; x).
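The per-node estimator (3) maps directly onto off-the-shelf ℓ1-penalized logistic regression. A sketch using scikit-learn follows; this library choice is an assumption on our part (the paper solves a generic convex program), note that liblinear additionally penalizes the intercept (a small departure from (3)), and the regularization is mapped as C = 1/(n λ_n):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_neighborhoods(X, lam):
    """For each node s, solve an l1-regularized logistic regression of
    X_s on the remaining variables (Equation 3) and report the indices
    with nonzero coefficients as the estimated neighborhood N_hat(s).
    Assumes each column of the binary (n, p) matrix X takes both values."""
    n, p = X.shape
    neighborhoods = {}
    for s in range(p):
        Z = np.delete(X, s, axis=1)          # covariates z^{(i,s)}; bias via intercept
        clf = LogisticRegression(penalty='l1', solver='liblinear',
                                 C=1.0 / (n * lam), fit_intercept=True)
        clf.fit(Z, X[:, s])
        others = [t for t in range(p) if t != s]
        neighborhoods[s] = {others[j] for j in
                            np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)}
    return neighborhoods
```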
We conclude this section with some additional notation that is used throughout the sequel. Defining the probability $p(z^{(i,s)}; \theta) := [1 + \exp(-\theta^T z^{(i,s)})]^{-1}$, straightforward calculations yield the gradient and Hessian, respectively, of the negative log likelihood (2):

$\nabla_\theta f_s(\theta; x) = \frac{1}{n} \sum_{i=1}^n p(z^{(i,s)}; \theta)\, z^{(i,s)} - \frac{1}{n} \sum_{i=1}^n x_s^{(i)} z^{(i,s)},$   (4a)

$\nabla^2_\theta f_s(\theta; x) = \frac{1}{n} \sum_{i=1}^n p(z^{(i,s)}; \theta)\, [1 - p(z^{(i,s)}; \theta)]\, z^{(i,s)} (z^{(i,s)})^T.$   (4b)

Finally, for ease of notation, we make frequent use of the shorthand $Q_s(\theta) = \nabla^2 f_s(\theta; x)$.

3 Main Result and Outline of Analysis

In this section, we begin with a precise statement of our main result, and then provide a high-level overview of the key steps involved in its proof.

3.1 Statement of main result

We begin by stating the assumptions that underlie our main result. A subset of the assumptions involve the Fisher information matrix associated with the logistic regression model, defined for each node $s \in V$ as

$Q^*_s = E\big[ p_s(Z; \theta^*) \{1 - p_s(Z; \theta^*)\}\, Z Z^T \big].$   (5)

Note that $Q^*_s$ is the population average of the Hessian $Q_s(\theta^*)$. For ease of notation we use S to denote the neighborhood N(s), and $S^c$ to denote the complement $V - N(s)$. Our first two assumptions (A1 and A2) place restrictions on the dependency and coherence structure of this Fisher information matrix. We note that these first two assumptions are analogous to conditions imposed in previous work [5, 10, 11, 12] on linear regression. Our third assumption is a growth rate condition on the triple (n, p, d).

[A1] Dependency condition: We require that the subset of the Fisher information matrix corresponding to the relevant covariates has bounded eigenvalues: namely, there exist constants $C_{\min} > 0$ and $C_{\max} < +\infty$ such that

$C_{\min} \leq \Lambda_{\min}(Q^*_{SS})$, and $\Lambda_{\max}(Q^*_{SS}) \leq C_{\max}$.   (6)

These conditions ensure that the relevant covariates do not become overly dependent, and can be guaranteed (for instance) by assuming that $\hat{\theta}^{s,\lambda}$ lies within a compact set.

[A2] Incoherence condition: Our next assumption captures the intuition that the large number of irrelevant covariates (i.e., non-neighbors of node s) cannot exert an overly strong effect on the subset of relevant covariates (i.e., neighbors of node s). To formalize this intuition, we require the existence of an $\epsilon \in (0, 1]$ such that

$\| Q^*_{S^c S} (Q^*_{SS})^{-1} \|_\infty \leq 1 - \epsilon.$   (7)

Analogous conditions are required for the success of the Lasso in the case of linear regression [5, 10, 11, 12].

[A3] Growth rates: Our second set of assumptions involves the growth rates of the number of observations n, the graph size p, and the maximum node degree d. In particular, we require that:

$\frac{n}{d^5} - 6 d \log(d) - 2 \log(p) \to +\infty.$   (8)

Note that this condition allows the graph size p to grow exponentially with the number of observations (i.e., $p(n) = \exp(n^\alpha)$ for some $\alpha \in (0, 1)$). Moreover, it is worthwhile noting that for model selection in graphical models, one is typically interested in node degrees d that remain bounded (e.g., d = O(1)), or grow only weakly with graph size (say d = o(log p)).

With these assumptions, we now state our main result:

Theorem 1. Given a graphical model and triple (n, p, d) such that conditions A1 through A3 are satisfied, suppose that the regularization parameter $\lambda_n$ is chosen such that (a) $n\lambda_n^2 - 2\log(p) \to +\infty$, and (b) $d\lambda_n \to 0$. Then $P[\hat{N}_n(s) = N(s), \ \forall s \in V_n] \to 1$ as $n \to +\infty$.

3.2 Outline of analysis

We now provide a high-level roadmap of the main steps involved in our proof of Theorem 1. Our approach is based on the notion of a primal witness: in particular, focusing our attention on a fixed node $s \in V$, we define a constructive procedure for generating a primal vector $\hat{\theta} \in \mathbb{R}^p$ as well as a corresponding subgradient $\hat{z} \in \mathbb{R}^p$ that together satisfy the zero-subgradient optimality conditions associated with the convex program (3). We then show that this construction succeeds with probability converging to one under the stated conditions. A key fact is that the convergence rate is sufficiently fast that a simple union bound over all graph nodes shows that we achieve consistent neighborhood estimation for all nodes simultaneously.

To provide some insight into the nature of our construction, the analysis in Section 4 shows that the neighborhood N(s) is correctly recovered if and only if the pair $(\hat{\theta}, \hat{z})$ satisfies the following four conditions: (a) $\hat{\theta}_{S^c} = 0$; (b) $|\hat{\theta}_t| > 0$ for all $t \in S$; (c) $\hat{z}_S = \mathrm{sgn}(\theta^*_S)$; and (d) $\|\hat{z}_{S^c}\|_\infty < 1$. The first step in our construction is to choose the pair $(\hat{\theta}, \hat{z})$ such that both conditions (a) and (c) hold. The remainder of the analysis is then devoted to establishing that properties (b) and (d) hold with high probability.

In the first part of our analysis, we assume that the dependence (A1) and mutual incoherence (A2) conditions hold for the sample Fisher information matrices $Q_s(\theta^*)$ defined below equation (4b).
Under this assumption, we then show that the conditions on $\lambda_n$ in the theorem statement suffice to guarantee that properties (b) and (d) hold for the constructed pair $(\hat{\theta}, \hat{z})$. The remainder of the analysis, provided in the full-length version of this paper, is devoted to showing that under the specified growth conditions (A3), imposing incoherence and dependence assumptions on the population version of the Fisher information $Q^*(\theta^*)$ guarantees (with high probability) that analogous conditions hold for the sample quantities $Q_s(\theta^*)$. While it follows immediately from the law of large numbers that the empirical Fisher information $Q^n_{AA}(\theta^*)$ converges to the population version $Q^*_{AA}$ for any fixed subset, the delicacy is that we require controlling this convergence over subsets of increasing size. Our analysis therefore requires the use of uniform laws of large numbers [7].

4 Primal-Dual Relations for $\ell_1$-Regularized Logistic Regression

Basic convexity theory can be used to characterize the solutions of $\ell_1$-regularized logistic regression. We assume in this section that $\theta_1$ corresponds to the unregularized bias term, and omit the dependence on sample size n in the notation. The objective is to compute

$\min_{\theta \in \mathbb{R}^p} L(\theta, \lambda) = \min_{\theta \in \mathbb{R}^p} \big\{ f(\theta; x) + \lambda (\|\theta_{\setminus 1}\|_1 - b) \big\} = \min_{\theta \in \mathbb{R}^p} \big\{ f(\theta; x) + \lambda \|\theta_{\setminus 1}\|_1 \big\}$   (9)

The function $L(\theta, \lambda)$ is the Lagrangian function for the problem of minimizing $f(\theta; x)$ subject to $\|\theta_{\setminus 1}\|_1 \leq b$ for some b. The dual function is $h(\lambda) = \inf_\theta L(\theta, \lambda)$. If $p \leq n$ then $f(\theta; x)$ is a strictly convex function of $\theta$. Since the $\ell_1$-norm is convex, it follows that $L(\theta, \lambda)$ is convex in $\theta$, and strictly convex in $\theta$ for $p \leq n$. Therefore the set of solutions to (9) is convex. If $\hat{\theta}$ and $\hat{\theta}'$ are two solutions, then by convexity $\hat{\theta} + \rho(\hat{\theta}' - \hat{\theta})$ is also a solution for any $\rho \in [0, 1]$. Since the solutions minimize $f(\theta; x)$ subject to $\|\theta_{\setminus 1}\|_1 \leq b$, the value of $f(\hat{\theta} + \rho(\hat{\theta}' - \hat{\theta}))$ is independent of $\rho$, and $\nabla f(\hat{\theta}; x)$ is independent of the particular solution $\hat{\theta}$. These facts are summarized below.

Lemma 1. If $p \leq n$ then a unique solution to (9) exists. If $p \geq n$ then the set of solutions is convex, with the value of $\nabla f(\hat{\theta}; x)$ constant across all solutions. In particular, if $p \geq n$ and $|\nabla_{\theta_t} f(\hat{\theta}; x)| < \lambda$ for some solution $\hat{\theta}$, then $\hat{\theta}_t = 0$ for all solutions.

The subgradient $\partial \|\theta_{\setminus 1}\|_1 \subset \mathbb{R}^p$ is the collection of all vectors z satisfying $|z_t| \leq 1$ and

$z_t = \begin{cases} 0 & \text{for } t = 1 \\ \mathrm{sign}(\theta_t) & \text{if } \theta_t \neq 0. \end{cases}$

Any optimum of (9) must satisfy

$\nabla_\theta L(\hat{\theta}, \lambda) = \nabla_\theta f(\hat{\theta}; x) + \lambda z = 0$   (10)

for some $z \in \partial \|\theta_{\setminus 1}\|_1$. The analysis in the following sections shows that, with high probability, a primal-dual pair $(\hat{\theta}, \hat{z})$ can be constructed so that $|\hat{z}_t| < 1$, and therefore $\hat{\theta}_t = 0$, in case $\theta^*_t = 0$ in the true model $\theta^*$ from which the data are generated.

5 Constructing a Primal-Dual Pair

We now fix a variable $X_s$ for the logistic regression, denoting the set of variables in its neighborhood by S. From the results of the previous section we observe that the $\ell_1$-regularized regression recovers the sparsity pattern if and only if there exists a primal-dual solution pair $(\hat{\theta}, \hat{z})$ satisfying the zero-subgradient condition, and the conditions (a) $\hat{\theta}_{S^c} = 0$; (b) $|\hat{\theta}_t| > 0$ for all $t \in S$ and $\mathrm{sgn}(\hat{\theta}_S) = \mathrm{sgn}(\theta^*_S)$; (c) $\hat{z}_S = \mathrm{sgn}(\theta^*_S)$; and (d) $\|\hat{z}_{S^c}\|_\infty < 1$. Our proof proceeds by showing the existence (with high probability) of a primal-dual pair $(\hat{\theta}, \hat{z})$ that satisfies these conditions. We begin by setting $\hat{\theta}_{S^c} = 0$, so that (a) holds, and also setting $\hat{z}_S = \mathrm{sgn}(\hat{\theta}_S)$, so that (c) holds.
We first establish a consistency result when incoherence conditions are imposed on the sample Fisher information $Q^n$. The remaining analysis, deferred to the full-length version, establishes that the incoherence assumption (A2) on the population version ensures that the sample version also obeys the property with probability converging to one exponentially fast.

Theorem 2. Suppose that

$\| Q^n_{S^c S} (Q^n_{SS})^{-1} \|_\infty \leq 1 - \epsilon$   (11)

for some $\epsilon \in (0, 1]$. Assume that $\lambda_n \to 0$ is chosen such that $\lambda_n^2 n - \log(p) \to +\infty$ and $\lambda_n d \to 0$. Then $P[\hat{N}(s) = N(s)] = 1 - O(\exp(-c n^\gamma))$ for some $\gamma > 0$.

Proof. Let us introduce the notation

$W^n := \frac{1}{n} \sum_{i=1}^n z^{(i,s)} \Big( x_s^{(i)} - \frac{\exp(\theta^{*T} z^{(i,s)})}{1 + \exp(\theta^{*T} z^{(i,s)})} \Big).$

Substituting into the subgradient optimality condition (10) yields the equivalent condition

$\nabla f(\hat{\theta}; x) - \nabla f(\theta^*; x) - W^n + \lambda_n \hat{z} = 0.$   (12)

By a Taylor series expansion, this condition can be re-written as

$\nabla^2 f(\theta^*; x)\, [\hat{\theta} - \theta^*] = W^n - \lambda_n \hat{z} + R^n,$   (13)

where the remainder $R^n$ is a term of order $\|R^n\|_2 = O(\|\hat{\theta} - \theta^*\|_2^2)$. Using our shorthand $Q^n = \nabla^2_\theta f(\theta^*; x)$, we write the zero-subgradient condition (13) in block form as:

$Q^n_{S^c S} [\hat{\theta}^{s,\lambda}_S - \theta^*_S] = W^n_{S^c} - \lambda_n \hat{z}_{S^c} + R^n_{S^c},$   (14a)

$Q^n_{SS} [\hat{\theta}^{s,\lambda}_S - \theta^*_S] = W^n_S - \lambda_n \hat{z}_S + R^n_S.$   (14b)

It can be shown that the matrix $Q^n_{SS}$ is invertible with probability one, so that these conditions can be rewritten as

$Q^n_{S^c S} (Q^n_{SS})^{-1} [W^n_S - \lambda_n \hat{z}_S + R^n_S] = W^n_{S^c} - \lambda_n \hat{z}_{S^c} + R^n_{S^c}.$   (15)

Re-arranging yields the condition

$Q^n_{S^c S} (Q^n_{SS})^{-1} [W^n_S - R^n_S] - [W^n_{S^c} - R^n_{S^c}] + \lambda_n Q^n_{S^c S} (Q^n_{SS})^{-1} \hat{z}_S = \lambda_n \hat{z}_{S^c}.$   (16)

Analysis of condition (d): We now demonstrate that $\|\hat{z}_{S^c}\|_\infty < 1$. Using the triangle inequality and the sample incoherence bound (11), we have that

$\|\hat{z}_{S^c}\|_\infty \leq \frac{2 - \epsilon}{\lambda_n} \big[ \|W^n\|_\infty + \|R^n\|_\infty \big] + (1 - \epsilon).$   (17)

We complete the proof that $\|\hat{z}_{S^c}\|_\infty < 1$ with the following two lemmas, proved in the full-length version.

Lemma 2. If $n\lambda_n^2 - \log(p) \to +\infty$, then

$P\Big[ \frac{2 - \epsilon}{\lambda_n} \|W^n\|_\infty \geq \frac{\epsilon}{4} \Big] \to 0$   (18)

at rate $O(\exp(-n\lambda_n^2 + \log(p)))$.

Lemma 3. If $n\lambda_n^2 - \log(p) \to +\infty$ and $d\lambda_n \to 0$, then we have

$P\Big[ \frac{2 - \epsilon}{\lambda_n} \|R^n\|_\infty \geq \frac{\epsilon}{4} \Big] \to 0$   (19)

at rate $O(\exp(-n\lambda_n^2 + \log(p)))$.

We apply these two lemmas to the bound (17) to obtain that, with probability converging to one at rate $O(\exp(-n\lambda_n^2 + \log(p)))$, we have

$\|\hat{z}_{S^c}\|_\infty \leq \frac{\epsilon}{4} + \frac{\epsilon}{4} + (1 - \epsilon) = 1 - \frac{\epsilon}{2}.$

Analysis of condition (b): We next show that condition (b) can be satisfied, so that $\mathrm{sgn}(\hat{\theta}_S) = \mathrm{sgn}(\theta^*_S)$. Define $\rho_n := \min_{i \in S} |\theta^*_i|$. From equation (14b), we have

$\hat{\theta}^{s,\lambda}_S = \theta^*_S + (Q^n_{SS})^{-1} [W_S - \lambda_n \hat{z}_S + R_S].$   (20)

Therefore, in order to establish that $|\hat{\theta}^{s,\lambda}_i| > 0$ for all $i \in S$, and moreover that $\mathrm{sgn}(\hat{\theta}^{s,\lambda}_S) = \mathrm{sgn}(\theta^*_S)$, it suffices to show that

$\big\| (Q^n_{SS})^{-1} [W_S - \lambda_n \hat{z}_S + R_S] \big\|_\infty \leq \frac{\rho_n}{2}.$

Using our eigenvalue bounds, we have

$\big\| (Q^n_{SS})^{-1} [W_S - \lambda_n \hat{z}_S + R_S] \big\|_\infty \leq \| (Q^n_{SS})^{-1} \|_\infty \big[ \|W_S\|_\infty + \lambda_n + \|R_S\|_\infty \big] \leq \sqrt{d}\, \| (Q^n_{SS})^{-1} \|_2 \big[ \|W_S\|_\infty + \lambda_n + \|R_S\|_\infty \big] \leq \frac{\sqrt{d}}{C_{\min}} \big[ \|W_S\|_\infty + \lambda_n + \|R_S\|_\infty \big].$

In fact, the right-hand side tends to zero from our earlier results on W and R, and the assumption that $\lambda_n d \to 0$. Together with the exponential rates of convergence established by the stated lemmas, this completes the proof of the result.

6 Experimental Results

We briefly describe some experimental results that demonstrate the practical viability and performance of our proposed method. We generated random Ising models (1) using the following procedure: for a given graph size p and maximum degree d, we started with a graph with disconnected cliques of size less than or equal to ten, and for each node, removed edges randomly until the sparsity condition (degree less than d) was satisfied. For all edges (s, t) present in the resulting random graph, we chose the edge weight $\theta_{st} \sim U[-3, 3]$. We drew n i.i.d. samples from the resulting random Ising model by exact methods. We implemented the $\ell_1$-regularized logistic regression by setting the $\ell_1$ penalty as $\lambda_n = O(\sqrt{(\log p)^3 / n})$, and solved the convex program using a customized primal-dual algorithm (described in more detail in the full-length version of this paper).
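As a concrete illustration of this test-bed, the following is a small sketch of the graph-generation step just described; the random partition into cliques, the helper name `random_ising_graph`, and the removal order are our own illustrative assumptions, and the exact sampler used to draw the n observations is not shown.

```python
# A sketch of the random test-graph generation described above, not the
# authors' implementation: disconnected cliques of size <= 10, random edge
# removal until every node has degree < d, and weights drawn from U[-3, 3].
import itertools
import random

def random_ising_graph(p, d, max_clique=10, seed=0):
    rng = random.Random(seed)
    nodes = list(range(p))
    rng.shuffle(nodes)
    edges = set()
    while nodes:                          # random partition into cliques
        k = rng.randint(2, max_clique)
        clique, nodes = sorted(nodes[:k]), nodes[k:]
        edges.update(itertools.combinations(clique, 2))
    degree = [0] * p
    for s, t in edges:
        degree[s] += 1
        degree[t] += 1
    for s in range(p):                    # enforce degree < d at every node
        while degree[s] >= d:
            u, v = rng.choice([e for e in edges if s in e])
            edges.remove((u, v))
            degree[u] -= 1
            degree[v] -= 1
    return {e: rng.uniform(-3.0, 3.0) for e in edges}   # edge -> theta_st
```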
We considered various sparsity regimes, including constant ($d = O(1)$), logarithmic ($d = \alpha \log(p)$), or linear ($d = \alpha p$) degrees. In each case, we evaluate a given method in terms of its average precision (one minus the fraction of falsely included edges), and its recall (one minus the fraction of falsely excluded edges). Figure 1 shows results for the case of constant degrees ($d \leq 4$), and graph sizes $p \in \{100, 200, 400\}$, for the AND method (respectively, the OR method), in which an edge (s, t) is included if and only if it is included in the local regressions at both node s and (respectively, or) node t. Note that both the precision and recall tend to one as the number of samples n is increased.

7 Conclusion

We have shown that a technique based on $\ell_1$-regularization, in which the neighborhood of any given node is estimated by performing logistic regression subject to an $\ell_1$-constraint, can be used for consistent model selection in discrete graphical models. Our analysis applies to the high-dimensional setting, in which both the number of nodes p and the maximum neighborhood sizes d are allowed to grow as a function of the number of observations n. Whereas the current analysis provides sufficient conditions on the triple (n, p, d) that ensure consistent neighborhood selection, it remains to establish necessary conditions as well [11]. Finally, the ideas described here, while specialized in this paper to the binary case, should be more broadly applicable to discrete graphical models.

Acknowledgments

Research supported in part by NSF grants IIS-0427206, CCF-0625879 and DMS-0605165.

[Figure 1 appears here. Precision/recall plots using the AND method (top), and the OR method (bottom). Each panel shows precision/recall versus n, for graph sizes p ∈ {100, 200, 400}.]

References

[1] D. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of AI and Statistics, 1995.
[2] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. Info. Theory, 14(3):462–467, 1968.
[3] S. Dasgupta. Learning polytrees. In Uncertainty in Artificial Intelligence, pages 134–141, 1999.
[4] D. Donoho and M. Elad. Maximal sparsity representation via $\ell_1$ minimization. Proc. Natl. Acad. Sci., 100:2197–2202, March 2003.
[5] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3), 2006.
[6] A. Y. Ng. Feature selection, $\ell_1$ vs. $\ell_2$ regularization, and rotational invariance. In International Conference on Machine Learning, 2004.
[7] D. Pollard.
Convergence of Stochastic Processes. Springer-Verlag, New York, 1984.
[8] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. MIT Press, 2000.
[9] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003.
[10] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Trans. Info. Theory, 51(3):1030–1051, March 2006.
[11] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programs. In Proc. Allerton Conference on Communication, Control and Computing, October 2006.
[12] P. Zhao and B. Yu. Model selection with the lasso. Technical report, UC Berkeley, Department of Statistics, March 2006. Accepted to Journal of Machine Learning Research.
Scalable Discriminative Learning for Natural Language Parsing and Translation Joseph Turian, Benjamin Wellington, and I. Dan Melamed {lastname}@cs.nyu.edu Computer Science Department New York University New York, New York 10003 Abstract Parsing and translating natural languages can be viewed as problems of predicting tree structures. For machine learning approaches to these predictions, the diversity and high dimensionality of the structures involved mandate very large training sets. This paper presents a purely discriminative learning method that scales up well to problems of this size. Its accuracy was at least as good as other comparable methods on a standard parsing task. To our knowledge, it is the first purely discriminative learning algorithm for translation with treestructured models. Unlike other popular methods, this method does not require a great deal of feature engineering a priori, because it performs feature selection over a compound feature space as it learns. Experiments demonstrate the method?s versatility, accuracy, and efficiency. Relevant software is freely available at http://nlp.cs.nyu.edu/parser and http://nlp.cs.nyu.edu/GenPar. 1 Introduction Discriminative machine learning methods have led to better solutions for many problems in natural language processing (NLP), such as various kinds of sequence labeling. However, only limited advances have been made on NLP problems involving tree-structured prediction. State of the art methods for both parsing and translation use discriminative methods, but they are still limited by their reliance on generative models that can be estimated relatively cheaply. For example, some parsers and translators use a generative model to generate a list of candidates, and then rerank them using a discriminative reranker (e.g., Henderson, 2004; Charniak & Johnson, 2005; Cowan et al., 2006). Others use a generative model as a feature in a discriminative framework, because otherwise training is impractically slow (Collins & Roark, 2004; Taskar et al., 2004; Riezler & Maxwell, 2006). Similarly, the best machine translation (MT) systems use discriminative methods only to calibrate the weights of a handful of different knowledge sources, which are either enumerated by hand or learned automatically but not discriminatively (e.g., Chiang, 2005). The problem with generative models is that they are typically not regularized in a principled way, and it is difficult to make up for their unregularized risk post-hoc. It is also difficult to come up with a generative model for certain kinds of data, especially the kind used to train MT systems, so approaches that rely on generative models are hard to adapt. This paper proposes a discriminative learning method that can scale up to large structured prediction problems, without using generative models in any way. The proposed method employs the traditional AI technique of predicting a structure by searching over possible sequences of inferences, where each inference predicts a part of the eventual structure. However, unlike most approaches employed in NLP, the proposed method makes no independence assumptions: The function that evaluates each inference can use arbitrary information not only from the input, but also from all previous inferences. Let us define some terms to help explain how our algorithm predicts a tree. An item is a node in the tree. Every state in the search space consists of a set of items, representing nodes that have been inferred since the algorithm started. 
States whose items form a complete tree are final states. (What counts as a complete tree is problem-specific: in parsing, e.g., a complete tree is one that covers the input and has a root labeled TOP.) An inference is a (state, item) pair, i.e. a state and an item to be added to it. Each inference represents a transition from one state to another. A state is correct if it is possible to infer zero or more items to obtain the final state that corresponds to the training data tree. Similarly, an inference is correct if it leads to a correct state. Given input s, the inference engine searches the possible complete trees T(s) for the tree $\hat{t} \in T(s)$ that has minimum cost $C_\theta(t)$ under model $\theta$:

$\hat{t} = \arg\min_{t \in T(s)} C_\theta(t) = \arg\min_{t \in T(s)} \sum_{j=1}^{|t|} c_\theta(i_j)$   (1)

The $i_j$ are the inferences involved in constructing tree t. $c_\theta(i)$ is the cost of an individual inference i. The number of states in the search space is exponential in the size of the input. The freedom to compute $c_\theta(i)$ using arbitrary non-local information from anywhere in inference i's state precludes exact solutions by ordinary dynamic programming. We know of two effective ways to approach such large search problems. The first, which we use for our parsing experiments, is to severely restrict the order in which items can be inferred. The second, which we use for translation, is to make the simplifying assumption that the cost of adding a given item to a state is the same for all states. Under this assumption, the fraction of any state's cost due to a particular item can be computed just once per item, instead of once per state. However, in contrast to traditional context-free parsing algorithms, that computation can involve context-sensitive features.

An important design decision in learning the inference cost function $c_\theta$ is the choice of feature set. Given the typically very large number of possible features, the learning method must satisfy two criteria. First, it must be able to learn effectively even if the number of irrelevant features is exponential in the number of examples. It is too time-consuming to manually figure out the right feature set for such problems. Second, the learned function must be sparse. Otherwise, it would be too large for the memory of an ordinary computer, and therefore impractical. Section 2 presents an algorithm that satisfies these criteria. This algorithm is in the family that has been shown to converge to an $\ell_1$-optimal separating hyperplane, which maximizes the minimum $\ell_1$-margin on separable training data (Rosset et al., 2004). Sections 3 and 4 present experiments on parsing and translation, respectively, illustrating the advantages of this algorithm. For lack of space, the experiments are described tersely; for details see Turian and Melamed (2006a) and Wellington et al. (2006). Also, Turian and Melamed (2006b) show how to reduce training time.

2 Learning Method

2.1 The Training Set

The training data used for both parsing and translation initially comes in the form of trees. (Section 4 shows how to do MT by predicting a certain kind of tree.) These gold-standard trees are used to generate training examples, each of which is a candidate inference: starting at the initial state, we randomly choose a sequence of correct inferences that lead to the (gold-standard) final state. All the candidate inferences that can possibly follow each state in this sequence become part of the training set. The vast majority of these inferences will lead to incorrect states, which makes them negative examples. An advantage of this method of generating training examples is that it does not require a working inference engine and can be run prior to any training.
A disadvantage of this approach is that it does not teach the model to recover from mistakes. We conjecture that this approach is nevertheless not subject to label bias, because states can "dampen" the mass they receive, as recommended by Lafferty et al. (2001).

The training set I consists of training examples i, where each i is a tuple $\langle X(i), y(i), b(i) \rangle$. X(i) is a feature vector describing i, with each element in {0, 1}. We will use $X_f(i)$ to refer to the element of X(i) that pertains to feature f. y(i) = +1 if i is correct, and y(i) = −1 if not. Some training examples might be more important than others, so each is given a bias $b(i) \in \mathbb{R}^+$. By default, all b(i) = 1.

A priori, we define only a set A of simple atomic features (described later). The learner then induces compound features, each of which is a conjunction of possibly negated atomic features. Each atomic feature can have one of three values (yes/no/don't care), so the size of the compound feature space is $3^{|A|}$, exponential in the number of atomic features. In our experiments, it was also exponential in the number of training examples, because $|A| \propto |I|$. For this reason, we expect that the number of irrelevant (compound) features is exponential in the number of training examples.

2.2 Objective Function

The training method induces a real-valued inference evaluation function $h_\theta(i)$. In the present work, $h_\theta$ is a linear model parameterized by a real vector $\theta$, which has one entry for each feature f:

$h_\theta(i) = \theta \cdot X(i) = \sum_f \theta_f \cdot X_f(i)$   (2)

The sign of $h_\theta(i)$ predicts the y-value of i and the magnitude gives the confidence in this prediction. The training procedure adjusts $\theta$ to minimize the expected risk $R_\theta$ over training set I. $R_\theta$ is the objective function, which is the sum of the loss function $L_\theta$ and the regularization term $\Omega_\theta$. We use the log-loss and $\ell_1$ regularization, so we have

$R_\theta(I) = L_\theta(I) + \Omega_\theta = \Big[ \sum_{i \in I} l_\theta(i) \Big] + \Omega_\theta = \Big[ \sum_{i \in I} b(i) \cdot \ln(1 + \exp(-\mu_\theta(i))) \Big] + \lambda \sum_f |\theta_f|$   (3)

$\lambda$ is a parameter that controls the strength of the regularizer, and $\mu_\theta(i) = y(i) \cdot h_\theta(i)$ is the margin of example i. The tree cost $C_\theta$ (Equation 1) is obtained by computing the objective function with y(i) = +1 and b(i) = 1 for every inference in the tree, and treating the penalty term $\Omega_\theta$ as constant, i.e. $c_\theta(i) = \ln(1 + \exp(-h_\theta(i)))$.

This choice of objective function was motivated by Ng (2004), who showed that it is possible to achieve sample complexity that is logarithmic in the number of irrelevant features by minimizing the $\ell_1$-regularized log-loss. On the other hand, Ng showed that most other discriminative learning algorithms used for structured prediction in NLP will overfit in this setting, including: the perceptron algorithm, unregularized logistic regression, logistic regression with an $\ell_2$ penalty (a Gaussian prior), SVMs using most kernels, and neural nets trained by back-propagation.
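The following short sketch, which is ours rather than the authors', spells out Equations 2 and 3 in Python over a sparse feature encoding; the dict-of-weights representation and the function names are illustrative assumptions.

```python
# An illustrative rendering of Equations 2 and 3 for sparse binary features;
# not the authors' implementation. `theta` maps feature -> weight, and each
# example is (active_features, y, b) with y in {+1, -1}.
import math

def score(theta, features):
    """h_theta(i) = theta . X(i): sum of the weights of active features."""
    return sum(theta.get(f, 0.0) for f in features)

def log1pexp(t):
    """Numerically stable ln(1 + exp(t))."""
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

def risk(theta, examples, lam):
    """Equation 3: R = sum_i b(i) ln(1 + exp(-mu(i))) + lam sum_f |theta_f|,
    with margin mu(i) = y(i) * h_theta(i)."""
    loss = sum(b * log1pexp(-y * score(theta, feats))
               for feats, y, b in examples)
    return loss + lam * sum(abs(w) for w in theta.values())

def inference_cost(theta, features):
    """c_theta(i) = ln(1 + exp(-h_theta(i))), the cost used in Equation 1."""
    return log1pexp(-score(theta, features))
```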
2.3 Boosting $\ell_1$-Regularized Decision Trees

We use an ensemble of confidence-rated decision trees (Schapire & Singer, 1999) to represent $h_\theta$. (Turian and Melamed (2005) built more accurate parsers more quickly using decision trees rather than decision stumps, so we build full decision trees.) Each internal node is split on an atomic feature. The path from the root to each node n in a decision tree corresponds to a compound feature f, and we write $\phi(n) = f$. An inference i percolates down to node n iff $X_{\phi(n)}(i) = 1$. Each leaf node n keeps track of the parameter value $\theta_{\phi(n)}$. To score an inference i using a decision tree, we percolate the inference down to a leaf n and return confidence $\theta_{\phi(n)}$. The score $h_\theta(i)$ given to an inference i by the whole ensemble is the sum of the confidences returned by all trees in the ensemble.

Listing 1 Outline of training algorithm.

    procedure Train(I)
        ensemble ← ∅
        ℓ1 parameter λ ← ∞
        while not converged do
            t ← tree with one (root) node
            while the root node cannot be split do
                decay λ
            MakeTree(t, I)

    procedure MakeTree(t, I)
        while some leaf in t can be split do
            split the leaf to maximize gain
        percolate every i ∈ I to a leaf node
        for each leaf n in t do
            update $\theta_{\phi(n)}$ to minimize $R_\theta$
        append t to ensemble

Listing 1 presents our training algorithm. At the beginning of training, the ensemble is empty, $\theta = 0$, and $\lambda$ is set to $\infty$. We grow the ensemble until the objective cannot be further reduced for the current choice of $\lambda$. We then relax the regularization penalty by decreasing $\lambda$ and continue training. In this way, instead of choosing the best $\lambda$ heuristically, we can optimize it during a single training run.

Each invocation of MakeTree has several steps. First, we choose some compound features that will allow us to decrease the objective function. We do this by building a decision tree, whose leaf node paths represent the chosen compound features. Second, we confidence-rate each leaf to minimize the objective over the examples that percolate down to that leaf. Finally, we append the decision tree to the ensemble and update the parameter vector $\theta$ accordingly. In this manner, compound feature selection is performed incrementally during training, as opposed to a priori.

Our strategy for feature selection is a variant of steepest descent (Perkins et al., 2003), extended to work over the compound feature space. The construction of each decision tree begins with a root node, which corresponds to a dummy "always true" feature. To avoid the discontinuity at $\theta_f = 0$ of the gradient of the regularization term in the objective (Equation 3), we define the gain of feature f as:

$G_\theta(I; f) = \max\Big( 0, \Big| \frac{\partial L_\theta(I)}{\partial \theta_f} \Big| - \lambda \Big)$   (4)

The gain function indicates how the polyhedral structure of the $\ell_1$ norm tends to keep the model sparse (Riezler & Vasserman, 2004). Unless the magnitude of the gradient of the loss $|\partial L_\theta(I) / \partial \theta_f|$ exceeds the penalty term $\lambda$, the gain is zero and the objective cannot be reduced by adjusting parameter $\theta_f$ away from zero. However, if the gain is non-zero, $G_\theta(I; f)$ is the magnitude of the gradient of the objective as we adjust $\theta_f$ in the direction that reduces $R_\theta$.

Let us define the weight of an example i under the current model as the rate at which loss decreases as the margin of i increases:

$w_\theta(i) = -\frac{\partial l_\theta(i)}{\partial \mu_\theta(i)} = b(i) \cdot \frac{1}{1 + \exp(\mu_\theta(i))}$   (5)

Now, to compute the gain (Equation 4), we note that:

$\frac{\partial L_\theta(I)}{\partial \theta_f} = \sum_{i \in I} \frac{\partial l_\theta(i)}{\partial \theta_f} = \sum_{i \in I} \frac{\partial l_\theta(i)}{\partial \mu_\theta(i)} \cdot \frac{\partial \mu_\theta(i)}{\partial \theta_f} = -\sum_{i \in I} w_\theta(i) \cdot [y(i) \cdot X_f(i)] = -\sum_{i \in I:\, X_f(i) = 1} w_\theta(i) \cdot y(i)$   (6)

We recursively split leaf nodes by choosing the best atomic splitting feature that will allow us to increase the gain. Specifically, we consider splitting each leaf node n using atomic feature $\hat{a}$, where

$\hat{a} = \arg\max_{a \in A} \big[ G_\theta(I; f \wedge a) + G_\theta(I; f \wedge \neg a) \big]$   (7)

Splitting using $\hat{a}$ would create children nodes $n_1$ and $n_2$, with $\phi(n_1) = f \wedge \hat{a}$ and $\phi(n_2) = f \wedge \neg\hat{a}$. We split node n using $\hat{a}$ only if the total gain of these two children exceeds the gain of the unsplit node, i.e. if:

$G_\theta(I; f \wedge \hat{a}) + G_\theta(I; f \wedge \neg\hat{a}) > G_\theta(I; f)$   (8)

Otherwise, n remains a leaf node of the decision tree, and $\theta_{\phi(n)}$ becomes one of the values to be optimized during the parameter update step.

Parameter update is done sequentially on only the most recently added compound features, which correspond to the leaves of the new decision tree. After the entire tree is built, we percolate each example down to its appropriate leaf node. A convenient property of decision trees is that the leaves' compound features are mutually exclusive, so their parameters can be directly optimized independently of each other. We use a line search to choose for each leaf node n the parameter $\theta_{\phi(n)}$ that minimizes the objective over the examples in n.
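As a sanity check on Equations 4 through 8, here is a small sketch of the split criterion in Python; it is an illustration under our own naming assumptions (`gain`, `should_split`), not the authors' code.

```python
# A sketch of the split criterion (4)-(8); illustrative, not the authors'
# implementation. Each example at a leaf is (y, b, h) with margin mu = y*h.
import math

def weight(y, h, b=1.0):
    """w(i) = b(i) / (1 + exp(mu(i))), Equation 5."""
    return b / (1.0 + math.exp(y * h))

def gain(examples, lam):
    """G(I; f) = max(0, |dL/dtheta_f| - lam), Equations 4 and 6; the sum
    runs over the examples with X_f(i) = 1, i.e. those at this leaf."""
    grad = sum(weight(y, h, b) * y for y, b, h in examples)
    return max(0.0, abs(grad) - lam)

def should_split(pos_examples, neg_examples, lam):
    """Condition (8): split on atomic feature a only if the children's
    total gain exceeds the gain of the unsplit leaf."""
    unsplit = gain(pos_examples + neg_examples, lam)
    return gain(pos_examples, lam) + gain(neg_examples, lam) > unsplit
```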
3 Parsing

The parsing algorithm starts from an initial state that contains one terminal item per input word, labeled with a part-of-speech (POS) tag by the method of Ratnaparkhi (1996). For simplicity and efficiency, we impose a (deterministic) bottom-up right-to-left order for adding items to a state. The resulting search space is still exponential, and one might worry about search errors. However, in our experiments, the inference evaluation function was learned accurately enough to guide the parser to the optimal parse reasonably quickly without pruning, and thus without search errors.

Following Taskar et al. (2004), we trained and tested a parser using the algorithm in Section 2 on ≤15 word sentences from the English Penn Treebank (Taylor et al., 2003). We used sections 02–21 for training, section 22 for development, and section 23 for testing. There were 40 million training inferences. Turian and Melamed (2005) observed that uniform example biases b(i) produced lower accuracy as training progressed, because the model minimized the error per example. To minimize the error per state, we assigned every training state equal value, shared half the value uniformly among the negative examples generated from that state, and gave the other half to the positive examples.

Table 1 Accuracy on the English Penn Treebank, training and testing on sentences of ≤15 words.

                                % Recall   % Precision   F1
    Turian and Melamed (2005)   86.47      87.80         87.13
    Bikel (2004)                87.85      88.75         88.30
    Taskar et al. (2004)        89.10      89.14         89.12
    our parser                  89.26      89.55         89.40

Our atomic feature set A contained features of the form "is there an item in group J whose label/headword/headtag/headtagclass is X?". Possible values of X for each predicate were collected from the training data. Some examples of possible values for J are the last n child items, the first n left-context items, all right-context items, and the terminal items dominated by the non-head child items. These feature templates gave rise to 1.1 million different atomic features. Significantly smaller feature sets lowered accuracy on the development set.

To situate our results in the literature, we compared them to those reported by Taskar et al. (2004) and Turian and Melamed (2005) for their discriminative parsers, which were also trained and tested on ≤15 word sentences. (The results reported by Taskar et al. (2004) were not for a purely discriminative parser: their parser beat the generative model of Bikel (2004) only after using the output from a generative model as a feature.) We also compared our parser to a representative non-discriminative parser (Bikel, 2004), a "clean room" reimplementation of the Collins (1999) model with comparable accuracy, and the only one that we were able to train and test under exactly the same experimental conditions, including the use of POS tags from Ratnaparkhi (1996).
The comparison was in terms of the standard PARSEVAL measures (Black et al., 1991): labeled precision, labeled recall, and labeled F-measure, which are based on the number of non-terminal items in the parser's output that match those in the gold-standard parse. Table 1 shows the results of these four parsers on the test set. The accuracy of our parser is at least as high as that of comparable parsers in the literature.

An advantage of our choice of loss function is that each of the binary classifiers can be learned independently of the others. We parallelized training by inducing 26 separate classifiers, one for each non-terminal label in the Penn Treebank. It took less than five CPU-days to build each of the ensembles used at test time by the final parser. By comparison, it took several CPU-months to train the parser of Taskar et al. (2004) (Dan Klein, p.c.).

4 Translation

The experiments in this section employed the tree transduction approach to translation, which is used by today's best MT systems (Marcu et al., 2006). To translate by tree transduction, we assume that the input sentence has already been parsed by a parser like the one described in Section 3. The transduction algorithm performs a sequence of inferences to transform this input parse tree into an output parse tree, which has words of the target language in its leaves, often in a different order than the corresponding words in the source tree. The words are then read off the target tree and outputted; the rest of the tree is discarded. Inferences are ordered by their cost, just like in ordinary parsing, and tree transduction stops when each source node has been transduced.

The data for our experiments came from the English and French components of the EuroParl corpus (Koehn, 2005). From this corpus, we extracted sentence pairs where both sentences had between 5 and 40 words, and where the ratio of their lengths was no more than 2:1. We then extracted disjoint training, tuning, development, and test sets. The tuning, development, and test sets were 1000 sentence pairs each. Typical MT systems in the literature are trained on hundreds of thousands of sentence pairs, so our main experiment used 100K sentence pairs of training data. Where noted, preliminary experiments were performed using 10K sentence pairs of training data. We computed parse trees for all the English sentences in all data sets. For each of our two training sets, we induced word alignments using the default configuration of GIZA++ (Och & Ney, 2003). The training set word alignments and English parse trees were fed into the default French-English hierarchical alignment algorithm distributed with the GenPar system (Burbank et al., 2005) to produce binarized tree alignments. Tree alignments are the ideal form of training data for tree transducers, because they fully specify the relation between nodes in the source tree and nodes in the target tree.

We experimented with a simplistic tree transducer that involves only two types of inferences. The first type transduces words at the leaves of the source tree; the second type transduces internal nodes.
To transduce a word w at the leaf, the transducer replaces it with a single word v that is a translation of w. v can be empty ("NULL"). Leaves that are transduced to NULL are deterministically erased. Internal nodes are transduced merely by permuting the order of their children, where one of the possible permutations is to retain the original order. E.g., for a node with two children, the permutation classifier predicts either (1,2) or (2,1). This transducer is grossly inadequate for modeling real translations (Galley et al., 2004): it cannot account for many kinds of noise nor for many real translingual phenomena, such as head-switching and discontinuous constituents, which are important for accurate MT. It cannot even capture common "phrasal" translations, such as English there is to French il y a. However, it is sufficient for controlled comparison of learning methods. One could apply the same learning methods to more sophisticated tree transducers.

When inducing leaf transducers using 10K training sentence pairs, there were 819K training inferences and 80.9K tuning inferences. For 100K training sentence pairs, there were 36.8M and 375K, respectively. And for inducing internal node transducers using 100K training sentence pairs, there were 1.0M and 9.2K, respectively. 362K leaf transduction inferences were used for development. We parallelized training of the word transducers according to the source and target word pair (w, v). Prior to training, we filtered out word translation examples that were likely to be noise. (Specifically, v was retained as a possible translation of w if v was the most frequent translation of w, or if v occurred as a translation of w at least three times and accounted for at least 20% of the translations of w in the training data.) Given this filtering, we induced 11.6K different word transducers over 10K training sentence pairs, and 41.3K over 100K sentence pairs.

We used several kinds of features to evaluate leaf transductions. "Window" features included the source words and part-of-speech (POS) tags within a 2-word window around the word in the leaf (the "focus" word), along with their relative positions (from -2 to +2). "Co-occurrence" features included all words and POS tags from the whole source sentence, without position information. "Dependency" features were compiled from the automatically generated English parse trees. The literature on monolingual parsing gives a standard procedure for annotating each node in an English parse tree with its "lexical head word." The dependency features of each word were the label of its maximal projection (i.e., the highest node that has the focus word as its lexical head; if it is a leaf, then that label is a POS tag), the label and lexical head of the parent of the maximal projection, the label and lexical head of all dependents of the maximal projection, and all the labels of all head-children, recursively, of the maximal projection. The features used to evaluate transductions of internal nodes included all those listed for leaf transduction above, where the focus words were the head words of the children of the internal node.
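To make the feature templates just described concrete, here is a small sketch of the window and co-occurrence feature extraction; the string encodings and the function name are our own illustrative assumptions, not the paper's implementation.

```python
# An illustrative extraction of "window" and "co-occurrence" features for a
# focus word at position j; encodings are assumptions, not the authors' code.
def leaf_features(words, tags, j):
    feats = set()
    # window features: words/POS tags within 2 positions, with offsets
    for offset in range(-2, 3):
        k = j + offset
        if 0 <= k < len(words):
            feats.add(f"win_word[{offset}]={words[k]}")
            feats.add(f"win_tag[{offset}]={tags[k]}")
    # co-occurrence features: all words/tags, without position information
    feats.update(f"cooc_word={w}" for w in words)
    feats.update(f"cooc_tag={t}" for t in tags)
    return feats

# e.g. leaf_features("there is a cat".split(),
#                    ["EX", "VBZ", "DT", "NN"], 1)
```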
Using these features, we applied the method of Section 2 to induce confidence-rating binary classifiers for each word pair in the lexicon, and additional binary classifiers for predicting the permutations of the children of internal tree nodes. Before attempting the whole transduction task, we compared the model of Section 2 with the model of Vickrey et al. (2005), which learned word transduction classifiers using logistic regression with $\ell_2$ regularization. The $\ell_2$ parameters were optimized using the conjugate gradient implementation of Daumé (2004). We induced word transduction classifiers over the 10K training data using this model and our own, and tested them on the development set.

The accuracy of the two models was statistically indistinguishable (about 54%). However, the models were vastly different in their size. The boosted decision trees had a total of about 38.7K non-zero compound features over an even smaller number of atomic features. In contrast, the $\ell_2$-regularized model had about 6.5 million nonzero features, an increase of more than two orders of magnitude. We estimated that, to scale up to training data sizes typically used by modern statistical MT systems, the $\ell_2$ classifiers would not fit in memory. To make them fit, we set all but the heaviest feature weights to zero. The number of features allowed to remain active in the $\ell_2$ classifier was the number of active features in the $\ell_1$ classifier. With the playing field leveled, the accuracy of the $\ell_2$ classifiers was only 45%, even worse than the baseline accuracy of 48% obtained by always predicting the most common translation.

Table 2 Accuracy of tree transducers using 100K sentence pairs of training data.

                        exponent = 1.0               exponent = 2.0
                    Precision  Recall  F1        Precision  Recall  F1
    generative      51.29      38.30   43.85     22.62      16.90   19.35
    discriminative  62.36      39.06   48.04     28.02      17.55   21.59

In the main experiment, we compared two models of the inference cost function $c_\theta$, one generative and one discriminative. The generative model was a top-down tree transducer (Comon et al., 1997), which stochastically generates the target tree top-down given the source tree. Under this model, the loss of an inference i is the negative log-probability of the node n(i) that it infers. We estimated the parameters of this transducer using the Viterbi approximation to the inside-outside algorithm described by Graehl and Knight (2004). We lexicalized the nodes so that their probabilities could capture bilexical dependencies. Our hypothesis was that the discriminative approach would be more accurate than the generative model, because its evaluation of each inference could take into account a greater variety of information in the tree, including its entire yield (string), not just the information in nearby nodes.

We used the second search technique described in Section 1 to find the minimum cost target tree. For efficiency, we used a chart to keep track of item costs, and pruned items whose cost was more than $10^3$ times the cost of the least expensive item in the same chart cell. We also pruned items whenever the number of items in the same cell exceeded 40. Our entire tree transduction algorithm was equivalent to bottom-up synchronous parsing (Melamed, 2004) where the source side of the output bi-tree is constrained by the input (source) tree.

We compared the generative and discriminative models by reading out the string encoded in their predicted trees, and computing the F-measure between that string and the reference target sentence in the test corpus. Turian et al. (2003) show how to compute precision, recall, and the F-measure over pairs of strings without double-counting. Their family of measures is parameterized by an exponent. With the exponent set to 1.0, the F-measure is essentially the unigram overlap ratio.
With the exponent set to 2.0, the F-measure rewards longer n-gram matches without double-counting. The generative transducer achieved its highest F-measure when the input parse trees were computed by the generative parser of Bikel (2004). The discriminatively trained transducer was most accurate when the source trees were computed by the parser in Section 3. Table 2 shows the results: the discriminatively trained transducer was much more accurate on all measures, at a statistical significance level of 0.001 using the Wilcoxon signed ranks test.

Conclusion

We have demonstrated how to predict tree structures using binary classifiers. These classifiers are discriminatively induced by boosting confidence-rated decision trees to minimize the $\ell_1$-regularized log-loss. For large problems in tree-structured prediction, such as natural language parsing and translation, this learning algorithm has several attractive properties. It learned a purely discriminative machine over 40 million training examples and 1.1 million atomic features, using no generative model of any kind. The method did not require a great deal of feature engineering a priori, because it performed feature selection over a compound feature space as it learned. To our knowledge, this is the first purely discriminatively trained constituent parser that surpasses a generative baseline, as well as the first published method for purely discriminative training of a syntax-driven MT system that makes no use of generative translation models, either in training or translation. In future work, we plan to integrate the parsing and translation methods described in our experiments, to reduce compounded error.

Acknowledgments

The authors would like to thank Léon Bottou, Patrick Haffner, Fernando Pereira, Cynthia Rudin, and the anonymous reviewers for their helpful comments and constructive criticism. This research was sponsored by NSF grants #0238406 and #0415933.

References

Bikel, D. M. (2004). Intricacies of Collins' parsing model. Computational Linguistics, 30(4), 479–511.
Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grishman, R., Harrison, P., et al. (1991). A procedure for quantitatively comparing the syntactic coverage of English grammars. In Speech and Natural Language.
Burbank, A., Carpuat, M., Clark, S., Dreyer, M., Fox, P., Groves, D., et al. (2005). Final report on statistical machine translation by parsing (Tech. Rep.). Johns Hopkins University Center for Speech and Language Processing. http://www.clsp.jhu.edu/ws2005/groups/statistical/report.html.
Charniak, E., & Johnson, M. (2005). Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In ACL.
Chiang, D. (2005). A hierarchical phrase-based model for statistical machine translation. In ACL.
Collins, M. (1999). Head-driven statistical models for natural language parsing. Doctoral dissertation, University of Pennsylvania.
Collins, M., & Roark, B. (2004). Incremental parsing with the perceptron algorithm. In ACL.
Comon, H., Dauchet, M., Gilleron, R., Jacquemard, F., Lugiez, D., Tison, S., et al. (1997). Tree automata techniques and applications. Available at http://www.grappa.univ-lille3.fr/tata. (released October 1, 2002)
Cowan, B., Kučerová, I., & Collins, M. (2006). A discriminative model for tree-to-tree translation. In EMNLP.
Daumé, H. (2004). Notes on CG and LM-BFGS optimization of logistic regression. (Paper available at http://pub.hal3.name#daume04cg-bfgs, implementation available at http://hal3.name/megam/)
Galley, M., Hopkins, M., Knight, K., & Marcu, D.
(2004). What's in a translation rule? In HLT-NAACL.
Graehl, J., & Knight, K. (2004). Training tree transducers. In HLT-NAACL.
Henderson, J. (2004). Discriminative training of a neural network statistical parser. In ACL.
Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In MT Summit X.
Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Marcu, D., Wang, W., Echihabi, A., & Knight, K. (2006). SPMT: Statistical machine translation with syntactified target language phrases. In EMNLP.
Melamed, I. D. (2004). Statistical machine translation by parsing. In ACL.
Ng, A. Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML.
Och, F. J., & Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19–51.
Perkins, S., Lacker, K., & Theiler, J. (2003). Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3, 1333–1356.
Ratnaparkhi, A. (1996). A maximum entropy part-of-speech tagger. In EMNLP.
Riezler, S., & Maxwell, J. T. (2006). Grammatical machine translation. In HLT-NAACL.
Riezler, S., & Vasserman, A. (2004). Incremental feature selection and L1 regularization for relaxed maximum-entropy modeling. In EMNLP.
Rosset, S., Zhu, J., & Hastie, T. (2004). Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 5, 941–973.
Schapire, R. E., & Singer, Y. (1999). Improved boosting using confidence-rated predictions. Machine Learning, 37(3), 297–336.
Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. (2004). Max-margin parsing. In EMNLP.
Taylor, A., Marcus, M., & Santorini, B. (2003). The Penn Treebank: An overview. In A. Abeillé (Ed.), Treebanks: Building and using parsed corpora (chap. 1).
Turian, J., & Melamed, I. D. (2005). Constituent parsing by classification. In IWPT.
Turian, J., & Melamed, I. D. (2006a). Advances in discriminative parsing. In ACL.
Turian, J., & Melamed, I. D. (2006b). Computational challenges in parsing by classification. In HLT-NAACL workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing.
Turian, J., Shen, L., & Melamed, I. D. (2003). Evaluation of machine translation and its evaluation. In MT Summit IX.
Vickrey, D., Biewald, L., Teyssier, M., & Koller, D. (2005). Word-sense disambiguation for machine translation. In EMNLP.
Wellington, B., Turian, J., Pike, C., & Melamed, I. D. (2006). Scalable purely-discriminative training for word and tree transducers. In AMTA.
2,358
314
Second Order Properties of Error Surfaces: Learning Time and Generalization

Yann Le Cun (AT&T Bell Laboratories, Crawfords Corner Rd., Holmdel, NJ 07733, USA), Ido Kanter (Department of Physics, Bar Ilan University, Ramat Gan, 52100, Israel), Sara A. Solla (AT&T Bell Laboratories, Crawfords Corner Rd., Holmdel, NJ 07733, USA)

Abstract

The learning time of a simple neural network model is obtained through an analytic computation of the eigenvalue spectrum for the Hessian matrix, which describes the second order properties of the cost function in the space of coupling coefficients. The form of the eigenvalue distribution suggests new techniques for accelerating the learning process, and provides a theoretical justification for the choice of centered versus biased state variables.

1 INTRODUCTION

Consider the class of learning algorithms which explore a space $\{W\}$ of possible couplings looking for optimal values $W^*$ for which a cost function $E(W)$ is minimal. The dynamical properties of searches based on gradient descent are controlled by the second order properties of the $E(W)$ surface. An analytic investigation of such properties provides a characterization of the time scales involved in the relaxation to the solution $W^*$. The discussion focuses on layered networks with no feedback, a class of architectures remarkably successful at perceptual tasks such as speech and image recognition. We derive rigorous results for the learning time of a single linear unit, and discuss their generalization to multi-layer nonlinear networks. Causes for the slowest time constants are identified, and specific prescriptions to eliminate their effect result in practical methods to accelerate convergence.

2 LEARNING BY GRADIENT DESCENT

Multi-layer networks are composed of model neurons interconnected through a feedforward graph. The state $x_i$ of the $i$-th neuron is computed from the states $\{x_j\}$ of the set $S_i$ of neurons that feed into it through the total input (or induced local field) $a_i = \sum_{j \in S_i} w_{ij} x_j$. The coefficient $w_{ij}$ of the linear combination is the coupling from neuron $j$ to neuron $i$. The local field $a_i$ determines the state $x_i$ through a nonlinear differentiable function $f$ called the activation function: $x_i = f(a_i)$. The activation function is often chosen to be the hyperbolic tangent or a similar sigmoid function.

The connection graph of multi-layer networks has no feedback loops, and the stable state is computed by propagating state information from the input units (which receive no input from other units) to the output units (which propagate no information to other units). The initialization of the state of the input units through an input vector $X$ results in an output vector $O$ describing the state of the output units. The network thus implements an input-output map, $O = O(X, W)$, which depends on the values assigned to the vector $W$ of synaptic couplings.

The learning process is formulated as a search in the space $\{W\}$, so as to find an optimal configuration $W^*$ which minimizes a function $E(W)$. Given a training set of $p$ input vectors $X^\mu$ and their desired outputs $D^\mu$, $1 \le \mu \le p$, the cost function

$$E(W) = \frac{1}{2p} \sum_{\mu=1}^{p} \| D^\mu - O(X^\mu, W) \|^2 \qquad (2.1)$$

measures the discrepancy between the actual behavior of the system and the desired behavior.
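As a concrete numerical sketch of this setup (our own example, assuming a single linear unit and noise-free targets; all names are illustrative), the cost of Eq. (2.1) and its gradient can be written down directly:

```python
import numpy as np

# Minimal sketch: gradient descent on the quadratic cost E(W) of Eq. (2.1)
# for a single linear unit O = W^T X.
rng = np.random.default_rng(0)
N, p = 20, 100                      # input dimension, number of training examples
X = rng.normal(size=(p, N))         # training inputs X^mu
W_true = rng.normal(size=N)
D = X @ W_true                      # desired outputs D^mu (noise-free for clarity)

def cost(W):
    return np.mean((D - X @ W) ** 2) / 2.0

def grad(W):
    # gradient of E(W); equals R W - Q with R = X^T X / p and Q = X^T D / p
    return -(X.T @ (D - X @ W)) / p

eta = 0.1                           # step size; must satisfy eta < 2 / lambda_max(R)
W = np.zeros(N)
for k in range(500):
    W = W - eta * grad(W)           # gradient-descent update
print(f"final cost: {cost(W):.3e}")
```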
The minimization of $E$ with respect to $W$ is usually performed through iterative updates using some form of gradient descent:

$$W(k+1) = W(k) - \eta \, \nabla E, \qquad (2.2)$$

where $\eta$ is used to adjust the size of the updating step, and $\nabla E$ is an estimate of the gradient of $E$ with respect to $W$. The commonly used Back-Propagation algorithm popularized by (Rumelhart, Hinton, and Williams, 1986) provides an efficient way of estimating $\nabla E$ for multi-layer networks. The dynamical behavior of learning algorithms based on the minimization of $E(W)$ through gradient descent is controlled by the properties of the $E(W)$ surface. The goal of this work is to gain better understanding of the structure of this surface through an investigation of its second derivatives, as contained in the Hessian matrix $H$.

3 SECOND ORDER PROPERTIES

We now consider a simple model which can be investigated analytically: an $N$-dimensional input vector feeding onto a single output unit with a linear activation function $f(a) = a$. The output corresponding to input $X^\mu$ is given by

$$O^\mu = \sum_{i=1}^{N} w_i x_i^\mu = W^T X^\mu, \qquad (3.1)$$

where $x_i^\mu$ is the $i$-th component of the $\mu$-th input vector, and $w_i$ is the coupling from the $i$-th input unit to the output.

The rule for weight updates

$$W(k+1) = W(k) - \frac{\eta}{p} \sum_{\mu=1}^{p} (O^\mu - d^\mu) X^\mu \qquad (3.2)$$

follows from the gradient of the cost function

$$E(W) = \frac{1}{2p} \sum_{\mu=1}^{p} (d^\mu - O^\mu)^2 = \frac{1}{2p} \sum_{\mu=1}^{p} (d^\mu - W^T X^\mu)^2. \qquad (3.3)$$

Note that the cost function of Eq. (3.3) is quadratic in $W$, and can be rewritten as

$$E(W) = \frac{1}{2} W^T R W - Q^T W + c, \qquad (3.4)$$

where $R$ is the covariance matrix of the input, $R_{ij} = \frac{1}{p} \sum_{\mu=1}^{p} x_i^\mu x_j^\mu$, a symmetric and nonnegative $N \times N$ matrix; the $N$-dimensional vector $Q$ has components $q_i = \frac{1}{p} \sum_{\mu=1}^{p} d^\mu x_i^\mu$; and the constant is $c = \frac{1}{2p} \sum_{\mu=1}^{p} (d^\mu)^2$. The gradient is given by $\nabla E = RW - Q$, while the Hessian matrix of second derivatives is $H = R$.

The solution space of vectors $W^*$ which minimize $E(W)$ is the subspace of solutions of the linear equation $RW = Q$, resulting from $\nabla E = 0$. This subspace reduces to a point if $R$ is full rank. The diagonalization of $R$ provides a diagonal matrix $\Lambda$ formed by its eigenvalues, and a matrix $U$ formed by its eigenvectors. Since $R$ is nonnegative, all eigenvalues satisfy $\lambda \ge 0$.

Consider now a two-step coordinate transformation: a translation $V' = W - W^*$ provides new coordinates centered at the solution point; it is followed by a rotation $V = UV' = U(W - W^*)$ onto the principal axes of the error surface. In the new coordinate system,

$$E(V) = E_0 + \frac{1}{2} V^T \Lambda V, \qquad (3.5)$$

with $\Lambda = U R U^T$ and $E_0 = E(W^*)$. Then $\partial E / \partial v_j = \lambda_j v_j$, and $\partial^2 E / \partial v_j \partial v_k = \lambda_j \delta_{jk}$. The eigenvalues of the input covariance matrix give the second derivatives of the error surface with respect to its principal axes. In the new coordinate system the Hessian matrix is the diagonal matrix $\Lambda$, and the rule for weight updates becomes a set of $N$ decoupled equations:

$$V(k+1) = V(k) - \eta \Lambda V(k). \qquad (3.6)$$

The evolution of each component along a principal direction is given by

$$v_j(k) = (1 - \eta \lambda_j)^k \, v_j(0), \qquad (3.7)$$

so that $v_j$ (and thus $w_j$, to the solution $w_j^*$) will converge to zero provided that $0 < \eta < 2/\lambda_j$. In this regime $v_j$ decays to zero exponentially, with characteristic time $\tau_j = (\eta \lambda_j)^{-1}$. The range $1/\lambda_j < \eta < 2/\lambda_j$ corresponds to underdamped dynamics: the step size is large and convergence to the solution occurs through oscillatory behavior. The range $0 < \eta < 1/\lambda_j$ corresponds to overdamped dynamics: the step size is small and convergence requires many iterations.
Critical damping occurs for $\eta = 1/\lambda_j$; if such a choice is possible, the solution is reached in one iteration (Newton's method).

If all eigenvalues are equal, $\lambda_j = \lambda$ for all $1 \le j \le N$, the Hessian matrix is diagonal: $H = \lambda I$. Convergence can be obtained in one iteration, with optimal step size $\eta = 1/\lambda$, and learning time $\tau = 1$. This highly symmetric case occurs when cross-sections of $E(W)$ are hyperspheres in the $N$-dimensional space $\{W\}$. Such a high degree of symmetry is rarely encountered: correlated inputs result in nondiagonal elements for $H$, and the principal directions are rotated with respect to the original coordinates. The cross-sections of $E(W)$ are elliptical, with different eigenvalues along different principal directions. Convergence requires $0 < \eta < 2/\lambda_j$ for all $1 \le j \le N$, thus $\eta$ must be chosen in the range $0 < \eta < 2/\lambda_{max}$, where $\lambda_{max}$ is the largest eigenvalue. The slowest time constant in the system is $\tau_{max} = (\eta \lambda_{min})^{-1}$, where $\lambda_{min}$ is the lowest nonzero eigenvalue. The optimal step size $\eta = 1/\lambda_{max}$ thus leads to $\tau_{max} = \lambda_{max}/\lambda_{min}$ for the decay along the principal direction of smallest nonzero curvature. A distribution of eigenvalues in the range $\lambda_{min} \le \lambda \le \lambda_{max}$ results in a distribution of learning times, with average $\langle \tau \rangle = \lambda_{max} \langle 1/\lambda \rangle$.

This analysis demonstrates that learning dynamics in quadratic surfaces are fully controlled by the eigenvalue distribution of the Hessian matrix. It is thus of interest to investigate such eigenvalue distribution.

4 EIGENVALUE SPECTRUM

The simple linear unit of Eq. (3.1) leads to the error function (3.4), for which the Hessian is given by the covariance matrix

$$R_{ij} = \frac{1}{p} \sum_{\mu=1}^{p} x_i^\mu x_j^\mu. \qquad (4.1)$$

It is assumed that the input components $\{x_i^\mu\}$ are independent, and drawn from a distribution with mean $m$ and variance $v$. The size of the training set is quantified by the ratio $\alpha = p/N$ between the number of training examples and the dimensionality of the input vector. The eigenvalue spectrum has been computed (Le Cun, Kanter, and Solla, 1990), and it exhibits three dominant features:

(a) If $p < N$, the rank of the matrix $R$ is $p$. The existence of $(N - p)$ zero eigenvalues out of $N$ results in a delta function contribution of weight $(1 - \alpha)$ at $\lambda = 0$ for $\alpha < 1$.

(b) A continuous part of the spectrum,

$$\rho(\lambda) = \frac{\alpha}{2\pi v \lambda} \sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}, \qquad (4.2)$$

within the bounded interval $\lambda_- \le \lambda \le \lambda_+$, with $\lambda_\pm = (1 \pm \sqrt{\alpha})^2 \, v/\alpha$ (Krogh and Hertz, 1991). Note that $\rho(\lambda)$ is controlled only by the variance $v$ of the distribution from which the inputs are drawn. The bounds $\lambda_\pm$ are well defined, and of order one. For all $\alpha < 1$, $\lambda_- > 0$, indicating a gap at the lower end of the spectrum.

(c) An isolated eigenvalue of order $N$, $\lambda_N$, present in the case of biased inputs ($m \ne 0$). True correlations between pairs $(x_i, x_j)$ of input components might lead to a quite different spectrum from the one described above.

The continuous part (4.2) of the eigenvalue spectrum has been computed in the $N \to \infty$ limit, while keeping $\alpha$ constant and finite. The magnitude of finite size effects has been investigated numerically for $N \le 200$ and various values of $\alpha$. Results for $N = 200$, shown in Fig. 1, indicate that finite size effects are negligible: the distribution $\rho(\lambda)$ is bounded within the interval $[\lambda_-, \lambda_+]$, in good agreement with the theoretical prediction, even for such small systems. The result (4.2) is thus applicable in the finite $p = \alpha N$ case, an important regime given the limited availability of training data in most learning problems.
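The three features above are easy to verify numerically; the following sketch (our own, assuming iid inputs and the quantities defined above) compares the empirical spectrum of $R$ with the predicted support of Eq. (4.2), and shows the isolated eigenvalue for biased inputs:

```python
import numpy as np

# Illustrative sketch: empirical eigenvalue spectrum of the Hessian
# R = (1/p) sum_mu x^mu x^mu^T for iid inputs, compared with the support
# [lambda_-, lambda_+] of the continuous density of Eq. (4.2).
rng = np.random.default_rng(1)
N, alpha, v = 200, 4.0, 1.0
p = int(alpha * N)
X = rng.choice([-1.0, 1.0], size=(p, N))   # mean m = 0, variance v = 1, as in Fig. 1
R = X.T @ X / p
eigs = np.linalg.eigvalsh(R)

lam_minus = v * (1 - np.sqrt(alpha)) ** 2 / alpha
lam_plus = v * (1 + np.sqrt(alpha)) ** 2 / alpha
print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"predicted range: [{lam_minus:.3f}, {lam_plus:.3f}]")

# Biased inputs (m != 0) add an isolated eigenvalue of order N m^2 + v:
Xb = X + 0.5                               # now m = 0.5
Rb = Xb.T @ Xb / p
print(f"largest eigenvalue, biased inputs: {np.linalg.eigvalsh(Rb).max():.1f}"
      f"  (N m^2 + v = {N * 0.25 + v:.1f})")
```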
Figure 1: Spectral density $\rho(\lambda)$ predicted by Eq. (4.2) for $m = 0$, $v = 1$, and $\alpha = 0.6, 1.2, 4$, and $16$. Experimental histograms for $\alpha = 0.6$ (full squares) and $\alpha = 4$ (open squares) are averages over 100 trials with $N = 200$ and $x_i^\mu = \pm 1$ with probability 1/2 each.

The existence of a large eigenvalue $\lambda_N$ is easily understood by considering the structure of the covariance matrix $R$ in the $p \to \infty$ limit, a regime for which a detailed analysis is available in the adaptive filters literature (Widrow and Stearns, 1985). In this limit, all off-diagonal elements of $R$ are equal to $m^2$, and all diagonal elements are equal to $v + m^2$. The eigenvector $U_N = (1 \ldots 1)$ thus corresponds to the eigenvalue $\lambda_N = N m^2 + v$. The remaining $(N-1)$ eigenvalues are all equal to $v$ (note that the continuous part of the spectrum collapses onto a delta function at $\lambda_- = \lambda_+ = v$ as $p \to \infty$), thus satisfying $\mathrm{tr}\, R = N(m^2 + v)$. The large eigenvalue $\lambda_N$ is eliminated for centered distributions with $m = 0$, such as $x_i^\mu = \pm 1$ with probability 1/2, or $x_i^\mu = 3, -1, -2$ with probability 1/3. Note that although $m$ is crucial in controlling the existence of an isolated eigenvalue of order $N$, it plays no role in the spectral density of Eq. (4.2).

5 LEARNING TIME

Consider the learning time $T = \alpha (\lambda_{max}/\lambda_{min})$. The eigenvalue ratio $(\lambda_{max}/\lambda_{min})$ measures the maximum number of iterations, and the factor of $\alpha$ accounts for the time needed for each presentation of the full training set.

For $m = 0$, $\lambda_{max} = \lambda_+$ and $\lambda_{min} = \lambda_-$. The learning time $T = \alpha(\lambda_+/\lambda_-)$ can be easily computed using Eq. (4.2): $T = \alpha (1 + \sqrt{\alpha})^2 / (1 - \sqrt{\alpha})^2$. As a function of $\alpha$, $T$ diverges at $\alpha = 1$, and, surprisingly, goes through a minimum at $\alpha = (1 + \sqrt{2})^2 = 5.83$ before diverging linearly for $\alpha \to \infty$. Numerical simulations were performed to estimate $T$ by counting the number $\tilde{T}$ of presentations of training examples needed to reach an allowed error level $\tilde{E}$ through gradient descent. If the prescribed error $\tilde{E}$ is sufficiently close to the minimum error $E_0$, $\tilde{T}$ is controlled by the slowest mode, and it provides a good estimate for $T$. Numerical results for $\tilde{T}$ as a function of $\alpha$, shown in Fig. 2, were obtained by training a single linear neuron on randomly generated vectors. As predicted, the curve exhibits a clear maximum at $\alpha = 1$, as well as a minimum between $\alpha = 4$ and $\alpha = 5$. The existence of such an optimal training set size for fast learning is a surprising result.

Figure 2: Number of iterations $\tilde{T}$ (averaged over 20 trials) needed to train a linear neuron with $N = 100$ inputs. The $x_j^\mu$ are uniformly distributed between $-1$ and $+1$. Initial and target couplings $W$ are chosen randomly from a uniform distribution within the $[-1, +1]^N$ hypercube. Gradient descent is considered complete when the error reaches the prescribed value $\tilde{E} = 0.001$ above the $E_0 = 0$ minimum value.

Biased inputs $m \ne 0$ produce a large eigenvalue $\lambda_{max} = \lambda_N$, proportional to $N$ and responsible for slow convergence. A simple approach to reducing the learning time is to center each input variable $x_i$ by subtracting its mean. An obvious source of systematic bias $m$ is the use of activation functions which restrict the state variables to the $[0, 1]$ interval.
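A short sketch (our own, assuming the quantities defined above) illustrates both points: the conditioning penalty of biased inputs, which centering removes, and the learning-time curve $T(\alpha)$ with its minimum near $(1 + \sqrt{2})^2$:

```python
import numpy as np

# Rough sketch: effect of centering on the Hessian conditioning, and the
# theoretical learning-time curve T(alpha).
rng = np.random.default_rng(2)
N, p = 100, 600
X = rng.uniform(0.0, 1.0, size=(p, N))      # biased inputs with m = 0.5

def condition(Z):
    eigs = np.linalg.eigvalsh(Z.T @ Z / len(Z))
    return eigs.max() / eigs[eigs > 1e-10].min()

print(f"lambda_max/lambda_min, biased:   {condition(X):.1f}")
print(f"lambda_max/lambda_min, centered: {condition(X - X.mean(axis=0)):.1f}")

# T(alpha) = alpha (1 + sqrt(alpha))^2 / (1 - sqrt(alpha))^2, for alpha > 1
alphas = np.linspace(1.5, 20.0, 2000)
T = alphas * (1 + np.sqrt(alphas)) ** 2 / (1 - np.sqrt(alphas)) ** 2
print(f"minimum of T near alpha = {alphas[np.argmin(T)]:.2f}"
      f"  (predicted (1 + sqrt(2))^2 = {(1 + np.sqrt(2)) ** 2:.2f})")
```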
Symmetric activation functions such as the hyperbolic tangent are empirically known to yield faster convergence than their nonsymmetric counterparts such as the logistic function. Our results provide an explanation for this observation.

The extension of these results to multi-layer networks rests on the observation that each neuron $i$ receives state information $\{x_j\}$ from the $j \in S_i$ neurons that feed into it, and can be viewed as minimizing a local objective function $E_i$ whose Hessian involves the covariance matrix of such inputs. If all input variables are uncorrelated and have zero mean, no large eigenvalues will appear. But states with $x_j = m \ne 0$ produce eigenvalues proportional to the number of input neurons $N_i$ in the set $S_i$, resulting in slow convergence if the connectivity is large. An empirically known solution to this problem, justified by our theoretical analysis, is to use individual learning rates $\eta_i$ inversely proportional to the number of inputs $N_i$ to the $i$-th neuron. Yet another approach is to keep a running estimate of the average $\bar{x}_i$ and use centered state variables $\tilde{x}_i = x_i - \bar{x}_i$. Such an algorithm results in considerable reductions in learning time.

6 CONCLUSIONS

Our results are based on a rigorous calculation of the eigenvalue spectrum for a symmetric matrix constructed from the outer product of random vectors. The spectral density provides a full description of the relaxation of a single adaptive linear unit, and yields a surprising result for the optimal size of the training set in batch learning. Various aspects of the dynamics of learning in multi-layer networks composed of nonlinear units are clarified: the theory justifies known empirical methods and suggests novel approaches to reduce learning times.

References

A. Krogh and J. A. Hertz (1991), 'Dynamics of generalization in linear perceptrons', in Advances in Neural Information Processing Systems 3, ed. by D. S. Touretzky and R. Lippman, Morgan Kaufmann (California).

Y. Le Cun, I. Kanter, and S. A. Solla (1990), 'Eigenvalues of covariance matrices: application to neural-network learning', Phys. Rev., to be published.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1986), 'Learning representations by back-propagating errors', Nature 323, 533-536.

B. Widrow and S. D. Stearns (1985), Adaptive Signal Processing, Prentice-Hall (New Jersey).
A recipe for optimizing a time-histogram Hideaki Shimazaki Department of Physics, Graduate School of Science Kyoto University Kyoto 606-8502, Japan shimazaki@ton.scphys.kyoto-u.ac.jp Shigeru Shinomoto Department of Physics, Graduate School of Science Kyoto University Kyoto 606-8502, Japan shinomoto@scphys.kyoto-u.ac.jp Abstract The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution. 1 Introduction The rate of spike occurrence, or the firing rate, of a neuron can be captured by the (peri-stimulus) time-histogram (PSTH) [1, 2], which is constructed easily as follows: Align spike sequences to the onset of stimuli, divide time into discrete bins, count the number of spikes that enter each bin, and divide the counts by the bin size and the number of sequences. The shape of a PSTH depends on the choice of the bin size. With too large a bin size, one cannot represent the detailed time-dependent rate, while with too small a bin size, the time-histogram fluctuates greatly and one cannot discern the underlying spike rate. There exists an ideal bin size for estimating the spike rate for each set of experimental data. This important parameter has mostly been selected subjectively by individual researchers. We devised a method of selecting the bin size objectively so that a PSTH best approximates the underlying rate, which is unknown. In the course of our study, we found an interesting paper that proposed an empirical method of choosing the histogram bin size for a probability density function (Rudemo M, (1982) Scandinavian Journal of Statistics 9: 65-78 [3]). Although applicable to a Poisson point process, this theory appears to have rarely been applied to PSTHs. It would be preferable to have a theory in accordance with the procedures of neurophysiological experiments in which a stimulus is repeated to extract a signal from a neuron. Given a set of experimental data, we wish to not only determine the optimal bin size, but also estimate how many more experimental trials should be performed in order to obtain a resolution we deem sufficient. It was revealed by a theoretical analysis that the optimal bin size may diverge for a small number of spike sequences derived from a moderately fluctuating rate [4]. This implies that any attempt to characterize the underlying rate will lead to spurious results. The present method can indicate the divergence of the optimal bin size only from the spike data. 
Even under such a condition, the present method nevertheless provides an inference on the number of trials that need to be performed in order to obtain a meaningful estimated rate.

2 Methods

We consider sequences of spikes repeatedly recorded from identical experimental trials. A recent analysis revealed that in vivo spike trains are not simply random, but possess inter-spike-interval distributions intrinsic and specific to individual neurons [5, 6]. However, spikes accumulated from a large number of spike trains recorded from a single neuron are, in the majority, mutually independent. Being free from the intrinsic inter-spike-interval distributions of individual spike trains, the accumulated spikes can be regarded as being derived repeatedly from Poisson processes of an identical time-dependent rate [7, 8].

It would be natural to assess the goodness of the fit of the estimator $\hat{\lambda}_t$ to the underlying spike rate $\lambda_t$ over the total observation period $T$ by the mean integrated squared error (MISE),

$$\mathrm{MISE} \equiv \frac{1}{T} \int_0^T E\, (\hat{\lambda}_t - \lambda_t)^2 \, dt, \qquad (1)$$

where $E$ refers to the expectation over different realizations of point events, given $\lambda_t$. We suggest a method for minimizing the MISE with respect to the bin size $\Delta$. The difficulty of the present problem comes from the fact that the underlying spike rate $\lambda_t$ is not known.

2.1 Selection of the bin size

We choose the (bar-graph) PSTH as a way to estimate the rate $\hat{\lambda}_t$, and explore a method to select the bin size of a PSTH that minimizes the MISE in Eq. (1). A PSTH is constructed simply by counting the number of spikes that belong to each bin. For an observation period $T$, we obtain $N = \lfloor T/\Delta \rfloor$ intervals. The number of spikes accumulated from all $n$ sequences in the $i$th interval is counted as $k_i$. The bar height at the $i$th bin is given by $k_i/(n\Delta)$. Given a bin of width $\Delta$, the expected height of a bar graph for $t \in [0, \Delta]$ is the time-averaged rate,

$$\theta = \frac{1}{\Delta} \int_0^\Delta \lambda_t \, dt. \qquad (2)$$

The total number of spikes $k$ from $n$ spike sequences that enter a bin of width $\Delta$ obeys a Poisson distribution with the expected number $n\Delta\theta$,

$$p(k \mid n\Delta\theta) = \frac{(n\Delta\theta)^k}{k!} \, e^{-n\Delta\theta}. \qquad (3)$$

The unbiased estimator for $\theta$ is given as $\hat{\theta} = k/(n\Delta)$, which is the empirical height of the bar graph for $t \in [0, \Delta]$. By segmenting the total observation period $T$ into $N$ intervals of size $\Delta$, the MISE defined in Eq. (1) can be rewritten as

$$\mathrm{MISE} = \frac{1}{\Delta} \int_0^\Delta \frac{1}{N} \sum_{i=1}^{N} E\, \big( \hat{\theta}_i - \lambda_{t+(i-1)\Delta} \big)^2 \, dt, \qquad (4)$$

where $\hat{\theta}_i \equiv k_i/(n\Delta)$. Hereafter we denote the average over those segmented rates $\lambda_{t+(i-1)\Delta}$ as an average over an ensemble of (segmented) rate functions $\{\lambda_t\}$ defined in an interval of $t \in [0, \Delta]$:

$$\mathrm{MISE} = \frac{1}{\Delta} \int_0^\Delta \left\langle E\, (\hat{\theta} - \lambda_t)^2 \right\rangle dt. \qquad (5)$$

Table 1: A method for bin size selection for a PSTH

(i) Divide the observation period $T$ into $N$ bins of width $\Delta$, and count the number of spikes $k_i$ from all $n$ sequences that enter the $i$th bin.

(ii) Construct the mean and variance of the number of spikes $\{k_i\}$ as
$$\bar{k} \equiv \frac{1}{N} \sum_{i=1}^{N} k_i, \quad \text{and} \quad v \equiv \frac{1}{N} \sum_{i=1}^{N} (k_i - \bar{k})^2.$$

(iii) Compute the cost function
$$C_n(\Delta) = \frac{2\bar{k} - v}{(n\Delta)^2}.$$

(iv) Repeat (i) through (iii) while changing the bin size $\Delta$, to search for the $\Delta^*$ that minimizes $C_n(\Delta)$.

The expectation $E$ now refers to the average over the spike count, or $\hat{\theta} = k/(n\Delta)$, given a rate function $\lambda_t$, or its mean value, $\theta$. The MISE can be decomposed into two parts,

$$\mathrm{MISE} = \frac{1}{\Delta} \int_0^\Delta \left\langle E\, (\hat{\theta} - \theta + \theta - \lambda_t)^2 \right\rangle dt = \left\langle E (\hat{\theta} - \theta)^2 \right\rangle + \frac{1}{\Delta} \int_0^\Delta \left\langle (\theta - \lambda_t)^2 \right\rangle dt. \qquad (6)$$

The first and second terms are respectively the stochastic fluctuation of the estimator $\hat{\theta}$
around the expected mean rate $\theta$, and the temporal fluctuation of $\lambda_t$ around its mean $\theta$ over an interval of length $\Delta$, averaged over the segments. The second term of Eq. (6) can further be decomposed into two parts,

$$\frac{1}{\Delta} \int_0^\Delta \left\langle (\lambda_t - \langle\theta\rangle + \langle\theta\rangle - \theta)^2 \right\rangle dt = \frac{1}{\Delta} \int_0^\Delta \left\langle (\lambda_t - \langle\theta\rangle)^2 \right\rangle dt - \left\langle (\theta - \langle\theta\rangle)^2 \right\rangle. \qquad (7)$$

The first term on the rhs of Eq. (7) represents the mean squared fluctuation of the underlying rate $\lambda_t$ from the mean rate $\langle\theta\rangle$, and is independent of the bin size $\Delta$, because

$$\frac{1}{\Delta} \int_0^\Delta \left\langle (\lambda_t - \langle\theta\rangle)^2 \right\rangle dt = \frac{1}{T} \int_0^T (\lambda_t - \langle\theta\rangle)^2 \, dt. \qquad (8)$$

We define a cost function by subtracting this term from the original MISE,

$$C_n(\Delta) \equiv \mathrm{MISE} - \frac{1}{\Delta} \int_0^\Delta \left\langle (\lambda_t - \langle\theta\rangle)^2 \right\rangle dt = \left\langle E(\hat{\theta} - \theta)^2 \right\rangle - \left\langle (\theta - \langle\theta\rangle)^2 \right\rangle. \qquad (9)$$

This cost function corresponds to the 'risk function' in the report by Rudemo (Eq. 2.3), obtained by direct decomposition of the MISE [3]. The second term in Eq. (9) represents the temporal fluctuation of the expected mean rate $\theta$ for individual intervals of period $\Delta$. As the expected mean rate is not an observable quantity, we must replace the fluctuation of the expected mean rate with that of the observable estimator $\hat{\theta}$. Using the decomposition rule for an unbiased estimator ($E\hat{\theta} = \theta$),

$$\left\langle E(\hat{\theta} - \langle E\hat{\theta}\rangle)^2 \right\rangle = \left\langle E(\hat{\theta} - \theta + \theta - \langle\theta\rangle)^2 \right\rangle = \left\langle E(\hat{\theta} - \theta)^2 \right\rangle + \left\langle (\theta - \langle\theta\rangle)^2 \right\rangle, \qquad (10)$$

the cost function is transformed into

$$C_n(\Delta) = 2 \left\langle E(\hat{\theta} - \theta)^2 \right\rangle - \left\langle E(\hat{\theta} - \langle E\hat{\theta}\rangle)^2 \right\rangle. \qquad (11)$$

Due to the assumed Poisson nature of the point process, the number of spikes $k$ counted in each bin obeys a Poisson distribution: the variance of $k$ is equal to the mean. For the estimated rate defined as $\hat{\theta} = k/(n\Delta)$, this variance-mean relation corresponds to

$$E(\hat{\theta} - \theta)^2 = \frac{1}{n\Delta} E\hat{\theta}. \qquad (12)$$

By incorporating Eq. (12) into Eq. (11), the cost function is given as a function of the estimator $\hat{\theta}$,

$$C_n(\Delta) = \frac{2}{n\Delta} \left\langle E\hat{\theta} \right\rangle - \left\langle E(\hat{\theta} - \langle E\hat{\theta}\rangle)^2 \right\rangle. \qquad (13)$$

The optimal bin size is obtained by minimizing the cost function $C_n(\Delta)$:

$$\Delta^* \equiv \arg\min_\Delta C_n(\Delta). \qquad (14)$$

By replacing the expectation of $\hat{\theta}$ in Eq. (13) with the sample spike counts, the method is converted into a user-friendly recipe summarized in Table 1.

2.2 Extrapolation of the cost function

With the method developed in the preceding subsection, we can determine the optimal bin size for a given set of experimental data. In this section, we develop a method to estimate how the optimal bin size decreases when more experimental trials are added to the data set. Assume that we are in possession of $n$ spike sequences. The fluctuation of the expected mean rate $\langle (\theta - \langle\theta\rangle)^2 \rangle$ in Eq. (10) is replaced with the empirical fluctuation of the time-histogram $\hat{\theta}_n$, using the decomposition rule for the unbiased estimator $\hat{\theta}_n$ satisfying $E\hat{\theta}_n = \theta$,

$$\left\langle E(\hat{\theta}_n - \langle E\hat{\theta}_n\rangle)^2 \right\rangle = \left\langle E(\hat{\theta}_n - \theta + \theta - \langle\theta\rangle)^2 \right\rangle = \left\langle E(\hat{\theta}_n - \theta)^2 \right\rangle + \left\langle (\theta - \langle\theta\rangle)^2 \right\rangle. \qquad (15)$$

The expected cost function for $m$ sequences can be obtained by substituting the above equation into Eq. (9), yielding

$$C_m(\Delta \mid n) = \left\langle E(\hat{\theta}_m - \theta)^2 \right\rangle + \left\langle E(\hat{\theta}_n - \theta)^2 \right\rangle - \left\langle E(\hat{\theta}_n - \langle E\hat{\theta}_n\rangle)^2 \right\rangle. \qquad (16)$$

Using the variance-mean relation for the Poisson distribution, Eq. (12), and

$$E(\hat{\theta}_m - \theta)^2 = \frac{1}{m\Delta} E\hat{\theta}_m = \frac{1}{m\Delta} E\hat{\theta}_n, \qquad (17)$$

we obtain

$$C_m(\Delta \mid n) = \left( \frac{1}{m} - \frac{1}{n} \right) \frac{1}{\Delta} \left\langle E\hat{\theta}_n \right\rangle + C_n(\Delta), \qquad (18)$$

where $C_n(\Delta)$ is the original cost function, Eq. (13), computed using the estimators $\hat{\theta}_n$. By replacing the expectation with sample spike count averages, the cost function for $m$ sequences can be extrapolated as $C_m(\Delta \mid n)$ with this formula, using the sample mean $\bar{k}$ and variance $v$ of the numbers of spikes, given $n$ sequences and the bin size $\Delta$. The extrapolation method is summarized in Table 2.
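The recipes of Tables 1 and 2 translate directly into code; the following is a minimal sketch (function and variable names are ours), assuming the spikes of all $n$ sequences have been pooled and aligned to stimulus onset:

```python
import numpy as np

# Sketch of the recipes in Tables 1 and 2.
def cost(spike_times, n, delta, t_total):
    """Empirical cost C_n(Delta) of Table 1."""
    edges = np.arange(0, t_total + delta, delta)
    k, _ = np.histogram(spike_times, bins=edges)
    k_bar, v = k.mean(), k.var()                 # biased variance, as in step (ii)
    return (2.0 * k_bar - v) / (n * delta) ** 2

def optimal_bin(spike_times, n, t_total, deltas):
    costs = [cost(spike_times, n, d, t_total) for d in deltas]
    return deltas[int(np.argmin(costs))]

def extrapolated_cost(spike_times, n, m, delta, t_total):
    """C_m(Delta | n) of Table 2: expected cost for m sequences."""
    edges = np.arange(0, t_total + delta, delta)
    k, _ = np.histogram(spike_times, bins=edges)
    return (1.0 / m - 1.0 / n) * k.mean() / (n * delta ** 2) \
        + cost(spike_times, n, delta, t_total)

# Example: n = 30 Poisson sequences with rate 30 + 10 sin(2 pi t) [1/s],
# generated by thinning and pooled across sequences.
rng = np.random.default_rng(0)
n, t_total, lam_bound = 30, 30.0, 40.0
cand = rng.uniform(0, t_total, rng.poisson(n * lam_bound * t_total))
keep = rng.uniform(size=cand.size) < (30 + 10 * np.sin(2 * np.pi * cand)) / lam_bound
spikes = np.sort(cand[keep])
deltas = np.linspace(0.005, 0.5, 100)
print(f"optimal bin size: {optimal_bin(spikes, n, t_total, deltas):.3f} s")
```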
It may come to pass that the original cost function $C_n(\Delta)$ computed for $n$ spike sequences does not have a minimum, or has a minimum at a bin size comparable to the observation period $T$. In such a case, with the method summarized in Table 2, one may estimate the critical number of sequences $n_c$ above which the cost function has a finite optimal bin size $\Delta^*$, and consider carrying out more experiments to obtain a reasonable rate estimation. In the case that the optimal bin size exhibits continuous divergence, the cost function can be expanded as

$$C_n(\Delta) \sim \mu \left( \frac{1}{n} - \frac{1}{n_c} \right) \frac{1}{\Delta} + u \, \frac{1}{\Delta^2}, \qquad (19)$$

where $\mu$ is the mean rate and we have introduced $n_c$ and $u$, which are independent of $n$. The optimal bin size undergoes a phase transition from a vanishing $1/\Delta^*$ for $n < n_c$ to a finite $1/\Delta^*$ for $n > n_c$. In this case, the inverse optimal bin size is expanded in the vicinity of $n_c$ as $1/\Delta^* \propto (1/n_c - 1/n)$. We can estimate the critical value $\hat{n}_c$ by applying this asymptotic relation to the set of $\Delta^*_m$ estimated from $C_m(\Delta \mid n)$ for various values of $m$:

$$\frac{1}{\Delta^*_m} \propto \frac{1}{\hat{n}_c} - \frac{1}{m}. \qquad (20)$$

Table 2: A method for extrapolating the cost function for a PSTH

(A) Construct the extrapolated cost function
$$C_m(\Delta \mid n) = \left( \frac{1}{m} - \frac{1}{n} \right) \frac{\bar{k}}{n\Delta^2} + C_n(\Delta),$$
using the sample mean $\bar{k}$ and variance $v$ of the number of spikes obtained from $n$ sequences of spikes.

(B) Search for the $\Delta^*_m$ that minimizes $C_m(\Delta \mid n)$.

(C) Repeat (A) and (B) while changing $m$, and plot $1/\Delta^*_m$ vs. $1/m$ to search for the critical value $1/m = 1/\hat{n}_c$ above which $1/\Delta^*_m$ practically vanishes.

It should be noted that there are cases in which the optimal bin size exhibits a discontinuous divergence from a finite value. Even in such cases, the plot of $\{1/m, 1/\Delta^*\}$ could be useful in exploring a discontinuous transition from nonvanishing values of $1/\Delta^*$ to practically vanishing values.

2.3 Theoretical cost function

In this section, we obtain a 'theoretical' cost function directly from a process with a known underlying rate, $\lambda_t$, and compare it with the 'empirical' cost function, which can be evaluated without knowing the rate process. Note that this theoretical cost function is not available in real experimental conditions, in which the underlying rate is not known.

The present estimator $\hat{\theta} = k/(n\Delta)$ is a uniformly minimum variance unbiased estimator (UMVUE) of $\theta$, which achieves the lower bound of the Cramér-Rao inequality [9, 10],

$$E(\hat{\theta} - \theta)^2 = \left[ - \sum_{k=0}^{\infty} p(k \mid \theta) \, \frac{\partial^2 \log p(k \mid \theta)}{\partial \theta^2} \right]^{-1} = \frac{\theta}{n\Delta}. \qquad (21)$$

Inserting this into Eq. (9), the cost function is represented as

$$C_n(\Delta) = \frac{\langle\theta\rangle}{n\Delta} - \left\langle (\theta - \langle\theta\rangle)^2 \right\rangle = \frac{\mu}{n\Delta} - \frac{1}{\Delta^2} \int_0^\Delta \!\! \int_0^\Delta \phi(t_1 - t_2) \, dt_1 \, dt_2, \qquad (22)$$

where $\mu$ is the mean rate, and $\phi(t)$ is the autocorrelation function of the rate fluctuation, $\lambda_t - \mu$. Based on the symmetry $\phi(t) = \phi(-t)$, the cost function can be rewritten as

$$C_n(\Delta) = \frac{\mu}{n\Delta} - \frac{1}{\Delta^2} \int_{-\Delta}^{\Delta} (\Delta - |t|) \, \phi(t) \, dt \approx \frac{\mu}{n\Delta} - \frac{1}{\Delta} \int_{-\infty}^{\infty} \phi(t) \, dt + \frac{1}{\Delta^2} \int_{-\infty}^{\infty} |t| \, \phi(t) \, dt, \qquad (23)$$

which can be identified with Eq. (19) with parameters given by

$$n_c = \mu \Big/ \int_{-\infty}^{\infty} \phi(t) \, dt, \qquad (24)$$

$$u = \int_{-\infty}^{\infty} |t| \, \phi(t) \, dt. \qquad (25)$$
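For concreteness, the theoretical quantities of Eqs. (22)-(25) can be evaluated numerically once an autocorrelation $\phi(t)$ is specified; the sketch below assumes (our choice, purely for illustration) an exponential autocorrelation $\phi(t) = \sigma^2 e^{-|t|/\tau}$:

```python
import numpy as np

# Sketch of the 'theoretical' cost function, Eqs. (22)-(25), for an assumed
# exponential autocorrelation of the rate fluctuation.
mu, sigma, tau, n = 30.0, 10.0, 0.05, 30

def phi(t):
    return sigma ** 2 * np.exp(-np.abs(t) / tau)

def theoretical_cost(delta):
    # C_n(Delta) = mu/(n Delta) - (1/Delta^2) int_{-D}^{D} (D - |t|) phi(t) dt
    t = np.linspace(-delta, delta, 2001)
    corr = np.sum((delta - np.abs(t)) * phi(t)) * (t[1] - t[0])
    return mu / (n * delta) - corr / delta ** 2

deltas = np.linspace(0.005, 1.0, 500)
costs = [theoretical_cost(d) for d in deltas]
print(f"optimal bin size (theory): {deltas[int(np.argmin(costs))]:.3f} s")

# Critical number of sequences, Eq. (24), for this assumed phi:
# int phi(t) dt = 2 sigma^2 tau, so n_c = mu / (2 sigma^2 tau)
n_c = mu / (2 * sigma ** 2 * tau)
print(f"critical number of sequences n_c = {n_c:.1f}")
```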
Figure 1: A: (Dots): The empirical cost function, $C_n(\Delta)$, computed from spike data according to the method in Table 1. (Solid line): The 'theoretical' cost function computed directly from the underlying fluctuating rate, with Eq. (22). B: (Above): The underlying fluctuating rate $\lambda_t$. (Middle): Spike sequences derived from the rate. (Below): Time-histograms made using three types of bin sizes: too small, optimal, and too large. Model parameters: the number of sequences $n = 30$; total observation period $T = 30$ [s]; the mean rate $\mu = 30$ [1/s]; the amplitude of rate fluctuation $\sigma = 10$ [1/s]; time scale of rate fluctuation $\tau = 0.05$ [s].

3 Results

Our first objective was to develop a method for selecting the ideal bin size using spike sequences derived repeatedly from Poisson processes, all with a given identical rate $\lambda_t$. The MISE of the PSTH from the underlying rate is minimized by minimizing the cost function $C_n(\Delta)$. Figure 1A displays the cost function computed with the method summarized in Table 1. This 'empirical' cost function is compared with the 'theoretical' cost function, Eq. (22), that is computed directly from the underlying rate $\lambda_t$. The figure exhibits that the 'empirical' cost function is consistent with the 'theoretical' cost function. The time-histogram constructed using the optimal bin size is compared with those constructed using non-optimal bin sizes in Fig. 1B, demonstrating the effectiveness of the present method of bin size selection.

We also tested the method for extrapolating the cost function. Figures 2A and B demonstrate the extrapolated cost functions for several sequences with differing values of $m$, and the plot of $\{1/m, 1/\Delta^*\}$ for estimating the critical value $1/m = 1/\hat{n}_c$, above which $1/\Delta^*$ practically vanishes. Figure 2C depicts the critical number $\hat{n}_c$ estimated from smaller or larger numbers of spike sequences $n$. The empirically estimated critical number $\hat{n}_c$ approximates the theoretically predicted critical number $n_c$ computed using Eq. (24). Note that the critical number is correctly estimated even from a small number of sequences, with which the optimal bin size practically diverges ($n < n_c$).
Acknowledgements This study is supported in part by Grants-in-Aid for Scientific Research to SS from the Ministry of Education, Culture, Sports, Science and Technology of Japan (16300068, 18020015) and the 21st Century COE ?Center for Diversity and Universality in Physics?. HS is supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. References [1] E. D. Adrian. The Basis of Sensation: The Action of the Sense Organs. W.W. Norton, New York, 1928. [2] G. L. Gerstein and N. Y. S. Kiang. An approach to the quantitative analysis of electrophysiological data from single neurons. Biophysical Journal, 1(1):15?28, 1960. [3] M. Rudemo. Empirical choice of histograms and kernel density estimators. Scandinavian Journal of Statistics, 9(2):65?78, 1982. [4] S. Koyama and S. Shinomoto. Histogram bin width selection for time-dependent poisson processes. Journal of Physics A-Mathematical and General, 37(29):7255?7265, 2004. [5] S. Shinomoto, K. Shima, and J. Tanji. Differences in spiking patterns among cortical neurons. Neural Computation, 15(12):2823?2842, 2003. [6] S. Shinomoto, Y. Miyazaki, H. Tamura, and I. Fujita. Regional and laminar differences in in vivo firing patterns of primate cortical neurons. Journal of Neurophysiology, 94(1):567?575, 2005. [7] D. L. Snyder. Random Point Processes. John Wiley & Sons, Inc., New York, 1975. [8] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. SpringerVerlag, New York, USA, 1988. [9] R. E. Blahut. Principles and practice of information theory. Addison-Wesley, Reading, Mass, 1987. [10] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, 1991.
Causal inference in sensorimotor integration

Konrad P. Körding (Department of Physiology and PM&R, Northwestern University, Chicago, IL 60611, konrad@koerding.com), Joshua B. Tenenbaum (Massachusetts Institute of Technology, Cambridge, MA 02139, jbt@mit.edu)

Abstract

Many recent studies analyze how data from different modalities can be combined. Often this is modeled as a system that optimally combines several sources of information about the same variable. However, it has long been realized that this information combining depends on the interpretation of the data. Two cues that are perceived by different modalities can have different causal relationships: (1) They can both have the same cause; in this case we should fully integrate both cues into a joint estimate. (2) They can have distinct causes, in which case information should be processed independently. In many cases we will not know if there is one joint cause or two independent causes that are responsible for the cues. Here we model this situation as a Bayesian estimation problem. We are thus able to explain some experiments on visual-auditory cue combination as well as some experiments on visual-proprioceptive cue integration. Our analysis shows that the problem solved by people when they combine cues to produce a movement is much more complicated than is usually assumed, because they need to infer the causal structure that is underlying their sensory experience.

1 Introduction

Our nervous system is constantly integrating information from many different sources into a unified percept. When we interact with objects, for example, we see them and feel them, and often enough we can also hear them. All these pieces of information need to be combined into a joint percept. Traditionally, cue combination is formalized as a simple weighted combination of estimates coming from each modality (Fig 1A). According to this view the nervous system acquires these weights through some learning process [1]. Recently many experiments have shown that various manipulations, such as degrading the quality of the feedback from one modality, can vary the weights. These experiments have been phrased in a Bayesian framework, assuming that all the cues are about one given variable. Research often focuses on exploring in which coordinate system the problem is being solved [2, 3] and how much weight is given to each variable as a function of the uncertainty in each modality and the prior [4, 5, 6, 7, 8]. Throughout this paper we consider cue combination to estimate a position. Cue combination may, however, be equally important when estimating many other variables, such as the nature of a material, the weight of an object, or the relevant aspects of a social situation.

These studies focus on the way information is combined, and assume that it is known that there is just one cause for the cues. However, in many cases people can not be certain of the causal structure. If two cues share a common cause (as in Fig 1B) they should clearly be combined. In general, however, there may either be one common cause or two separate causes (Fig 1C). In such cases people can not know which of the two models to use, and have to estimate the causal structure of the problem along with the parameter values. The issue of causal inference has long been an exciting question in the psychological community [9, 10, 11, 12].
Figure 1: Different causal structures of two cues. Bold circles indicate the variables the subjects are interested in. A) The traditional view is sketched, where the estimate is a weighted combination of the estimates of each modality. B) One cause can be responsible for both cues. In this case cues should be combined to infer about the single cause. C) In many cases people will be unable to know if one common cause or two independent causes are responsible for the cues. In that case people will have to estimate which causal structure is present from the properties of their sensory stimuli.

Here we derive a rigorous model of causal inference in the context of psychophysical experiments.

2 Cue combination: one common cause

A large number of recent studies have interpreted the results from cue combination studies in a Bayesian framework [13]. We discuss the case of visuo-auditory integration, as the statistical relations are identical in other cue combination cases. A statistical generative model for the data is formulated (see Figure 1B). It is acknowledged that if a signal is coming from a specific position, the signal received by the nervous system in each modality will be noisy. If the real position of a stimulus is $x_{real}$ then the nervous system will not be able to directly know this variable, but the visual modality will obtain a noisy estimate thereof, $x_{vis}$. Typically it is assumed that the visually perceived position is a noisy version of the real position, $x_{vis} = x_{real} + \text{noise}$. A statistical treatment thus results in $p(x_{vis} \mid x_{real}) = N(x_{real} - x_{vis}, \sigma_{vis})$, where $\sigma_{vis}$ characterizes the noise introduced by the visual modality and $N(\mu, \sigma)$ stands for a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$.

If two cues are available, for example vision and audition, then it is assumed that both cues $x_{vis}$ and $x_{aud}$ provide noisy observations of the relevant variable $x_{real}$. Using the assumption that each modality provides an independent measurement of $x_{real}$, Bayes rule yields:

$$p(x_{real} \mid x_{vis}, x_{aud}) \propto p(x_{real}) \, p(x_{vis}, x_{aud} \mid x_{real}) \qquad (1)$$
$$= p(x_{real}) \, p(x_{vis} \mid x_{real}) \, p(x_{aud} \mid x_{real}). \qquad (2)$$

The estimate that minimizes the mean squared error is then:

$$\hat{x} = \alpha \, x_{vis} + (1 - \alpha) \, x_{aud}, \qquad (3)$$

where $\alpha = \sigma_{aud}^2 / (\sigma_{aud}^2 + \sigma_{vis}^2)$. The optimal solution is thus a weighting of the estimates from both modalities, but the weighting is a function of the variances. Given the variances of the cues, it is possible to predict the weighting people should optimally use. Over the last couple of years various studies have described this approach. These papers assumed that we have two sources of information about one and the same variable, and have shown that in psychophysical experiments people often show this kind of optimal integration and that the weights can be predicted from the variances [13, 14, 15, 4, 16]. However, in all these cases ample evidence is provided to the subjects that just one single variable is involved in the experiment. For example in [4] a stimulus is felt and seen at exactly the same position.
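A minimal sketch of this forced-fusion estimate of Eq. (3) (our notation; names are illustrative):

```python
import numpy as np

# Reliability-weighted averaging of a visual and an auditory cue, Eq. (3).
def fused_estimate(x_vis, x_aud, sigma_vis, sigma_aud):
    alpha = sigma_aud ** 2 / (sigma_aud ** 2 + sigma_vis ** 2)
    return alpha * x_vis + (1.0 - alpha) * x_aud

# Example: a precise visual cue dominates a noisy auditory one.
print(fused_estimate(x_vis=-5.0, x_aud=0.0, sigma_vis=0.01, sigma_aud=7.6))
```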
3 Combination of visual and auditory cues: uncertainty about the causal structure

Here we consider the range of experiments where people hear a tone and simultaneously see a visual stimulus that may or may not come from the same position. Subjects are asked to estimate which direction the tone is coming from and to point to that direction with a motor response, placing this experiment in the realm of sensorimotor integration.

To optimally estimate where the tone is coming from, people need to infer the causal structure (Fig 1C) and decide if they should assume a single cause or two causes. Based on this calculation they can proceed to estimate where the tone is coming from. The Schirillo group has extensively tested human behavior in such a situation [17, 18]. For different distances between the visual and the auditory stimulus they analyzed the strategies people use to estimate the position of the auditory stimuli (see Figure 2).

3.1 Loss function and probabilistic model

To model this choice phenomenon we assume that the estimate should be as precise as possible, and that this error function is minimized:

$$E(x_{estimated}) = \int p(x_{true} \mid \text{cues}) \, (x_{true} - x_{estimated})^2 \, dx_{true}. \qquad (4)$$

We assume that subjects have obtained a prior estimate $p_{same}$ of how likely it is that a visual and an auditory signal that appear near-instantaneously have the same cause. In everyday life this will not be constant, but will depend on temporal delays, visual experience, context, and many other factors. In the experiments we consider, all these factors are held constant, so we can use a constant $p_{same}$. We assume that positions are drawn from a Gaussian distribution with a width $\sigma_{pos}$.

3.2 Inference

The probability that the two signals are from the same source will only weakly depend on the spatial prior, but will mostly depend on the distance $\Delta_{av} = x_{aud} - x_{vis}$ between visually and auditorily perceived positions. We thus obtain:

$$\frac{p(\text{same} \mid \Delta_{av})}{p(\text{different} \mid \Delta_{av})} = \frac{p_{same} \, p(\Delta_{av} \mid \text{same})}{(1 - p_{same}) \, p(\Delta_{av} \mid \text{different})}. \qquad (5)$$

Using $p(\text{same} \mid \Delta_{av}) + p(\text{different} \mid \Delta_{av}) = 1$, we can readily calculate the probability $p(\text{same} \mid \Delta_{av})$ of the two signals coming from the same source. Using Equation 4 we can then calculate the optimal solution, which is:

$$\hat{x} = p(\text{same} \mid \Delta_{av}) \, \hat{x}_{same} + \big(1 - p(\text{same} \mid \Delta_{av})\big) \, \hat{x}_{different}. \qquad (6)$$

We know the optimal estimate in the same-cause case already from Equation 3, and in the different-cause case the optimal estimate exclusively relies on the auditory signal. We furthermore assume that the position sensed by the sensory system is a noisy version, $x_{observed} = \hat{x} + \epsilon$, where $\epsilon$ is drawn from a Gaussian with zero mean and a standard deviation of $\sigma_{motor}$. We are thus able to calculate the optimal estimate and the expected uncertainty given our assumptions.

3.3 Model parameter estimation

The prior $p_{same}$ characterizes how likely it is a priori, given the temporal delay and other experimental parameters, that two signals have the same source. As this characterizes a property of everyday life, we can not readily estimate this parameter, but instead fit it to the gain ($\alpha$) data. To compare the predictions of our model with the experimental data we need to know the values of the variables that characterize our model. Vision is much more precise than audition in such situations. We estimate the relevant uncertainties as follows. In both auditory and visual trials the noise will have two sources: motor noise and sensory noise. Even if people knew perfectly where a stimulus was coming from, they would make small errors at pointing because their motor system is not perfect. We assume that visual-only trials are dominated by motor noise, stemming from motor errors and memory errors, and that the noise in the visual trials is essentially exclusively motor noise ($\sigma_{vis} = 0.01$). Choosing a smaller $\sigma_{vis}$ does not change the results to any meaningful degree. From figure 2 of the experiments by Hairston et al. [17], where movements are made towards unimodally presented cues, we obtain $\sigma_{motor} = 2.5$ deg, and because variances add linearly, $\sigma_{aud} = \sqrt{8^2 - 2.5^2} = 7.6$ deg.
Choosing a smaller σ_vis does not change the results to any meaningful degree. From figure 2 of the experiments by Hairston et al. [17], where movements are made towards unimodally presented cues, we obtain σ_motor = 2.5 deg, and because variances add linearly, σ_aud = √(8² − 2.5²) = 7.6 deg.

Figure 2: Uncertainty about whether one or two causes are relevant. Experimental data reprinted with permission from [18]. A) The gain α, the relative weight of vision for the estimation of the position of the auditory signal, is shown (Bayesian, unoptimized Bayesian, MAP, and full-combination models against the experiment of Wallace et al.). It is plotted as a function of the spatial disparity, the distance between visual and auditory stimulus. A gain value of α = 100% implies that subjects only use visual information. A negative α means that on average subjects point away from the visual stimulus. Different models of human behavior make different predictions. B) A sketch explaining the finding of negative gains. The visual stimulus is always at −5 deg (solid line) and the auditory stimulus is always straight ahead at 0 deg (dotted line). Visual perception is very low noise, and the perceived position x_vis is shown as red dots (each dot is one trial). Auditory perception is noisy, and the perceived auditory position x_aud is shown as black dots. In the white area, where the subject perceives two causes, the average position of perceived auditory signals is further to the right. This explains the negative bias in reporting: when perceiving two causes, subjects are more likely to have heard a signal to the right. Those trials that are not unified thus exhibit a selection bias that confers the negative gain. C) The measured standard deviations of the human pointing behavior are shown as a function of the spatial disparity, together with the standard deviations predicted by the model. Same colors as in A).

We want to remark that this estimation is only approximate, because people can use priors and combine them with likelihoods and objective functions when making their estimates even in the unimodal case. We also want to emphasize that we in no way tried to tune these parameters to produce better fits of the data. From the specifications of the experiments we know that the distribution of auditory sources has a width of 20 deg relative to the fixation point, and we assume that this width is known to the subjects from repeated trials.

3.4 Comparison of the model with the experimental data

Figure 2A shows a comparison between the gains (α) measured in the experiment of [17] and the gains predicted by the Bayesian model; p_same = 0.57 was fitted to the data. We assume that the model reports "identical" whenever one source is a posteriori more probable than two sources. The model predicts the counterintuitive finding that the trials on which people inferred two causes exhibit negative gain. Figure 2B explains why negative gains are found. The model explains 99% of the variance of the gain with just one free parameter, p_same. Very similar effects are found if we fix p_same at 0.5, assuming that fusion and segregation are equally likely; this parameter-free model still explains 98% of the variance.
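The selection-bias explanation of panel B can be checked with a short Monte Carlo simulation. This is our illustration, not an analysis from the paper: trials on which the noisy auditory percept lands far from the visual stimulus are classified as "two causes" (a crude threshold stands in for the full inference rule, and the parameter values are assumptions in the spirit of the text). Averaging within that subset produces a bias away from vision, i.e. a negative gain.

```python
import random

random.seed(0)
X_VIS, X_AUD = -5.0, 0.0        # true stimulus positions, as in figure 2B
S_AUD, S_VIS = 7.6, 0.01        # sensory noise levels (deg)

same, different = [], []
for _ in range(100000):
    x_aud = random.gauss(X_AUD, S_AUD)   # noisy auditory percept
    x_vis = random.gauss(X_VIS, S_VIS)   # nearly noise-free visual percept
    # Crude stand-in for the inference step: call it one cause when the
    # percepts are close (threshold chosen only for illustration).
    if abs(x_aud - x_vis) < 8.0:
        same.append(x_aud)
    else:
        different.append(x_aud)

mean_diff = sum(different) / len(different)
# Gain relative to vision: 0% means pointing at X_AUD, 100% at X_VIS.
gain = 100 * (mean_diff - X_AUD) / (X_VIS - X_AUD)
print(gain)  # negative: non-unified trials lie away from the visual side
```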
The simple full-combination model (shown in green), which does not allow for two sources, completely fails to predict any of these effects even when fitting all the standard deviations, and thus explains 0% of the variance of the gains. The results clearly rule out a strategy in which all cues are always combined. On some trials, noise in the auditory signal will make it appear as if the auditory signal is very close to the visual signal. In this case the system will infer that both have the same source, and part of the reported high gain for the fused cases arises because noise has already perturbed the auditory signal towards the visual one. However, on some trials the auditory signal will be randomly perturbed away from the visual signal. In this case the system will infer that the two signals very likely have different sources. Because both the estimation of position and the estimation of identity are based on the same noisy signal, the two processes are not independent of one another. This lack of independence causes the difference between the fusion and the no-fusion case.

3.5 Maximum A Posteriori over causal structure

In the derivations above we assumed that people are fully Bayesian, in the sense that they consider both possible structures for cue integration and integrate over them to arrive at an optimal decision. An alternative would be a Maximum A Posteriori (MAP) approach: people could first choose the more probable structure, one source or two, and then use only that structure for subsequent decisions. Figure 2A shows that this model (we fitted p_same = 0.72) also predicts the main effect well and explains 98% of the variance of the gains. To compare the two models we looked at the standard deviations that had also been measured in the experiment of [17]. The fully Bayesian model explains 65% of the variance of the standard-deviation plot, whereas the MAP model explains 0% of the variance of that plot. This difference is observed because the MAP model strongly underestimates the uncertainty in the single-cause case and strongly overestimates the uncertainty in the dual-cause case (figure 2C). The Bayesian model, on the other hand, always considers that it could be wrong, leading to more variance in the single-cause case and less in the dual-cause case. Even the Bayesian system tends to predict overly large standard deviations in the case of two causes. This effect goes away if we assume that people underestimate the variance of the auditory source relative to the fixation spot (data not shown). A deeper analysis taking into account all the available data and its variance over subjects will be necessary to test whether a MAP strategy can be ruled out. The present analysis may lead to an understanding of the inference algorithm used by the nervous system. In summary, the problem of crossmodal integration is much more complicated than it seems, as it necessitates inference of the causal structure. People still solve this complicated problem in a way that can be understood as being close to optimal.

4 Combination of visual and proprioceptive cues

Typical experiments in movement psychophysics, where a virtual-reality display is used to disturb the perceived position of the hand, lead to an analogous problem. In these experiments subjects proprioceptively feel their hand somewhere, but they cannot see their hand; at the same time, they visually perceive a cursor somewhere.
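Before turning to the sensorimotor case, note that the difference between the two readout rules of section 3.5 amounts to one line of code. Given the structure posterior w = p(same | Δ_av) and the two structure-conditional estimates (computed as in the hypothetical sketch above), the fully Bayesian estimator averages over structures while the MAP estimator commits to the more probable one:

```python
def estimate_bayes(w, x_fused, x_segregated):
    # Model averaging (eq. 6): weight each structure by its posterior.
    return w * x_fused + (1 - w) * x_segregated

def estimate_map(w, x_fused, x_segregated):
    # MAP over structure: pick one structure, then estimate within it.
    return x_fused if w > 0.5 else x_segregated
```

Both rules produce similar mean gains, which is why panel A alone cannot separate them; it is the predicted trial-to-trial variability (panel C) that distinguishes the two.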
Subjects again cannot be sure whether the seen cursor position and the felt hand position are cues about the same variable (hand = cursor), or whether each of them is independent and the experiment is simply deceiving them, leading to the same causal-structure inference problem described above. In this section we extend the model to also explain such sensorimotor integration. We model the studies by Sober and Sabes [5, 6] that inspired this work. In these experiments one hand moves a cursor to either the position of a visually displayed (v) target or the position of the other hand (p). People need to estimate two distinct variables: (1) the direction in which they are to move their arm, a visually perceived variable, the so-called movement vector (MV), and (2) a proprioceptively perceived variable, the configuration of their joints (J). Subjects obtain visual information about the position of the cursor, and they obtain proprioceptive information from feeling the position of their hand. Traditionally it would have been assumed that the seen cursor position and the proprioceptively felt hand position are cues caused by one single variable, the hand. As a result, the position of the cursor would uniquely define the configuration of the joints and vice versa. As in the cue combination case above, there should not be full cue combination; instead each variable, (MV) and (J), should be estimated separately. In this experiment a situation is produced where the visual position of the cursor is different from the actual position of the right hand. Subjects are then asked to move their hand towards targets in 8 concentric directions. The estimate of the movement vector affects movement direction in a way that is specific to the target direction. The estimate of the joint configuration affects movement direction irrespective of the target direction. The experimental studies then report the gain α, the linear weight of vision on the estimate of (MV) and (J), in both the visual and the proprioceptive target conditions (figure 3A and B). If people only inferred one common cause, the weight of vision would always be the same; the data thus indicate that subjects assume more than just one cause.

4.1 Coordinate systems

The probabilistic model that we use is identical to the model introduced above, with one exception. In the sensorimotor integration case there is uncertainty about the alignment of the two coordinate systems. For example, if we held an arm under a table and were asked to show where the other arm is under the table, we would make significant alignment errors. When using information from one coordinate system for an estimate in a different coordinate system, there is uncertainty about the alignment of the coordinate systems. This means that when we use visual information to estimate the position of a joint in joint space, our visual system appears to be noisier, and vice versa. As we are only interested in estimates along one dimension, we can model the uncertainty about the alignment of the coordinate systems as a one-dimensional Gaussian with width σ_trans. When using information from one modality for estimates of a variable in the other coordinate system, we thus need to use σ²_effective = σ²_modality + σ²_trans. The two target conditions in the experiments, moving the cursor to a visual target (v) and moving the cursor to the position of the other hand (p), produce two different estimation problems.
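The coordinate-transformation penalty is a one-line correction to the cue variances. The helper below is our own (hypothetical) illustration of how σ_trans inflates the effective uncertainty of a cue that has to cross coordinate systems:

```python
def effective_sigma(sigma_modality, sigma_trans, crosses_coordinates):
    # A cue used in its own coordinate system keeps its native uncertainty;
    # a cue mapped into the other system picks up alignment noise sigma_trans.
    if crosses_coordinates:
        return (sigma_modality**2 + sigma_trans**2) ** 0.5
    return sigma_modality

# Vision used in joint space appears noisier than vision used in visual space.
print(effective_sigma(1.0, 5.0, crosses_coordinates=True))   # ~5.1
print(effective_sigma(1.0, 5.0, crosses_coordinates=False))  # 1.0
```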
When we try to move a cursor to a visually displayed target, we must compute MV in visual space. If, on the contrary, we try to move a cursor with one hand to the position of the other hand, we must compute MV in joint space. Loss functions, and therefore the necessary estimates, are thus defined in different spaces. Altogether people are faced with 4 problems: they have to estimate (MV) and (J) in both the visual (v) condition and the proprioceptive (p) condition.

4.2 Probabilistic model

As above, we assume that visual and proprioceptive uncertainty lead to probability distributions in the respective spaces that are characterized by Gaussians of width σ_vis and σ_prop. These variables are now defined in terms of position, not in terms of direction. Subjects are not asked whether they experience one or two causes. Under these circumstances it only matters how likely, on average, people find the two percepts to be unified (p_unified = p_same p(Δ_pv | same)). We assume that when moving the cursor to a visual target, the average squared deviation of the cursor and the target in visual space is minimized, and that when moving the cursor to a proprioceptive target, the average squared deviation of the cursor and the target in proprioceptive space is minimized. Apart from this difference, the whole derivation of the equations is identical to the one above for audiovisual integration. However, the results are not analyzed conditional on the inference of one or two causes, but averaged over these.

4.3 Tool use

Above we assumed that cursor and hand either have the same cause (the position of the hand) or different causes and are therefore unrelated. Another way of thinking about the Sober and Sabes experiments is in terms of tool use. The cursor could be seen as a tool that is displaced relative to our hand. The tip of the tool moves with our hand. As tools are typically short, the probability is largest that the tip of the tool is at the position of the hand, and this probability decays with increasing distance between the hand and the tool. The distance between the tip of the tool and the hand is thus another random variable, assumed to be Gaussian with width σ_tool (see figure 3E). The minimal end-point-error solutions are:

α_MV,v = (σ²_prop + σ²_trans + σ²_tool) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (7)
α_J,v  = (σ²_prop + σ²_trans) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (8)
α_MV,p = (σ²_prop + σ²_tool) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (9)
α_J,p  = σ²_prop / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (10)

We are thus able to predict the weights that people should use if they assume a causal relationship deriving from tool use.

4.4 Comparison of the model with the data

We add to the Bayesian model introduced above a part modeling the uncertainty about the alignment of the coordinate systems, and compare the results of this modified model with the data.

Figure 3: Cue combination in motor control; experiments from [6]. (The bar panels plot the gain α [%] for α_J,v, α_J,p, α_MV,v, α_MV,p under the various models and the data.) A) The estimated quantities. B) The two experimental conditions.
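Equations (7)–(10) are simple variance ratios, so the model's predicted gains can be computed directly. The sketch below is our own illustration, using the specialist estimates discussed in the text (σ_vis = 1 cm, σ_prop = 3 cm, σ_trans = 5 cm) together with an assumed, purely illustrative σ_tool:

```python
def predicted_gains(s_vis, s_prop, s_trans, s_tool):
    # Visual weights for (MV, J) in the visual (v) and proprioceptive (p)
    # target conditions, equations (7)-(10).
    denom = s_prop**2 + s_vis**2 + s_trans**2 + s_tool**2
    return {
        "MV,v": (s_prop**2 + s_trans**2 + s_tool**2) / denom,
        "J,v":  (s_prop**2 + s_trans**2) / denom,
        "MV,p": (s_prop**2 + s_tool**2) / denom,
        "J,p":  s_prop**2 / denom,
    }

gains = predicted_gains(s_vis=1.0, s_prop=3.0, s_trans=5.0, s_tool=2.0)
for name, g in gains.items():
    print(name, round(100 * g, 1))  # gains in %, as plotted in figure 3
```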
C) The predictions of the model. D) The predictions obtained when using the estimates of a specialist. E) The tool-use model: the cursor will be close to the position of the hand. F) The predictions of the tool-use model. G) The predictions of a full-combination model.

The model has several parameters, most importantly the uncertainties of proprioception and of the coordinate transformation relative to the visual uncertainty. Another parameter is the probability of unification. All parameters are fit to the data. The model explains the data, which have a standard deviation of 0.32, with a residual standard deviation of only 0.08 (figure 3C). Fitting 3 parameters to 4 data points can be seen as major overfitting. To avoid overfitting we guessed p_unified = 0.5 and asked one of our colleagues, Daniel Wolpert, for estimates. He estimated σ_vis = 1 cm, σ_prop = 3 cm, σ_trans = 5 cm. With these values we explain the data with a residual standard deviation of 0.13, capturing all the main effects (figure 3D). Another experimental modification in [6] deserves mentioning. The image of an arm is rendered on top of the cursor position. The experiment finds that this has the effect that people rely much more on vision for estimating their joint configuration. In our interpretation, the rendering of the arm makes it much more probable that the position of the visual display actually is the position of the hand, and p_unified would be much higher.

4.5 Analysis if subjects view the cursor as a tool

Another possible model, which seemed very likely to us, assumes that the cursor should appear somewhere close to the hand, modeling the cursor-hand relationship as another Gaussian variable (figure 3E). We fit the 3 parameters of this model: the uncertainties of proprioception and of the coordinate transformation relative to the visual uncertainty, as well as the width of the Gaussian describing the tool. Figure 3F shows that this model too can fit the main results of the experiment. With a residual standard deviation of 0.14, however, it does worse than the parameter-free model above. If we take the values given by Daniel Wolpert (see above) and fit only the value of σ_tool, we obtain a residual standard deviation of 0.28. The tool-use model thus seems to do worse than the model we introduced earlier. Sober and Sabes [5, 6] explain the finding that two variables are estimated by the fact that cortex exhibits two important streams of information processing, one for visual processing and the other for motor tasks [20]. The model we present here gives a reason for the estimation of distinct variables: if people see a cursor close to their hand, they do not assume that they actually see their hand. The models we introduced can be understood as special instantiations of a model in which the cursor position relative to the hand is drawn from a general probability distribution.

5 Discussion

An impressive range of recent studies shows that people do not just estimate one variable in situations of cue combination [5, 6, 17, 18]. Here we have shown that the statistical problem that people solve in such situations involves an inference about the causal structure. People have uncertainty about the identity and number of relevant variables. The problem faced by the nervous system is similar to cognitive problems that occur in the context of causal induction. Many experiments show that people, and in particular infants, interpret events in terms of cause and effect [11, 21, 22].
The results presented here show that sensorimotor integration exhibits some of the factors that make human cognition difficult. Carefully studying and analyzing seemingly simple problems such as cue combination may provide a fascinating way of studying the human cognitive system in a quantitative fashion.

References

[1] Q. Haijiang, J. A. Saunders, R. W. Stone, and B. T. Backus. Demonstration of cue recruitment: change in visual appearance by means of pavlovian conditioning. Proc Natl Acad Sci U S A, 103(2):483–488, 2006.
[2] J. W. Krakauer, M. F. Ghilardi, and C. Ghez. Independent learning of internal models for kinematic and dynamic control of reaching. Nat Neurosci, 2(11):1026–1031, 1999.
[3] R. Shadmehr and F. A. Mussa-Ivaldi. Adaptive representation of dynamics during learning of a motor task. J Neurosci, 14(5 Pt 2):3208–3224, 1994.
[4] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429–433, 2002.
[5] S. J. Sober and P. N. Sabes. Multisensory integration during motor planning. J Neurosci, 23(18):6982–6992, 2003.
[6] S. J. Sober and P. N. Sabes. Flexible strategies for sensory integration during motor planning. Nat Neurosci, 8(4):490–497, 2005.
[7] K. P. Koerding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244–247, 2004.
[8] L. Shams, W. J. Ma, and U. Beierholm. Sound-induced flash illusion as an optimal percept. Neuroreport, 16(17):1923–1927, 2005.
[9] E. Hirsch. The Concept of Identity. Oxford University Press, Oxford, 1982.
[10] A. Leslie, F. Xu, P. Tremoulet, and B. Scholl. Indexing and the object concept: "what" and "where" in infancy. Trends in Cognitive Sciences, 2:10–18, 1998.
[11] A. Gopnik, C. Glymour, D. M. Sobel, L. E. Schulz, T. Kushnir, and D. Danks. A theory of causal learning in children: causal maps and Bayes nets. Psychol Rev, 111(1):3–32, 2004.
[12] T. L. Griffiths and J. B. Tenenbaum. From mere coincidences to meaningful discoveries. Cognition, 2006.
[13] Z. Ghahramani. Computation and psychophysics of sensorimotor integration. PhD thesis, Massachusetts Institute of Technology, 1995.
[14] R. A. Jacobs. Optimal integration of texture and motion cues to depth. Vision Res, 39(21):3621–3629, 1999.
[15] R. J. van Beers, A. C. Sittig, and J. J. Gon. Integration of proprioceptive and visual position-information: an experimentally supported model. J Neurophysiol, 81(3):1355–1364, 1999.
[16] D. Alais and D. Burr. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol, 14(3):257–262, 2004.
[17] W. D. Hairston, M. T. Wallace, J. W. Vaughan, B. E. Stein, J. L. Norris, and J. A. Schirillo. Visual localization ability influences cross-modal bias. J Cogn Neurosci, 15(1):20–29, 2003.
[18] M. T. Wallace, G. E. Roberson, W. D. Hairston, B. E. Stein, J. W. Vaughan, and J. A. Schirillo. Unifying multisensory signals across time and space. Exp Brain Res, 158(2):252–258, 2004.
[19] L. Shams, U. Beierholm, and S. Quartz. Bayesian inference as a unifying model of auditory-visual integration and segregation. In Proceedings of the Society for Neuroscience, 2005.
[20] M. A. Goodale, G. Kroliczak, and D. A. Westwood. Dual routes to action: contributions of the dorsal and ventral streams to adaptive behavior. Prog Brain Res, 149:269–283, 2005.
[21] R. Saxe, J. B. Tenenbaum, and S. Carey. Secret agents: inferences about hidden causes by 10- and 12-month-old infants. Psychol Sci, 16(12):995–1001, 2005.
[22] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognit Psychol, 51(4):334–384, 2005.
An Approach to Bounded Rationality

Eli Ben-Sasson
Department of Computer Science
Technion – Israel Institute of Technology

Adam Tauman Kalai
Department of Computer Science
College of Computing, Georgia Tech

Ehud Kalai
MEDS Department
Kellogg Graduate School of Management
Northwestern University

Abstract

A central question in game theory and artificial intelligence is how a rational agent should behave in a complex environment, given that it cannot perform unbounded computations. We study strategic aspects of this question by formulating a simple model of a game with additional costs (computational or otherwise) for each strategy. First we connect this to zero-sum games, proving a counterintuitive generalization of the classic min-max theorem to zero-sum games with the addition of strategy costs. We then show that potential games with strategy costs remain potential games. Both zero-sum and potential games with strategy costs maintain a very appealing property: simple learning dynamics converge to equilibrium.

1 The Approach and Basic Model

How should an intelligent agent play a complicated game like chess, given that it does not have unlimited time to think? This question reflects one fundamental aspect of "bounded rationality," a term coined by Herbert Simon [1]. However, bounded rationality has proven to be a slippery concept to formalize (prior work has focused largely on finite automata playing simple repeated games such as prisoner's dilemma, e.g. [2, 3, 4, 5]). This paper focuses on the strategic aspects of decision-making in complex multi-agent environments, i.e., on how a player should choose among strategies of varying complexity, given that its opponents are making similar decisions. Our model applies to general strategic games and allows for a variety of complexities that arise in real-world applications. For this reason, it is applicable to one-shot games, to extensive games, and to repeated games, and it generalizes existing models such as repeated games played by finite automata.

To see that bounded rationality can drastically affect the outcome of a game, consider the following factoring game. Player 1 chooses an n-bit number and sends it to Player 2, who attempts to find its prime factorization. If Player 2 is correct, he is paid 1 by Player 1; otherwise he pays 1 to Player 1. Ignoring complexity costs, the game is a trivial win for Player 2. However, for large n, the game is essentially a win for Player 1, who can easily output a large random number that Player 2 cannot factor (under appropriate complexity assumptions). In general, the outcome of a game (even a zero-sum game like chess) with bounded rationality is not so clear.

To concretely model such games, we consider a set of available strategies along with strategy costs. Consider an example of two players preparing to play a computerized chess game for a $100K prize. Suppose the players simultaneously choose among two available options: to use a $10K program A or an advanced program B, which costs $50K. We refer to the row chooser as white and to the column chooser as black, with the corresponding advantages reflected by the win probabilities of white described in Table 1a. For example, when both players use program A, white wins 55% of the time and black wins 45% of the time (we ignore draws). The players naturally want to choose strategies to maximize their expected net payoffs, i.e., their expected payoff minus their cost.
Each cell in Table 1b contains a pair of payoffs in units of thousands of dollars; the first is white's net expected payoff and the second is black's.

a) White's winning probabilities (rows: white's program, columns: black's program):

           A      B
    A     55%    13%
    B     93%    51%

b) Expected net earnings in thousands of dollars (white, black):

               A (−10)    B (−50)
    A (−10)    45, 35      3, 37
    B (−50)    43, −3      1, −1

Figure 1: a) Table of first-player winning probabilities based on program choices. b) Table of expected net earnings in thousands of dollars. The unique equilibrium is (A,B), which strongly favors the second player.

A surprising property is evident in the above game. Everything about the game seems to favor white. Yet due to the (symmetric) costs, at the unique Nash equilibrium (A,B) of Table 1b, black wins 87% of the time and nets $34K more than white. In fact, it is a dominant strategy for white to play A and for black to play B. To see this, note that playing B increases white's probability of winning by 38%, independent of what black chooses. Since the pot is $100K, this is worth $38K in expectation, but B costs $40K more than A. On the other hand, black enjoys a 42% increase in probability of winning due to B, independent of what white does, and hence is willing to pay the extra $40K.

Before formulating the general model, we comment on some important aspects of the chess example. First, traditional game theory states that chess can be solved in "only" two rounds of elimination of dominated strategies [10], and the outcome with optimal play should always be the same: either a win for white or a win for black. This theoretical prediction fails in practice: in top play, the outcome is very nondeterministic, with white winning roughly twice as often as black. The game is too large and complex to be solved by brute force. Second, we were able to analyze the above chess program selection example exactly because we formulated it as a game with a small number of available strategies per player. Another formulation that would fit into our model would be to include all strategies of chess, with some reasonable computational costs. However, it is beyond our means to analyze such a large game. Third, in the example above we used monetary software cost to illustrate a type of strategy cost. But the same analysis could accommodate many other types of costs that can be measured numerically and subtracted from the payoffs, such as time or effort involved in the development or execution of a strategy, and other resource costs. Additional examples in this paper include the number of states in a finite automaton, the number of gates in a circuit, and the number of turns on a commuter's route. Our analysis is limited, however, to cost functions that depend only on the strategy of the player and not the strategy chosen by its opponent. For example, if our players above were renting computers A or B and paying for the time of actual usage, then the cost of using A would depend on the choice of computer made by the opponent.

Generalizing the example above, we consider a normal-form game with the addition of strategy costs, a player-dependent cost for playing each available strategy. Our main results regard two important classes of games: constant-sum and potential games. Potential games with strategy costs remain potential games. While two-person constant-sum games are no longer constant-sum, we give a basic structural description of optimal play in these games. Lastly, we show that known learning dynamics converge in both classes of games.
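A quick sanity check of the example is possible in a few lines of code. This sketch is ours (the payoff numbers are taken from Figure 1); it subtracts the strategy costs from the expected winnings and confirms that A is dominant for white and B is dominant for black:

```python
# Win probabilities for white: win[(w, b)], w = white's program, b = black's.
win = {("A", "A"): 0.55, ("A", "B"): 0.13,
       ("B", "A"): 0.93, ("B", "B"): 0.51}
cost = {"A": 10, "B": 50}   # program costs in $K
POT = 100                   # prize in $K

def net(w, b):
    # (white's net payoff, black's net payoff) in $K.
    p = win[(w, b)]
    return POT * p - cost[w], POT * (1 - p) - cost[b]

# Dominance: each player's choice is better regardless of the opponent's.
assert all(net("A", b)[0] > net("B", b)[0] for b in "AB")   # white plays A
assert all(net(w, "B")[1] > net(w, "A")[1] for w in "AB")   # black plays B
print(net("A", "B"))  # equilibrium payoffs: (3, 37)
```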
2 Definition of strategy costs

We first define an N-person normal-form game G = (N, S, p) consisting of finite sets of (available) pure strategies S = (S_1, ..., S_N) for the N players, and a payoff function p : S_1 × ... × S_N → R^N. Players simultaneously choose strategies s_i ∈ S_i, after which player i is rewarded with p_i(s_1, ..., s_N). A randomized or mixed strategy σ_i for player i is a probability distribution over its pure strategies S_i,

σ_i ∈ Δ_i = { x ∈ R^{|S_i|} : Σ_j x_j = 1, x_j ≥ 0 }.

We extend p to Δ_1 × ... × Δ_N in the natural way, i.e., p_i(σ_1, ..., σ_N) = E[p_i(s_1, ..., s_N)], where each s_i is drawn from σ_i independently. Denote s_{−i} = (s_1, s_2, ..., s_{i−1}, s_{i+1}, ..., s_N), and similarly for σ_{−i}. A best response by player i to σ_{−i} is σ_i ∈ Δ_i such that p_i(σ_i, σ_{−i}) = max_{σ'_i ∈ Δ_i} p_i(σ'_i, σ_{−i}). A (mixed strategy) Nash equilibrium of G is a vector of strategies (σ_1, ..., σ_N) ∈ Δ_1 × ... × Δ_N such that each σ_i is a best response to σ_{−i}.

We now define G−c, the game G with strategy costs c = (c_1, ..., c_N), where c_i : S_i → R. It is simply an N-person normal-form game G−c = (N, S, p−c) with the same sets of pure strategies as G, but with a new payoff function p−c : S_1 × ... × S_N → R^N, where

p−c_i(s_1, ..., s_N) = p_i(s_1, ..., s_N) − c_i(s_i),  for i = 1, ..., N.

We similarly extend c_i to Δ_i in the natural way.

3 Two-person constant-sum games with strategy costs

Recall that a game is constant-sum (k-sum for short) if at every combination of individual strategies, the players' payoffs sum to some constant k. Two-person k-sum games have some important properties, not shared by general-sum games, which result in more effective game-theoretic analysis. In particular, every k-sum game has a unique value v ∈ R. A mixed strategy for player 1 is called optimal if it guarantees payoff ≥ v against any strategy of player 2. A mixed strategy for player 2 is optimal if it guarantees ≥ k − v against any strategy of player 1. The term optimal is used because optimal strategies guarantee as much as possible (v + k − v = k), and playing anything that is not optimal can result in a lesser payoff if the opponent responds appropriately. (This fact is easily illustrated in the game rock-paper-scissors: randomizing uniformly among the strategies guarantees each player 50% of the pot, while playing anything other than uniformly random enables the opponent to win strictly more often.) The existence of optimal strategies for both players follows from the min-max theorem. An easy corollary is that the Nash equilibria of a k-sum game are exchangeable: they are simply the cross-product of the sets of optimal mixed strategies for both players. Lastly, it is well known that equilibria in two-person k-sum games can be learned in repeated play by simple dynamics that are guaranteed to converge [17].

With the addition of strategy costs, a k-sum game is no longer k-sum, and hence it is not clear, at first, what optimal strategies there are, if any. (Many examples of general-sum games do not have optimal strategies.) We show the following generalization of the above properties for zero-sum games with strategy costs.

Theorem 1. Let G be a finite two-person k-sum game and G−c be the game with strategy costs c = (c_1, c_2).

1. There is a value v ∈ R for G−c and nonempty sets OPT_1 and OPT_2 of optimal mixed strategies for the two players. OPT_1 is the set of strategies that guarantee player 1 payoff ≥ v − c_2(σ_2) against any strategy σ_2 chosen by player 2.
Similarly, OPT_2 is the set of strategies that guarantee player 2 payoff ≥ k − v − c_1(σ_1) against any σ_1.

2. The Nash equilibria of G−c are exchangeable: the set of Nash equilibria is OPT_1 × OPT_2.

3. The set of net payoffs possible at equilibrium is an axis-parallel rectangle in R².

For zero-sum games, the term optimal strategy was natural: the players could guarantee v and k − v, respectively, and this is all that there was to share. Moreover, it is easy to see that only pairs of optimal strategies can have the Nash equilibrium property, being best responses to each other. In the case of zero-sum games with strategy costs, the optimal structure is somewhat counterintuitive. First, it is strange that the amount guaranteed by either player depends on the cost of the other player's action, when in reality each player pays the cost of its own action. Second, it is not even clear why we call these optimal strategies. To get a feel for this latter issue, notice that the sum of the net payoffs to the two players is always k − c_1(σ_1) − c_2(σ_2), which is exactly the total of what optimal strategies guarantee, v − c_2(σ_2) + k − v − c_1(σ_1). Hence, if both players play what we call optimal strategies, then neither player can improve and they are at Nash equilibrium. On the other hand, suppose player 1 selects a strategy σ_1 that does not guarantee him payoff at least v − c_2(σ_2). This means that there is some response σ_2 by player 2 for which player 1's payoff is < v − c_2(σ_2), and hence player 2's payoff is > k − v − c_1(σ_1). Thus player 2's best response to σ_1 must give player 2 payoff > k − v − c_1(σ_1) and leave player 1 with < v − c_2(σ_2).

The proof of the theorem (the above reasoning only implies part 2 from part 1) is based on the following simple observation. Consider the k-sum game H = (N, S, q) with the following payoffs:

q_1(s_1, s_2) = p_1(s_1, s_2) − c_1(s_1) + c_2(s_2) = p−c_1(s_1, s_2) + c_2(s_2)
q_2(s_1, s_2) = p_2(s_1, s_2) − c_2(s_2) + c_1(s_1) = p−c_2(s_1, s_2) + c_1(s_1)

That is to say, Player 1 pays its strategy cost to Player 2 and vice versa. It is easy to verify that,

∀ σ_1, σ'_1 ∈ Δ_1, σ_2 ∈ Δ_2:  q_1(σ_1, σ_2) − q_1(σ'_1, σ_2) = p−c_1(σ_1, σ_2) − p−c_1(σ'_1, σ_2)    (1)

This means that the relative advantage of switching strategies is the same in the games G−c and H. In particular, σ_1 is a best response to σ_2 in G−c if and only if it is in H. A similar equality holds for player 2's payoffs. Note that these conditions imply that the games G−c and H are strategically equivalent in the sense defined by Moulin and Vial [16].

Proof of Theorem 1. Let v be the value of the game H. Any strategy σ_1 that guarantees player 1 payoff ≥ v in H guarantees player 1 ≥ v − c_2(σ_2) in G−c. This follows from the definition of H. Similarly, any strategy σ_2 that guarantees player 2 payoff ≥ k − v in H guarantees ≥ k − v − c_1(σ_1) in G−c. Thus the sets OPT_1 and OPT_2 are non-empty. Since v − c_2(σ_2) + k − v − c_1(σ_1) = k − c_1(σ_1) − c_2(σ_2) is the sum of the payoffs in G−c, nothing greater can be guaranteed by either player. Since the best responses of G−c and H are the same, the Nash equilibria of the two games are the same. Since H is a k-sum game, its Nash equilibria are exchangeable, and thus we have part 2. (This holds for any game that is strategically equivalent to a k-sum game.) Finally, the optimal mixed strategies OPT_1, OPT_2 of any k-sum game are convex sets.
If we look at the achievable costs of the mixed strategies in OPT_i, then by the definition of the cost of a mixed strategy this is a convex subset of R, i.e., an interval. By parts 1 and 2, the set of achievable net payoffs at equilibria of G−c is therefore the cross-product of intervals.

To illustrate Theorem 1 graphically, Figure 2 gives a 4 × 4 example with costs of 1, 2, 3, and 4, respectively. It illustrates a situation with multiple optimal strategies. Notice that player 1 is completely indifferent between its optimal choices A and B, and player 2 is completely indifferent between C and D. Thus the only question is how kind they would like to be to their opponent. The (A,C) equilibrium is perhaps most natural, as it yields the highest payoffs for both parties. Note that the proof of the above theorem actually shows that zero-sum games with costs share additional appealing properties of zero-sum games. For example, computing optimal strategies is a polynomial-time computation in an n × n game, as it amounts to computing the equilibria of H. We next show that they also have appealing learning properties, though they do not share all properties of zero-sum games.¹

3.1 Learning in repeated two-person k-sum games with strategy costs

Another desirable property of k-sum games is that, in repeated play, natural learning dynamics converge to the set of Nash equilibria. Before we state the analogous conditions for k-sum games with costs, we briefly give a few definitions. A repeated game is one in which players choose a sequence of strategy vectors s^1, s^2, ..., where each s^t = (s^t_1, ..., s^t_N) is a strategy vector of some fixed stage game G = (N, S, p). Under perfect monitoring, when selecting an action in any period the players know all the previously selected actions. As we shall discuss, it is possible to learn to play without perfect monitoring as well.

¹ One property that is violated by the chess example is the "advantage of an advantage" property. Say Player 1 has the advantage over Player 2 in a square game if p_1(s_1, s_2) ≥ p_2(s_2, s_1) for all strategies s_1, s_2. At equilibrium of a k-sum game, a player with the advantage must have a payoff at least as large as its opponent's. This is no longer the case after incorporating strategy costs, as seen in the chess example, where Player 1 has the advantage (even including strategy costs), yet his equilibrium payoff is smaller than Player 2's.

a) Payoffs in the 10-sum game G (rows: player 1, columns: player 2):

           A          B          C          D
    A    6, 4       5, 5       3, 7       2, 8
    B    7, 3       6, 4       4, 6       3, 7
    C    7.5, 2.5   6.5, 3.5   4.5, 5.5   3.5, 6.5
    D    8.5, 1.5   7, 3       5.5, 4.5   4.5, 5.5

b) Expected net earnings in G−c:

               A (−1)     B (−2)     C (−3)     D (−4)
    A (−1)    5, 3       4, 3       2, 4       1, 4
    B (−2)    5, 2       4, 2       2, 3       1, 3
    C (−3)    4.5, 1.5   3.5, 1.5   1.5, 2.5   0.5, 2.5
    D (−4)    4.5, 0.5   3, 1       1.5, 1.5   0.5, 1.5

Figure 2: a) Payoffs in the 10-sum game G. b) Expected net earnings in G−c. OPT_1 is any mixture of A and B, and OPT_2 is any mixture of C and D. Each player's choice of equilibrium strategy affects only the opponent's net payoff. c) A graphical display of the payoff pairs (player 1's net payoff against player 2's, for all strategy profiles from A,A through D,D). The shaded region shows the rectangular set of payoffs achievable at mixed-strategy Nash equilibria.

Perhaps the most intuitive dynamics are best-response dynamics: at each stage, each player selects a best response to the opponent's previous-stage play. Unfortunately, these naive dynamics fail to converge to equilibrium in very simple examples.
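The reduction to the side-payment game H is mechanical, and the 4 × 4 example can be checked directly. The following sketch is ours; it builds G−c and H from Figure 2a and verifies identity (1) on pure strategies, i.e., that the two games share best responses:

```python
# Player 1's payoffs in the 10-sum game G of figure 2a; player 2 gets 10 - x.
P1 = [[6.0, 5.0, 3.0, 2.0],
      [7.0, 6.0, 4.0, 3.0],
      [7.5, 6.5, 4.5, 3.5],
      [8.5, 7.0, 5.5, 4.5]]
COST = [1.0, 2.0, 3.0, 4.0]   # same costs for both players' strategies A-D

def gc(i, j):   # net payoffs in G-c
    return P1[i][j] - COST[i], (10 - P1[i][j]) - COST[j]

def h(i, j):    # game H: each player pays its strategy cost to the other
    a, b = gc(i, j)
    return a + COST[j], b + COST[i]

# H is 10-sum, and payoff differences (eq. 1) agree between G-c and H.
for j in range(4):
    assert all(abs(sum(h(i, j)) - 10) < 1e-9 for i in range(4))
    for i in range(4):
        for i2 in range(4):
            assert abs((h(i, j)[0] - h(i2, j)[0]) -
                       (gc(i, j)[0] - gc(i2, j)[0])) < 1e-9
print(gc(0, 2), gc(1, 3))  # equilibrium cells (A,C) and (B,D): (2,4), (1,3)
```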
The fictitious play dynamics prescribe, at stage t, selecting any strategy that is a best response to the empirical distribution of the opponent's play during the first t − 1 stages. It has been shown that fictitious play converges to equilibrium (of the stage game G) in k-sum games [17]. However, fictitious play requires perfect monitoring.

One can learn to play a two-person k-sum game with no knowledge of the payoff table or anything about the other player's actions. Using experimentation, the only observations required by each player are its own payoffs in each period (in addition to the number of available actions). So-called bandit algorithms [7] must manage the exploration-exploitation tradeoff. The proof of their convergence follows from the fact that they are no-regret algorithms. (No-regret algorithms date back to Hannan in the 1950s [12], but his required perfect monitoring.) The regret of player i at stage T is defined to be

regret of i at T = (1/T) max_{s_i ∈ S_i} Σ_{t=1}^{T} [ p_i(s_i, s^t_{−i}) − p_i(s^t_i, s^t_{−i}) ],

that is, how much better in hindsight player i could have done on the first T stages had it used one fixed strategy the whole time (and had the opponents not changed their strategies). Note that regret can be positive or negative. A no-regret algorithm is one in which each player's asymptotic regret converges to (−∞, 0], i.e., is guaranteed to approach 0 or less. It is well known that the no-regret condition in two-person k-sum games implies convergence to equilibrium (see, e.g., [13]). In particular, the pair of mixed strategies which are the empirical distributions of play over time approaches the set of Nash equilibria of the stage game. Inverse-polynomial rates of convergence (that are polynomial also in the size of the game) can be given for such algorithms. Hence no-regret algorithms provide arguably reasonable ways to play a k-sum game of moderate size. Note that in general-sum games, no such dynamics are known. Fortunately, the same algorithms that work for learning in k-sum games seem to work for learning in such games with strategy costs.

Theorem 2. Fictitious play converges to the set of Nash equilibria of the stage game in a two-person k-sum game with strategy costs, as do no-regret learning dynamics.

Proof. The proof again follows from equation (1) regarding the game H. Fictitious play dynamics are defined only in terms of best-response play. Since G−c and H share the same best responses, fictitious play dynamics are identical for the two games. Since they share the same equilibria and fictitious play converges to equilibria in H, it must converge in G−c as well. For no-regret algorithms, equation (1) again implies that for any play sequence, the regret of each player i with respect to the game G−c is the same as its regret with respect to the game H. Hence, no regret in G−c implies no regret in H. Since no-regret algorithms converge to the set of equilibria in k-sum games, they converge to the set of equilibria in H and therefore in G−c as well.

4 Potential games with strategic costs

Let us begin with an example of a potential game, called a routing game [18]. There is a fixed directed graph with n nodes and m edges. Commuters i = 1, 2, ..., N each decide on a route π_i to take from their home s_i to their work t_i, where s_i and t_i are nodes in the graph. For each edge uv, let n_uv be the number of commuters whose path π_i contains edge uv. Let f_uv : Z → R be a nonnegative, monotonically increasing congestion function. Player i's payoff is −Σ_{uv ∈ π_i} f_uv(n_uv), i.e., the negative sum of the congestions on the edges in its path.
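Fictitious play on a k-sum game with costs is short to implement. The sketch below is our illustration; it runs fictitious play directly on the net payoffs of the chess example from Section 1, which by equation (1) is equivalent to running it on H, so the empirical play frequencies approach the equilibrium (A,B):

```python
def best_response(my_payoff, opp_counts):
    # Best response to the empirical distribution of the opponent's play.
    total = sum(opp_counts)
    return max(range(len(my_payoff)),
               key=lambda a: sum(my_payoff[a][b] * opp_counts[b]
                                 for b in range(len(opp_counts))) / total)

# Net payoffs of the chess example (indices: white A/B, black A/B).
WHITE = [[45, 3], [43, 1]]     # WHITE[w][b]: white's net payoff
BLACK = [[35, 37], [-3, -1]]   # BLACK[w][b]: black's net payoff
BLACK_T = [[BLACK[w][b] for w in range(2)] for b in range(2)]

counts_w, counts_b = [1, 1], [1, 1]   # start from uniform play counts
for _ in range(1000):
    w = best_response(WHITE, counts_b)
    b = best_response(BLACK_T, counts_w)
    counts_w[w] += 1
    counts_b[b] += 1

print(counts_w, counts_b)  # mass concentrates on white A, black B
```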
An N-person normal-form game G is said to be a potential game [15] if there is some potential function Φ : S_1 × ... × S_N → R such that changing a single player's action changes its payoff by the change in the potential function. That is, there exists a single function Φ such that for all players i and all pure strategy vectors s, s′ ∈ S_1 × ... × S_N that differ only in the ith coordinate,

p_i(s) − p_i(s′) = Φ(s) − Φ(s′).    (2)

Potential games have appealing learning properties: simple better-reply dynamics converge to pure-strategy Nash equilibria, as do the more sophisticated fictitious play dynamics described earlier [15]. In our example, this means that if players change their individual paths so as to selfishly reduce the sum of congestions on their path, this will eventually lead to an equilibrium where no one can improve. (This is easy to see because Φ keeps increasing.) The absence of similar learning properties for general games presents a frustrating hole in learning and game theory.

It is clear that the theoretically clean commuting example above misses some realistic considerations. One issue regarding complexity is that most commuters would not be willing to take a very complicated route just to save a short amount of time. To model this, we consider potential games with strategy costs. In our example, this would be a cost associated with every path. For example, suppose the graph represents streets in a given city. We consider a natural strategy complexity cost associated with a route π, say λ(#turns(π))², where λ ∈ R is a parameter and #turns(π) is defined as the number of times that a commuter has to turn on a route. (To be more precise, say each edge in the graph is annotated with a street name, and a turn is defined to be a pair of consecutive edges in the graph with different street names.) Hence, a best response for player i would minimize:

min over paths π from s_i to t_i of:  (total congestion of π) + λ(#turns(π))².

While adding strategy costs to potential games allows for much more flexibility in model design, one might worry that appealing properties of potential games, such as having pure-strategy equilibria and easy learning dynamics, no longer hold. This is not the case. We show that strategy costs fit easily into the potential game framework:

Theorem 3. For any potential game G and any cost functions c, G−c is also a potential game.

Proof. Let Φ be a potential function for G. It is straightforward to verify that G−c admits the following potential function Φ′:

Φ′(s_1, ..., s_N) = Φ(s_1, ..., s_N) − c_1(s_1) − ... − c_N(s_N).

5 Additional remarks

Part of the reason that the notion of bounded rationality is so difficult to formalize is that understanding enormous games like chess is a daunting proposition. That is why we have narrowed it down to choosing among a small number of available programs. A game theorist might begin by examining the complete payoff table of Figure 1a, which is prohibitively large. Instead of considering only the choices of programs A and B, each player considers all possible chess strategies. In that sense, our payoff table in 1a would be viewed as a reduction of the "real" normal-form game. A computer scientist, on the other hand, may consider it reasonable to begin with the existing strategies that one has access to.
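Theorem 3 can be exercised on a toy congestion game. The sketch below is ours: the two-route network and the quadratic turn penalty stand in for the street example and are purely illustrative. It runs better-response dynamics on G−c and checks that the potential Φ′ = Φ − Σ_i c_i(s_i) (with Φ the standard Rosenthal potential of the congestion game) never decreases:

```python
LAMBDA = 0.5
TURN_COST = [0.0, LAMBDA * 2**2]   # route 0: straight; route 1: two turns
N = 10
routes = [1] * N                   # everyone starts on the twisty route

def payoff(i, r, routes):
    # Congestion f(n) = n on the single edge of route r, minus the turn cost.
    n = sum(1 for j, rj in enumerate(routes) if (rj if j != i else r) == r)
    return -n - TURN_COST[r]

def potential(routes):
    # Rosenthal potential minus strategy costs (theorem 3).
    phi = 0.0
    for r in (0, 1):
        n = routes.count(r)
        phi -= n * (n + 1) / 2           # sum_{k=1..n} f(k) with f(k) = k
    return phi - sum(TURN_COST[r] for r in routes)

changed, last = True, potential(routes)
while changed:
    changed = False
    for i in range(N):
        best = max((0, 1), key=lambda r: payoff(i, r, routes))
        if payoff(i, best, routes) > payoff(i, routes[i], routes):
            routes[i] = best
            assert potential(routes) > last   # Phi' strictly increases
            last = potential(routes)
            changed = True
print(routes.count(0), routes.count(1))  # equilibrium split of commuters
```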
Regardless of how you view the process, it is clear that for practical purposes players in real life do simplify and analyze "smaller" sets of strategies. Even if the players consider the option of engineering new chess-playing software, this can be viewed as a third strategy in the game, with its own cost and expected payoffs. Again, when considering a small number of available strategies, like the two programs above, it may still be difficult to assess the expected payoffs that result when (possibly randomized) strategies play against each other. An additional assumption made throughout the paper is that the players share the same assessments of these expected payoffs. Like other common-knowledge assumptions made in game theory, it would be desirable to weaken this assumption. In the special families of games studied in this paper, and perhaps in additional cases, learning algorithms may be employed to reach equilibrium without knowledge of payoffs.

5.1 Finite automata playing repeated games

There has been a large body of interesting work on repeated games played by finite automata (see [14] for a survey). Much of this work is on achieving cooperation in the classic prisoner's dilemma game (e.g., [2, 3, 4, 5]). Many of these models can be incorporated into the general model outlined in this paper. For example, to view the Abreu and Rubinstein model [6] this way, consider the normal form of an infinitely repeated game with discounting, but restricted to strategies that can be described by finite automata (the payoffs in every cell of the payoff table are the discounted sums of the infinite streams of payoffs obtained in the repeated game). Let the cost of a strategy be an increasing function of the number of states it employs. For Neyman's model [3], consider the normal form of a finitely repeated game with a known number of repetitions. You may consider strategies in this normal form to be only ones with a bounded number of states, as required by Neyman, and assign zero cost to all strategies. Alternatively, you may allow all strategies but assign zero cost to ones that employ a number of states below Neyman's bounds, and an infinite cost to strategies that employ a number of states exceeding Neyman's bounds. The structure of equilibria proven in Theorem 1 applies to all the above models when dealing with repeated k-sum games, as in [2].

6 Future work

There are very interesting questions to answer about bounded rationality in truly large games that we did not touch upon. For example, consider the factoring game from the introduction. A pure strategy for Player 1 would be outputting a single n-bit number. A pure strategy for Player 2 would be any factoring program, described by a circuit that takes as input an n-bit number and attempts to output a representation of its prime factorization. The complexity of such a strategy would be an increasing function of the number of gates in the circuit. It would be interesting to make connections between asymptotic algorithm complexity and games. Another direction regards an elegant line of work on learning to play correlated equilibria by repeated play [11]. It would be natural to consider how strategy costs affect correlated equilibria. Finally, it would also be interesting to see how strategy costs affect the so-called "price of anarchy" [19] in congestion games.

Acknowledgments

This work was funded in part by a U.S.
NSF grant SES-0527656, a Landau Fellowship supported by the Taub and Shalom Foundations, a European Community International Reintegration Grant, an Alon Fellowship, ISF grant 679/06, and BSF grant 2004092. Part of this work was done while the first and second authors were at the Toyota Technological Institute at Chicago.

References

[1] H. Simon. The Sciences of the Artificial. MIT Press, Cambridge, MA, 1969.
[2] E. Ben-Porath. Repeated games with finite automata. Journal of Economic Theory, 59:17–32, 1993.
[3] A. Neyman. Bounded complexity justifies cooperation in the finitely repeated prisoner's dilemma. Economics Letters, 19:227–229, 1985.
[4] A. Rubinstein. Finite automata play the repeated prisoner's dilemma. Journal of Economic Theory, 39:83–96, 1986.
[5] C. Papadimitriou and M. Yannakakis. On complexity as bounded rationality. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pp. 726–733, 1994.
[6] D. Abreu and A. Rubinstein. The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56:1259–1281, 1988.
[7] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[8] X. Chen, X. Deng, and S. Teng. Computing Nash equilibria: approximation and smoothed complexity. Electronic Colloquium on Computational Complexity, Report TR06-023, 2006.
[9] K. Daskalakis, P. Goldberg, and C. Papadimitriou. The complexity of computing a Nash equilibrium. Electronic Colloquium on Computational Complexity, Report TR05-115, 2005.
[10] C. Ewerhart. Chess-like games are dominance solvable in at most two steps. Games and Economic Behavior, 33:41–47, 2000.
[11] D. Foster and R. Vohra. Regret in the on-line decision problem. Games and Economic Behavior, 21:40–55, 1997.
[12] J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pp. 97–139. Princeton University Press, 1957.
[13] S. Hart and A. Mas-Colell. A general class of adaptive strategies. Journal of Economic Theory, 98(1):26–54, 2001.
[14] E. Kalai. Bounded rationality and strategic complexity in repeated games. In T. Ichiishi, A. Neyman, and Y. Tauman, editors, Game Theory and Applications, pp. 131–157. Academic Press, San Diego, 1990.
[15] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124–143, 1996.
[16] H. Moulin and P. Vial. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory, 7:201–221, 1978.
[17] J. Robinson. An iterative method of solving a game. Ann. Math., 54:296–301, 1951.
[18] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65–67, 1973.
[19] E. Koutsoupias and C. Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, pp. 404–413, 1999.
Multi-Task Feature Learning

Andreas Argyriou
Department of Computer Science, University College London
Gower Street, London WC1E 6BT, UK
a.argyriou@cs.ucl.ac.uk

Theodoros Evgeniou
Technology Management and Decision Sciences, INSEAD
Bd de Constance, Fontainebleau 77300, France
theodoros.evgeniou@insead.edu

Massimiliano Pontil
Department of Computer Science, University College London
Gower Street, London WC1E 6BT, UK
m.pontil@cs.ucl.ac.uk

Abstract

We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common to all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select (rather than learn) a few common features across the tasks.

1 Introduction

Learning multiple related tasks simultaneously has been shown empirically [2, 3, 8, 9, 12, 18, 19, 20] as well as theoretically [2, 4, 5] to often significantly improve performance relative to learning each task independently. This is the case, for example, when only a few data per task are available, so that there is an advantage in "pooling" together data across many related tasks.

Tasks can be related in various ways. For example, task relatedness has been modeled through assuming that all functions learned are close to each other in some norm [3, 8, 15, 19]. This may be the case for functions capturing preferences in user modeling problems [9, 13]. Tasks may also be related in that they all share a common underlying representation [4, 5, 6]. For example, in object recognition, it is well known that the human visual system is organized in a way that all objects are represented, at the earlier stages of the visual system, using a common set of learned features, e.g. local filters similar to wavelets [16].¹ In modeling users' preferences/choices, it may also be the case that people make product choices (e.g. of books, music CDs, etc.) using a common set of features describing these products.

In this paper, we explore the latter type of task relatedness, that is, we wish to learn a low-dimensional representation which is shared across multiple related tasks. Inspired by the fact that the well-known 1-norm regularization problem provides such a sparse representation for the single-task case, in Section 2 we generalize this formulation to the multiple task case. Our method learns a few features common across the tasks by regularizing within the tasks while keeping them coupled to each other. Moreover, the method can be used, as a special case, to select (not learn) a few features from a prescribed set.

¹ We consider each object recognition problem within each object category, e.g. recognizing a face among faces, or a car among cars, to be a different task.
Since the extended problem is nonconvex, we develop an equivalent convex optimization problem in Section 3 and present an algorithm for solving it in Section 4. A similar algorithm was investigated in [9] from the perspective of conjoint analysis. Here we provide a theoretical justification of the algorithm in connection with 1-norm regularization. The learning algorithm simultaneously learns both the features and the task functions through two alternating steps. The first step consists of independently learning the parameters of the tasks' regression or classification functions. The second step consists of learning, in an unsupervised way, a low-dimensional representation for these task parameters, which we show to be equivalent to learning common features across the tasks. The number of common features learned is controlled, as we empirically show, by the regularization parameter, much like sparsity is controlled in the case of single-task 1-norm regularization. In Section 5, we report experiments on a simulated and a real data set which demonstrate that the proposed method learns a few common features across the tasks while also improving the performance relative to learning each task independently. Finally, in Section 6 we briefly compare our approach with other related multi-task learning methods and draw our conclusions.

2 Learning sparse multi-task representations

We begin by introducing our notation. We let $\mathbb{R}$ be the set of real numbers and $\mathbb{R}_+$ ($\mathbb{R}_{++}$) the subset of non-negative (positive) ones. Let $T$ be the number of tasks and define $\mathbb{N}_T := \{1, \dots, T\}$. For each task $t \in \mathbb{N}_T$, we are given $m$ input/output examples $(x_{t1}, y_{t1}), \dots, (x_{tm}, y_{tm}) \in \mathbb{R}^d \times \mathbb{R}$. Based on this data, we wish to estimate $T$ functions $f_t : \mathbb{R}^d \to \mathbb{R}$, $t \in \mathbb{N}_T$, which approximate the data well and are statistically predictive, see e.g. [11].

If $w, u \in \mathbb{R}^d$, we define $\langle w, u \rangle := \sum_{i=1}^d w_i u_i$, the standard inner product in $\mathbb{R}^d$. For every $p \ge 1$, we define the $p$-norm of a vector $w$ as $\|w\|_p := (\sum_{i=1}^d |w_i|^p)^{1/p}$. If $A$ is a $d \times T$ matrix we denote by $a^i \in \mathbb{R}^T$ and $a_j \in \mathbb{R}^d$ the $i$-th row and the $j$-th column of $A$ respectively. For every $r, p \ge 1$ we define the $(r,p)$-norm of $A$ as $\|A\|_{r,p} := \big(\sum_{i=1}^d \|a^i\|_r^p\big)^{1/p}$.

We denote by $S^d$ the set of $d \times d$ real symmetric matrices and by $S^d_+$ the subset of positive semidefinite ones. If $D$ is a $d \times d$ matrix, we define $\operatorname{trace}(D) := \sum_{i=1}^d D_{ii}$. If $X$ is a $p \times q$ real matrix, $\operatorname{range}(X)$ denotes the set $\{x \in \mathbb{R}^p : x = Xz \text{ for some } z \in \mathbb{R}^q\}$. We let $O^d$ be the set of $d \times d$ orthogonal matrices. Finally, $D^+$ denotes the pseudoinverse of a matrix $D$.

2.1 Problem formulation

The underlying assumption in this paper is that the functions $f_t$ are related so that they all share a small set of features. Formally, our hypothesis is that the functions $f_t$ can be represented as
$$f_t(x) = \sum_{i=1}^d a_{it} h_i(x), \qquad t \in \mathbb{N}_T, \tag{2.1}$$
where $h_i : \mathbb{R}^d \to \mathbb{R}$ are the features and $a_{it} \in \mathbb{R}$ are the regression parameters. Our main assumption is that all the features but a few have zero coefficients across all the tasks.

For simplicity, we focus on linear features, that is, $h_i(x) = \langle u_i, x \rangle$, where $u_i \in \mathbb{R}^d$. In addition, we assume that the vectors $u_i$ are orthonormal. Thus, if $U$ denotes the $d \times d$ matrix with columns the vectors $u_i$, then $U \in O^d$. The functions $f_t$ are linear as well, that is $f_t(x) = \langle w_t, x \rangle$, where $w_t = \sum_i a_{it} u_i$. Extensions to nonlinear functions may be done, for example, by using kernels along the lines in [8, 15]. Since this is not central to the present paper, we postpone its discussion to a future occasion.
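As a concrete illustration of the model in equation (2.1), the following sketch (NumPy; all names are illustrative, not from the paper) builds task parameters $w_t = \sum_i a_{it} u_i$ from an orthonormal feature matrix $U$ and a coefficient matrix $A$ whose few nonzero rows are shared by all tasks:

```python
import numpy as np

# A minimal sketch of the shared linear feature model of equation (2.1):
# W = U A with orthonormal features u_i; only a few rows of A are nonzero,
# so all tasks use the same small set of features.
d, T = 5, 3
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal columns u_i
A = np.zeros((d, T))
A[:2] = rng.standard_normal((2, T))               # two features shared across tasks
W = U @ A                                         # columns are the w_t
x = rng.standard_normal(d)
predictions = W.T @ x                             # f_t(x) = <w_t, x> for every task
```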
Let us denote by $W$ the $d \times T$ matrix whose columns are the vectors $w_t$ and by $A$ the $d \times T$ matrix with entries $a_{it}$. We then have that $W = UA$. Our assumption that the tasks share a "small" set of features means that the matrix $A$ has "many" rows which are identically equal to zero and, so, the corresponding features (columns of matrix $U$) will not be used to represent the task parameters (columns of matrix $W$). In other words, matrix $W$ is a low rank matrix. We note that the problem of learning a low-rank matrix factorization which approximates a given partially observed target matrix has been considered in [1], [17] and references therein. We briefly discuss its connection to our current work in Section 4.

In the following, we describe our approach to computing the feature vectors $u_i$ and the parameters $a_{it}$. We first consider the case that there is only one task (say task $t$) and the features $u_i$ are fixed. To learn the parameter vector $a_t \in \mathbb{R}^d$ from data $\{(x_{ti}, y_{ti})\}_{i=1}^m$ we would like to minimize the empirical error $\sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle)$ subject to an upper bound on the number of nonzero components of $a_t$, where $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ is a prescribed loss function which we assume to be convex in the second argument. This problem is intractable and is often relaxed by requiring an upper bound on the 1-norm of $a_t$. That is, we consider the problem $\min\big\{\sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) : \|a_t\|_1 \le \alpha\big\}$, or equivalently the unconstrained problem
$$\min\left\{\sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|a_t\|_1^2 : a_t \in \mathbb{R}^d\right\}, \tag{2.2}$$
where $\gamma > 0$ is the regularization parameter. It is well known that using the 1-norm leads to sparse solutions, that is, many components of the learned vector $a_t$ are zero, see [7] and references therein. Moreover, the number of nonzero components of a solution to problem (2.2) is "typically" a non-increasing function of $\gamma$ [14].

We now generalize problem (2.2) to the multi-task case. For this purpose, we introduce the regularization error function
$$\mathcal{E}(A, U) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|A\|_{2,1}^2. \tag{2.3}$$
The first term in (2.3) is the average of the empirical error across the tasks while the second one is a regularization term which penalizes the $(2,1)$-norm of the matrix $A$. It is obtained by first computing the 2-norms of the (across the tasks) rows $a^i$ (corresponding to feature $i$) of matrix $A$ and then the 1-norm of the vector $b(A) = (\|a^1\|_2, \dots, \|a^d\|_2)$. This norm combines the tasks and ensures that common features will be selected across them. Indeed, if the features $U$ are prescribed and $\hat{A}$ minimizes the function $\mathcal{E}$ over $A$, the number of nonzero components of the vector $b(\hat{A})$ will typically be non-increasing with $\gamma$, like in the case of 1-norm single-task regularization. Moreover, the components of the vector $b(\hat{A})$ indicate how important each feature is and favor uniformity across the tasks for each feature.

Since we do not simply want to select the features but also learn them, we further minimize the function $\mathcal{E}$ over $U$, that is, we consider the optimization problem
$$\min\left\{\mathcal{E}(A, U) : U \in O^d,\ A \in \mathbb{R}^{d \times T}\right\}. \tag{2.4}$$
This method learns a low-dimensional representation which is shared across the tasks. As in the single-task case, the number of features will be typically non-increasing with the regularization parameter; we shall present experimental evidence of this fact in Section 5 (see Figure 1 therein).
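The $(2,1)$-norm regularizer of equation (2.3) is simple to compute; the following sketch (NumPy, illustrative names) makes the row-then-column order of the two norms explicit:

```python
import numpy as np

def norm_21(A):
    """(2,1)-norm of a d x T matrix: the 1-norm of b(A) = (||a^1||_2, ..., ||a^d||_2)."""
    return np.linalg.norm(A, axis=1).sum()   # 2-norm across tasks, then sum over features

A = np.array([[3.0, 4.0],    # feature used by both tasks
              [0.0, 0.0],    # feature used by no task
              [1.0, 0.0]])   # feature used by one task
print(norm_21(A))            # 5 + 0 + 1 = 6
print(norm_21(A) ** 2)       # the penalty ||A||_{2,1}^2 appearing in (2.3)
```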
We note that when the matrix $U$ is not learned and we set $U = I_{d \times d}$, problem (2.4) computes a common set of variables across the tasks. That is, we have the following convex optimization problem
$$\min\left\{\sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle a_t, x_{ti} \rangle) + \gamma \|A\|_{2,1}^2 : A \in \mathbb{R}^{d \times T}\right\}. \tag{2.5}$$
We shall return to problem (2.5) in Section 4 where we present an algorithm for solving it.

3 Equivalent convex optimization formulation

Solving problem (2.4) is a challenging task for two main reasons. First, it is a non-convex problem, although it is separately convex in each of the variables $A$ and $U$. Second, the norm $\|A\|_{2,1}$ is nonsmooth, which makes it more difficult to optimize. A main result in this paper is that problem (2.4) can be transformed into an equivalent convex problem. To this end, for every $W \in \mathbb{R}^{d \times T}$ and $D \in S^d_+$, we define the function
$$\mathcal{R}(W, D) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle w_t, x_{ti} \rangle) + \gamma \sum_{t=1}^T \langle w_t, D^+ w_t \rangle. \tag{3.1}$$

Theorem 3.1. Problem (2.4) is equivalent to the problem
$$\min\left\{\mathcal{R}(W, D) : W \in \mathbb{R}^{d \times T},\ D \in S^d_+,\ \operatorname{trace}(D) \le 1,\ \operatorname{range}(W) \subseteq \operatorname{range}(D)\right\}. \tag{3.2}$$
That is, $(\hat{A}, \hat{U})$ is an optimal solution for (2.4) if and only if $(\hat{W}, \hat{D}) = (\hat{U}\hat{A},\ \hat{U}\operatorname{Diag}(\hat{\lambda})\hat{U}^\top)$ is an optimal solution for (3.2), where
$$\hat{\lambda}_i := \frac{\|\hat{a}^i\|_2}{\|\hat{A}\|_{2,1}}. \tag{3.3}$$

Proof. Let $W = UA$ and $D = U \operatorname{Diag}\big(\tfrac{\|a^i\|_2}{\|A\|_{2,1}}\big) U^\top$. Then $\|a^i\|_2 = \|W^\top u_i\|_2$ and hence
$$\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \operatorname{trace}(W^\top D^+ W) = \|A\|_{2,1} \operatorname{trace}\Big(W^\top U \operatorname{Diag}\big((\|W^\top u_i\|_2)^+\big) U^\top W\Big) = \|A\|_{2,1} \sum_{i=1}^d \|W^\top u_i\|_2 = \|A\|_{2,1}^2.$$
Therefore, $\min_{W,D} \mathcal{R}(W, D) \le \min_{A,U} \mathcal{E}(A, U)$. Conversely, let $D = U \operatorname{Diag}(\lambda) U^\top$. Then
$$\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \operatorname{trace}\big(W^\top U \operatorname{Diag}(\lambda_i^+) U^\top W\big) = \operatorname{trace}\big(\operatorname{Diag}(\lambda_i^+)\, A A^\top\big) \ge \|A\|_{2,1}^2,$$
by Lemma 4.2. Note that the range constraint ensures that $W$ is a multiple of the submatrix of $U$ which corresponds to the nonzero eigenvalues of $D$, and hence if $\lambda_i = 0$ then $a^i = 0$ as well. Therefore, $\min_{A,U} \mathcal{E}(A, U) \le \min_{W,D} \mathcal{R}(W, D)$.

In problem (3.2) we have constrained the trace of $D$; otherwise the optimal solution would be to simply set $D = \infty$ and only minimize the empirical error term in (3.1). Similarly, we have imposed the range constraint to ensure that the penalty term is bounded below and away from zero. Indeed, without this constraint, it may be possible that $DW = 0$ when $W$ does not have full rank, in which case there is a matrix $D$ for which $\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \operatorname{trace}(W^\top D^+ W) = 0$. We note that the rank of matrix $D$ indicates how many common relevant features the tasks share. Indeed, it is clear from equation (3.3) that the rank of matrix $D$ equals the number of nonzero rows of matrix $A$.

We now show that the function $\mathcal{R}$ in equation (3.1) is jointly convex in $W$ and $D$. For this purpose, we define the function $f(w, D) = w^\top D^+ w$ if $D \in S^d_+$ and $w \in \operatorname{range}(D)$, and $f(w, D) = +\infty$ otherwise. Clearly, $\mathcal{R}$ is convex provided $f$ is convex. The latter is true since a direct computation expresses $f$ as the supremum of a family of convex functions, namely we have that
$$f(w, D) = \sup\{w^\top v + \operatorname{trace}(ED) : E \in S^d,\ v \in \mathbb{R}^d,\ 4E + vv^\top \preceq 0\}.$$
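A small numerical check of the identity used in the proof of Theorem 3.1 (NumPy; illustrative, not from the paper): with $W = UA$ and $D = U \operatorname{Diag}(\|a^i\|_2/\|A\|_{2,1}) U^\top$, the penalty $\sum_t \langle w_t, D^+ w_t \rangle$ equals $\|A\|_{2,1}^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 4, 3
U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthonormal features
A = rng.standard_normal((d, T))                    # all rows nonzero a.s.
W = U @ A
row_norms = np.linalg.norm(A, axis=1)
lam = row_norms / row_norms.sum()                  # eigenvalues of D, eq. (3.3)
D = (U * lam) @ U.T                                # D = U Diag(lam) U'
penalty = np.trace(W.T @ np.linalg.pinv(D) @ W)    # sum_t <w_t, D^+ w_t>
print(np.isclose(penalty, row_norms.sum() ** 2))   # True: equals ||A||_{2,1}^2
```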
4 Learning algorithm

We solve problem (3.2) by alternately minimizing the function $\mathcal{R}$ with respect to $D$ and the $w_t$ (recall that $w_t$ is the $t$-th column of matrix $W$). When we keep $D$ fixed, the minimization over $w_t$ simply consists of learning the parameters $w_t$ independently by a regularization method, for example by an SVM or ridge regression type method.² For a fixed value of the vectors $w_t$, we learn $D$ by simply solving the minimization problem
$$\min\left\{\sum_{t=1}^T \langle w_t, D^+ w_t \rangle : D \in S^d_+,\ \operatorname{trace}(D) \le 1,\ \operatorname{range}(W) \subseteq \operatorname{range}(D)\right\}. \tag{4.1}$$
The following theorem characterizes the optimal solution of problem (4.1).

² As noted in the introduction, other multi-task learning methods can be used. For example, we can also penalize the variance of the $w_t$'s, "forcing" them to be close to each other, as in [8]. This would only slightly change the overall method.

Algorithm 1 (Multi-Task Feature Learning)
Input: training sets $\{(x_{ti}, y_{ti})\}_{i=1}^m$, $t \in \mathbb{N}_T$
Parameters: regularization parameter $\gamma$
Output: $d \times d$ matrix $D$, $d \times T$ regression matrix $W = [w_1, \dots, w_T]$
Initialization: set $D = \frac{I_{d \times d}}{d}$
while convergence condition is not true do
  for $t = 1, \dots, T$ do
    compute $w_t = \operatorname{argmin}\big\{\sum_{i=1}^m L(y_{ti}, \langle w, x_{ti} \rangle) + \gamma \langle w, D^+ w \rangle : w \in \mathbb{R}^d,\ w \in \operatorname{range}(D)\big\}$
  end for
  set $D = \frac{(WW^\top)^{1/2}}{\operatorname{trace}(WW^\top)^{1/2}}$
end while

Theorem 4.1. Let $C = WW^\top$. The optimal solution of problem (4.1) is
$$D = \frac{C^{1/2}}{\operatorname{trace}\, C^{1/2}} \tag{4.2}$$
and the optimal value equals $(\operatorname{trace}\, C^{1/2})^2$.

We first introduce the following lemma which is useful in our analysis.

Lemma 4.2. For any $b = (b_1, \dots, b_d) \in \mathbb{R}^d$, we have that
$$\inf\left\{\sum_{i=1}^d \frac{b_i^2}{\lambda_i} : \lambda_i > 0,\ \sum_{i=1}^d \lambda_i \le 1\right\} = \|b\|_1^2, \tag{4.3}$$
and any minimizing sequence converges to $\lambda_i = \frac{|b_i|}{\|b\|_1}$, $i \in \mathbb{N}_d$.

Proof. From the Cauchy-Schwarz inequality we have that $\|b\|_1 = \sum_{b_i \ne 0} \lambda_i^{1/2} \lambda_i^{-1/2} |b_i| \le \big(\sum_{b_i \ne 0} \lambda_i\big)^{1/2} \big(\sum_{b_i \ne 0} \lambda_i^{-1} b_i^2\big)^{1/2} \le \big(\sum_{i=1}^d \lambda_i^{-1} b_i^2\big)^{1/2}$. Convergence to the infimum is obtained when $\sum_{i=1}^d \lambda_i \to 1$ and $\lambda_i |b_j| - \lambda_j |b_i| \to 0$ for all $i, j \in \mathbb{N}_d$ such that $b_i, b_j \ne 0$. Hence $\lambda_i \to \frac{|b_i|}{\|b\|_1}$. The infimum is attained when $b_i \ne 0$ for all $i \in \mathbb{N}_d$.

Proof of Theorem 4.1. We write $D = U \operatorname{Diag}(\lambda) U^\top$, with $U \in O^d$ and $\lambda \in \mathbb{R}^d_+$. We first minimize over $\lambda$. For this purpose, we use Lemma 4.2 to obtain that
$$\inf\left\{\operatorname{trace}\big(W^\top U \operatorname{Diag}(\lambda)^{-1} U^\top W\big) : \lambda \in \mathbb{R}^d_{++},\ \sum_{i=1}^d \lambda_i \le 1\right\} = \|U^\top W\|_{2,1}^2 = \left(\sum_{i=1}^d \|W^\top u_i\|_2\right)^2.$$
Next we show that
$$\min\left\{\|U^\top W\|_{2,1}^2 : U \in O^d\right\} = (\operatorname{trace}\, C^{1/2})^2$$
and that a minimizing $U$ is a system of eigenvectors of $C$. To see this, note that $\|W^\top u_i\|_2^2 = \operatorname{trace}(C^{1/2} u_i u_i^\top u_i u_i^\top C^{1/2})$ and $\operatorname{trace}(u_i u_i^\top u_i u_i^\top) = 1$, so the Cauchy-Schwarz inequality for the trace inner product gives
$$\|W^\top u_i\|_2 \ge \operatorname{trace}(C^{1/2} u_i u_i^\top u_i u_i^\top) = u_i^\top C^{1/2} u_i,$$
with equality if and only if $C^{1/2} u_i u_i^\top = a\, u_i u_i^\top$ for some scalar $a$, which implies that $C^{1/2} u_i = a u_i$, that is, $u_i$ is an eigenvector of $C$. Summing over $i$ yields $\|U^\top W\|_{2,1} \ge \operatorname{trace}(C^{1/2})$, with equality when the $u_i$ form a system of eigenvectors of $C$.

The expression $\operatorname{trace}(WW^\top)^{1/2}$ in (4.2) is simply the sum of the singular values of $W$ and is sometimes called the trace norm. As shown in [10], the trace norm is the convex envelope of $\operatorname{rank}(W)$ in the unit ball, which gives another interpretation of the relationship between the rank and $\gamma$ in our experiments. Using the trace norm, problem (3.2) becomes a regularization problem which depends only on $W$.

[Figure 1: Number of features learned versus the regularization parameter $\gamma$ (see text for description).]

However, since the trace norm is nonsmooth, we have opted for the above alternating minimization strategy, which is simple to implement and has a natural interpretation. Indeed, Algorithm 1 alternately performs a supervised and an unsupervised step, where in the latter step we learn common representations across the tasks and in the former step we learn task-specific functions using these representations.
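The following sketch (NumPy; square loss; the names `mtfl` and `sqrtm_psd` are ours, not from the paper) implements one plausible reading of Algorithm 1. With the square loss, the $w_t$ step restricted to $\operatorname{range}(D)$ has a closed form: substituting $w = D^{1/2} v$ turns it into ordinary ridge regression in $v$, and the solution automatically stays in $\operatorname{range}(D)$.

```python
import numpy as np

def sqrtm_psd(S):
    """Symmetric PSD matrix square root via eigendecomposition."""
    lam, U = np.linalg.eigh(S)
    return (U * np.sqrt(np.clip(lam, 0.0, None))) @ U.T

def mtfl(X_tasks, y_tasks, gamma, n_iter=50):
    """Alternating minimization in the spirit of Algorithm 1 (square loss).

    X_tasks[t] is the m_t x d design matrix of task t, y_tasks[t] its targets.
    Returns (D, W) where the columns of W are the task parameter vectors w_t.
    """
    d = X_tasks[0].shape[1]
    T = len(X_tasks)
    D = np.eye(d) / d                              # initialization: D = I/d
    W = np.zeros((d, T))
    for _ in range(n_iter):
        Dh = sqrtm_psd(D)                          # D^{1/2}
        for t in range(T):
            # w_t = argmin ||y - X w||^2 + gamma <w, D^+ w> over w in range(D);
            # the substitution w = D^{1/2} v reduces this to ridge regression.
            Z = X_tasks[t] @ Dh
            v = np.linalg.solve(Z.T @ Z + gamma * np.eye(d), Z.T @ y_tasks[t])
            W[:, t] = Dh @ v
        Ch = sqrtm_psd(W @ W.T)
        D = Ch / np.trace(Ch)                      # Theorem 4.1: D = C^{1/2}/trace(C^{1/2})
    return D, W
```

The number of shared features then appears as the (numerical) rank of the returned $D$, which shrinks as `gamma` grows, matching the behavior reported in Figure 1.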
We conclude this section by noting that when matrix $D$ in problem (3.2) is additionally constrained to be diagonal, problem (3.2) reduces to problem (2.5). Formally, we have the following corollary.

Corollary 4.3. Problem (2.5) is equivalent to the problem
$$\min\left\{\mathcal{R}(W, \operatorname{Diag}(\lambda)) : W \in \mathbb{R}^{d \times T},\ \lambda \in \mathbb{R}^d_+,\ \sum_{i=1}^d \lambda_i \le 1,\ \lambda_i \ne 0 \text{ when } w^i \ne 0\right\} \tag{4.4}$$
and the optimal $\lambda$ is given by
$$\lambda_i = \frac{\|w^i\|_2}{\|W\|_{2,1}}, \qquad i \in \mathbb{N}_d. \tag{4.5}$$
Using this corollary we can make a simple modification to Algorithm 1 in order to use it for variable selection. That is, we modify the computation of the matrix $D$ (penultimate line in Algorithm 1) as $D = \operatorname{Diag}(\lambda)$, where the vector $\lambda = (\lambda_1, \dots, \lambda_d)$ is computed using equation (4.5).
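In code, the variable-selection variant only swaps the $D$ update; a sketch (again with illustrative names, building on the `mtfl` sketch above):

```python
import numpy as np

def diagonal_D(W, eps=1e-12):
    """D update for the variable-selection variant (Corollary 4.3):
    D = Diag(lambda) with lambda_i = ||w^i||_2 / ||W||_{2,1}, equation (4.5)."""
    row_norms = np.linalg.norm(W, axis=1)          # ||w^i||_2 across the tasks
    return np.diag(row_norms / max(row_norms.sum(), eps))
```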
5 Experiments

In this section, we present experiments on a synthetic and a real data set. In all of our experiments, we used the square loss function and automatically tuned the regularization parameter $\gamma$ with leave-one-out cross validation.

Synthetic experiments. We created synthetic data sets by generating $T = 200$ task parameters $w_t$ from a 5-dimensional Gaussian distribution with zero mean and covariance equal to $\operatorname{Diag}(1, 0.25, 0.1, 0.05, 0.01)$. These are the relevant dimensions we wish to learn. To these we kept adding up to 20 irrelevant dimensions which are exactly zero. The training and test sets were selected randomly from $[0,1]^{25}$ and contained 5 and 10 examples per task respectively. The outputs $y_{ti}$ were computed from the $w_t$ and $x_{ti}$ as $y_{ti} = \langle w_t, x_{ti} \rangle + \epsilon$, where $\epsilon$ is zero-mean Gaussian noise with standard deviation equal to 0.1.

We first present, in Figure 1, the number of features learned by our algorithm, as measured by $\operatorname{rank}(D)$. The plot on the left corresponds to a data set of 200 tasks with 25 input dimensions and that on the right to a real data set of 180 tasks described in the next subsection. As expected, the number of features decreases with $\gamma$.

Figure 2 depicts the performance of our algorithm for $T = 10, 25, 100$ and 200 tasks along with the performance of 200 independent standard ridge regressions on the data. For $T = 10, 25$ and 100, we averaged the performance metrics over runs on all the data so that our estimates have comparable variance. In agreement with past empirical and theoretical evidence (see e.g. [4]), learning multiple tasks together significantly improves on learning the tasks independently. Moreover, the performance of the algorithm improves when more tasks are available. This improvement is moderate for low dimensionalities but increases as the number of irrelevant dimensions increases.

[Figure 2: Test error (left) and residual of learned features (right) vs. dimensionality of the input.]

On the right of Figure 2, we have plotted a residual measure of how well the learned features approximate the actual ones used to generate the data. More specifically, we depict the Frobenius norm of the difference of the learned and actual $D$'s versus the input dimensionality. We observe that adding more tasks leads to better estimates of the underlying features.

Conjoint analysis experiment. We then tested the method using a real data set about people's ratings of products from [13]. The data was taken from a survey of 180 persons who rated the likelihood of purchasing one of 20 different personal computers. Here the persons correspond to tasks and the PC models to examples. The input is represented by the following 13 binary attributes: telephone hot line (TE), amount of memory (RAM), screen size (SC), CPU speed (CPU), hard disk (HD), CD-ROM/multimedia (CD), cache (CA), color (CO), availability (AV), warranty (WA), software (SW), guarantee (GU) and price (PR). We also added an input component accounting for the bias term. The output is an integer rating on the scale 0-10. Following [13], we used 4 examples per task as the test data and 8 examples per task as the training data.

[Figure 3: Test error vs. number of tasks (left) for the computer survey data set. Significance of features (middle) and attributes learned by the most important feature (right).]

As shown in Figure 3, the performance of our algorithm improves with the number of tasks. It also performs much better than independent ridge regressions, whose test error is equal to 16.53. In this particular problem, it is also important to investigate which features are significant to all consumers and how they weight the 13 computer attributes. We demonstrate the results in the two adjacent plots, which were obtained with the data for all 180 tasks. In the middle, the distribution of the eigenvalues of $D$ is depicted, indicating that there is a single most important feature which is shared by all persons. The plot on the right shows the weight of each input dimension in this most important feature. This feature seems to weigh the technical characteristics of a computer (RAM, CPU and CD-ROM) against its price. Therefore, in this application our algorithm is able to discern interesting patterns in people's decision process.

School data. Preliminary experiments with the school data used in [3] achieved explained variance of 37.1%, compared to 29.5% in that paper. These results will be reported in future work.

6 Conclusion

We have presented an algorithm which learns common sparse function representations across a pool of related tasks. To our knowledge, our approach provides the first convex optimization formulation for multi-task feature learning. Although convex optimization methods have been derived for the simpler problem of feature selection [12], prior work on multi-task feature learning has been based on more complex optimization problems which are not convex [2, 4, 6] and, so, are at best only guaranteed to converge to a local minimum. Our algorithm shares some similarities with recent work in [2] where they also alternately update the task parameters and the features. Two main differences are that their formulation is not convex and that, in our formulation, the number of learned features is not a parameter but is controlled by the regularization parameter.

This work may be extended in different directions. For example, it would be interesting to explore whether our formulation can be extended to more general models for the structure across the tasks, as in [20] where ICA type features are learned, or to hierarchical feature models as in [18].

Acknowledgments

We wish to thank Yiming Ying and Raphael Hauser for observations on the convexity of (3.2), Charles Micchelli for valuable suggestions and the anonymous reviewers for their useful comments. This work was supported by EPSRC Grants GR/T18707/01 and EP/D071542/1, and by the IST Programme of the European Commission, under the PASCAL Network of Excellence IST-2002-506778.

References

[1] J. Abernethy, F. Bach, T. Evgeniou and J.-P. Vert. Low-rank matrix factorization with attributes. Technical report N24/06/MM, Ecole des Mines de Paris, 2006.
[2] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multi-task learning. Journal of Machine Learning Research, 4:83-99, 2003.
[4] J. Baxter. A model for inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.
[5] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Proceedings of Computational Learning Theory (COLT), 2003.
[6] R. Caruana. Multi-task learning. Machine Learning, 28:41-75, 1997.
[7] D. Donoho. For most large underdetermined systems of linear equations, the minimal l1-norm near-solution approximates the sparsest near-solution. Preprint, Dept. of Statistics, Stanford University, 2004.
[8] T. Evgeniou, C.A. Micchelli and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[9] T. Evgeniou, M. Pontil and O. Toubia. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. INSEAD N 2006/62/TOM/DS.
[10] M. Fazel, H. Hindi and S.P. Boyd. A rank minimization heuristic with application to minimum order system approximation. Proceedings, American Control Conference, 6, 2001.
[11] T. Hastie, R. Tibshirani and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Verlag Series in Statistics, New York, 2001.
[12] T. Jebara. Multi-task feature and kernel selection for SVMs. Proceedings of ICML, 2004.
[13] P.J. Lenk, W.S. DeSarbo, P.E. Green and M.R. Young. Hierarchical Bayes conjoint analysis: recovery of partworth heterogeneity from reduced experimental designs. Marketing Science, 15(2):173-191, 1996.
[14] C.A. Micchelli and A. Pinkus. Variational problems arising from balancing several error criteria. Rendiconti di Matematica, Serie VII, 14:37-86, 1994.
[15] C.A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17:177-204, 2005.
[16] T. Serre, M. Kouh, C. Cadieu, U. Knoblich, G. Kreiman and T. Poggio. Theory of object recognition: computations and circuits in the feedforward path of the ventral stream in primate visual cortex. AI Memo No. 2005-036, MIT, Cambridge, MA, October 2005.
[17] N. Srebro, J.D.M. Rennie and T.S. Jaakkola. Maximum-margin matrix factorization. NIPS, 2004.
[18] A. Torralba, K.P. Murphy and W.T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. Proceedings of CVPR, pages 762-769, 2004.
[19] K. Yu, V. Tresp and A. Schwaighofer. Learning Gaussian processes from multiple tasks. Proceedings of ICML, 2005.
[20] J. Zhang, Z. Ghahramani and Y. Yang. Learning multiple related tasks using latent independent component analysis. NIPS, 2006.
Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

Manfred K. Warmuth
Computer Science Department
University of California - Santa Cruz
manfred@cse.ucsc.edu

Dima Kuzmin
Computer Science Department
University of California - Santa Cruz
dima@cse.ucsc.edu

Abstract

We design an on-line algorithm for Principal Component Analysis. In each trial the current instance is projected onto a probabilistically chosen low dimensional subspace. The total expected quadratic approximation error equals the total quadratic approximation error of the best subspace chosen in hindsight plus some additional term that grows linearly in the dimension of the subspace but logarithmically in the dimension of the instances.

1 Introduction

In Principal Component Analysis the $n$-dimensional data instances are projected into a $k$-dimensional subspace ($k < n$) so that the total quadratic approximation error is minimized. After centering the data, the problem is equivalent to finding the eigenvectors of the $k$ largest eigenvalues of the data covariance matrix.

We develop a probabilistic on-line version of PCA: in each trial the algorithm chooses a $k$-dimensional projection matrix $P^t$ based on some internal parameter; then an instance $x^t$ is received and the algorithm incurs loss $\|x^t - P^t x^t\|_2^2$; finally the internal parameter is updated. The goal is to obtain algorithms whose total loss over all trials is close to the smallest total loss of any $k$-dimensional subspace $P$ chosen in hindsight.

We first develop our algorithms in the expert setting of on-line learning. The algorithm maintains a mixture vector over the $n$ experts. At the beginning of trial $t$ the algorithm chooses a subset $P^t$ of $k$ out of the $n$ experts based on the current mixture vector $w^t$. It then receives a loss vector $\ell^t \in [0,1]^n$ and incurs loss equal to the sum of the remaining $n-k$ components of the loss vector, i.e. $\sum_{i \in \{1,\dots,n\} \setminus P^t} \ell^t_i$. Finally it updates its mixture vector to $w^{t+1}$. Note that now the subset $P^t$ corresponds to the subspace onto which we "project", i.e. we incur no loss on the $k$ components of $P^t$ and are charged only for the remaining $n-k$ components. The trick is to maintain a mixture vector $w^t$ as a parameter with the additional constraint that $w^t_i \le \frac{1}{n-k}$. We will show that these constrained mixture vectors represent an implicit mixture over subsets of experts of size $n-k$, and given $w^t$ we can efficiently sample from the implicit mixture and use it to predict. This gives an on-line algorithm whose total loss is close to the sum of the smallest $n-k$ components of $\sum_t \ell^t$, and this algorithm generalizes to an on-line PCA algorithm when the mixture vectors are replaced by density matrices whose eigenvalues are bounded by $\frac{1}{n-k}$. Now the constrained density matrices represent implicit mixtures of the $(n-k)$-dimensional subspaces. The complementary $k$-dimensional space is used to project the current instance.

2 Standard PCA and On-line PCA

Given a sequence of data vectors $x_1, \dots, x_T$, the goal is to find a low-dimensional approximation of this data that minimizes the 2-norm approximation error. Specifically, we want to find a rank $k$ projection matrix $P$ and a bias vector $b \in \mathbb{R}^n$ such that the following cost function is minimized:
$$\operatorname{loss}(P, b) = \sum_{t=1}^T \|x_t - (P x_t + b)\|_2^2.$$
Differentiating and solving for $b$ gives us $b = (I - P)\bar{x}$, where $\bar{x}$ is the data mean. Substituting this bias $b$ into the loss we obtain
$$\operatorname{loss}(P) = \sum_{t=1}^T \|(I-P)(x_t - \bar{x})\|_2^2 = \sum_{t=1}^T (x_t - \bar{x})^\top (I-P)^2 (x_t - \bar{x}).$$
Since $I - P$ is a projection matrix, $(I-P)^2 = I - P$, and we get:
$$\operatorname{loss}(P) = \operatorname{tr}\Big((I-P) \sum_{t=1}^T (x_t - \bar{x})(x_t - \bar{x})^\top\Big) = \operatorname{tr}((I-P)\, C) = \operatorname{tr}(C) - \operatorname{tr}(P\, C),$$
where $C$ is the data covariance matrix, $I-P$ has rank $n-k$ and $P$ has rank $k$. Therefore minimizing the loss over $(n-k)$-dimensional subspaces is equivalent to maximizing $\operatorname{tr}(P C)$ over $k$-dimensional subspaces.

In the on-line setting, learning proceeds in trials. (For the sake of simplicity we are not using a bias term at this point.) At trial $t$, the algorithm chooses a rank $k$ projection matrix $P^t$. It then receives an instance $x^t$ and incurs loss $\|x^t - P^t x^t\|_2^2 = \operatorname{tr}((I - P^t)\, x^t (x^t)^\top)$. Our goal is to obtain an algorithm whose total loss over a sequence of trials, $\sum_{t=1}^T \operatorname{tr}((I - P^t)\, x^t (x^t)^\top)$, is close to the total loss of the best rank $k$ projection matrix $P$, i.e. $\inf_P \operatorname{tr}\big((I-P) \sum_{t=1}^T x^t (x^t)^\top\big)$. Note that the latter loss is equal to the loss of standard PCA on the data sequence $x^1, \dots, x^T$ (assuming the data is centered).
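For reference, the hindsight comparator is just batch PCA; a small sketch (NumPy; the function name is ours) of the best rank-$k$ projection for centered data:

```python
import numpy as np

def best_rank_k_projection(X, k):
    """Best fixed comparator: the rank-k projection P minimizing
    sum_t ||x^t - P x^t||^2 = tr(C) - tr(P C) for centered rows of X."""
    C = X.T @ X                        # C = sum_t x^t (x^t)'
    lam, U = np.linalg.eigh(C)         # eigenvalues in ascending order
    Uk = U[:, -k:]                     # eigenvectors of the k largest eigenvalues
    return Uk @ Uk.T
```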
3 Choosing a Subset of Experts

Recall that projection matrices are symmetric positive semidefinite matrices with eigenvalues in $\{0, 1\}$. Thus a rank $k$ projection matrix can be written as $P = \sum_{i=1}^k p_i p_i^\top$, where the $p_i$ are the $k$ orthonormal vectors forming the basis of the subspace. Assume for the moment that the eigenvectors are restricted to be standard basis vectors. Now projection matrices become diagonal matrices with entries in $\{0, 1\}$, where the number of ones is the rank. Also, the trace of a product of such a diagonal projection matrix and any symmetric matrix becomes a dot product between the diagonals of both matrices, and the whole problem reduces to working with vectors: the rank $k$ projection matrices reduce to vectors with $k$ ones and $n-k$ zeros, and the diagonal of the symmetric matrix may be seen as a loss vector $\ell^t$. Our goal now is to develop on-line algorithms for finding the lowest $n-k$ components of the loss vectors $\ell^t$ so that the total loss is close to the sum of the lowest $n-k$ components of $\sum_{t=1}^T \ell^t$. Equivalently, we want to find the highest $k$ components of the $\ell^t$.

We begin by developing some methods for dealing with subsets of components. For convenience we encode such subsets as probability vectors: we call $r \in [0,1]^n$ an $m$-corner if it has $m$ components set to $\frac{1}{m}$ and the remaining $n-m$ components set to zero. At trial $t$ the algorithm chooses an $(n-k)$-corner $r^t$. It then receives a loss vector $\ell^t$ and incurs loss $(n-k)\, r^t \cdot \ell^t$.

Let $A^n_m$ consist of all convex combinations of $m$-corners. In other words, $A^n_m$ is the convex hull of the $\binom{n}{m}$ $m$-corners. Clearly any component $w_i$ of a vector $w$ in $A^n_m$ is at most $\frac{1}{m}$, because it is a convex combination of numbers in $[0, \frac{1}{m}]$. Therefore $A^n_m \subseteq B^n_m$, where $B^n_m$ is the set of $n$-dimensional vectors $w$ for which $|w| = \sum_i w_i = 1$ and $0 \le w_i \le \frac{1}{m}$ for all $i$. The following theorem implies that $A^n_m = B^n_m$:

Theorem 1. Algorithm 1 produces a convex combination¹ of at most $n$ $m$-corners for any vector in $B^n_m$.

¹ The existence of a convex combination of at most $n$ corners is implied by Carathéodory's theorem [Roc70], but the algorithm gives an effective construction.

Algorithm 1 Mixture Construction
input: $1 \le m < n$ and $w \in B^n_m$
repeat
  Let $r$ be a corner whose $m$ components correspond to nonzero components of $w$ and contain all the components of $w$ that are equal to $\frac{|w|}{m}$
  Let $s$ be the smallest of the $m$ chosen components in $w$ and $l$ be the largest value of the remaining $n-m$ components
  $w := w - p\, r$, where $p := \min(m\, s,\ |w| - m\, l)$, and output $p\, r$
until $w = 0$
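A direct transcription of Algorithm 1 (NumPy; the function name and the numerical tolerance are our own, illustrative choices):

```python
import numpy as np

def mixture_decomposition(w, m, tol=1e-12):
    """Decompose w in B^n_m into a convex combination of m-corners.

    Returns a list of (p_j, r_j) with sum_j p_j r_j = w; each corner r_j has
    m entries equal to 1/m. Follows Algorithm 1 (Mixture Construction).
    """
    w = np.asarray(w, dtype=float).copy()
    n = w.size
    combo = []
    while w.sum() > tol:
        total = w.sum()
        # The m largest components: these are nonzero and automatically
        # include every component equal to |w|/m, as the algorithm requires.
        idx = np.argsort(-w)[:m]
        r = np.zeros(n)
        r[idx] = 1.0 / m
        s = w[idx].min()                       # smallest chosen component
        rest = np.delete(w, idx)
        l = rest.max() if rest.size else 0.0   # largest remaining component
        p = min(m * s, total - m * l)
        combo.append((p, r))
        w = w - p * r
    return combo
```

For example, `mixture_decomposition([0.5, 0.3, 0.2], m=2)` returns the two corners $(\tfrac12, \tfrac12, 0)$ and $(\tfrac12, 0, \tfrac12)$ with coefficients $0.6$ and $0.4$.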
Proof. Let $b(w)$ be the number of boundary components in $w$, i.e. $b(w) := |\{i : w_i \text{ is } 0 \text{ or } \frac{|w|}{m}\}|$. Let $\tilde{B}^n_m$ be all vectors $w$ such that $0 \le w_i \le \frac{|w|}{m}$ for all $i$. If $b(w) = n$, then $w$ is either a corner or $0$. The loop stops when $w = 0$. If $w$ is a corner then it takes one iteration to arrive at $0$. We show that if $w \in \tilde{B}^n_m$ and $w$ is neither a corner nor $0$, then the successor $\hat{w} \in \tilde{B}^n_m$ and $b(\hat{w}) > b(w)$. Clearly $\hat{w} \ge 0$, because the amount $\frac{p}{m}$ that is subtracted from each of the $m$ components of the corner is at most as large as the corresponding components of $w$. We next show that $\hat{w}_i \le \frac{|\hat{w}|}{m}$. If $i$ belongs to the corner then $\hat{w}_i = w_i - \frac{p}{m} \le \frac{|w| - p}{m} = \frac{|\hat{w}|}{m}$. Otherwise $\hat{w}_i = w_i \le l$, and $l \le \frac{|\hat{w}|}{m}$ follows from the fact that $p \le |w| - m\, l$. This proves that $\hat{w} \in \tilde{B}^n_m$.

For showing that $b(\hat{w}) > b(w)$, first observe that all boundary components in $w$ remain boundary components in $\hat{w}$: zeros stay zeros, and if $w_i = \frac{|w|}{m}$ then $i$ is included in the corner and $\hat{w}_i = \frac{|w| - p}{m} = \frac{|\hat{w}|}{m}$. However, the number of boundary components is increased at least by one, because the components corresponding to $s$ and $l$ are both non-boundary components in $w$ and at least one of them becomes a boundary component in $\hat{w}$: if $p = m\, s$ then the component corresponding to $s$ in $w$ becomes $s - \frac{p}{m} = 0$ in $\hat{w}$, and if $p = |w| - m\, l$ then the component corresponding to $l$ in $\hat{w}$ equals $l = \frac{|w| - p}{m} = \frac{|\hat{w}|}{m}$. It follows that it may take up to $n$ iterations to arrive at a corner, which has $n$ boundary components, and one more iteration to arrive at $0$. Finally note that there is no weight vector $w \in \tilde{B}^n_m$ such that $b(w) = n-1$, and therefore the size of the produced linear combination is at most $n$. More precisely, the size is at most $n - b(w)$ if $n - b(w) \ge 2$, and one if $w$ is a corner.

The algorithm produces a linear combination of corners, i.e. $w = \sum_j p_j r_j$. Since the $p_j \ge 0$ and all $|r_j| = 1$, we have $\sum_j p_j = 1$, so we actually have a convex combination.

Fact 1. For any loss vector $\ell$, the following corner has the smallest loss of any convex combination of corners in $A^n_m = B^n_m$: greedily pick the component of minimum loss ($m$ times).

How can we use the above construction and fact? It seems too hard to maintain information about all $\binom{n}{n-k}$ corners of size $n-k$. However, the best corner is also the best convex combination, i.e. the best member of the set $A^n_{n-k}$, where each member of this set is given by $\binom{n}{n-k}$ coefficients. Luckily, this set of convex combinations equals $B^n_{n-k}$, and it takes only $n$ coefficients to specify a member of that set. Therefore we can search for the best hypothesis in the set $B^n_{n-k}$, and for any such hypothesis we can always construct a convex combination (of size $\le n$) of $(n-k)$-corners which has the same expected loss for each loss vector. This means that any algorithm predicting with a hypothesis vector in $B^n_{n-k}$ can be converted to an algorithm that probabilistically chooses an $(n-k)$-corner. Finally, the set $P^t$ of the $k$ components missed by the chosen $(n-k)$-corner corresponds to the subspace we project onto. Algorithm 2 spells out the details of this approach.

The algorithm chooses a corner probabilistically, and $(n-k)\, w^t \cdot \ell^t$ is the expected loss in one trial. The projection of $\hat{w}^t$ onto $B^n_{n-k}$ can be achieved as follows: find the smallest $l$ such that capping the largest $l$ components to $\frac{1}{n-k}$ and rescaling the remaining $n-l$ weights to total weight $1 - \frac{l}{n-k}$ makes none of the rescaled weights go above $\frac{1}{n-k}$. The simplest algorithm starts with sorting the weights and then searches for $l$ with a binary search. However, a linear algorithm that recursively uses the median is given in [HW01].
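A sketch of this capping step (NumPy; the name and tolerance are ours). It uses the simple sort-based search for $l$ rather than the linear-time median method of [HW01], and assumes a componentwise positive input with $\text{cap} \cdot n \ge 1$ so the constraint set is nonempty:

```python
import numpy as np

def project_capped_simplex(v, cap, tol=1e-12):
    """Cap-and-rescale projection onto {w : sum_i w_i = 1, 0 <= w_i <= cap}.

    Finds the smallest l such that capping the l largest components at `cap`
    and rescaling the rest to mass 1 - l*cap keeps every weight below `cap`.
    """
    v = np.asarray(v, dtype=float)
    order = np.argsort(-v)                 # component indices, largest first
    for l in range(v.size):                # l components capped so far
        w = np.zeros_like(v)
        w[order[:l]] = cap
        rest = order[l:]
        mass = 1.0 - l * cap
        w[rest] = mass * v[rest] / v[rest].sum()
        if w.max() <= cap + tol:
            return w
    return w
```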
Algorithm 2 Capped Weighted Majority Algorithm
input: $1 \le k < n$ and an initial probability vector $w^1 \in B^n_{n-k}$
for $t = 1$ to $T$ do
  Decompose $w^t$ as $\sum_j p_j r_j$ with Algorithm 1, where $m = n-k$
  Draw a corner $r = r_j$ with probability $p_j$
  Let $P^t$ be the $k$ components outside the drawn corner
  Receive loss vector $\ell^t$
  Incur loss $(n-k)\, r \cdot \ell^t = \sum_{i \in \{1,\dots,n\} \setminus P^t} \ell^t_i$
  $\hat{w}^t_i := w^t_i \exp(-\eta \ell^t_i)\,/\,Z$, where $Z$ normalizes the weights to one
  $w^{t+1} := \operatorname{argmin}_{w \in B^n_{n-k}} d(w, \hat{w}^t)$
end for

When $k = n-1$, $n-k = 1$ and $B^n_1$ is the entire probability simplex. In this case the call to Algorithm 1 and the projection onto $B^n_1$ are vacuous and we get the standard Randomized Weighted Majority algorithm [LW94]² with loss vector $\ell^t$.

² The original Weighted Majority algorithms were described for the absolute loss. The idea of using loss vectors instead was introduced in [FS97].

Let $d(u, w)$ denote the relative entropy between two probability vectors: $d(u, w) = \sum_i u_i \log \frac{u_i}{w_i}$.

Theorem 2. On an arbitrary sequence of loss vectors $\ell^1, \dots, \ell^T \in [0,1]^n$, the total expected loss of Algorithm 2 is bounded as follows:
$$\sum_{t=1}^T (n-k)\, w^t \cdot \ell^t \;\le\; (n-k)\, \frac{\eta \sum_{t=1}^T u \cdot \ell^t + d(u, w^1) - d(u, w^{T+1})}{1 - \exp(-\eta)},$$
for any learning rate $\eta > 0$ and comparison vector $u \in B^n_{n-k}$.

Proof. The update for $\hat{w}^t$ in Algorithm 2 is the update of the Continuous Weighted Majority, for which the following basic inequality is known (essentially [LW94], Lemma 5.3):
$$d(u, w^t) - d(u, \hat{w}^t) \le -\eta\, u \cdot \ell^t + w^t \cdot \ell^t\, (1 - \exp(-\eta)). \tag{1}$$
The weight vector $w^{t+1}$ is a Bregman projection of the vector $\hat{w}^t$ onto the convex set $B^n_{n-k}$. For such projections the Generalized Pythagorean Theorem holds (see e.g. [HW01] for details):
$$d(u, \hat{w}^t) \ge d(u, w^{t+1}) + d(w^{t+1}, \hat{w}^t).$$
Since Bregman divergences are non-negative, we can drop the $d(w^{t+1}, \hat{w}^t)$ term and get the following inequality:
$$d(u, \hat{w}^t) - d(u, w^{t+1}) \ge 0, \qquad \text{for } u \in B^n_{n-k}.$$
Adding this to the previous inequality we get:
$$d(u, w^t) - d(u, w^{t+1}) \le -\eta\, u \cdot \ell^t + w^t \cdot \ell^t\, (1 - \exp(-\eta)).$$
By summing over $t$, multiplying by $n-k$, and dividing by $1 - \exp(-\eta)$, the bound follows.
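Putting the pieces together, one trial of Algorithm 2 might look as follows (a sketch that reuses the `mixture_decomposition` and `project_capped_simplex` sketches above; all names are ours):

```python
import numpy as np

def capped_wm_trial(w, losses, k, eta, rng):
    """One trial of the Capped Weighted Majority sketch (Algorithm 2).

    Returns the k-subset P_t "projected onto", the incurred loss,
    and the updated weight vector w^{t+1}.
    """
    n = w.size
    m = n - k
    combo = mixture_decomposition(w, m)
    probs = np.array([p for p, _ in combo])
    j = rng.choice(len(combo), p=probs / probs.sum())
    r = combo[j][1]
    P_t = np.flatnonzero(r == 0)               # the k components outside the corner
    loss = m * float(r @ losses)               # = sum of losses outside P_t
    w_hat = w * np.exp(-eta * losses)          # continuous weighted majority step
    w_hat /= w_hat.sum()
    w_next = project_capped_simplex(w_hat, 1.0 / m)
    return P_t, loss, w_next
```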
4 On-line PCA

In this context (matrix) corners are density matrices with $m$ eigenvalues equal to $\frac{1}{m}$ and the rest equal to $0$. Also, the set $A^n_m$ consists of all convex combinations of such corners. The maximum eigenvalue of a convex combination of symmetric matrices is at most as large as the maximum eigenvalue of any of the matrices ([Bha97], Corollary III.2.2). Therefore each convex combination of corners is a density matrix whose eigenvalues are bounded by $\frac{1}{m}$, and $A^n_m \subseteq B^n_m$, where $B^n_m$ now consists of all density matrices whose maximum eigenvalue is at most $\frac{1}{m}$. Assume we have some density matrix $W \in B^n_m$ with eigendecomposition $W = \bar{W} \operatorname{diag}(\lambda) \bar{W}^\top$. Algorithm 1 can be applied to the vector of eigenvalues $\lambda$ of this density matrix. The output convex combination of up to $n$ diagonal corners, $\lambda = \sum_j p_j r_j$, can be turned into a convex combination of matrix corners that expresses the density matrix: $W = \sum_j p_j \bar{W} \operatorname{diag}(r_j) \bar{W}^\top$. It follows that $A^n_m = B^n_m$ as in the diagonal case.

Theorem 3. For any symmetric matrix $S$, $\min_{W \in B^n_m} \operatorname{tr}(WS)$ attains its minimum at the following matrix corner: greedily choose orthogonal eigenvectors of $S$ of minimum eigenvalue ($m$ times).

Proof. Let $\lambda^{\downarrow}(W)$ denote the vector of eigenvalues of $W$ in descending order and let $\lambda^{\uparrow}(S)$ be the same vector of $S$ but in ascending order. Since both matrices are symmetric, $\operatorname{tr}(WS) \ge \lambda^{\downarrow}(W) \cdot \lambda^{\uparrow}(S)$ ([MO79], Fact H.1.h of Chapter 9). Since $\lambda^{\downarrow}(W) \in B^n_m$, the dot product is minimized, and the inequality is tight, when $W$ is an $m$-corner corresponding to the $m$ smallest eigenvalues of $S$. Also, the greedy algorithm finds the solution (see Fact 1 of this paper).

Algorithm 2 generalizes to the matrix setting. The Weighted Majority update is replaced by the corresponding matrix version, which employs the matrix exponential and matrix logarithm [WK06] (the update can be seen as a special case of the Matrix Exponentiated Gradient update [TRW05]). The following theorem shows that for the projection we can keep the eigensystem fixed. Here $\Delta(U, W)$ denotes the quantum relative entropy $\operatorname{tr}(U(\log U - \log W))$.

Theorem 4. Projecting a density matrix onto $B^n_m$ w.r.t. the quantum relative entropy is equivalent to projecting the vector of eigenvalues w.r.t. the "normal" relative entropy: if $W$ has the eigendecomposition $W = \bar{W} \operatorname{diag}(\lambda) \bar{W}^\top$, then
$$\operatorname{argmin}_{U \in B^n_m} \Delta(U, W) = \bar{W} \operatorname{diag}(u^*)\, \bar{W}^\top, \qquad \text{where } u^* = \operatorname{argmin}_{u \in B^n_m} d(u, \lambda).$$

Proof. If $\lambda^{\downarrow}(S)$ denotes the vector of eigenvalues of a symmetric matrix $S$ arranged in descending order, then $\operatorname{tr}(ST) \le \lambda^{\downarrow}(S) \cdot \lambda^{\downarrow}(T)$ ([MO79], Fact H.1.g of Chapter 9). This implies that $\operatorname{tr}(U \log W) \le \lambda^{\downarrow}(U) \cdot \log \lambda^{\downarrow}(W)$ and $\Delta(U, W) \ge d(\lambda^{\downarrow}(U), \lambda^{\downarrow}(W))$. Therefore $\min_{U \in B^n_m} \Delta(U, W) \ge \min_{u \in B^n_m} d(u, \lambda)$, and if $u^*$ minimizes the right-hand side then $\bar{W} \operatorname{diag}(u^*) \bar{W}^\top$ minimizes the left-hand side, because $\Delta(\bar{W} \operatorname{diag}(u^*) \bar{W}^\top, W) = d(u^*, \lambda)$.

Algorithm 3 On-line PCA algorithm
input: $1 \le k < n$ and an initial density matrix $W^1 \in B^n_{n-k}$
for $t = 1$ to $T$ do
  Perform the eigendecomposition $W^t = \bar{W} \operatorname{diag}(\lambda) \bar{W}^\top$
  Decompose $\lambda$ as $\sum_j p_j r_j$ with Algorithm 1, where $m = n-k$
  Draw a corner $r = r_j$ with probability $p_j$
  Form a matrix corner $R = \bar{W} \operatorname{diag}(r)\, \bar{W}^\top$
  Form a rank $k$ projection matrix $P^t = I - (n-k) R$
  Receive data instance vector $x^t$
  Incur loss $\|x^t - P^t x^t\|_2^2 = \operatorname{tr}((I - P^t)\, x^t (x^t)^\top)$
  $\hat{W}^t = \exp(\log W^t - \eta\, x^t (x^t)^\top)\,/\,Z$, where $Z$ normalizes the trace to 1
  $W^{t+1} := \operatorname{argmin}_{W \in B^n_{n-k}} \Delta(W, \hat{W}^t)$
end for

The expected loss in trial $t$ of this algorithm is given by $(n-k)\operatorname{tr}(W^t x^t (x^t)^\top)$.
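A sketch of one trial of Algorithm 3 (NumPy; reuses the `mixture_decomposition` and `project_capped_simplex` sketches above; assumes $W^t$ has strictly positive eigenvalues, e.g. $W^1 = I/n$, so the matrix logarithm is well defined):

```python
import numpy as np

def online_pca_trial(W, x, k, eta, rng):
    """One trial of the on-line PCA sketch (Algorithm 3).

    Returns the sampled rank-k projection P, the incurred loss, and W^{t+1}.
    """
    n = W.shape[0]
    m = n - k
    lam, V = np.linalg.eigh(W)                   # W = V diag(lam) V'
    lam = np.clip(lam, 1e-15, None)              # keep log(lam) finite
    combo = mixture_decomposition(lam / lam.sum(), m)
    probs = np.array([p for p, _ in combo])
    j = rng.choice(len(combo), p=probs / probs.sum())
    r = combo[j][1]
    R = (V * r) @ V.T                            # matrix corner
    P = np.eye(n) - m * R                        # rank-k projection matrix
    loss = float(x @ x - x @ (P @ x))            # tr((I - P) x x')
    # Matrix exponentiated update, then eigenvalue capping (Theorem 4):
    log_W = (V * np.log(lam)) @ V.T
    mu, V2 = np.linalg.eigh(log_W - eta * np.outer(x, x))
    w = np.exp(mu - mu.max())                    # stabilized exponentiation
    w /= w.sum()                                 # normalize the trace to one
    w = project_capped_simplex(w, 1.0 / m)       # project onto B^n_{n-k}
    return P, loss, (V2 * w) @ V2.T
```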
It is easy to see that $\Delta(U, W^1) \le (n-k) \log \frac{n}{n-k}$. If $k \le n/2$, then this is further bounded by $k \log \frac{n}{k}$. Thus the r.h.s. is essentially linear in $k$, but logarithmic in the dimension $n$. By tuning $\eta$ [CBFH+97, FS97], we can get regret bounds of the form:

$(\text{expected total loss of alg.}) - (\text{total loss of best } k\text{-subspace}) = O\!\left(\sqrt{(\text{total loss of best } k\text{-subspace})\; k \log \tfrac{n}{k}} \;+\; k \log \tfrac{n}{k}\right).$   (2)

Using standard but significantly simplified conversion techniques from [CBFH+97] based on the leave-one-out loss, we also obtain algorithms with good regret bounds in the following model: the algorithm is given $T-1$ instances drawn from a fixed but unknown distribution and produces a $k$-space based on those instances; it then receives a new instance from the same distribution. We can bound the expected loss on the last instance:

$(\text{expected loss of alg.}) - (\text{expected loss of best } k\text{-subspace}) = O\!\left(\sqrt{(\text{expected loss of best } k\text{-subspace})\; \tfrac{k \log \frac{n}{k}}{T}} \;+\; \tfrac{k \log \frac{n}{k}}{T}\right).$   (3)

5 Lower Bound

The simplest competitor to our on-line PCA algorithm is the algorithm that does standard (uncentered) PCA on all the data points seen so far. In the expert setting this algorithm corresponds to "projecting" onto the $n-k$ experts that have minimum loss so far (where ties are broken arbitrarily). When $k = n-1$, this becomes the follow-the-leader algorithm. It is easy to construct an adversary strategy for this type of deterministic algorithm (for any $k$) that forces the on-line algorithm to incur $n$ times as much loss as the off-line algorithm. In contrast, our algorithm is guaranteed to have expected additional loss (regret) of the order of the square root of $k \ln n$ times the total loss of the best off-line algorithm. When the instances are diagonal matrices, our algorithm specializes to the standard expert setting, and in that setting there are probabilistic lower bounds which show that our tuned bounds (2), (3) are tight [CBFH+97].

6 Simple Experiments

The above lower bounds do not justify our complicated algorithms for on-line PCA, because natural data might be more benign. However, natural data often shifts, and we constructed a simple dataset of this type in Figure 1. The first 333 20-dimensional points were drawn from a Gaussian distribution with a rank 2 covariance matrix. This is repeated twice for different covariance matrices of rank 2.

Figure 1: The data set used for the experiments. Different colors/symbols denote the data points that came from three different Gaussians with rank 2 covariance matrices. The data vectors are 20-dimensional but we plot only the first 3 dimensions.

Figure 2: The blue curve plots the total loss of the on-line algorithm up to trial $t$ for 50 different runs (with $k = 2$ and $\eta$ fixed to one). Note that the variance of the losses is small. The single red curve plots the total loss of the best subspace of dimension 2 for the first $t$ points.

Figure 3: Behavior of the algorithm around a transition point between two distributions. Each ellipse depicts the projection matrix with the largest coefficient in the decomposition of $W^t$. The transition sequence starts with the algorithm focused on the projection matrix for the first subset of data and ends with essentially the optimal matrix for the second subset. The depicted transition takes about 60 trials.

We compare the total loss of our on-line algorithm with the total loss of the best subspace for the first $t$ data points. During the first 333 data points the latter loss is zero, since the first dataset is 2-dimensional, but after the third dataset is completed, the loss of any fixed off-line comparator is large. Figure 3 depicts how our algorithm transitions between datasets and exploits the on-lineness of the data.
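Reproducing the comparison in Figure 2 only requires the cumulative on-line loss and, for each $t$, the total loss of the best fixed $k$-dimensional subspace on the first $t$ points; the latter equals the sum of the $n-k$ smallest eigenvalues of the uncentered scatter matrix. Below is a small sketch of that comparator and of one plausible construction of the shifting dataset; the exact covariances used in Figure 1 are not specified, so the sampler is an assumption.

import numpy as np

def best_k_subspace_loss(X, k):
    # Sum over rows x of ||x - P x||^2 for the best fixed rank-k projection P:
    # the n - k smallest eigenvalues of the scatter matrix X^T X.
    evals = np.linalg.eigvalsh(X.T @ X)        # ascending order
    return float(evals[:X.shape[1] - k].sum())

def sample_shifting_dataset(rng, n=20, per_block=333, blocks=3, rank=2):
    # Successive Gaussian clouds with different random rank-2 covariances,
    # mimicking the dataset of Figure 1 (an assumed construction).
    chunks = []
    for _ in range(blocks):
        B = rng.standard_normal((n, rank))     # covariance B B^T has rank 2
        chunks.append(rng.standard_normal((per_block, rank)) @ B.T)
    X = np.vstack(chunks)
    return X / np.linalg.norm(X, axis=1).max() # enforce 2-norm at most one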
Randomly permuting the dataset removes the on-lineness and results in a plot where the total loss of the algorithm is somewhat above that of the off-line comparator (not shown). Any simple "windowing algorithm" would also be able to detect the switches. Such algorithms are often unwieldy and we don't know any strong regret bounds for them. In the expert setting there is however a long line of research on shifting (see e.g. [BW02, HW98]). An algorithm that mixes a little bit of the uniform distribution into the current mixture vector is able to restart when the data switches. More importantly, an algorithm that mixes in a little bit of the past average density matrix is able to switch quickly to previously seen subspaces, and to our knowledge windowing techniques cannot exploit this type of switching. Preliminary experiments on face image data indicate that the algorithms that accommodate switching work as expected, but more comprehensive experiments still need to be done.

7 Conclusions

We developed a new set of techniques for low dimensional approximation with provable bounds. Following [TRW05, WK06], we essentially lifted the algorithms and bounds developed for the diagonal case to the matrix case. Are there general reductions? The on-line PCA problem was also addressed in [Cra06]. However, that paper does not fully capture the PCA problem because its algorithm predicts with a full-rank matrix in each trial, whereas we predict with a probabilistically chosen projection matrix of the desired rank $k$. Furthermore, that paper proves bounds on the filtering loss, which are typically easier to prove, and it is not clear how this loss relates to the more standard regret bounds proven in this paper.

For the expert setting there are alternate techniques for designing on-line algorithms that do as well as the best subset of $n-k$ experts: each set $\{i_1, \dots, i_{n-k}\}$ receives weight proportional to $\exp(-\eta \sum_j \ell^{<t}_{i_j}) = \prod_j \exp(-\eta\, \ell^{<t}_{i_j})$, where $\ell^{<t}_i$ denotes the loss of expert $i$ before trial $t$. In this case we can get away with keeping only one weight per expert (the $i$-th expert gets weight $\exp(-\eta\, \ell^{<t}_i)$) and then use dynamic programming to sum over sets (see e.g. [TW03] for this type of method). With some more work, dynamic programming can also be applied to PCA. However, our new trick of using additional constraints on the eigenvalues is an alternative that avoids dynamic programming.

Many technical problems remain. For example, we would like to enhance our algorithms to learn a bias as well and apply our low-dimensional approximation techniques to regression problems.

Acknowledgment: Thanks to Allen Van Gelder for valuable discussions re. Algorithm 1.

References

[Bha97] R. Bhatia. Matrix Analysis. Springer, Berlin, 1997.
[BW02] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363-396, 2002.
[CBFH+97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.
[Cra06] Koby Crammer. Online tracking of linear subspaces. In Proceedings of the 19th Annual Conference on Learning Theory (COLT 06), Pittsburgh, June 2006. Springer.
[FS97] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.
[HW98] Mark Herbster and Manfred K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151-178, 1998. Earlier version in 12th ICML, 1995.
[HW01] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281-309, 2001.
[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[MO79] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and its Applications. Academic Press, 1979.
[Roc70] R. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[TRW05] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research, 6:995-1018, June 2005.
[TW03] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. Journal of Machine Learning Research, 4:773-818, 2003.
[WK06] Manfred K. Warmuth and Dima Kuzmin. Online variance minimization. In Proceedings of the 19th Annual Conference on Learning Theory (COLT 06), Pittsburgh, June 2006. Springer.
Geometric entropy minimization (GEM) for anomaly detection and localization

Alfred O. Hero, III
University of Michigan, Ann Arbor, MI 48109-2122
hero@umich.edu

Abstract

We introduce a novel adaptive non-parametric anomaly detection approach, called GEM, that is based on the minimal covering properties of $K$-point entropic graphs when constructed on $N$ training samples from a nominal probability distribution. Such graphs have the property that as $N \to \infty$ their span recovers the entropy minimizing set that supports at least a proportion $\rho = K/N$ of the mass of the Lebesgue part of the distribution. When a test sample falls outside of the entropy minimizing set, an anomaly can be declared at a statistical level of significance $\alpha = 1 - \rho$. A method for implementing this non-parametric anomaly detector is proposed that approximates this minimum entropy set by the influence region of a $K$-point entropic graph built on the training data. By implementing an incremental leave-one-out $k$-nearest neighbor graph on resampled subsets of the training data, GEM can efficiently detect outliers at a given level of significance and compute their empirical p-values. We illustrate GEM for several simulated and real data sets in high dimensional feature spaces.

1 Introduction

Anomaly detection and localization are important but notoriously difficult problems. In such problems it is crucial to identify a nominal or baseline feature distribution with respect to which statistically significant deviations can be reliably detected. However, in most applications there is seldom enough information to specify the nominal density accurately, especially in high dimensional feature spaces for which the baseline shifts over time. In such cases standard methods that involve estimation of the multivariate feature density from a fixed training sample are inapplicable (high dimension) or unreliable (shifting baseline). In this paper we propose an adaptive non-parametric method that is based on a class of entropic graphs [1] called $K$-point minimal spanning trees [2] and overcomes the limitations of high dimensional feature spaces and baseline shift. This method detects outliers by comparing them to the most concentrated subset of points in the training sample. It follows from [2] that this most concentrated set converges to the minimum entropy set of probability $\rho$ as $N \to \infty$ and $K/N \to \rho$. Thus we call this approach to anomaly detection the geometric entropy minimization (GEM) method.

Several approaches to anomaly detection have been previously proposed. Parametric approaches such as the generalized likelihood ratio test lead to simple and classical algorithms such as the Student t-test for testing deviation of a Gaussian test sample from a nominal mean value and the Fisher F-test for testing deviation of a Gaussian test sample from a nominal variance. These methods fall under the statistical nomenclature of the classical slippage problem [3] and have been applied to detecting abrupt changes in dynamical systems, image segmentation, and general fault detection applications [4]. The main drawback of these algorithms is that they rely on a family of parametrically defined nominal (no-fault) distributions. An alternative to parametric methods of anomaly detection is the class of novelty detection algorithms, which includes the GEM approach described herein. Scholkopf and Smola introduced a kernel-based novelty detection scheme that relies on unsupervised support vector machines (SVM) [5].
The single-class minimax probability machine of Lanckriet et al. [6] derives minimax linear decision regions that are robust to unknown anomalous densities. More closely related to our GEM approach is that of Scott and Nowak [7], who derive multiscale approximations of minimum-volume-sets to estimate a particular level set of the unknown nominal multivariate density from training samples. For a simple comparative study of several of these methods in the context of detecting network intrusions, the reader is referred to [8]. The GEM method introduced here has several features that are summarized below. (1) Unlike the MPM method of Lanckriet et al. [6], the GEM anomaly detector is not restricted to linear or even convex decision regions. This translates to higher power for a specified false alarm level. (2) GEM's computational complexity scales linearly in dimension and can be applied to level set estimation in feature spaces of unprecedented (high) dimensionality. (3) GEM has no complicated tuning parameters or function approximation classes that must be chosen by the user. (4) Like the method of Scott and Nowak [7], GEM is completely non-parametric, learning the structure of the nominal distribution without assumptions of linearity, smoothness or continuity of the level set boundaries. (5) Like Scott and Nowak's method, GEM is provably optimal (indeed uniformly most powerful of specified level) for the case that the anomaly density is a mixture of the nominal and a uniform density. (6) GEM easily adapts to local structure, e.g. changes in local dimensionality of the support of the nominal density.

We introduce an incremental leave-one-out (L1O) kNNG as a particularly versatile and fast anomaly detector in the GEM class. Despite the similarity in nomenclature, the L1O kNNG is different from the k-nearest neighbor (kNN) anomaly detection of [9]. The kNN anomaly detector is based on thresholding the distance from the test point to the k-th nearest neighbor. The L1O kNNG detector computes the change in the topology of the entire kNN graph due to the addition of a test sample and does not use a decision threshold. Furthermore, the parent GEM anomaly detection methodology has proven theoretical properties, e.g. the (restricted) optimality property for uniform mixtures and general consistency properties. We introduce the statistical framework for anomaly detection in the next section. We then describe the GEM approach in Section 3. Several simulations are presented in Section 4.

2 Statistical framework

The setup is the following. Assume that a training sample $\mathcal{X}_n = \{X_1, \dots, X_n\}$ of $d$-dimensional vectors $X_i$ is available. Given a new sample $X$, the objective is to declare $X$ to be a "nominal" sample consistent with $\mathcal{X}_n$ or an "anomalous" sample that is significantly different from $\mathcal{X}_n$. This declaration is to be constrained to give as few false positives as possible. To formulate this problem we adopt the standard statistical framework for testing composite hypotheses. Assume that $\mathcal{X}_n$ is an independent identically distributed (i.i.d.) sample from a multivariate density $f_0(x)$ supported on the unit $d$-dimensional cube $[0,1]^d$. Let $X$ have density $f(x)$. Anomaly detection can be formulated as testing the hypotheses $H_0: f = f_0$ versus $H_1: f \neq f_0$ at a prescribed level $\alpha$ of significance, $P(\text{declare } H_1 \mid H_0) \le \alpha$.

The minimum-volume-set of level $\alpha$ is defined as a set $\Omega_\alpha$ in $\mathbb{R}^d$ which minimizes the volume $|\Omega_\alpha| = \int_{\Omega_\alpha} dx$ subject to the constraint $\int_{\Omega_\alpha} f_0(x)\,dx \ge 1 - \alpha$. The minimum-entropy-set of level $\alpha$
is defined as a set $\Lambda_\alpha$ in $\mathbb{R}^d$ which minimizes the Rényi entropy $H_\nu(\Lambda_\alpha) = \frac{1}{1-\nu} \ln \int_{\Lambda_\alpha} f^\nu(x)\,dx$ subject to the constraint $\int_{\Lambda_\alpha} f_0(x)\,dx \ge 1 - \alpha$. Here $\nu$ is any real-valued parameter with $0 < \nu < 1$. When $f$ is a Lebesgue density in $\mathbb{R}^d$, it is easy to show that these three sets are identical almost everywhere.

The test "decide anomaly if $X \notin \Omega_\alpha$" is equivalent to implementing the test function

$\phi(x) = \begin{cases} 1, & x \notin \Omega_\alpha, \\ 0, & \text{otherwise.} \end{cases}$

This test has a strong optimality property: when $f_0$ is Lebesgue continuous it is a uniformly most powerful (UMP) test of level $\alpha$ for testing anomalies that follow a uniform mixture distribution. Specifically, let $X$ have density $f(x) = (1-\epsilon) f_0(x) + \epsilon\, U(x)$, where $U(x)$ is the uniform density over $[0,1]^d$ and $\epsilon \in [0,1]$. Consider testing the hypotheses

$H_0: \epsilon = 0,$   (1)
$H_1: \epsilon > 0.$   (2)

Proposition 1. Assume that under $H_0$ the random vector $X$ has a Lebesgue continuous density $f_0$ and that $Z = f_0(X)$ is also a continuous random variable. Then the level-set test of level $\alpha$ is uniformly most powerful for testing (2). Furthermore, its power function $\beta = P(X \notin \Omega_\alpha \mid H_1)$ is given by $\beta = (1-\epsilon)\alpha + \epsilon(1 - |\Omega_\alpha|)$.
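The power expression in Proposition 1 follows from a one-line conditioning argument under the mixture alternative; in the notation above,

\[
\beta \;=\; P(X \notin \Omega_\alpha \mid H_1)
\;=\; (1-\epsilon)\, P_{f_0}(X \notin \Omega_\alpha) \;+\; \epsilon\, P_{U}(X \notin \Omega_\alpha)
\;=\; (1-\epsilon)\,\alpha \;+\; \epsilon\,\bigl(1 - |\Omega_\alpha|\bigr),
\]

since $P_{f_0}(X \in \Omega_\alpha) = 1 - \alpha$ when the constraint in the definition of $\Omega_\alpha$ is met with equality, and $U$ is uniform on $[0,1]^d$.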
A sufficient condition for the random variable $Z$ above to be continuous is that the density $f_0(x)$ have no flat spots over its support set $\{f_0(x) > 0\}$. The proof of this proposition is omitted.

There are two difficulties with implementing the level set test. First, for known $f_0$ the level set may be very difficult, if not impossible, to determine in high dimensions $d \gg 2$. Second, when only a training sample from $f_0$ is available and $f_0$ is unknown, the level sets have to be learned from the training data. There are many approaches to doing this for minimum volume tests, and these are reviewed in [7]. These methods can be divided into two main approaches: (1) density estimation followed by plug-in estimation of $\Omega_\alpha$ via variational methods; and (2) direct estimation of the level set using function approximation and non-parametric estimation. Since both approaches involve explicit approximation of high dimensional quantities, e.g. the multivariate density or the boundary of the set $\Omega_\alpha$, these methods are difficult to apply in high dimensional problems, i.e. $d > 2$. The GEM method we propose in the next section overcomes these difficulties.

3 GEM and entropic graphs

GEM is a method that directly estimates the critical region for detecting anomalies using minimum coverings of subsets of points in a nominal training sample. These coverings are obtained by constructing minimal graphs, e.g. an MST or kNNG, covering a $K$-point subset that is a given proportion of the training sample. Points not covered by these $K$-point minimal graphs are identified as tail events and allow one to adaptively set a p-value for the detector. For a set of $n$ points $\mathcal{X}_n$ in $\mathbb{R}^d$, a graph $G$ over $\mathcal{X}_n$ is a pair $(V, E)$ where $V = \mathcal{X}_n$ is the set of vertices and $E = \{e\}$ is the set of edges of the graph. The total power-weighted length, or, more simply, the length, of $G$ is $L(\mathcal{X}_n) = \sum_{e \in E} |e|^\gamma$, where $\gamma > 0$ is a specified edge exponent parameter.

3.1 K-point MST

The MST with power weighting $\gamma$ is defined as the graph that spans $\mathcal{X}_n$ with minimum total length:

$L_{MST}(\mathcal{X}_n) = \min_{T \in \mathcal{T}} \sum_{e \in T} |e|^\gamma,$

where $\mathcal{T}$ is the set of all trees spanning $\mathcal{X}_n$.

Definition 1 (K-point MST). Let $\mathcal{X}_{n,K}$ denote one of the $\binom{n}{K}$ subsets of $K$ distinct points from $\mathcal{X}_n$. Among all of the MSTs spanning these sets, the K-MST is defined as the one having minimal length $\min_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{MST}(\mathcal{X}_{n,K})$. The K-MST thus specifies the minimal subset of $K$ points in addition to specifying the minimum length.

This subset of points, which we call a minimal graph covering of $\mathcal{X}_n$ of size $K$, can be viewed as capturing the densest region of $\mathcal{X}_n$. Furthermore, if $\mathcal{X}_n$ is an i.i.d. sample from a multivariate density $f(x)$, if $\lim_{K,n \to \infty} K/n = \rho$, and a greedy version of the K-MST is implemented, this set converges a.s. to the minimum $\nu$-entropy set containing a proportion of at least $\rho = K/n$ of the mass of the (Lebesgue component of) $f(x)$, where $\nu = (d-\gamma)/d$. This fact was used in [2] to motivate the greedy K-MST as an outlier-resistant estimator of entropy for finite $n, K$. Define the K-point subset $\mathcal{X}^*_{n,K} = \mathrm{argmin}_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{MST}(\mathcal{X}_{n,K})$ selected by the greedy K-MST. As the minimum entropy set and minimum volume set are identical, this suggests the following minimum-volume-set anomaly detection algorithm, which we call the "K-MST anomaly detector."

K-MST anomaly detection algorithm

[1] Process training sample: Given a level of significance $\alpha$ and a training sample $\mathcal{X}_n = \{X_1, \dots, X_n\}$, construct the greedy K-MST and retain its vertex set $\mathcal{X}^*_{n,K}$.
[2] Process test sample: Given a test sample $X$, run the K-MST on the merged training-test sample $\mathcal{X}_{n+1} = \mathcal{X}_n \cup \{X\}$ and store the minimal set of points $\mathcal{X}^*_{n+1,K}$.
[3] Make decision: Using the test function $\phi$ defined below, decide $H_1$ if $\phi(X) = 1$ and decide $H_0$ if $\phi(X) = 0$:

$\phi(x) = \begin{cases} 1, & X \notin \mathcal{X}^*_{n+1,K}, \\ 0, & \text{otherwise.} \end{cases}$

When the density $f_0$ generating the training sample is Lebesgue continuous, it follows from [2, Theorem 2] that as $K, n \to \infty$ the K-MST anomaly detector has false alarm probability that converges to $\alpha = 1 - K/n$ and power that converges to that of the minimum-volume-set test of level $\alpha$. When the density $f_0$ is not Lebesgue continuous, some optimality properties of the K-MST anomaly detector still hold. Let this nominal density have the decomposition $f_0 = \lambda_0 + \mu_0$, where $\lambda_0$ is Lebesgue continuous and $\mu_0$ is singular. Then, according to [2, Theorem 2], the K-MST anomaly detector will have false alarm probability that converges to $(1-\kappa)\alpha$, where $\kappa$ is the mass of the singular component of $f_0$, and it is a uniformly most powerful test for anomalies in the continuous component, i.e. for the test of $H_0: \lambda = \lambda_0,\ \mu = \mu_0$ against $H_1: \lambda = (1-\epsilon)\lambda_0 + \epsilon\, U(x),\ \mu = \mu_0$.

It is well known that the K-MST construction is of exponential complexity in $n$ [10]. In fact, even for $K = n-1$, a case one can call the leave-one-out MST, there is no simple fast algorithm for its computation. However, the leave-one-out kNNG, described below, admits a fast incremental algorithm.

3.2 K-point kNNG

Let $\mathcal{X}_n = \{X_1, \dots, X_n\}$ be a set of $n$ points. The $k$ nearest neighbors (kNN) $\{X_{i(1)}, \dots, X_{i(k)}\}$ of a point $X_i \in \mathcal{X}_n$ are the $k$ closest points to $X_i$ in $\mathcal{X}_n - \{X_i\}$. Here the measure of closeness is the Euclidean distance. Let $\{e_{i(1)}, \dots, e_{i(k)}\}$ be the set of edges between $X_i$ and its $k$ nearest neighbors. The kNN graph (kNNG) over $\mathcal{X}_n$ is defined as the union of all of the kNN edges $\{e_{i(1)}, \dots, e_{i(k)}\}_{i=1}^n$, and the total power-weighted edge length of the kNN graph is

$L_{kNN}(\mathcal{X}_n) = \sum_{i=1}^n \sum_{l=1}^k |e_{i(l)}|^\gamma.$

Definition 2 (K-point kNNG). Let $\mathcal{X}_{n,K}$ denote one of the $\binom{n}{K}$ subsets of $K$ distinct points from $\mathcal{X}_n$. Among all of the kNNGs over each of these sets, the K-kNNG is defined as the one having minimal length $\min_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{kNN}(\mathcal{X}_{n,K})$.

As the kNNG length is also a quasi-additive continuous functional [11], the asymptotic K-MST theory of [2] extends to the K-point kNNG. Of course, computation of the K-point kNNG also has exponential complexity. However, the same type of greedy approximation introduced by Ravi [10] for the K-MST can be implemented to reduce the complexity of the K-point kNNG. This approximation to the K-point kNNG will satisfy the tightly coverable graph property of [2, Defn. 2].
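Ravi's construction is partition-based; for illustration only, the sketch below uses a much simpler backward-greedy heuristic (not Ravi's algorithm, and with no approximation guarantee claimed) that repeatedly deletes the point whose removal most reduces the total kNNG length until $K$ points remain. It performs $O(n)$ graph rebuilds per deletion and is meant only to make the covering idea concrete.

import numpy as np
from scipy.spatial import cKDTree

def knng_length(X, k, gamma=1.0):
    # Total power-weighted kNN graph length L_kNN(X).
    dist, _ = cKDTree(X).query(X, k=k + 1)     # column 0 is the point itself
    return float((dist[:, 1:] ** gamma).sum())

def greedy_k_point_covering(X, K, k, gamma=1.0):
    # Backward-greedy K-point covering: prune the point whose deletion
    # decreases the kNNG length the most, until K points remain.
    # Assumes K > k so every pruned subgraph still has k neighbors per point.
    keep = list(range(len(X)))
    while len(keep) > K:
        trial = [knng_length(X[keep[:j] + keep[j + 1:]], k, gamma)
                 for j in range(len(keep))]
        keep.pop(int(np.argmin(trial)))
    return np.asarray(keep)                    # indices of the covered points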
We have the following result that justifies the use of such an approximation as an anomaly detector of level $\alpha = 1 - \rho$, where $\rho = K/n$:

Proposition 2. Let $\mathcal{X}^*_{n,K}$ be the set of points in $\mathcal{X}_n$ that results from any approximation to the K-point kNNG that satisfies the property [2, Defn. 2]. Then $\lim_{n \to \infty} P_0(\mathcal{X}^*_{n,K} \subset \Omega_\alpha) = 1$ and $\lim_{n \to \infty} P_0(\mathcal{X}^*_{n,K} \subset \bar\Omega_\alpha) = 0$, where $K = K(n) = \lfloor \rho n \rfloor$, $\Omega_\alpha$ is a minimum-volume-set of level $\alpha = 1 - \rho$ and $\bar\Omega_\alpha = [0,1]^d - \Omega_\alpha$.

Proof: We provide a rough sketch using the terminology of [2]. Recall that a set $B^m \subset [0,1]^d$ of resolution $1/m$ is representable by a union of elements of the uniform partition of $[0,1]^d$ into hypercubes of volume $1/m^d$. Lemma 3 of [2] asserts that there exists an $M$ such that for $m > M$ the limits claimed in Proposition 2 hold with $\Omega_\alpha$ replaced by $A^m_\alpha$, a minimum volume set of resolution $1/m$ that contains $\Omega_\alpha$. As $\lim_{m \to \infty} A^m_\alpha = \Omega_\alpha$, this establishes the proposition.

Figures 1-2 illustrate the use of the K-point kNNG as an anomaly detection algorithm.

Figure 1: Left: level sets of the nominal bivariate mixture density used to illustrate the K-point kNNG anomaly detection algorithms. Right: K-point kNNG over N=200 random training samples drawn from the nominal bivariate mixture at left. Here k=5 and K=180, corresponding to a significance level of $\alpha = 0.1$.

Figure 2: Left: The test point "*" is declared anomalous at level $\alpha = 0.1$ as it is not captured by the K-point kNNG (K=180) constructed over the combined test sample and the training samples drawn from the nominal bivariate mixture shown in Fig. 1. Right: A different test point "*" is declared non-anomalous as it is captured by this K-point kNNG.

3.3 Leave-one-out kNNG (L1O-kNNG)

The theoretical equivalence between the K-point kNNG and the level set anomaly detector motivates a low complexity anomaly detection scheme, which we call the leave-one-out kNNG, discussed in this section and adopted for the experiments below. As before, assume a single test sample $X = X_{n+1}$ and a training sample $\mathcal{X}_n$. Fix $k$ and assume that the kNNG over the set $\mathcal{X}_n$ has been computed. To determine the kNNG over the combined sample $\mathcal{X}_{n+1} = \mathcal{X}_n \cup \{X_{n+1}\}$, one can execute the following algorithm:

L1O kNNG anomaly detection algorithm

1. For each $X_i \in \mathcal{X}_{n+1}$, $i = 1, \dots, n+1$, compute the kNNG total length difference $\Delta_i L_{kNN} = L_{kNN}(\mathcal{X}_{n+1}) - L_{kNN}(\mathcal{X}_{n+1} - \{X_i\})$ by the following steps. For each $i$:
(a) Find the $k$ edges $E^k_{i\to}$ to all of the kNNs of $X_i$.
(b) Find the edges $E^k_{\to i}$ of other points in $\mathcal{X}_{n+1} - \{X_i\}$ that have $X_i$ as one of their kNNs. For these points find the edges $E^{k+1}_*$ to their respective $(k+1)$-st NN point.
(c) Compute $\Delta_i L_{kNN} = \sum_{e \in E^k_{i\to}} |e|^\gamma + \sum_{e \in E^k_{\to i}} |e|^\gamma - \sum_{e \in E^{k+1}_*} |e|^\gamma$.
2. Define the kNNG most "outlying point" as $X_o = \mathrm{argmax}_{i=1,\dots,n+1} \Delta_i L_{kNN}$.
3. Declare the test sample $X_{n+1}$ an anomaly if $X_{n+1} = X_o$.

This algorithm will detect anomalies with a false alarm level of approximately $1/(n+1)$. Thus larger sizes $n$ of the training sample will correspond to more stringent false alarm constraints. Furthermore, the p-value of each test point $X_i$ is easily computed by recursing over the size of the training sample. In particular, let the sample size vary from $k$ to $n$ and define $n^*$ as the minimum sample size for which $X_i$ is declared an anomaly. Then the p-value of $X_i$ is approximately $1/(n^* + 1)$.
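The bookkeeping in steps (a)-(c) above is what makes the update fast; for clarity, the sketch below instead recomputes the two graph lengths directly (reusing knng_length from the previous sketch), trading speed for transparency. It implements the decision rule only and is not the incremental algorithm itself.

import numpy as np

def l1o_knng_scores(Z, k, gamma=1.0):
    # Delta_i L_kNN = L_kNN(Z) - L_kNN(Z - {z_i}) for every point of Z.
    full = knng_length(Z, k, gamma)
    idx = np.arange(len(Z))
    return np.array([full - knng_length(Z[idx != i], k, gamma)
                     for i in range(len(Z))])

def l1o_knng_declare_anomaly(X_train, x_test, k, gamma=1.0):
    # Declare x_test an anomaly iff it is the most outlying point X_o of the
    # combined sample; the false alarm level is approximately 1/(n+1).
    Z = np.vstack([X_train, x_test[None, :]])
    scores = l1o_knng_scores(Z, k, gamma)
    return int(np.argmax(scores)) == len(Z) - 1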
A useful relative influence coefficient $\eta$ can be defined for each point $X_i$ in the combined sample $\mathcal{X}_{n+1}$:

$\eta(X_i) = \frac{\Delta_i L_{kNN}}{\max_i \Delta_i L_{kNN}}.$   (3)

The coefficient $\eta(X_{n+1}) = 1$ when the test point $X_{n+1}$ is declared an anomaly. Using Matlab's matrix sort algorithm, step 1 of this algorithm can be computed an order of magnitude faster than the K-point MST ($N^2 \log N$ vs $N^3 \log N$). For example, the experiments below have shown that the above algorithm can find and determine the p-value of 10 outliers among 1000 test samples in a few seconds on a Dell 2 GHz processor running Matlab 7.1.

4 Illustrative examples

Here we focus on the L1O kNNG algorithm due to its computational speed. We show a few representative experiments for simple Gaussian and Gaussian mixture nominal densities $f_0$.

[Figure 3 panels: left, the L1O kNN score curve (score $= \Delta_i / \max_i \Delta_i$, $\rho = 0.998$, Mmin = 500, detection rate = 0.009); right, scatter plots of the resampled nominal sample and detected anomalies at iterations 20, 203, 246, 294, 307, 334, 574, 712 and 791, each with p-value approximately 0.001.]

Figure 3: Left: The plot of the anomaly curve for the L1O kNNG anomaly detector for detecting deviations from a nominal 2D Gaussian density with mean (0,0) and correlation coefficient -0.5. The boxes on the peaks of the curve correspond to positions of detected anomalies, and the heights of the boxes are equal to one minus the computed p-value. Anomalies were generated (on the average) every 100 samples and drawn from a 2D Gaussian with correlation coefficient 0.8. The parameter $\rho$ is equal to $1 - \alpha$, where $\alpha$ is the user-defined false alarm rate. Right: the resampled nominal distribution ("$\cdot$") and anomalous points detected ("*") at the iterations indicated at left.

First we illustrate the L1O kNNG algorithm for detection of non-uniformly distributed anomalies from training samples following a bivariate Gaussian nominal density. Specifically, a 2D Gaussian density with mean (0,0) and correlation coefficient -0.5 was generated to train the L1O kNNG detector. The test sample consisted of a mixture of this nominal and a zero mean 2D Gaussian with correlation coefficient 0.8, with mixture coefficient $\epsilon = 0.01$. In Fig. 3 the results of a simulation with a training sample of 2000 samples and 1000 test samples are shown. The left panel of Fig. 3 is a plot of the relative influence curve (3) over the test samples as compared to the most outlying point in the (resampled) training sample. When the relative influence curve is equal to 1, the corresponding test sample is the most outlying point and is declared an anomaly. The 9 detected anomalies in Fig. 3 have p-values less than 0.001, and therefore one would expect an average of only one false alarm at this level of significance. In the right panel of Fig. 3 the detected anomalies (asterisks) are shown along with the training sample (dots) used to grow the L1O kNNG for that particular iteration; note that to protect against bias the training sample is resampled at each iteration.

Next we compare the performance of the L1O kNNG detector to that of the UMP test for the hypotheses (2). We again trained on a bivariate Gaussian $f_0$ with mean zero, but this time with identical component variances of $\sigma = 0.1$. This distribution has essential support on the unit square. For this simple case the minimum volume set of level $\alpha$ is a disk centered at the origin with radius $\sigma \sqrt{2 \ln(1/\alpha)}$, and the power of the UMP test can be computed in closed form: $\beta = (1-\epsilon)\alpha + \epsilon(1 - 2\pi\sigma^2 \ln(1/\alpha))$.
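The closed form above makes the "clairvoyant" curves of Figure 4 a one-line computation; here is a sketch with the parameters used in this experiment ($\sigma = 0.1$), valid for $\alpha$ bounded away from zero so that the disk stays inside the support:

import numpy as np

def clairvoyant_power(alpha, eps, sigma=0.1):
    # beta = (1-eps)*alpha + eps*(1 - 2*pi*sigma^2*ln(1/alpha)) for the
    # Gaussian-nominal vs Gaussian + uniform-[0,1]^2 mixture test.
    return (1 - eps) * alpha + eps * (1 - 2 * np.pi * sigma ** 2 * np.log(1 / alpha))

alphas = np.linspace(0.005, 0.1, 50)
curves = {eps: clairvoyant_power(alphas, eps) for eps in (0.0, 0.1, 0.3, 0.5)}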
We implemented the GEM anomaly detector with the incremental leave-one-out kNNG using $k = 5$. The training set consisted of 1000 samples from $f_0$ and the test set consisted of 1000 samples from the mixture of a uniform density and $f_0$ with parameter $\epsilon$ ranging from 0 to 0.2. Figure 4 shows the empirical ROC curves obtained using the GEM test vs the theoretical curves (labeled "clairvoyant") for several different values of the mixing parameter. Note the good agreement between the theoretical prediction and the GEM implementation of the UMP using the kNNG.

[Figure 4 axes: power $\beta$ vs level $\alpha$, for $\epsilon$ = 0, 0.1, 0.3 and 0.5; "ROC curves for Gaussian+uniform mixture, k=5, N=1000, Nrep=10", with one L1O-kNN and one clairvoyant curve per $\epsilon$.]

Figure 4: ROC curves for the leave-one-out kNNG anomaly detector described in Sec. 3.3. The curve labeled "clairvoyant" is the ROC of the UMP anomaly detector. The training sample is a zero mean 2D spherical Gaussian distribution with standard deviation 0.1 and the test sample is a mixture of this 2D Gaussian and a 2D uniform-$[0,1]^2$ density. The plot is for various values of the mixture parameter $\epsilon$.

5 Conclusions

A new and versatile anomaly detection method has been introduced that uses geometric entropy minimization (GEM) to extract minimal set coverings that can be used to detect anomalies from a set of training samples. This method can be implemented through the K-point minimal spanning tree (MST) or the K-point nearest neighbor graph (kNNG). The L1O kNNG is significantly less computationally demanding than the K-point MST. We illustrated the L1O kNNG method on simulated data containing anomalies and showed that it comes close to achieving the optimal performance of the UMP detector for testing the nominal against a uniform mixture with unknown mixing parameter. As the L1O kNNG computes p-values on detected anomalies, it can be easily extended to account for false discovery rate constraints. By using a sliding window, the methodology derived in this paper is easily extendible to on-line applications and has been applied to non-parametric intruder detection using our Crossbow sensor network testbed (reported elsewhere).

Acknowledgments

This work was partially supported by NSF under Collaborative ITR grant CCR-0325571.
References

[1] A. Hero, B. Ma, O. Michel, and J. Gorman, "Applications of entropic spanning graphs," IEEE Signal Processing Magazine, vol. 19, pp. 85-95, Sept. 2002. www.eecs.umich.edu/~hero/imag_proc.html.
[2] A. Hero and O. Michel, "Asymptotic theory of greedy approximations to minimal k-point random graphs," IEEE Trans. on Inform. Theory, vol. IT-45, no. 6, pp. 1921-1939, Sept. 1999.
[3] T. S. Ferguson, Mathematical Statistics: A Decision Theoretic Approach. Academic Press, Orlando, FL, 1967.
[4] I. V. Nikiforov and M. Basseville, Detection of Abrupt Changes: Theory and Applications. Prentice-Hall, Englewood Cliffs, NJ, 1993.
[5] B. Scholkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt, "Support vector method for novelty detection," in Advances in Neural Information Processing Systems (NIPS), vol. 13, 2000.
[6] G. R. G. Lanckriet, L. El Ghaoui, and M. I. Jordan, "Robust novelty detection with single-class MPM," in Advances in Neural Information Processing Systems (NIPS), vol. 15, 2002.
[7] C. Scott and R. Nowak, "Learning minimum volume sets," Journal of Machine Learning Research, vol. 7, pp. 665-704, April 2006.
[8] A. Lazarevic, A. Ozgur, L. Ertoz, J. Srivastava, and V. Kumar, "A comparative study of anomaly detection schemes in network intrusion detection," in SIAM Conference on Data Mining, 2003.
[9] S. Ramaswamy, R. Rastogi, and K. Shim, "Efficient algorithms for mining outliers from large data sets," in Proceedings of the ACM SIGMOD Conference, 2000.
[10] R. Ravi, M. Marathe, D. Rosenkrantz, and S. Ravi, "Spanning trees short or small," in Proc. 5th Annual ACM-SIAM Symposium on Discrete Algorithms, (Arlington, VA), pp. 546-555, 1994.
[11] J. E. Yukich, Probability Theory of Classical Euclidean Optimization, vol. 1675 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1998.
Part-based Probabilistic Point Matching using Equivalence Constraints

Graham McNeill, Sethu Vijayakumar
Institute of Perception, Action and Behavior
School of Informatics, University of Edinburgh, Edinburgh, UK, EH9 3JZ
[graham.mcneill, sethu.vijayakumar]@ed.ac.uk

Abstract

Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding "perceptually valid" part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

1 Introduction

Shape-based object recognition is a key problem in machine vision and content-based image retrieval (CBIR). Over the last decade, numerous shape matching algorithms have been proposed that perform well on benchmark shape retrieval tests. However, many of these techniques share the same limitations: Firstly, they operate on contiguous shape boundaries (i.e. the ordering of the boundary points matters) and assume that every point on one boundary has a counterpart on the boundary it is being matched to (c.f. Fig. 1c). Secondly, they have no principled mechanism for handling occlusion, non-boundary points and clutter. Finally, they struggle to handle shapes that display significant part-based variation. The first two limitations mean that many algorithms are unsuitable for matching shapes extracted from real images; the latter is important since many common objects (natural and man-made) display part-based variation.

Techniques that match unordered point sets (e.g. [1]) are appealing since they do not require ordered boundary information and can work with non-boundary points. The methods described in [2, 3, 4] can handle outliers, occlusions and clutter, but are not designed to handle shapes whose parts are independently transformed. In this paper, we introduce a probabilistic model that retains the desirable properties of these techniques but handles parts explicitly by learning the most likely part structure and correspondence simultaneously. In this framework, a part is defined as a set of points that undergo a common transformation. Learning these variation-based parts from scratch is an underconstrained problem. To address this, we incorporate prior knowledge about valid part assignments using two different mechanisms. Firstly, the distributions of our hierarchical mixture model are chosen so that the learnt parts are spatially localized. Secondly, ideas from semi-supervised learning [5] are used to encourage a perceptually meaningful part decomposition. The algorithm is introduced in Sec. 2 and described in detail in Sec. 3. Examples are given in Sec. 4 and a sequential approach for tackling model selection (the number of parts) and parameter initialization is introduced in Sec. 5.

Figure 1: Examples of probabilistic point matching (PPM) using the technique described in [4]: (a) occlusion; (b) irregular sampling; (c) localized dissimilarity. In each case, the initial alignment and the final match are shown.
2 Part-based Point Matching (PBPM): Motivation and Overview

The PBPM algorithm combines three key ideas:

Probabilistic point matching (PPM): Probabilistic methods that find a soft correspondence between unlabeled point sets [2, 3, 4] are well suited to problems involving occlusion, absent features and clutter (Fig. 1).

Natural part decomposition (NPD): Most shapes have a natural part decomposition (NPD) (Fig. 2) and there are several algorithms available for finding NPDs (e.g. [6]). We note that in tasks such as object recognition and CBIR, the query image is frequently a template shape (e.g. a binary image or line drawing) or a high quality image with no occlusion or clutter. In such cases, one can apply an NPD algorithm prior to matching. Throughout this paper, it is assumed that we have obtained a sensible NPD for the query shape only (the NPDs used in the examples were constructed manually); it is not reasonable to assume that an NPD can be computed for each database shape/image.

Variation-based part decomposition (VPD): A different notion of parts has been used in computer vision [7], where a part is defined as a set of pixels that undergo the same transformations across images. We refer to this type of part decomposition (PD) as a variation-based part decomposition (VPD).

Given two shapes (i.e. point sets), PBPM matches them by applying a different transformation to each variation-based part of the generating shape. These variation-based parts are learnt during matching, where the known NPD of the data shape is used to bias the algorithm towards choosing a "perceptually valid" VPD. This is achieved using the equivalence constraint

Constraint 1 (C1): Points that belong to the same natural part should belong to the same variation-based part.

As we shall see in Sec. 3, this influences the learnt VPD by changing the generative model from one that generates individual data points to one that generates natural parts (subsets of data points). To further increase the perceptual validity of the learnt VPD, we assume that variation-based parts are composed of spatially localized points of the generating shape.

PBPM aims to find the correct correspondence at the level of individual points, i.e. each point of the generating shape should be mapped to the correct position on the data shape despite the lack of an exact pointwise correspondence (e.g. Fig. 1b). Soft correspondence techniques that achieve this using a single nonlinear transformation [2, 3] perform well on some challenging problems. However, the smoothness constraints used to control the nonlinearity of the transformation will prevent these techniques from selecting the discontinuous transformations associated with part-based movements. PBPM learns an independent linear transformation for each part and hence can find the correct global match.

In relation to the point matching literature, PBPM is motivated by the success of the techniques described in [8, 2, 3, 4] on non-part-based problems. It is perhaps most similar to the work of Hancock and colleagues (e.g. [8]) in that we use "structural information" about the point sets to constrain the matching problem. In addition to learning multiple parts and transformations, our work differs in the type of structural information used (the NPD rather than the Delaunay triangulation) and the way in which this information is incorporated. With respect to the shape-matching literature, PBPM can be seen as a novel correspondence technique for use with established NPD algorithms. Despite the large number of NPD algorithms, there are relatively few NPD-based correspondence techniques. Siddiqi and Kimia show that the parts used in their NPD algorithm [6] correspond to specific types of shocks when shock graph representations are used. Consequently, shock graphs implicitly capture ideas about natural parts. The Inner-Distance method of Ling and Jacobs [9] handles part articulation without explicitly identifying the parts.

Figure 2: The natural part decomposition (NPD) (b-d) for different representations of a shape (a).
Despite the large number of NPD algorithms, there 1 The NPDs used in the examples were constructed manually. a. c. b. d. Figure 2: The natural part decomposition (NPD) (b-d) for different representations of a shape (a). are relatively few NPD-based correspondence techniques. Siddiqi and Kimia show that the parts used in their NPD algorithm [6] correspond to specific types of shocks when shock graph representations are used. Consequently, shock graphs implicitly capture ideas about natural parts. The Inner-Distance method of Ling and Jacobs [9] handles part articulation without explicitly identifying the parts. 3 Part-based Point Matching (PBPM): Algorithm 3.1 Shape Representation Shapes are represented by point sets of arbitrary size. The points need not belong to the shape boundary and the ordering of the points is irrelevant. Given a generating shape X = (x1 , x2 , . . . , xM )T ? RM?2 and a data shape Y = (y1 , y2 , . . . , yN )T ? RN ?2 (generally M 6= N ), our task is to compute the correspondence between X and Y. We assume that an NPD of Y is available, expressed as S a partition of Y into subsets (parts): Y = L l=1 Yl . 3.2 The Probabilistic Model We assume that a data point y is generated by the mixture model p(y) = V X p(y|v)?v , (1) v=0 where v indexes the variation-based parts. A uniform background component, y|(v=0) ? Uniform, ensures that all data points are explained to some extent and hence, robustifies the model against outliers. The distribution of y given a foreground component v is itself a mixture model : p(y|v) = M X p(y|m, v)p(m|v), v = 1, 2, . . . , V, (2) m=1 with y|(m, v) ? N (Tv xm , ? 2 I). (3) Here, Tv is the transformation used to match points of part v on X to points of part v on Y. Finally, we define p(m|v) in such a way that the variation-based parts v are forced to be spatially coherent: m ? ?v ) Pexp{?(x exp{?(x ? ? ??1 v (xm ? ?v )/2} , T ??1 (x ) v m ? ?v )/2} v T p(m|v) = m m (4) where ?v ? R2 is a mean vector and ?v ? R2?2 is a covariance matrix. In words, we identify m ? {1, . . . , M } with the point xm that it indexes and assume that the xm follow a bivariate Gaussian distribution. Since m must take a value in {1, . . . , M }, the distribution is normalized using the points x1 , . . . , xM only. This assumption means that the xm themselves are essentially generated by a GMM with V components. However, this GMM is embedded in the larger model and maximizing the data likelihood will balance this GMM?s desire for coherent parts against the need for the parts and transformations to explain the actual data (the yn ). Having defined all the distributions, the next step is to estimate the parameters whilst making use of the known NPD of Y. 3.3 Parameter Estimation With respect to the model defined in the previous section, C1 states that all yn that belong to the same subset Yl were generated by the same mixture component v. This requirement can be enforced using the technique introduced by Shental et. al. [5] for incorporating equivalence constraints between data points in mixture models. The basic idea is to estimate the model parameters using the EM algorithm. However, when taking the expectation (of the complete log-likelihood) we now only sum over assignments of data points to components which are valid with respect to the constraints. Assuming that subsets and points within subsets are sampled i.i.d., it can be shown that the expectation is given by: E= V X L X V X L X X p(v|Yl ) log ?v + p(v|Yl ) log p(yn |v). 
Having defined all the distributions, the next step is to estimate the parameters whilst making use of the known NPD of $Y$.

3.3 Parameter Estimation

With respect to the model defined in the previous section, C1 states that all $y_n$ that belong to the same subset $Y_l$ were generated by the same mixture component $v$. This requirement can be enforced using the technique introduced by Shental et al. [5] for incorporating equivalence constraints between data points in mixture models. The basic idea is to estimate the model parameters using the EM algorithm. However, when taking the expectation (of the complete log-likelihood), we now only sum over assignments of data points to components which are valid with respect to the constraints. Assuming that subsets and points within subsets are sampled i.i.d., it can be shown that the expectation is given by:

$E = \sum_{v=0}^V \sum_{l=1}^L p(v|Y_l) \log \pi_v + \sum_{v=0}^V \sum_{l=1}^L \sum_{y_n \in Y_l} p(v|Y_l) \log p(y_n|v).$   (5)

Note that eq. (5) involves $p(v|Y_l)$, the responsibility of a component $v$ for a subset $Y_l$, rather than the term $p(v|y_n)$ that would be present in an unconstrained mixture model. Using the expression for $p(y_n|v)$ in eq. (2) and rearranging slightly, we have

$E = \sum_{v=0}^V \sum_{l=1}^L p(v|Y_l) \log \pi_v + \sum_{l=1}^L p(v{=}0|Y_l) \log u^{|Y_l|} + \sum_{v=1}^V \sum_{l=1}^L \sum_{y_n \in Y_l} p(v|Y_l) \log \Big\{ \sum_{m=1}^M p(y_n|m,v)\, p(m|v) \Big\},$   (6)

where $u$ is the constant associated with the uniform distribution $p(y_n|v{=}0)$. The parameters to be estimated are $\pi_v$ (eq. (1)), $\mu_v$, $\Sigma_v$ (eq. (4)) and the transformations $T_v$ (eq. (3)). With the exception of $\pi_v$, these are found by maximizing the final term in eq. (6). For a fixed $v$, this term is the log-likelihood of data points $y_1, \dots, y_N$ under a mixture model, with the modification that there is a weight, $p(v|Y_l)$, associated with each data point. Thus, we can treat this subproblem as a standard maximum likelihood problem and derive the EM updates as usual. The resulting EM algorithm is given below.

E-step. Compute the responsibilities using the current parameters:

$p(m|y_n, v) = \frac{p(y_n|m,v)\, p(m|v)}{\sum_{m'} p(y_n|m',v)\, p(m'|v)},$   (7)

$p(v|Y_l) = \frac{\pi_v \prod_{y_n \in Y_l} p(y_n|v)}{\sum_{v'} \pi_{v'} \prod_{y_n \in Y_l} p(y_n|v')}, \quad v = 1, 2, \dots, V.$   (8)

M-step. Update the parameters using the responsibilities:

$\pi_v = \frac{1}{L} \sum_{l=1}^L p(v|Y_l),$   (9)

$\mu_v = \frac{\sum_{n,m} p(v|Y_{l,n})\, p(m|y_n, v)\, x_m}{\sum_{n,m} p(v|Y_{l,n})\, p(m|y_n, v)},$   (10)

$\Sigma_v = \frac{\sum_{n,m} p(v|Y_{l,n})\, p(m|y_n, v)\, (x_m - \mu_v)(x_m - \mu_v)^{\top}}{\sum_{n,m} p(v|Y_{l,n})\, p(m|y_n, v)},$   (11)

$T_v = \mathrm{argmin}_T \sum_{n,m} p(v|Y_{l,n})\, p(m|y_n, v)\, \|y_n - T x_m\|^2,$   (12)

where $Y_{l,n}$ is the subset $Y_l$ containing $y_n$. Here, we define $T_v x \equiv s_v \Gamma_v x + c_v$, where $s_v$ is a scale parameter, $c_v \in \mathbb{R}^2$ is a translation vector and $\Gamma_v$ is a 2D rotation matrix. Thus, eq. (12) becomes a weighted Procrustes matching problem between two point sets, each of size $N \times M$; the extent to which $x_m$ corresponds to $y_n$ in the context of part $v$ is given by $p(v|Y_{l,n})\, p(m|y_n, v)$. This least squares problem for the optimal transformation parameters $s_v$, $\Gamma_v$ and $c_v$ can be solved analytically [8].

The weights associated with the updates in eqs. (10)-(12) are similar to $p(v|y_n)\, p(m|y_n, v) = p(m, v|y_n)$, the responsibility of the hidden variables $(m, v)$ for the observed data $y_n$. The difference is that $p(v|y_n)$ is replaced by $p(v|Y_{l,n})$, and hence the impact of the equivalence constraints is propagated throughout the model. The same fixed variance $\sigma^2$ (eq. (3)) is used in all experiments. For the examples in Sec. 4, we initialize $\mu_v$, $\Sigma_v$ and $\pi_v$ by fitting a standard GMM to the $x_m$. In Sec. 5, we describe a sequential algorithm that can be used to select the number of parts $V$ as well as provide initial estimates for all parameters.
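The analytic solution of the weighted Procrustes problem (12) is the standard Kabsch/Umeyama construction with the responsibilities $W_{nm} = p(v|Y_{l,n})\, p(m|y_n,v)$ as pair weights; below is a sketch consistent with eq. (12), not the authors' code:

import numpy as np

def weighted_procrustes(Y, X, W):
    # Minimize sum_{n,m} W[n,m] * ||y_n - (s Gamma x_m + c)||^2 over the
    # scale s, 2D rotation Gamma and translation c.
    total = W.sum()
    x_bar = (W.sum(axis=0) @ X) / total            # weighted mean of the x_m
    y_bar = (W.sum(axis=1) @ Y) / total            # weighted mean of the y_n
    Xc, Yc = X - x_bar, Y - y_bar
    A = Yc.T @ W @ Xc                              # 2x2 weighted cross-covariance
    U, D, Vt = np.linalg.svd(A)
    S = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # enforce det(Gamma) = +1
    Gamma = U @ S @ Vt
    s = np.trace(Gamma @ A.T) / (W.sum(axis=0) @ (Xc ** 2).sum(axis=1))
    c = y_bar - s * (Gamma @ x_bar)
    return s, Gamma, c

Setting the rotation from the SVD of the weighted cross-covariance and the scale from the trace ratio follows from zeroing the gradients of the weighted least squares objective after eliminating c.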
The number of parts, $V$, is fixed prior to matching in these examples; a technique for estimating $V$ is described in Sec. 5. To visualize the matches found by PBPM, each point $y_n$ is assigned to a part $v$ using $\max_v p(v|y_n)$. Points assigned to $v{=}0$ are removed from the figure. For each $y_n$ assigned to some $v \in \{1, \dots, V\}$, we find $m_n = \arg\max_m p(m|y_n, v)$ and assign $x_{m_n}$ to $v$. Those $x_m$ not assigned to any parts are removed from the figure. The means and the ellipses of constant probability density associated with the distributions $\mathcal{N}(\mu_v, \Sigma_v)$ are plotted on the original shape X. We also assign the $x_m$ to natural parts using the known natural part label of the $y_n$ that they are assigned to.

Fig. 3 shows an example of matching two human body shapes using PBPM with $V{=}3$. The learnt VPD is intuitive and the match is better than that found using PPM (Fig. 4). The results obtained using different values of $V$ are shown in Fig. 4. Predictably, the match improves as $V$ increases, but the improvement is negligible beyond $V{=}4$. When $V{=}5$, one of the parts is effectively repeated, suggesting that four parts is sufficient to cover all the interesting variation. However, when $V{=}6$ all parts are used and the VPD looks very similar to the NPD; only the lower leg and foot on each side are grouped together. In Fig. 5, there are two genuine variation-based parts and X contains additional features. PBPM effectively ignores the extra points of X and finds the correct parts and matches. In Fig. 6, the left leg is correctly identified and rotated, whereas the right leg of Y is "deleted". We find that deletion from the generating shape tends to be very precise (e.g. Fig. 5), whereas PBPM is less inclined to delete points from the data shape when it involves breaking up natural parts (e.g. Fig. 6). This is largely due to the equivalence constraints trying to keep natural parts intact, though the value of the uniform density, $u$, and the way in which points are assigned to parts are also important.

Figure 5: Some features of X are not present on Y; the main building of X is smaller and the tower is more central.

Figure 6: The left legs do not match and most of the right leg of X is missing.

In Figs. 7 and 8, a template shape is matched to the edge detector output from two real images. We have not focused on optimizing the parameters of the edge detector, since the aim is to demonstrate the ability of PBPM to handle suboptimal shape representations. The correct correspondences and PDs are estimated in all cases, though the results are less precise for these difficult problems. Six parts are used in Fig. 8, but two of these are initially assigned to clutter and end up playing no role in the final match. The object of interest in X is well matched to the template using the other four parts. Note that the left shoulder is not assigned to the same variation-based part as the other points of the torso, i.e. the soft equivalence constraint has been broken in the interests of finding the best match. We have not yet considered the choice of $V$.
Figs. 4 (with $V{=}5$) and 8 indicate that it may be possible to start with more parts than are required and either allow extraneous parts to go unused or perhaps prune parts during matching. Alternatively, one could run PBPM for a range of $V$ and use a model selection technique based on a penalized log-likelihood function (e.g. BIC) to select a $V$. Finally, one could attempt to learn the parts in a sequential fashion. This is the approach considered in the next section.

Figure 7: Matching a template shape to an object in a cluttered scene.

Figure 8: Matching a template shape to a real image.

5 Sequential Algorithm for Initialization

When part variation is present, one would expect PBPM with $V{=}1$ to find the most significant part and allow the background to explain the remaining parts. This suggests a sequential approach whereby a single part is learnt and removed from further consideration at each stage. Each new part/component should focus on data points that are currently explained by the background. This is achieved by modifying the technique described in [7] for fitting mixture models sequentially. Specifically, assume that the first part ($v{=}1$) has been learnt and now learn the second part using the weighted log-likelihood
$$J_2 = \sum_{l=1}^{L} z_l^1 \log\left\{p(Y_l|v{=}2)\,\pi_2 + u^{|Y_l|}(1 - \pi_1 - \pi_2)\right\}. \qquad (13)$$
Here, $\pi_1$ is known and
$$z_l^1 = \frac{u^{|Y_l|}(1 - \pi_1)}{p(Y_l|v{=}1)\,\pi_1 + u^{|Y_l|}(1 - \pi_1)} \qquad (14)$$
is the responsibility of the background component for the subset $Y_l$ after learning the first part; the superscript of $z$ indicates the number of components that have already been learnt. Using the modified log-likelihood in eq. (13) has the desired effect of forcing the new component ($v{=}2$) to explain the data currently explained by the uniform component. Note that we use the responsibilities for the subsets $Y_l$ rather than the individual $y_n$ [7], in line with the assumption that complete subsets belong to the same part. Also, note that since eq. (13) is a weighted sum of log-likelihoods over the subsets, it cannot be written as a sum over data points, as these are not sampled i.i.d. due to the equivalence constraints. Maximizing eq. (13) leads to EM updates similar to those given in eqs. (7)-(12). Having learnt the second part, additional components $v = 3, 4, \dots$ are learnt in the same way, except for minor adjustments to eqs. (13) and (14) to incorporate all previously learnt components. The sequential algorithm terminates when the uniform component is not significantly responsible for any data or the most recently learnt component is not significantly responsible for any data. As discussed in [7], the sequential algorithm is expected to have fewer problems with local minima, since the objective function will be smoother (a single component competes against a uniform component at each stage) and the search space smaller (fewer parameters are learnt at each stage). Preliminary experiments suggest that the sequential algorithm is capable of solving the model selection problem (choosing the number of parts) and providing good initial parameter values for the full model described in Sec. 3. Some examples are given in Figs. 9 and 10; the initial transformations for each part are not shown.
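As a small illustration of how the sequential stage uses eqs. (13) and (14), consider the following sketch. The helper names are hypothetical, and probabilities are passed in directly; for large $|Y_l|$ one would work in log space to avoid underflow.

```python
import numpy as np

def background_responsibility(p_Yl_v1, pi1, u, sizes):
    """z_l^1 of eq. (14): the uniform background's responsibility for each
    subset Y_l after the first part has been learnt.
    p_Yl_v1: p(Y_l | v=1) per subset, shape (L,); sizes: |Y_l| per subset."""
    bg = u ** sizes * (1.0 - pi1)
    fg = p_Yl_v1 * pi1
    return bg / (fg + bg)

def sequential_objective(z1, p_Yl_v2, pi1, pi2, u, sizes):
    """J_2 of eq. (13): the weighted log-likelihood maximized to learn v=2."""
    return np.sum(z1 * np.log(p_Yl_v2 * pi2 + u ** sizes * (1.0 - pi1 - pi2)))
```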
The outcome of the sequential algorithm is highly dependent on the value of the uniform density, $u$. We are currently investigating how the model can be made more robust to this value and also how the already-used $x_m$ should be subtracted (in a probabilistic sense) at each step.

Figure 9: Results for PBPM; V and initial parameters were found using the sequential approach.

Figure 10: Results for PBPM; V and initial parameters were found using the sequential approach.

6 Summary and Discussion

Despite the prevalence of part-based objects/shapes, there has been relatively little work on the associated correspondence problem. In the absence of class models and training data (i.e. the unsupervised case), this is a particularly difficult task. In this paper, we have presented a probabilistic correspondence algorithm that handles part-based variation by learning the parts and correspondence simultaneously. Ideas from semi-supervised learning are used to bias the algorithm towards finding a "perceptually valid" part decomposition. Future work will focus on robustifying the sequential approach described in Sec. 5.

References

[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. PAMI, 24:509-522, 2002.
[2] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding, 89:114-141, 2003.
[3] Z. Tu and A. L. Yuille. Shape matching and recognition using generative models and informative features. In ECCV, 2004.
[4] G. McNeill and S. Vijayakumar. A probabilistic approach to robust shape matching. In ICIP, 2006.
[5] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In NIPS, 2004.
[6] K. Siddiqi and B. B. Kimia. Parts of visual form: Computational aspects. PAMI, 17(3):239-251, 1995.
[7] M. Titsias. Unsupervised Learning of Multiple Objects in Images. PhD thesis, Univ. of Edinburgh, 2005.
[8] B. Luo and E. R. Hancock. A unified framework for alignment and correspondence. Computer Vision and Image Understanding, 92:26-55, 2003.
[9] H. Ling and D. W. Jacobs. Using the inner-distance for classification of articulated shapes. In CVPR, 2005.
Particle Filtering for Nonparametric Bayesian Matrix Factorization

Frank Wood
Department of Computer Science, Brown University
Providence, RI 02912, U.S.A.
fwood@cs.brown.edu

Thomas L. Griffiths
Department of Psychology, University of California, Berkeley
Berkeley, CA 94720
tom_griffiths@berkeley.edu

Abstract

Many unsupervised learning problems can be expressed as a form of matrix factorization, reconstructing an observed data matrix as the product of two matrices of latent variables. A standard challenge in solving these problems is determining the dimensionality of the latent matrices. Nonparametric Bayesian matrix factorization is one way of dealing with this challenge, yielding a posterior distribution over possible factorizations of unbounded dimensionality. A drawback to this approach is that posterior estimation is typically done using Gibbs sampling, which can be slow for large problems and when conjugate priors cannot be used. As an alternative, we present a particle filter for posterior estimation in nonparametric Bayesian matrix factorization models. We illustrate this approach with two matrix factorization models and show favorable performance relative to Gibbs sampling.

1 Introduction

One of the goals of unsupervised learning is to discover the latent structure expressed in observed data. The nature of the learning problem will vary depending on the form of the data and the kind of latent structure it expresses, but many unsupervised learning problems can be viewed as a form of matrix factorization, i.e. decomposing an observed data matrix, X, into the product of two or more matrices of latent variables. If X is an $N \times D$ matrix, where $N$ is the number of $D$-dimensional observations, the goal is to find a low-dimensional latent feature space capturing the variation in the observations making up X. This can be done by assuming that $X \approx ZY$, where Z is an $N \times K$ matrix indicating which of (and perhaps the extent to which) $K$ latent features are expressed in each of the $N$ observations and Y is a $K \times D$ matrix indicating how those $K$ latent features are manifest in the $D$-dimensional observation space. Typically, $K$ is less than $D$, meaning that Z and Y provide an efficient summary of the structure of X.

A standard problem for unsupervised learning algorithms based on matrix factorization is determining the dimensionality of the latent matrices, $K$. Nonparametric Bayesian statistics offers a way to address this problem: instead of specifying $K$ a priori and searching for a "best" factorization, nonparametric Bayesian matrix factorization approaches such as those in [1] and [2] estimate a posterior distribution over factorizations with unbounded dimensionality (i.e. letting $K \to \infty$). This remains computationally tractable because each model uses a prior that ensures that Z is sparse, based on the Indian Buffet Process (IBP) [1]. The search for the dimensionality of the latent feature matrices thus becomes a problem of posterior inference over the number of non-empty columns in Z.

Previous work on nonparametric Bayesian matrix factorization has used Gibbs sampling for posterior estimation [1, 2]. Indeed, Gibbs sampling is the standard inference algorithm used in nonparametric Bayesian methods, most of which are based on the Dirichlet process [3, 4]. However, recent work has suggested that sequential Monte Carlo methods such as particle filtering can provide an efficient alternative to Gibbs sampling in Dirichlet process mixture models [5, 6].
In this paper we develop a novel particle filtering algorithm for posterior estimation in matrix factorization models that use the IBP, and illustrate its applicability to two specific models: one with a conjugate prior, and the other without a conjugate prior but tractable in other ways. Our particle filtering algorithm is by nature an "on-line" procedure, where each row of X is processed only once, in sequence. This stands in comparison to Gibbs sampling, which must revisit each row many times to converge to a reasonable representation of the posterior distribution. We present simulation results showing that our particle filtering algorithm can be significantly more efficient than Gibbs sampling for each of the two models, and discuss its applicability to the broad class of nonparametric matrix factorization models based on the IBP.

2 Nonparametric Bayesian Matrix Factorization

Let X be an observed $N \times D$ matrix. Our goal is to find a representation of the structure expressed in this matrix in terms of the latent matrices Z ($N \times K$) and Y ($K \times D$). This can be formulated as a statistical problem if we view X as being produced by a probabilistic generative process, resulting in a probability distribution $P(X|Z,Y)$. The critical assumption necessary to make this a matrix factorization problem is that the distribution of X is conditionally dependent on Z and Y only through the product ZY. Although defining $P(X|Z,Y)$ allows us to use methods such as maximum-likelihood estimation to find a point estimate, our goal is to instead compute a posterior distribution over possible values of Z and Y. To do so we need to specify a prior over the latent matrices, $P(Z,Y)$, and then we can use Bayes' rule to find the posterior distribution over Z and Y:
$$P(Z,Y|X) \propto P(X|Z,Y)\,P(Z,Y). \qquad (1)$$
This constitutes Bayesian matrix factorization, but two problems remain: the choice of $K$, and the computational cost of estimating the posterior distribution.

Unlike standard matrix factorization methods that require an a priori choice of $K$, nonparametric Bayesian approaches allow us to estimate a posterior distribution over Z and Y where the size of these matrices is unbounded. The models we discuss in this paper place a prior on Z that gives each "left-ordered" binary matrix (see [1] for details) probability
$$P(Z) = \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N-1} K_h!}\,\exp\{-\alpha H_N\}\,\prod_{k=1}^{K_+}\frac{(N-m_k)!\,(m_k-1)!}{N!}, \qquad (2)$$
where $K_+$ is the number of columns of Z with non-zero entries, $m_k$ is the number of 1's in column $k$, $N$ is the number of rows, $H_N = \sum_{i=1}^{N} 1/i$ is the $N$th harmonic number, and $K_h$ is the number of columns in Z that, when read top-to-bottom, form a sequence of 1's and 0's corresponding to the binary representation of the number $h$. This prior on Z is a distribution on sparse binary matrices that favors those that have few columns with many ones, with the rest of the columns being all zeros. This distribution can be derived as the outcome of a sequential generative process called the Indian buffet process (IBP) [1]. Imagine an Indian restaurant into which $N$ customers arrive one by one and serve themselves from the buffet. The first customer loads her plate from the first Poisson($\alpha$) dishes. The $i$th customer chooses dishes proportional to their popularity, choosing a dish with probability $m_k/i$, where $m_k$ is the number of people who have chosen the $k$th dish previously, then chooses Poisson($\alpha/i$) new dishes.
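Eq. (2) can be evaluated directly. The following sketch is our own helper (not from the paper); it assumes Z is given in left-ordered form, with the $K_h!$ terms obtained by counting repeated column histories.

```python
import numpy as np
from math import lgamma, log

def ibp_log_prior(Z, alpha):
    """log P(Z) under eq. (2) for a left-ordered binary matrix Z."""
    N = Z.shape[0]
    nonempty = Z[:, Z.sum(axis=0) > 0]          # the K+ non-empty columns
    m = nonempty.sum(axis=0)                    # column sums m_k
    H_N = (1.0 / np.arange(1, N + 1)).sum()     # N-th harmonic number
    # K_h! terms: count how often each distinct column history repeats
    _, counts = np.unique(nonempty.T, axis=0, return_counts=True)
    log_p = len(m) * log(alpha) - sum(lgamma(c + 1) for c in counts)
    log_p -= alpha * H_N
    log_p += sum(lgamma(N - mk + 1) + lgamma(mk) - lgamma(N + 1) for mk in m)
    return log_p
```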
If we record the choices of each customer on one row of a matrix whose columns correspond to the dishes on the buffet (1 if chosen, 0 if not), then (the left-ordered form of) that matrix constitutes a draw from the distribution in Eqn. 2. The order in which the customers enter the restaurant has no bearing on the distribution of Z (up to permutation of the columns), making this distribution exchangeable.

In this work we assume that Z and Y are independent, with $P(Z,Y) = P(Z)P(Y)$. As shown in Fig. 1, since we use the IBP prior for $P(Z)$, Y is a matrix with an infinite number of rows and $D$ columns. We can take any appropriate distribution for $P(Y)$, and the infinite number of rows will not pose a problem because only $K_+$ rows will interact with non-zero elements of Z. A posterior distribution over Z and Y implicitly defines a distribution over the effective dimensionality of these matrices, through $K_+$.

Figure 1: Nonparametric Bayesian matrix factorization. The data matrix X is the product of Z and Y, which have an unbounded number of columns and rows respectively.

This approach to nonparametric Bayesian matrix factorization has been used for both continuous [1, 7] and binary [2] data matrices X. Since the posterior distribution defined in Eqn. 1 is generally intractable, Gibbs sampling has previously been employed to construct a sample-based representation of this distribution. However, generally speaking, Gibbs sampling is slow, requiring each entry in Z and Y to be repeatedly updated conditioned on all of the others. This problem is compounded in contexts where the number of rows of X increases as a consequence of new observations being introduced, where the Gibbs sampler would need to be restarted after the introduction of each new observation.

3 Particle Filter Posterior Estimation

Our approach addresses the problems faced by the Gibbs sampler by exploiting the fact that the prior on Z is recursively decomposable. To explain this we need to introduce new notation: let $X^{(i)}$ be the $i$th row of X, and $X^{(1:i)}$ and $Z^{(1:i)}$ be all the rows of X and Z up to $i$, respectively. Note that because the IBP prior is recursively decomposable, it is easy to sample from $P(Z^{(1:i)}|Z^{(1:i-1)})$; to do so, simply follow the IBP in choosing dishes for the $i$th customer given the record of which dishes were chosen by the first $i-1$ customers (see Algorithm 1). Applying Bayes' rule, we can write the posterior on $Z^{(1:i)}$ and Y given $X^{(1:i)}$ in the following form:
$$P(Z^{(1:i)}, Y|X^{(1:i)}) \propto P(X^{(i)}|Z^{(1:i)}, Y, X^{(1:i-1)})\,P(Z^{(1:i)}, Y|X^{(1:i-1)}). \qquad (3)$$
Here we do not index Y, as it is always an infinite matrix.¹ If we could evaluate $P(Z^{(1:i-1)}, Y|X^{(1:i-1)})$, we could obtain weighted samples (or "particles") from $P(Z^{(1:i)}, Y|X^{(1:i)})$ using importance sampling with a proposal distribution of
$$P(Z^{(1:i)}, Y|X^{(1:i-1)}) = \sum_{Z^{(1:i-1)}} P(Z^{(1:i)}|Z^{(1:i-1)})\,P(Z^{(1:i-1)}, Y|X^{(1:i-1)}) \qquad (4)$$
and taking
$$w_\omega \propto P(X^{(i)}|Z^{(1:i)}_\omega, Y_\omega, X^{(1:i-1)}) \qquad (5)$$
as the weight associated with the $\omega$th particle. However, we could also use a similar scheme to approximate $P(Z^{(1:i-1)}, Y|X^{(1:i-1)})$ if we could evaluate $P(Z^{(1:i-2)}, Y|X^{(1:i-2)})$. Following Eq. 4, we could then approximately generate a set of weighted particles from $P(Z^{(1:i)}, Y|X^{(1:i-1)})$ by using the IBP to sample a value from $P(Z^{(1:i)}|Z^{(1:i-1)}_\omega)$ for each particle from $P(Z^{(1:i-1)}, Y|X^{(1:i-1)})$ and carrying forward the weights associated with those particles.
This "particle filtering" procedure defines a recursive importance sampling scheme for the full posterior $P(Z,Y|X)$, and is known as sequential importance sampling [8]. When applied in its basic form this procedure can produce particles with extreme weights, so we resample the particles at each iteration of the recursion from the distribution given by their normalized weights and set $w_\omega = 1/L$ for all $\omega$, which is a standard method known as sequential importance resampling [8].

The procedure defined in the previous paragraphs is a general-purpose particle filter for matrix-factorization models based on the IBP. This procedure will work even when the prior defined on Y is not conjugate to the likelihood (and is much simpler than other algorithms for using the IBP with non-conjugate priors, e.g. [9]). However, the procedure can be simplified further in special cases. The following example applications illustrate the particle filtering approach for two different models. In the first case, the prior over Y is conjugate to the likelihood, which means that Y need not be represented. In the other case, although the prior is not conjugate and thus Y does need to be explicitly represented, we present a way to improve the efficiency of this general particle filtering approach by taking advantage of certain analytic conditionals. The particle filtering approach results in significant improvements in performance over Gibbs sampling in both models.

¹ In practice, we need only keep track of the rows of Y that correspond to the non-empty columns of Z, as the posterior distribution for the remaining entries is just the prior. Thus, if new non-empty columns are added in moving from $Z^{(i-1)}$ to $Z^{(i)}$, we need to expand the number of rows of Y that we represent accordingly.

Algorithm 1 Sample $P(Z^{(1:i)}|Z^{(1:i-1)}, \alpha)$ using the Indian buffet process
1: $Z \leftarrow Z^{(1:i-1)}$
2: if $i = 1$ then
3:   sample $K_i^{new} \sim \mathrm{Poisson}(\alpha)$
4:   $Z_{i,1:K_i^{new}} \leftarrow 1$
5: else
6:   $K_+ \leftarrow$ number of non-zero columns in $Z$
7:   for $k = 1, \dots, K_+$ do
8:     sample $z_{i,k} \sim \mathrm{Bernoulli}(m_k/i)$
9:   end for
10:  sample $K_i^{new} \sim \mathrm{Poisson}(\alpha/i)$
11:  $Z_{i,K_+ + 1:K_+ + K_i^{new}} \leftarrow 1$
12: end if
13: $Z^{(1:i)} \leftarrow Z$

4 A Conjugate Model: Infinite Linear-Gaussian Matrix Factorization

In this model, explained in detail in [1], the entries of both X and Y are continuous. We report results on the modeling of image data of the same kind as was originally used to demonstrate the model in [1]. Here each row of X is an image, each row of Z indicates the "latent features" present in that image, such as the objects it contains, and each column of Y indicates the pixel values associated with a latent feature. The likelihood for this image model is matrix Gaussian:
$$P(X|Z,Y,\sigma_X) = \frac{1}{(2\pi\sigma_X^2)^{ND/2}}\exp\left\{-\frac{1}{2\sigma_X^2}\mathrm{tr}\left((X-ZY)^T(X-ZY)\right)\right\},$$
where $\sigma_X^2$ is the noise variance. The prior on the parameters of the latent features is also Gaussian:
$$P(Y|\sigma_Y) = \frac{1}{(2\pi\sigma_Y^2)^{KD/2}}\exp\left\{-\frac{1}{2\sigma_Y^2}\mathrm{tr}(Y^TY)\right\},$$
with each element having variance $\sigma_Y^2$. Because both the likelihood and the prior are matrix Gaussian, they form a conjugate pair and Y can be integrated out to yield the collapsed likelihood
$$P(X|Z,\sigma_X) = \frac{\exp\left\{-\frac{1}{2\sigma_X^2}\mathrm{tr}(X^T\Sigma^{-1}X)\right\}}{(2\pi)^{ND/2}\,\sigma_X^{(N-K_+)D}\,\sigma_Y^{K_+D}\,\left|Z_+^TZ_+ + \frac{\sigma_X^2}{\sigma_Y^2}I_{K_+}\right|^{D/2}}, \qquad (6)$$
which is matrix Gaussian with covariance $\Sigma^{-1} = I - Z_+\left(Z_+^TZ_+ + \frac{\sigma_X^2}{\sigma_Y^2}I_{K_+}\right)^{-1}Z_+^T$. Here $Z_+ = Z_{1:i,1:K_+}$ is the first $K_+$ columns of Z and $K_+$ is the number of non-zero columns of Z.
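Both of the particle filters below propose new rows of Z with the IBP draw of Algorithm 1. A minimal NumPy sketch of that step (our own rendering of the pseudocode, not the authors' implementation):

```python
import numpy as np

def sample_next_ibp_row(Z_prev, alpha, rng):
    """Extend Z^(1:i-1) to Z^(1:i) by sampling the i-th customer's dishes."""
    i = Z_prev.shape[0] + 1
    if i == 1:
        return np.ones((1, rng.poisson(alpha)), dtype=int)
    m = Z_prev.sum(axis=0)                            # dish popularities m_k
    old = (rng.random(Z_prev.shape[1]) < m / i).astype(int)
    new = np.ones(rng.poisson(alpha / i), dtype=int)  # K_i^new new dishes
    row = np.concatenate([old, new])
    Z = np.zeros((i, row.size), dtype=int)
    Z[: i - 1, : Z_prev.shape[1]] = Z_prev
    Z[i - 1] = row
    return Z

# Usage: rng = np.random.default_rng(); Z = np.zeros((0, 0), dtype=int)
#        for _ in range(N): Z = sample_next_ibp_row(Z, 3.0, rng)
```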
4.1 Particle Filter

The use of a conjugate prior means that we do not need to represent Y explicitly in our particle filter. In this case the particle filter recursion shown in Eqns. 3 and 4 reduces to
$$P(Z^{(1:i)}|X^{(1:i)}) \propto P(X^{(i)}|Z^{(1:i)}, X^{(1:i-1)})\sum_{Z^{(1:i-1)}} P(Z^{(1:i)}|Z^{(1:i-1)})\,P(Z^{(1:i-1)}|X^{(1:i-1)})$$
and may be implemented as shown in Algorithm 2.

Algorithm 2 Particle filter for the infinite linear-Gaussian model
1: initialize $L$ particles $[Z_\omega^{(0)}]$, $\omega = 1, \dots, L$
2: for $i = 1, \dots, N$ do
3:   for $\omega = 1, \dots, L$ do
4:     sample $Z_\omega^{(1:i)}$ from $Z_\omega^{(1:i-1)}$ using Algorithm 1
5:     calculate $w_\omega$ using Eqns. 5 and 7
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight cumulative distribution
9: end for

Figure 2: Generation of X under the linear Gaussian model. The first four images (left to right) correspond to the true latent features, i.e. rows of Y. The fifth shows how the images get combined, with two source images added together by multiplying by a single row of Z, $z_{i,:} = [1\ 0\ 0\ 1]$. The sixth is Gaussian noise. The seventh image is the resulting row of X.

Reweighting the particles requires computing $P(X^{(i)}|Z^{(1:i)}, X^{(1:i-1)})$, the conditional probability of the most recent row of X given all the previous rows and Z. Since $P(X^{(1:i)}|Z^{(1:i)})$ is matrix Gaussian, we can find the required conditional distribution by following the standard rules for conditioning in Gaussians. Letting $\tilde{\Sigma} = \sigma_X^2\,\Sigma$ be the covariance matrix for $X^{(1:i)}$ given $Z^{(1:i)}$, we can partition this matrix into four parts,
$$\tilde{\Sigma} = \begin{bmatrix} A & c \\ c^T & b \end{bmatrix},$$
where $A$ is a matrix, $c$ is a vector, and $b$ is a scalar. Then the conditional distribution of $X^{(i)}$ is
$$X^{(i)}|Z^{(1:i)}, X^{(1:i-1)} \sim \mathrm{Gaussian}(c^TA^{-1}X^{(1:i-1)},\ b - c^TA^{-1}c). \qquad (7)$$
This requires inverting a matrix $A$ which grows linearly with the size of the data; however, $A$ is highly structured and this can be exploited to reduce the cost of this inversion [10].

4.2 Experiments

We compared the particle filter in Algorithm 2 with Gibbs sampling on an image dataset similar to that used in [1]. Due to space limitations we refer the reader to [1] for the details of the Gibbs sampler for this model. As illustrated in Fig. 2, our ground-truth Y consisted of four different $6 \times 6$ latent images. A $100 \times 4$ binary ground-truth matrix Z was generated by sampling from $P(z_{i,k} = 1) = 0.5$. The observed matrix X was generated by adding Gaussian noise with $\sigma_X = 0.5$ to each entry of ZY.

Fig. 3 compares results from the particle filter and Gibbs sampler for this model. The performance of the models was measured by comparing a general error metric computed over the posterior distributions estimated by each approach. The error metric (the vertical axis in Figs. 3 and 5) was computed by taking the expectation of the matrix $ZZ^T$ over the posterior samples produced by each algorithm and taking the summed absolute difference (i.e. $L_1$ norm) between the upper triangular portion of $E[ZZ^T]$ computed over the samples and the upper triangular portion of the true $ZZ^T$ (including the diagonal). See Fig. 4 for an illustration of the information conveyed by $ZZ^T$. This error metric measures the distance of the mean of the posterior to the ground truth. It is zero if the mean of the distribution matches the ground truth. It grows as a function of the difference between the ground truth and the posterior mean, accounting both for any difference in the number of latent factors that are present in each observation and for any difference in the number of latent factors that are shared between all pairs of observations.
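As an aside on implementation, the reweighting of eq. (7) can be computed directly; the following sketch (function name and signature are ours) inverts the full matrix rather than exploiting the structure of $A$ as [10] would, so it is illustrative rather than efficient.

```python
import numpy as np

def reweight_particle(Z, X_prev, x_new, sigma_x, sigma_y):
    """Particle weight via eq. (7): the density of the newest row x_new of X
    given the previous rows X_prev, under the collapsed likelihood of eq. (6).
    Z: (i, K) binary matrix, X_prev: (i-1, D), x_new: (D,)."""
    i = Z.shape[0]
    Zp = Z[:, Z.sum(axis=0) > 0]                    # non-empty columns Z_+
    M = Zp.T @ Zp + (sigma_x / sigma_y) ** 2 * np.eye(Zp.shape[1])
    Sigma_inv = np.eye(i) - Zp @ np.linalg.solve(M, Zp.T)
    cov = sigma_x ** 2 * np.linalg.inv(Sigma_inv)   # covariance of X^(1:i) columns
    A, c, b = cov[:-1, :-1], cov[:-1, -1], cov[-1, -1]
    Ainv_c = np.linalg.solve(A, c)
    mean = X_prev.T @ Ainv_c                        # conditional mean, per column
    var = b - c @ Ainv_c                            # shared conditional variance
    resid = x_new - mean
    D = len(x_new)
    return np.exp(-0.5 * resid @ resid / var) / (2 * np.pi * var) ** (D / 2)
```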
The particle filter was run using many different numbers of particles, P. For each value of P, the particle filter was run 10 times. The horizontal axis location of each errorbar in the plot is the mean wall-clock computation time on 2 GHz Athlon 64 processors running Matlab for the corresponding number of particles P, while the error bars indicate the standard deviation of the error. The Gibbs sampler was run for varying numbers of sweeps, with the initial 10% of samples being discarded. The number of Gibbs sampler sweeps was varied and the results are displayed in the same way as described for the particle filter above. The results show that the particle filter attains low error in significantly less time than the Gibbs sampler, with the difference being an order of magnitude or more in most cases. This is a result of the fact that the particle filter considers only a single row of X on each iteration, reducing the cost of computing the likelihood.

Figure 3: Performance results for particle filter vs. Gibbs sampling posterior estimation for the infinite linear-Gaussian matrix factorization. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 10, 100, 500, 1000, 2500, 5000] left to right, and error bars indicate the standard deviation of the error.

5 A Semi-Conjugate Model: Infinite Binary Matrix Factorization

In this model, first presented in the context of learning hidden causal structure [2], the entries of both X and Y are binary. Each row of X represents the values of a single observed variable across $D$ trials or cases, each row of Y gives the values of a latent variable (a "hidden cause") across those trials or cases, and Z is the adjacency matrix of a bipartite Bayesian network indicating which latent variables influence which observed variables. Learning the hidden causal structure then corresponds to inferring Z and Y from X. The model fits our schema for nonparametric Bayesian matrix factorization (and hence is amenable to the use of our particle filter) since the likelihood function it uses depends only on the product ZY.

The likelihood function for this model assumes that each entry of X is generated independently, $P(X|Z,Y) = \prod_{i,d} P(x_{i,d}|Z,Y)$, with its probability given by the "noisy-OR" [11] of the causes that influence that variable (identified by the corresponding row of Z) and are active for that case or trial (expressed in Y). The probability that $x_{i,d}$ takes the value 1 is thus
$$P(x_{i,d} = 1|Z,Y) = 1 - (1-\lambda)^{z_{i,:}\cdot y_{:,d}}(1-\epsilon), \qquad (8)$$
where $z_{i,:}$ is the $i$th row of Z, $y_{:,d}$ is the $d$th column of Y, and $z_{i,:}\cdot y_{:,d} = \sum_{k=1}^{K} z_{i,k}\,y_{k,d}$. The parameter $\epsilon$ sets the probability that $x_{i,d} = 1$ when no relevant causes are active, and $\lambda$ determines how this probability changes as the number of relevant active hidden causes increases. To complete the model, we assume that the entries of Y are generated independently from a Bernoulli process with parameter $p$, to give $P(Y) = \prod_{k,d} p^{y_{k,d}}(1-p)^{1-y_{k,d}}$, and use the IBP prior for Z.

5.1 Particle Filter

In this model the prior over Y is not conjugate to the likelihood, so we are forced to explicitly represent Y in our particle filter state, as outlined in Eqns. 3 and 4. However, we can define a more efficient algorithm than the basic particle filter due to the tractability of some integrals. This is why we call this model a "semi-conjugate" model.
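Eq. (8) vectorizes directly; a sketch of the full-data likelihood (helper name is ours):

```python
import numpy as np

def noisy_or_likelihood(X, Z, Y, eps, lam):
    """P(x_{i,d} | Z, Y) for every entry under the noisy-OR model of eq. (8)."""
    active = Z @ Y                                   # z_{i,:} . y_{:,d}
    p_one = 1.0 - (1.0 - lam) ** active * (1.0 - eps)
    return np.where(X == 1, p_one, 1.0 - p_one)
```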
The basic particle filter defined in Section 3 requires drawing the new rows of Y from the prior when we generate new columns of Z. This can be problematic, since the chance of producing an assignment of values to Y that has high probability under the likelihood can be quite low, in effect wasting many particles. However, if we can analytically marginalize out the new rows of Y, we can avoid sampling those values from the prior and instead sample them from the posterior, in effect saving many of the potentially wasted particles. If we let $Y^{(1:i)}$ denote the rows of Y that correspond to the columns of $Z^{(1:i)}$ and $Y^{(i)}$ denote the rows (potentially more than 1) of Y that are introduced to match the new columns appearing in $Z^{(i)}$, then we can write
$$P(Z^{(1:i)}, Y^{(1:i)}|X^{(1:i)}) = P(Y^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i)})\,P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i)}), \qquad (9)$$
where
$$P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i)}) \propto P(X^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i-1)})\,P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i-1)}). \qquad (10)$$
Thus, we can use the particle filter to estimate $P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i)})$ (vs. $P(Z^{(1:i)}, Y^{(1:i)}|X^{(1:i)})$) provided that we can find a way to compute $P(X^{(i)}|Z^{(1:i)}, Y^{(1:i-1)})$ and sample from the distribution $P(Y^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i)})$ to complete our particles.

Algorithm 3 Particle filter for infinite binary matrix factorization
1: initialize $L$ particles $[Z_\omega^{(0)}, Y_\omega^{(0)}]$, $\omega = 1, \dots, L$
2: for $i = 1, \dots, N$ do
3:   for $\omega = 1, \dots, L$ do
4:     sample $Z_\omega^{(i)}$ from $Z_\omega^{(i-1)}$ using Algorithm 1
5:     calculate $w_\omega$ using Eqns. 5 and 8
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight CDF
9:   for $\omega = 1, \dots, L$ do
10:    sample $Y_\omega^{(i)}$ from $P(Y_\omega^{(i)}|Z_\omega^{(1:i)}, Y_\omega^{(1:i-1)}, X^{(1:i)})$
11:  end for
12: end for

Figure 4: Infinite binary matrix factorization results. On the left is ground truth, the causal graph representation of Z and $ZZ^T$. The middle and right are particle filtering results: a single random particle Z and $E[ZZ^T]$ from a 500 and 10000 particle run, middle and right respectively.

The procedure described in the previous paragraph is possible in this model because, while our prior on Y is not conjugate to the likelihood, it is still possible to compute $P(X^{(i)}|Z^{(1:i)}, Y^{(1:i-1)})$. The entries of $X^{(i)}$ are independent given $Z^{(1:i)}$ and $Y^{(1:i)}$. Since the entries in each column of $Y^{(i)}$ will influence only a single entry in $X^{(i)}$, this independence is maintained when we sum out $Y^{(i)}$. So we can derive an analytic solution to $P(X^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}) = \prod_d P(x_{i,d}|Z^{(1:i)}, Y^{(1:i-1)})$, where
$$P(x_{i,d} = 1|Z^{(1:i)}, Y^{(1:i-1)}) = 1 - (1-\epsilon)(1-\lambda)^{\eta}(1-\lambda p)^{K_i^{new}}, \qquad (11)$$
with $K_i^{new}$ being the number of new columns in $Z^{(i)}$, and $\eta = z_{i,1:K_+^{(1:i-1)}}\cdot y_{1:K_+^{(1:i-1)},d}$. For a detailed derivation see [2]. This gives us the likelihood we need for reweighting particles $Z^{(1:i)}$ and $Y^{(1:i-1)}$. The posterior distribution on $Y^{(i)}$ is straightforward to compute by combining the likelihood in Eqn. 8 with the prior $P(Y)$. The particle filtering algorithm for this model is given in Algorithm 3.
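The marginalized likelihood of eq. (11) is equally simple to compute; a sketch (helper names are ours) evaluating all $D$ entries at once, with the rows of Y for the $K_i^{new}$ new columns summed out:

```python
import numpy as np

def p_x_given_partial(z_old, Y_old, eps, lam, p, k_new):
    """Eq. (11): P(x_{i,d} = 1 | Z^(1:i), Y^(1:i-1)) for all d.
    z_old: the i-th row of Z restricted to previously seen columns, shape (K+,)
    Y_old: the rows of Y for those columns, shape (K+, D)"""
    eta = z_old @ Y_old                     # active previously-seen causes per d
    return 1.0 - (1.0 - eps) * (1.0 - lam) ** eta * (1.0 - lam * p) ** k_new
```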
5.2 Experiments

We compared the particle filter in Algorithm 3 with Gibbs sampling on a dataset generated from the model described above, using the same Gibbs sampling algorithm and data generation procedure as developed in [2]. We took $K_+ = 4$ and $N = 6$, running the IBP multiple times with $\alpha = 3$ until a matrix Z of the correct dimensionality ($6 \times 4$) was produced. This matrix is shown in Fig. 4 as a bipartite graph, where the observed variables are shaded. A $4 \times 250$ random matrix Y was generated with $p = 0.1$. The observed matrix X was then sampled from Eqn. 8 with parameters $\lambda = 0.9$ and $\epsilon = 0.01$. Comparison of the particle filter and Gibbs sampling was done using the procedure outlined in Section 4.2, producing similar results: the particle filter gave a better approximation to the posterior distribution in less time, as shown in Fig. 5.

Figure 5: Performance results for particle filter vs. Gibbs sampling posterior estimation for the infinite binary matrix factorization model. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000] from left to right, and error bars indicate the standard deviation of the error.

6 Conclusion

In this paper we have introduced particle filter posterior estimation for non-parametric Bayesian matrix factorization models based on the Indian buffet process. This approach is applicable to any Bayesian matrix factorization model with a sparse recursively decomposable prior. We have applied this approach with two different models, one with a conjugate prior and one with a non-conjugate prior, finding significant computational savings over Gibbs sampling for each. However, more work needs to be done to explore the strengths and weaknesses of these algorithms. In particular, simple sequential importance resampling is known to break down when applied to datasets with many observations, although we are optimistic that methods for addressing this problem that have been developed for Dirichlet process mixture models (e.g., [5]) will also be applicable in this setting. By exploring the strengths and weaknesses of different methods for approximate inference in these models, we hope to come closer to our ultimate goal of making nonparametric Bayesian matrix factorization into a tool that can be applied on the scale of real world problems.

Acknowledgements

This work was supported by both NIH-NINDS R01 NS 50967-01 as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program and NSF grant 0631518.

References

[1] T. L. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," Gatsby Computational Neuroscience Unit, Tech. Rep. 2005-001, 2005.
[2] F. Wood, T. L. Griffiths, and Z. Ghahramani, "A non-parametric Bayesian method for inferring hidden causes," in Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, in press, 2006.
[3] T. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol. 1, pp. 209-230, 1973.
[4] R. M. Neal, "Markov chain sampling methods for Dirichlet process mixture models," Department of Statistics, University of Toronto, Tech. Rep. 9815, 1998.
[5] P. Fearnhead, "Particle filters for mixture models with an unknown number of components," Journal of Statistics and Computing, vol. 14, pp. 11-21, 2004.
[6] S. N. MacEachern, M. Clyde, and J. Liu, "Sequential importance sampling for nonparametric Bayes models: the next generation," The Canadian Journal of Statistics, vol. 27, pp. 251-267, 1999.
[7] T. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," in Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt, Eds. Cambridge, MA: MIT Press, 2006.
[8] A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer, 2001.
[9] D. Görür, F. Jäkel, and
C. E. Rasmussen, "A choice model with infinitely many latent features," in Proceedings of the 23rd International Conference on Machine Learning, 2006.
[10] S. Barnett, Matrix Methods for Engineers and Scientists. McGraw-Hill, 1979.
[11] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Francisco, CA: Morgan Kaufmann, 1988.
Learning on Graph with Laplacian Regularization

Rie Kubota Ando
IBM T.J. Watson Research Center
Hawthorne, NY 10532, U.S.A.
rie1@us.ibm.com

Tong Zhang
Yahoo! Inc.
New York City, NY 10011, U.S.A.
tzhang@yahoo-inc.com

Abstract

We consider a general form of transductive learning on graphs with Laplacian regularization, and derive margin-based generalization bounds using appropriate geometric properties of the graph. We use this analysis to obtain a better understanding of the role of normalization of the graph Laplacian matrix as well as the effect of dimension reduction. The results suggest a limitation of the standard degree-based normalization. We propose a remedy from our analysis and demonstrate empirically that the remedy leads to improved classification performance.

1 Introduction

In graph-based methods, one often constructs similarity graphs by linking similar data points that are close in the feature space. It was proposed in [3] that one may first project these data points into the eigenspace corresponding to the largest eigenvalues of a normalized adjacency matrix of the graph and then use the standard k-means method for clustering. In the ideal case, points in the same class will be mapped into a single point in the reduced eigenspace, while points in different classes will be mapped to different points. One may also consider similar ideas in semi-supervised learning using a discriminative kernel method. If the underlying kernel is induced from the graph, one may formulate semi-supervised learning directly on the graph (e.g., [1, 5, 7, 8]). In these studies, the kernel is induced from the adjacency matrix W whose $(i,j)$-entry is the weight of edge $(i,j)$. W is sometimes normalized by $D^{-1/2}WD^{-1/2}$ [2, 4, 3, 7], where D is a diagonal matrix whose $(j,j)$-entry is the degree of the $j$-th node, but sometimes not [1, 8]. Although such normalization may significantly affect the performance, the issue has not been studied from the learning theory perspective. The relationship of kernel design and graph learning was investigated in [6], which argued that quadratic regularization-based graph learning can be regarded as kernel design. However, normalization of W was not considered there.

The goal of this paper is to provide some learning theoretical insight into the role of normalization of the graph Laplacian matrix $(D - W)$. We first present a model for transductive learning on graphs and develop a margin analysis for multi-class graph learning. Based on this, we analyze the performance of Laplacian regularization-based graph learning in relation to graph properties. We use this analysis to obtain a better understanding of the role of normalization of the graph Laplacian matrix as well as dimension reduction in graph learning. The results indicate a limitation of the commonly practiced degree-based normalization mentioned above. We propose a learning theoretical remedy based on our analysis and use experiments to demonstrate that the remedy leads to improved classification performance.

2 Transductive Learning Model

We consider the following multi-category transductive learning model defined on a graph. Let $V = \{v_1, \dots, v_m\}$ be a set of $m$ nodes, and let $\mathcal{Y}$ be a set of $K$ possible output values. Assume that each node $v_j$ is associated with an output value $y_j \in \mathcal{Y}$, which we are interested in predicting. We randomly draw a set of $n$ indices $Z_n = \{j_i : 1 \le i \le n\}$ from $\{1, \dots, m\}$ uniformly and without replacement. We then manually label the $n$ nodes $v_{j_i}$ with labels $y_{j_i} \in \mathcal{Y}$,
and then automatically label the remaining $m - n$ nodes. The goal is to estimate the labels on the remaining $m - n$ nodes as accurately as possible.

We encode the label $y_j$ into a vector in $\mathbb{R}^K$, so that the problem becomes that of generating an estimation vector $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$, which can then be used to recover the label $y_j$. In multi-category classification with $K$ classes $\mathcal{Y} = \{1, \dots, K\}$, we encode each $y_j = k \in \mathcal{Y}$ as $e_k \in \mathbb{R}^K$, where $e_k$ is a vector of zero entries except for the $k$-th entry being one. Given $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$ (which is intended to approximate $e_{y_j}$), we decode the corresponding label estimate $\hat{y}_j$ as $\hat{y}_j = \arg\max_k\{f_{j,k} : k = 1, \dots, K\}$. If the true label is $y_j$, then the classification error is $\mathrm{err}(f_{j,\cdot}, y_j) = I(\hat{y}_j \ne y_j)$, where we use $I(\cdot)$ to denote the set indicator function.

In order to estimate $f = [f_{j,k}] \in \mathbb{R}^{mK}$ from only a subset of labeled nodes, we consider, for a given kernel matrix $K \in \mathbb{R}^{m\times m}$, the quadratic regularization $f^TQ_Kf = \sum_{k=1}^K f_{\cdot,k}^T K^{-1} f_{\cdot,k}$, where $f_{\cdot,k} = [f_{1,k}, \dots, f_{m,k}] \in \mathbb{R}^m$. We assume that K is full-rank. We will consider the kernel matrix induced by the graph Laplacian, to be introduced later in the paper. Note that the bold symbol K denotes the kernel matrix, and regular $K$ denotes the number of classes.

Given a vector $f \in \mathbb{R}^{mK}$, the accuracy of its component $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$ is measured by a loss function $\phi(f_{j,\cdot}, y_j)$. Our learning method attempts to minimize the empirical risk on the set $Z_n$ of $n$ labeled training nodes, subject to $f^TQ_Kf$ being small:
$$\hat{f}(Z_n) = \arg\min_{f\in\mathbb{R}^{mK}}\left[\frac{1}{n}\sum_{j\in Z_n}\phi(f_{j,\cdot}, y_j) + \lambda f^TQ_Kf\right], \qquad (1)$$
where $\lambda > 0$ is an appropriately chosen regularization parameter. In this paper, we focus on a special class of loss functions of the form $\phi(f_{j,\cdot}, y_j) = \sum_{k=1}^K \phi_0(f_{j,k}, \delta_{k,y_j})$, where $\delta_{a,b}$ is the delta function defined as: $\delta_{a,b} = 1$ when $a = b$ and $\delta_{a,b} = 0$ otherwise. We are interested in the generalization behavior of (1) compared to a properly defined optimal regularized risk, often referred to as "oracle inequalities" in the learning theory literature.

Theorem 1 Let $\phi(f_{j,\cdot}, y_j) = \sum_{k=1}^K \phi_0(f_{j,k}, \delta_{k,y_j})$ in (1). Assume that there exist positive constants $a$, $b$, and $c$ such that: (i) $\phi_0(x,y)$ is non-negative and convex in $x$, (ii) $\phi_0(x,y)$ is Lipschitz with constant $b$ when $\phi_0(x,y) \le a$, and (iii) $c = \inf\{x : \phi_0(x,1) \le a\} - \sup\{x : \phi_0(x,0) \le a\}$. Then $\forall p > 0$, the expected generalization error of the learning method (1) over the random training samples $Z_n$ can be bounded by:
$$E_{Z_n}\frac{1}{m-n}\sum_{j\in\bar{Z}_n}\mathrm{err}(\hat{f}_{j,\cdot}(Z_n), y_j) \le \frac{1}{a}\inf_{f\in\mathbb{R}^{mK}}\left[\frac{1}{m}\sum_{j=1}^m\phi(f_{j,\cdot}, y_j) + \lambda f^TQ_Kf\right] + \left(\frac{b\,\mathrm{tr}_p(K)}{c\lambda n}\right)^p,$$
where $\bar{Z}_n = \{1, \dots, m\} - Z_n$, $\mathrm{tr}_p(K) = \left[\frac{1}{m}\sum_{j=1}^m K_{j,j}^p\right]^{1/p}$, and $K_{j,j}$ is the $(j,j)$-entry of K.

Proof. The proof is similar to the proof of a related bound for binary classification in [6]. We shall introduce the following notation: let $i_{n+1} \ne i_1, \dots, i_n$ be an integer randomly drawn from $\bar{Z}_n$, and let $Z_{n+1} = Z_n \cup \{i_{n+1}\}$. Let $\hat{f}(Z_{n+1})$ be the semi-supervised learning method (1) using training data in $Z_{n+1}$: $\hat{f}(Z_{n+1}) = \arg\inf_{f\in\mathbb{R}^{mK}}\left[\frac{1}{n}\sum_{j\in Z_{n+1}}\phi(f_{j,\cdot}, y_j) + \lambda f^TQ_Kf\right]$. Adapted from a related lemma used in [6] for proving a similar result, we have the following inequality for each $k = 1, \dots, K$:
$$|\hat{f}_{i_{n+1},k}(Z_{n+1}) - \hat{f}_{i_{n+1},k}(Z_n)| \le |\nabla_{1,k}\phi(\hat{f}_{i_{n+1},\cdot}(Z_{n+1}), y_{i_{n+1}})|\,K_{i_{n+1},i_{n+1}}/(2\lambda n), \qquad (2)$$
where $\nabla_{1,k}\phi(f_{i,\cdot}, y)$ denotes a sub-gradient of $\phi(f_{i,\cdot}, y)$ with respect to $f_{i,k}$, where $f_{i,\cdot} = [f_{i,1}, \dots, f_{i,K}]$.
Next we prove
$$\mathrm{err}(\hat{f}_{i_{n+1},\cdot}(Z_n), y_{i_{n+1}}) \le \sup_{k = k_0,\, y_{i_{n+1}}}\left[\frac{1}{a}\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{y_{i_{n+1}},k}) + \left(\frac{b\,K_{i_{n+1},i_{n+1}}}{c\lambda n}\right)^p\right]. \qquad (3)$$
In fact, if $\hat{f}(Z_n)$ does not make an error on the $i_{n+1}$-th example, then the inequality automatically holds. Otherwise, assume that $\hat{f}(Z_n)$ makes an error on the $i_{n+1}$-th example; then there exists $k_0 \ne y_{i_{n+1}}$ such that $\hat{f}_{i_{n+1},y_{i_{n+1}}}(Z_n) \le \hat{f}_{i_{n+1},k_0}(Z_n)$. If we let $d = (\inf\{x : \phi_0(x,1) \le a\} + \sup\{x : \phi_0(x,0) \le a\})/2$, then either $\hat{f}_{i_{n+1},y_{i_{n+1}}}(Z_n) \le d$ or $\hat{f}_{i_{n+1},k_0}(Z_n) \ge d$. By the definition of $c$ and $d$, it follows that there exists $k = k_0$ or $k = y_{i_{n+1}}$ such that either $\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{y_{i_{n+1}},k}) \ge a$ or $|\hat{f}_{i_{n+1},k}(Z_{n+1}) - \hat{f}_{i_{n+1},k}(Z_n)| \ge c/2$. Using (2), we have either $\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{y_{i_{n+1}},k}) \ge a$ or $b\,K_{i_{n+1},i_{n+1}}/(2\lambda n) \ge c/2$, implying that
$$\frac{1}{a}\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{y_{i_{n+1}},k}) + \left(\frac{b\,K_{i_{n+1},i_{n+1}}}{c\lambda n}\right)^p \ge 1 = \mathrm{err}(\hat{f}_{i_{n+1},\cdot}(Z_n), y_{i_{n+1}}).$$
This proves (3).

We are now ready to prove Theorem 1 using (3). For every $j \in Z_{n+1}$, denote by $Z_{n+1}^{(j)}$ the subset of $n$ samples in $Z_{n+1}$ with the $j$-th data point left out. We have $\mathrm{err}(\hat{f}_{j,\cdot}(Z_{n+1}^{(j)}), y_j) \le \frac{1}{a}\phi(\hat{f}_{j,\cdot}(Z_{n+1}), y_j) + \left(\frac{b\,K_{j,j}}{c\lambda n}\right)^p$. We thus obtain for all $f \in \mathbb{R}^{mK}$:
$$E_{Z_n}\frac{1}{m-n}\sum_{j\in\bar{Z}_n}\mathrm{err}(\hat{f}_{j,\cdot}(Z_n), y_j) = E_{Z_{n+1}}\frac{1}{n+1}\sum_{j\in Z_{n+1}}\mathrm{err}(\hat{f}_{j,\cdot}(Z_{n+1}^{(j)}), y_j)$$
$$\le E_{Z_{n+1}}\frac{1}{n+1}\sum_{j\in Z_{n+1}}\left[\frac{1}{a}\phi(\hat{f}_{j,\cdot}(Z_{n+1}), y_j) + \left(\frac{b\,K_{j,j}}{c\lambda n}\right)^p\right]$$
$$\le \frac{n}{a(n+1)}E_{Z_{n+1}}\left[\frac{1}{n}\sum_{j\in Z_{n+1}}\phi(f_{j,\cdot}, y_j) + \lambda f^TQ_Kf\right] + \frac{1}{n+1}E_{Z_{n+1}}\sum_{j\in Z_{n+1}}\left(\frac{b\,K_{j,j}}{c\lambda n}\right)^p. \qquad \square$$

The formulation used here corresponds to the one-versus-all method for multi-category classification. For the SVM loss $\phi_0(x,y) = \max(0, 1-(2x-1)(2y-1))$, we may take $a = 0.5$, $b = 2$, and $c = 0.5$. In the experiments reported here, we shall employ the least squares function $\phi_0(x,y) = (x-y)^2$, which is widely used for graph learning. With this formulation, we may choose $a = 1/16$, $b = 0.5$, $c = 0.5$ in Theorem 1.

3 Laplacian regularization

Consider an undirected graph $G = (V,E)$ defined on the nodes $V = \{v_j : j = 1, \dots, m\}$, with edges $E \subseteq \{1, \dots, m\} \times \{1, \dots, m\}$ and weights $w_{j,j'} \ge 0$ associated with edges $(j,j') \in E$. For simplicity, we assume that $(j,j) \notin E$ and $w_{j,j'} = 0$ when $(j,j') \notin E$. Let $\deg_j(G) = \sum_{j'=1}^m w_{j,j'}$ be the degree of node $j$ of graph $G$. We consider the following definition of normalized Laplacian.

Definition 1 Consider a graph $G = (V,E)$ of $m$ nodes with weights $w_{j,j'}$ ($j,j' = 1, \dots, m$). The unnormalized Laplacian matrix $L(G) \in \mathbb{R}^{m\times m}$ is defined as: $L_{j,j'}(G) = -w_{j,j'}$ if $j \ne j'$; $\deg_j(G)$ otherwise. Given $m$ scaling factors $S_j$ ($j = 1, \dots, m$), let $S = \mathrm{diag}(\{S_j\})$. The S-normalized Laplacian matrix is defined as: $L_S(G) = S^{-1/2}L(G)S^{-1/2}$. The corresponding regularization is based on:
$$f_{\cdot,k}^T L_S(G) f_{\cdot,k} = \frac{1}{2}\sum_{j,j'=1}^m w_{j,j'}\left(\frac{f_{j,k}}{\sqrt{S_j}} - \frac{f_{j',k}}{\sqrt{S_{j'}}}\right)^2.$$

A common choice of S is $S = I$, corresponding to regularizing with the unnormalized Laplacian L. The idea is natural: we assume that the predictive values $f_{j,k}$ and $f_{j',k}$ should be close when $(j,j') \in E$ with a strong link. Another common choice is to normalize by $S_j = \deg_j(G)$ (i.e. $S = D$), so that the diagonals of $L_S$ become all one [3, 4, 7, 2].

Definition 2 Given labels $y = \{y_j\}_{j=1,\dots,m}$ on $V$, we define the cut for $L_S$ in Definition 1 as:
$$\mathrm{cut}(L_S, y) = \sum_{j,j':\, y_j \ne y_{j'}}\frac{w_{j,j'}}{2}\left(\frac{1}{S_j} + \frac{1}{S_{j'}}\right) + \sum_{j,j':\, y_j = y_{j'}}\frac{w_{j,j'}}{2}\left(\frac{1}{\sqrt{S_j}} - \frac{1}{\sqrt{S_{j'}}}\right)^2.$$

Unlike typical graph-theoretic definitions of graph-cut, this learning theoretical definition of graph-cut penalizes not only between-class edge weights but also within-class edge weights when such an edge connects two nodes with different scaling factors. This penalization is intuitive if we look at the regularizer in Definition 1, which encourages $f_{j,k}/\sqrt{S_j}$ to be similar to $f_{j',k}/\sqrt{S_{j'}}$ when $w_{j,j'}$ is large. If $j$ and $j'$ belong to the same class, we want $f_{j,k}$ to be similar to $f_{j',k}$. Therefore for such an in-class pair $(j,j')$, we want to have $S_j \approx S_{j'}$. This penalization has important consequences, which we will investigate later in the paper. For the unnormalized Laplacian (i.e. $S_j = 1$), the second term on the right hand side of Definition 2 vanishes, and our learning theoretical definition becomes identical to the standard graph-theoretic definition: $\mathrm{cut}(L, y) = \sum_{j,j':\, y_j \ne y_{j'}} w_{j,j'}$.
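Definitions 1 and 2 translate directly into code. The following sketch (assuming NumPy; helper names are ours) builds $L_S$ and evaluates the cut, which for the indicator encoding $f_{j,k} = \delta_{y_j,k}$ equals $\sum_k f_{\cdot,k}^T L_S f_{\cdot,k}$:

```python
import numpy as np

def s_normalized_laplacian(W, S):
    """L_S = S^{-1/2} (D - W) S^{-1/2} from Definition 1.
    W: (m, m) symmetric weight matrix; S: (m,) positive scaling factors."""
    L = np.diag(W.sum(axis=1)) - W
    inv_sqrt_S = 1.0 / np.sqrt(S)
    return inv_sqrt_S[:, None] * L * inv_sqrt_S[None, :]

def laplacian_cut(W, S, y):
    """cut(L_S, y) from Definition 2."""
    diff = y[:, None] != y[None, :]
    invS = 1.0 / S
    between = 0.5 * np.sum(W * diff * (invS[:, None] + invS[None, :]))
    d = 1.0 / np.sqrt(S)
    within = 0.5 * np.sum(W * (~diff) * (d[:, None] - d[None, :]) ** 2)
    return between + within
```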
Unlike typical graph-theoretical definitions of graph-cut, this learning-theoretical definition of graph-cut penalizes not only between-class edge weights but also within-class edge weights when such an edge connects two nodes with different scaling factors. This penalization is intuitive if we look at the regularizer in Definition 1, which encourages $f_{j,k}/\sqrt{S_j}$ to be similar to $f_{j',k}/\sqrt{S_{j'}}$ when $w_{j,j'}$ is large. If $j$ and $j'$ belong to the same class, we want $f_{j,k}$ to be similar to $f_{j',k}$. Therefore for such an in-class pair $(j,j')$, we want to have $S_j\approx S_{j'}$. This penalization has important consequences, which we will investigate later in the paper. For the unnormalized Laplacian (i.e., $S_j=1$), the second term on the right-hand side of Definition 2 vanishes, and our learning-theoretical definition becomes identical to the standard graph-theoretical definition: $\mathrm{cut}(\mathbf{L},y)=\sum_{j,j':y_j\neq y_{j'}}w_{j,j'}$.

We consider $\mathbf{K}$ in (1) defined as follows: $\mathbf{K}=(\alpha\mathbf{S}^{-1}+\mathbf{L_S}(G))^{-1}$, where $\alpha>0$ is a tuning parameter to make $\mathbf{K}$ strictly positive definite. This parameter is important. For simplicity, we state the generalization bound based on Theorem 1 with optimal $\lambda$. Note that in applications, $\lambda$ is usually tuned through cross validation. Therefore assuming optimal $\lambda$ will simplify the bound so that we can focus on the more essential characteristics of generalization performance.

Theorem 2 Let the conditions in Theorem 1 hold with the regularization condition $\mathbf{K}=(\alpha\mathbf{S}^{-1}+\mathbf{L_S}(G))^{-1}$. Assume that $\phi_0(0,0)=\phi_0(1,1)=0$. Then $\forall p>0$, there exists a sample-independent regularization parameter $\lambda$ in (1) such that the expected generalization error is bounded by:
$$E_{Z_n}\frac{1}{m-n}\sum_{j\in\bar{Z}_n}\mathrm{err}(\hat{f}_{j,\cdot}(Z_n),y_j)\le\frac{C_p(a,b,c)}{n^{p/(p+1)}}\,(\alpha s+\mathrm{cut}(\mathbf{L_S},y))^{p/(p+1)}\,\mathrm{tr}_p(\mathbf{K})^{p/(p+1)},$$
where $C_p(a,b,c)=(b/ac)^{p/(p+1)}(p^{1/(p+1)}+p^{-p/(p+1)})$ and $s=\sum_{j=1}^m S_j^{-1}$.

Proof. Let $f_{j,k}=\delta_{y_j,k}$. It can be easily verified that $\sum_{j=1}^m\phi(f_{j,\cdot},y_j)/m+\lambda f^TQ_{\mathbf{K}}f=\lambda(\alpha s+\mathrm{cut}(\mathbf{L_S},y))$. Now, we simply use this expression in Theorem 1, and then optimize over $\lambda$. □

This theorem relates graph-cut to generalization performance. The conditions on the loss function in Theorem 2 hold for least squares with $b/ac=16$. It also applies to other standard loss functions such as SVM. With $p$ fixed, the generalization error decreases at the rate $O(n^{-p/(p+1)})$ when $n$ increases. This rate of convergence is faster when $p$ increases. However, in general, $\mathrm{tr}_p(\mathbf{K})$ is an increasing function of $p$. Therefore we have a trade-off between the two terms. The bound also suggests that if we normalize the diagonal entries of $\mathbf{K}$ such that $\mathbf{K}_{j,j}$ is a constant, then $\mathrm{tr}_p(\mathbf{K})$ is independent of $p$, and thus a larger $p$ can be used in the bound. This motivates the idea of normalizing the diagonals of $\mathbf{K}$. Our goal is to better understand how the quantity $(\alpha s+\mathrm{cut}(\mathbf{L_S},y))^{p/(p+1)}\,\mathrm{tr}_p(\mathbf{K})^{p/(p+1)}$ is related to properties of the graph, which gives better understanding of graph-based learning.

Definition 3 A subgraph $G_0=(V_0,E_0)$ of $G=(V,E)$ is a pure component if $G_0$ is connected, $E_0$ is induced by restricting $E$ on $V_0$, and the labels $y$ have identical values on $V_0$. A pure subgraph $\bar{G}=\cup_{\ell=1}^q G_\ell$ of $G$ divides $V$ into $q$ disjoint sets $V=\cup_{\ell=1}^q V_\ell$ such that each subgraph $G_\ell=(V_\ell,E_\ell)$ is a pure component. Denote by $\lambda_i(G_\ell)=\lambda_i(\mathbf{L}(G_\ell))$ the $i$-th smallest eigenvalue of $\mathbf{L}(G_\ell)$.

If we remove all edges of $G$ that connect nodes with different labels, then the resulting subgraph is a pure subgraph (but not the only one).
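Collecting Definition 1 and the kernel choice above into code is straightforward. A minimal sketch with hypothetical names, using dense matrices for clarity:

```python
import numpy as np

def laplacian(W):
    """Unnormalized Laplacian L(G) from Definition 1."""
    return np.diag(W.sum(axis=1)) - W

def kernel_matrix(W, S, alpha):
    """K = (alpha * S^{-1} + L_S(G))^{-1}, with L_S = S^{-1/2} L S^{-1/2}."""
    L = laplacian(W)
    S_inv_sqrt = np.diag(1.0 / np.sqrt(S))
    L_S = S_inv_sqrt @ L @ S_inv_sqrt
    return np.linalg.inv(alpha * np.diag(1.0 / S) + L_S)
```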
For each pure component $G_\ell$, its first eigenvalue $\lambda_1(G_\ell)$ is always zero. The second eigenvalue $\lambda_2(G_\ell)>0$, and it measures how well-connected $G_\ell$ is [2].

Theorem 3 Let the assumptions of Theorem 2 hold, and let $\bar{G}=\cup_{\ell=1}^q G_\ell$ ($G_\ell=(V_\ell,E_\ell)$) be a pure subgraph of $G$. For all $p\ge1$, there exist sample-independent $\lambda$ and $\alpha$ such that the generalization performance of (1), $E_{Z_n}\sum_{j\in\bar{Z}_n}\mathrm{err}(\hat{f}_{j,\cdot},y_j)/(m-n)$, is bounded by
$$\frac{C_p(a,b,c)}{n^{p/(p+1)}}\left[s^{1/2}\left(\sum_{\ell=1}^q\frac{s_\ell(p)}{m\,m_\ell^p}\right)^{1/2p}+\mathrm{cut}(\mathbf{L_S},y)^{1/2}\left(\sum_{\ell=1}^q\frac{s_\ell(p)}{m\,\lambda_2(G_\ell)^p}\right)^{1/2p}\right]^{2p/(p+1)},$$
where $m_\ell=|V_\ell|$, $s=\sum_{j=1}^m S_j^{-1}$, and $s_\ell(p)=\sum_{j\in V_\ell}S_j^p$.

Proof sketch. We simply upper-bound $\mathrm{tr}_p(\mathbf{K})$ in terms of $\lambda_2(G_\ell)$ and $s_\ell$, where $\mathbf{K}=(\alpha\mathbf{S}^{-1}+\mathbf{L_S})^{-1}$. Substitute this estimate into Theorem 2 and optimize it over $\alpha$. □

To put this into perspective, suppose that we use the unnormalized Laplacian regularizer on a zero-cut graph. Then $\mathbf{S}=\mathbf{I}$ and $\mathrm{cut}(\mathbf{L_S},y)=0$, and by letting $p=1$ and $p\to\infty$ in Theorem 3, we have:
$$E_{Z_n}\sum_{j\in\bar{Z}_n}\frac{\mathrm{err}(\hat{f}_{j,\cdot},y_j)}{m-n}\le2\sqrt{\frac{bq}{ac\,n}}\qquad\text{and}\qquad E_{Z_n}\sum_{j\in\bar{Z}_n}\frac{\mathrm{err}(\hat{f}_{j,\cdot},y_j)}{m-n}\le\frac{b}{ac\,n}\cdot\frac{m}{\min_\ell m_\ell}.$$
That is, in the zero-cut case, the generalization performance can be bounded as $O(\sqrt{q/n})$. We can also achieve a faster convergence rate of $O(1/n)$, but it also depends on $m/(\min_\ell m_\ell)\ge q$. This implies that we will achieve better convergence at the $O(1/n)$ level if the sizes of the components are balanced, while the convergence may behave like $O(\sqrt{q/n})$ otherwise.

3.1 Near zero-cut optimum scaling factors

The above observation motivates a scaling matrix $\mathbf{S}$ that compensates for unbalanced pure-component sizes. From Definition 2 and Theorem 2 we know that good scaling factors should be approximately constant within each class. Here we focus on the case that the scaling factors are constant within each pure component ($S_j=\bar{s}_\ell$ when $j\in V_\ell$) in order to derive optimum scaling factors. Let us define $\mathrm{cut}(\bar{G},y)=\sum_{j,j':y_j\neq y_{j'}}w_{j,j'}+\sum_{\ell\neq\ell'}\sum_{j\in V_\ell,j'\in V_{\ell'}}w_{j,j'}/2$. In Theorem 3, when we use $\mathrm{cut}(\mathbf{L_S},y)\le\mathrm{cut}(\bar{G},y)/\min_\ell\bar{s}_\ell$, let $p\to\infty$, and assume that $\mathrm{cut}(\bar{G},y)$ is sufficiently small, the dominant term of the bound becomes proportional to $\max_\ell(\bar{s}_\ell/m_\ell)\sum_{\ell=1}^q m_\ell/\bar{s}_\ell$, which can then be optimized with the choice $\bar{s}_\ell=m_\ell$, and the resulting bound becomes:
$$E_{Z_n}\sum_{j\in\bar{Z}_n}\frac{\mathrm{err}(\hat{f}_{j,\cdot},y_j)}{m-n}\le\frac{b}{ac\,n}\left(\sqrt{q}+\sqrt{\frac{\mathrm{cut}(\bar{G},y)}{u(\bar{G})\min_\ell m_\ell}}\right)^2,$$
where $u(\bar{G})=\min_\ell(\lambda_2(G_\ell)/m_\ell)$. Hence, if $\mathrm{cut}(\bar{G},y)$ is small, then we should choose $\bar{s}_\ell\propto m_\ell$ for each pure component $\ell$, so that the generalization performance is approximately $(ac)^{-1}bq/n$.

The analysis provided here not only formally shows the importance of normalization in the learning-theoretical framework but also suggests that a good normalization factor for each node $j$ is approximately the size of the well-connected pure component that contains node $j$ (assuming that nodes belonging to different pure components are only weakly connected). The commonly practiced degree-based normalization method $S_j=\deg_j(G)$ provides such good normalization factors under a simplified "box model" used in early studies, e.g., [4]. In this model, each node connects to itself and all other nodes of the same pure component with edge weight $w_{j,j'}=1$. The degree is thus $\deg_j(\bar{G})=|V_\ell|=m_\ell$, which gives the optimal scaling in our analysis. However, in general, the box model may not be a good approximation for practical problems. A more realistic approximation, which we call the core-satellite model, will be introduced in the experimental section.
For such a model, the degree-based normalization can fail because $\deg_j(\bar{G})$ within each pure component $G_\ell$ is not approximately constant (thus raising $\mathrm{cut}(\mathbf{L_S},y)$), and it may not be proportional to $m_\ell$. Our remedy is as follows. Let $\tilde{\mathbf{K}}=(\alpha\mathbf{I}+\mathbf{L})^{-1}$ be the kernel matrix corresponding to the unnormalized Laplacian. Let $v_\ell\in\mathbb{R}^m$ be the vector whose $j$-th entry is 1 if $j\in V_\ell$ and 0 otherwise. Then it is easy to verify that for small $\alpha$ and near-zero $\mathrm{cut}(\bar{G},y)$, we have $\alpha\tilde{\mathbf{K}}=\sum_{\ell=1}^q v_\ell v_\ell^T/m_\ell+O(\alpha)$, so that $\tilde{\mathbf{K}}_{j,j}\approx(\alpha m_\ell)^{-1}$ for each $j\in V_\ell$. Therefore the scaling factor $S_j=1/\tilde{\mathbf{K}}_{j,j}$ is nearly optimal. We call this method of normalization ($S_j=1/\tilde{\mathbf{K}}_{j,j}$, $\mathbf{K}=(\alpha\mathbf{S}^{-1}+\mathbf{L_S})^{-1}$) K-scaling in this paper, as it scales the kernel matrix $\mathbf{K}$ so that each $\mathbf{K}_{j,j}=1$. By contrast, we call the standard degree-based normalization ($S_j=\deg_j(G)$, $\mathbf{K}=(\alpha\mathbf{I}+\mathbf{L_S})^{-1}$) L-scaling, as it scales the diagonals of $\mathbf{L_S}$ to 1. Although K-scaling coincides with a common practice in standard kernel learning, it is important to notice that showing this method behaves well in the graph learning setting is non-trivial and novel. In fact, no one has proposed this normalization method in the graph learning setting before this work. Without the learning-theoretical results developed here, it is not obvious whether this method should work better than the commonly practiced degree-based normalization.

4 Dimension Reduction

Normalization and dimension reduction have been commonly used in spectral clustering, such as [3, 4]. For semi-supervised learning, dimension reduction (without normalization) is known to improve performance [1, 6], while normalization (without dimension reduction) has also been explored [7]. An appropriate combination of normalization and dimension reduction can further improve performance. We shall first introduce dimension reduction with the normalized Laplacian $\mathbf{L_S}(G)$. Denote by $P^r_{\mathbf{S}}(G)$ the projection operator onto the eigenspace of $\alpha\mathbf{S}^{-1}+\mathbf{L_S}(G)$ corresponding to the $r$ smallest eigenvalues. Now, we may define the following regularizer on the reduced subspace:
$$f_{\cdot,k}^T\mathbf{K}^{-1}f_{\cdot,k}=\begin{cases}f_{\cdot,k}^T\mathbf{K}_0^{-1}f_{\cdot,k}&\text{if }P^r_{\mathbf{S}}(G)f_{\cdot,k}=f_{\cdot,k},\\+\infty&\text{otherwise}.\end{cases}\qquad(4)$$
Note that we will focus on bounding the generalization complexity using the reduced dimensionality $r$. In such a context, the choice of $\mathbf{K}_0$ is not important. For example, we may simply choose $\mathbf{K}_0=\mathbf{I}$. The benefit of dimension reduction in graph learning has been investigated in [6], under the spectral kernel design framework. Note that the normalization issue, which changes the eigenvectors and their ordering, was not investigated there. The following theorem shows that the target vectors can be well approximated by their projections onto $P^q_{\mathbf{S}}(G)$. We skip the proof due to the space limitation.

Theorem 4 Let $\bar{G}=\cup_{\ell=1}^q G_\ell$ ($G_\ell=(V_\ell,E_\ell)$) be a pure subgraph of $G$. Consider $r\ge q$: $\lambda_{r+1}(\mathbf{L_S}(G))\ge\lambda_{r+1}(\mathbf{L_S}(\bar{G}))\ge\min_\ell\lambda_2(\mathbf{L_S}(G_\ell))$. For each $k$, let $\bar{f}_{j,k}=\delta_{y_j,k}$ be the target (encoding of the true labels) for class $k$ ($j=1,\ldots,m$). Then
$$\|P^r_{\mathbf{S}}(G)\bar{f}_{\cdot,k}-\bar{f}_{\cdot,k}\|_2^2\le\delta_r(\mathbf{S})\,\|\bar{f}_{\cdot,k}\|_2^2,$$
where
$$\delta_r(\mathbf{S})=\frac{\|\mathbf{L_S}(G)-\mathbf{L_S}(\bar{G})\|_2+d(\mathbf{S})}{\lambda_{r+1}(\mathbf{L_S}(G))},\qquad d(\mathbf{S})=\max_\ell\frac{1}{2|V_\ell|}\sum_{j,j'\in V_\ell}\big(S_j^{-1/2}-S_{j'}^{-1/2}\big)^2.$$
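The projection $P^r_{\mathbf{S}}(G)$ used above is an ordinary eigenspace projection and is easy to realize numerically. A minimal sketch with hypothetical names, reusing the ingredients of the earlier `kernel_matrix` sketch:

```python
import numpy as np

def projection_Pr(W, S, alpha, r):
    """Projector onto the eigenspace of alpha*S^{-1} + L_S(G)
    spanned by the eigenvectors of the r smallest eigenvalues."""
    L = np.diag(W.sum(axis=1)) - W
    S_inv_sqrt = np.diag(1.0 / np.sqrt(S))
    M = alpha * np.diag(1.0 / S) + S_inv_sqrt @ L @ S_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns ascending eigenvalues
    U = eigvecs[:, :r]
    return U @ U.T                         # P = U U^T
```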
We can prove a generalization bound using Theorem 4. For simplicity, we only consider the least squares loss $\phi(f_{j,\cdot},y_j)=\sum_{k=1}^K(f_{j,k}-\delta_{k,y_j})^2$ in (1), using regularization (4) with $\mathbf{K}_0=\mathbf{I}$. With $p=1$, we have $\frac{1}{m}\sum_{j=1}^m\phi(\tilde{f}_{j,\cdot},y_j)+\lambda\tilde{f}^TQ_{\mathbf{K}}\tilde{f}\le\delta_r(\mathbf{S})+\lambda m$ for the projected targets $\tilde{f}_{\cdot,k}=P^r_{\mathbf{S}}(G)\bar{f}_{\cdot,k}$. It is also equivalent to take $\mathbf{K}_0=P^r_{\mathbf{S}}(G)$ due to the dimension reduction, so that we can use $\mathrm{tr}(\mathbf{K})=r$. Now from Theorem 1 with $a=1/16$, $b=0.5$, $c=0.5$, we have $E_{Z_n}\frac{1}{m-n}\sum_{j\in\bar{Z}_n}\mathrm{err}(\hat{f}_{j,\cdot},y_j)\le16(\delta_r(\mathbf{S})+\lambda m)+16r/(\lambda nm)$. By optimizing over $\lambda$, we obtain
$$E_{Z_n}\sum_{j\in\bar{Z}_n}\frac{\mathrm{err}(\hat{f}_{j,\cdot},y_j)}{m-n}\le16\,\delta_r(\mathbf{S})+32\sqrt{r/n}.\qquad(5)$$
The analysis of optimum scaling factors is analogous to Section 3.1, and the conclusions there hold. Compared to Theorem 3, the advantage of dimension reduction in (5) is that the quantity $\mathrm{cut}(\mathbf{L_S},y)$ is replaced by $\|\mathbf{L_S}(G)-\mathbf{L_S}(\bar{G})\|_2$, which is typically much smaller. Instead of a rigorous analysis, we shall just give a brief intuition. For simplicity we take $\mathbf{S}=\mathbf{I}$ so that we can ignore the variations caused by $\mathbf{S}$. The 2-norm of the symmetric error matrix $\mathbf{L_S}(G)-\mathbf{L_S}(\bar{G})$ is its largest eigenvalue, which is no more than the largest 1-norm of one of its row vectors. In contrast, $\mathrm{cut}(\mathbf{L_S},y)$ behaves like the absolute sum of the entries of the error matrix, which is $m$ times the averaged 1-norm of its row vectors. Therefore, if the error is relatively uniform across rows, then $\mathrm{cut}(\mathbf{L_S},y)$ can be on the order of $m$ times larger than $\|\mathbf{L_S}(G)-\mathbf{L_S}(\bar{G})\|_2$.

5 Experiments

We test the three types of kernel matrix $\mathbf{K}$ (unnormalized, normalized by K-scaling or L-scaling) with two regularization methods: the first method is to use $\mathbf{K}$ without dimension reduction, and the second reduces the dimension of $\mathbf{K}^{-1}$ to the eigenvectors corresponding to the smallest $r$ eigenvalues and regularizes with $f^T\mathbf{K}^{-1}f$ if $P^r_{\mathbf{S}}(G)f=f$ and $+\infty$ otherwise. We are particularly interested in how well K-scaling performs.

From $m$ data points, $n$ training labeled examples are randomly chosen while ensuring that at least one training example is chosen from each class. The remaining $m-n$ data points serve as test data. The regularization parameter $\lambda$ is chosen by cross validation on the $n$ training labeled examples. We will show performance either when the rest of the parameters ($\alpha$ and the dimensionality $r$) are also chosen by cross validation or when they are set to the optimum (oracle performance). The dimensionality $r$ is chosen from $K,K+5,K+10,\ldots,100$, where $K$ is the number of classes, unless otherwise specified. Our focus is on small $n$ close to the number of classes. Throughout this section, we conduct 10 runs with random training/test splits and report the average accuracy. We use the one-versus-all strategy with the least squares loss $\phi_k(a,b)=(a-\delta_{k,b})^2$.

Controlled data experiments. The purpose of the controlled data experiments is to observe the correlation of the effectiveness of the normalization methods with graph properties. The graphs we generate contain 2000 nodes, each of which is assigned one of 10 classes. We show the results when dimension reduction is applied to the three types of matrix $\mathbf{K}$. The performance is averaged over 10 random splits, with error bars representing one standard deviation.

[Figure 1: Classification accuracy (%). (a) Graphs with near-constant within-class degrees. (b) Core-satellite graphs. n = 40, m = 2000. With dimension reduction (dim <= 20; chosen by cross validation).]

Figure 1 (a) shows classification accuracy on three graphs that were generated so that the node degrees (of either correct edges or erroneous edges) are close to constant within each class but vary across classes. On these graphs, both K-scaling and L-scaling significantly improve classification accuracy over the unnormalized baseline.
There is not much difference between K-scaling's and L-scaling's performance. Observe that K-scaling and L-scaling perform differently on the graphs used in Figure 1 (b). These five graphs have the following properties. Each class consists of core nodes and satellite nodes. Core nodes of the same class are tightly connected with each other and do not have any erroneous edges. Satellite nodes are relatively weakly connected to core nodes of the same class. They are also connected to some other classes' satellite nodes (i.e., introducing errors). This core-satellite model is intended to simulate real-world data in which some data points are close to the class boundaries (satellite nodes). For graphs generated in this manner, degrees vary within the same class, since the satellite nodes have smaller degrees than the core nodes. Our analysis suggests that L-scaling will do poorly. Figure 1 (b) shows that on the five core-satellite graphs, K-scaling indeed produces higher performance than L-scaling. In particular, K-scaling does well even where L-scaling underperforms the unnormalized baseline.

Real-world data experiments. Our real-world data experiments use an image data set (MNIST) and a text data set (RCV1). The MNIST data set, downloadable from http://yann.lecun.com/exdb/mnist/, consists of hand-written digit image data (representing 10 classes, from digit '0' to '9'). For our experiments, we randomly chose 2000 images (i.e., m = 2000). Reuters Corpus Version 1 (RCV1) consists of news articles labeled with topics. For our experiments, we chose 10 topics (ranging from sports to labor issues; representing 10 classes) that have relatively large populations and randomly chose 2000 articles that are labeled with exactly one of those 10 topics. To generate graphs from the image data, as is commonly done, we first generate the vectors of the gray-scale values of the pixels, and produce the edge weight between the $i$-th and the $j$-th data points $X_i$ and $X_j$ by $w_{i,j}=\exp(-\|X_i-X_j\|^2/t)$, where $t>0$ is a parameter (RBF kernels). To generate graphs from the text data, we first create the bag-of-word vectors and then set $w_{i,j}$ based on RBF as above. As our baseline, we test the supervised configuration by letting $W+\lambda\mathbf{I}$ be the kernel matrix and using the same least squares loss function, where we use the oracle $\lambda$, which is optimal.
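The graph construction just described is simple to reproduce; a minimal sketch with hypothetical names (`X` is the data matrix, one row per point), optionally sparsified to the usual k-nearest-neighbor graph:

```python
import numpy as np

def rbf_graph(X, t, knn=None):
    """Edge weights w_ij = exp(-||X_i - X_j||^2 / t), zero diagonal.
    If knn is given, keep only each node's knn largest weights (symmetrized)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / t)
    np.fill_diagonal(W, 0.0)
    if knn is not None:
        keep = np.zeros_like(W, dtype=bool)
        idx = np.argsort(-W, axis=1)[:, :knn]
        rows = np.arange(W.shape[0])[:, None]
        keep[rows, idx] = True
        W = np.where(keep | keep.T, W, 0.0)   # keep an edge if either endpoint selects it
    return W
```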
Figure 2 (a-1, a-2) shows performance in relation to the number of labeled examples ($n$) on the MNIST data set.

[Figure 2: Classification accuracy (%) versus sample size n (m = 2000). (a-1) MNIST, dim and alpha determined by cross validation. (a-2) MNIST, dim and alpha set to the optimum. (b-1) RCV1, dim and alpha determined by cross validation. (b-2) RCV1, dim and alpha set to the optimum.]

The comparison of the three bold lines (representing the methods with dimension reduction) in Figure 2 (a-1) shows that when the dimensionality and $\alpha$ are determined by cross validation, K-scaling outperforms L-scaling, and L-scaling outperforms the unnormalized Laplacian. The performance differences among these three are statistically significant ($p\le0.01$) based on the paired t-test. The performance of the unnormalized Laplacian (with dimension reduction) is roughly consistent with the performance at similar $(m,n)$ with heuristic dimension selection in [1]. Without dimension reduction, L-scaling and K-scaling still improve performance over the unnormalized Laplacian. The best performance is always obtained by K-scaling with dimension reduction.

In Figure 2 (a-1), the unnormalized Laplacian with dimension reduction underperforms the unnormalized Laplacian without dimension reduction, indicating that dimension reduction rather degrades performance. By comparing Figure 2 (a-1) and (a-2), we observe that this seemingly counterintuitive performance trend is caused by the difficulty of choosing the right dimensionality by cross validation. Figure 2 (a-2) shows the performance at the oracle-optimal dimensionality and $\alpha$. As observed, if the optimal dimensionality is known (as in (a-2)), dimension reduction improves performance either with or without normalization by K-scaling and L-scaling, and all transductive configurations outperform the supervised baseline. We also note that the comparison of Figure 2 (a-1) and (a-2) shows that choosing good dimensionality by cross validation is much harder than choosing $\alpha$ by cross validation, especially when the number of labeled examples is small.

On the RCV1 data set, the performance trend is similar to that of MNIST. Figure 2 (b-1, b-2) shows the performance on RCV1 using the RBF kernel ($t=0.25$, 100-NN). In the setting of Figure 2 (b-1), where the dimensionality and $\alpha$ were determined by cross validation, K-scaling with dimension reduction generally performs the best. By setting the dimensionality and $\alpha$ to the optimum, the benefit of K-scaling with dimension reduction is even clearer (Figure 2 (b-2)). Its performance differences from the second- and third-best "L-scaling (w/ dim redu.)" and "Unnormalized (w/ dim redu.)" are statistically significant ($p\le0.01$) in both Figure 2 (b-1) and (b-2).

In our experiments, K-scaling with dimension reduction consistently outperformed the others. Without dimension reduction, K-scaling and L-scaling are not always effective. This is consistent with our analysis. On real data, the cut is not near zero, and the effect of normalization is unclear (Section 3.1); however, when the dimension is reduced, $\|\mathbf{L_S}(G)-\mathbf{L_S}(\bar{G})\|_2$ (corresponding to the cut) can be much smaller (Section 4), which suggests that K-scaling should improve performance.

6 Conclusion

We derived generalization bounds for learning on graphs with Laplacian regularization, using properties of the graph. In particular, we explained the importance of Laplacian normalization and dimension reduction for graph learning. We argued that the standard L-scaling normalization method has the undesirable property that the normalization factors can vary significantly within a pure component. An alternate normalization method, which we call K-scaling, is proposed to remedy the problem. Experiments confirm the superiority of this normalization scheme.

References

[1] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, Special Issue on Clustering:209-239, 2004.
[2] F. R. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics. American Mathematical Society, Rhode Island, 1998.
[3] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849-856, 2001.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22:888-905, 2000.
[5] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In NIPS 2001, 2002.
[6] T. Zhang and R. K. Ando. Analysis of spectral kernel design based semi-supervised learning. In NIPS, 2006.
[7] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 2003, pages 321-328, 2004.
[8] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003, 2003.
2,368
3,149
Bayesian Model Scoring in Markov Random Fields

Sridevi Parise
Bren School of Information and Computer Science
UC Irvine
Irvine, CA 92697-3425
sparise@ics.uci.edu

Max Welling
Bren School of Information and Computer Science
UC Irvine
Irvine, CA 92697-3425
welling@ics.uci.edu

Abstract

Scoring structures of undirected graphical models by means of evaluating the marginal likelihood is very hard. The main reason is the presence of the partition function, which is intractable to evaluate, let alone integrate over. We propose to approximate the marginal likelihood by employing two levels of approximation: we assume normality of the posterior (the Laplace approximation) and approximate all remaining intractable quantities using belief propagation and the linear response approximation. This results in a fast procedure for model scoring. Empirically, we find that our procedure has about two orders of magnitude better accuracy than standard BIC methods for small datasets, but deteriorates when the size of the dataset grows.

1 Introduction

Bayesian approaches have become an important modeling paradigm in machine learning. They offer a very natural setting in which to address issues such as overfitting, which plague standard maximum likelihood approaches. A full Bayesian approach has its computational challenges, as it often involves intractable integrals. While for Bayesian networks many of these challenges have been met successfully [3], the situation is quite the reverse for Markov random field models. In fact, it is very hard to find any literature at all on model order selection in general MRF models. The main reason for this discrepancy is the fact that MRF models have a normalization constant that depends on the parameters but is in itself intractable to compute, let alone integrate over. In fact, the presence of this term even prevents one from drawing samples from the posterior distribution in most situations, except for some special cases.¹

In terms of approximating the posterior, some new methods have become available recently. In [7] a number of approximate MCMC samplers are proposed. Two of them were reported to be most successful: one based on Langevin sampling with approximate gradients given by contrastive divergence, and one where the acceptance probability is approximated by replacing the log partition function with the Bethe free energy. Both these methods are very general, but inefficient. In [2] MCMC methods are explored for the Potts model based on the reversible jump formalism. To compute acceptance ratios for dimension-changing moves they need to estimate the partition function using a separate estimation procedure, making it rather inefficient as well. In [6] and [8] MCMC methods are proposed that use perfect samples to circumvent the calculation of the partition function altogether. This method is elegant but limited in its application due to the need to draw perfect samples. Moreover, two approaches that approximate the posterior by a Gaussian distribution are proposed in [11] (based on expectation propagation) and [13] (based on the Bethe-Laplace approximation).

¹ If one can compute the normalization term exactly (e.g., graphs with small treewidth) or if one can draw perfect samples from the MRF [8] (e.g., positive interactions only), then one can construct a Markov chain for the posterior.

In this paper we focus on a different problem, namely that of approximating the marginal likelihood.
This quantity is at the heart of Bayesian analysis because it allows one to compare models of different structure. One can use it to either optimize or average over model structures. Even if one has an approximation to the posterior distribution, it is not at all obvious how to use it to compute a good estimate for the marginal likelihood. The most direct approach is to use samples from the posterior and compute importance weights,
$$p(D)\approx\frac{1}{N}\sum_{n=1}^N p(D|\theta_n)p(\theta_n)/Q(\theta_n|D),\qquad\theta_n\sim Q(\theta_n|D),\qquad(1)$$
where $Q(\theta_n|D)$ denotes the approximate posterior. Unfortunately, this importance sampler suffers from very high variance when the number of parameters becomes large. It is not untypical that the estimate is effectively based on a single example.

We propose to use the Laplace approximation, including all $O(1)$ terms, where the intractable quantities of interest are approximated by either belief propagation (BP) or the linear response theorem based on the solution of BP. We show empirically that the $O(1)$ terms are indispensable for small $N$. Their inclusion can improve accuracy by up to two orders of magnitude. At the same time we observe that, as a function of $N$, the $O(1)$ term based on the covariance between features deteriorates and should be omitted for large $N$. We conjecture that this phenomenon is explained by the fact that the calculation of the covariance between features, which is equal to the second derivative of the log-normalization constant, becomes unstable if the bias in the MAP estimate of the parameters is of the order of the variance in the posterior. For any biased estimate of the parameters this phenomenon is therefore bound to happen as we increase $N$, because the variance of the posterior distribution is expected to decrease with $N$. In summary, we present a very accurate estimate for the marginal likelihood where it is most needed, i.e., for small $N$. This work seems to be the first practical method for estimating the marginal evidence in undirected graphical models.
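To make the baseline in eq. (1) concrete, the following is a minimal sketch of the posterior importance sampler. The function names are hypothetical: `log_joint(theta)` returns $\log[p(D|\theta)p(\theta)]$, and `q_sample`/`q_logpdf` are the sampling and log-density routines of an approximate posterior $Q$. Computing in log space guards against underflow:

```python
import numpy as np

def marginal_likelihood_is(log_joint, q_sample, q_logpdf, num_samples=1000):
    """Importance-sampling estimate of log p(D) as in eq. (1):
    p(D) ~= (1/N) sum_n p(D|theta_n) p(theta_n) / Q(theta_n|D)."""
    log_w = np.empty(num_samples)
    for n in range(num_samples):
        theta = q_sample()                       # theta_n ~ Q(theta|D)
        log_w[n] = log_joint(theta) - q_logpdf(theta)
    # log-mean-exp of the importance weights
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())
```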
2 The Bethe-Laplace Approximation for log p(D)

Without loss of generality, we represent an MRF as a log-linear model,
$$p(x|\theta)=\frac{1}{Z(\theta)}\exp\big[\theta^Tf(x)\big],\qquad(2)$$
where $f(x)$ represents the features. In the following we will assume that the random variables $x$ are observed. Generalizations to models with hidden variables exist in theory, but we defer the empirical evaluation of this case to future research.

To score a structure, we will follow the Bayesian paradigm and aim to compute the log-marginal likelihood $\log p(D)$, where $D$ represents a dataset of size $N$,
$$\log p(D)=\log\int d\theta\,p(D|\theta)\,p(\theta),\qquad(3)$$
where $p(\theta)$ is some arbitrary prior on the parameters $\theta$. In order to approximate this quantity we employ two approximations. Firstly, we expand both the log-likelihood and the log-prior around the MAP value $\theta^{MP}$. For the log-likelihood this boils down to expanding the log-partition function,
$$\log Z(\theta)\approx\log Z(\theta^{MP})+\mu^T\delta\theta+\frac{1}{2}\delta\theta^TC\,\delta\theta,\qquad(4)$$
with $\delta\theta=(\theta-\theta^{MP})$ and
$$C=E[f(x)f(x)^T]_{p(x)}-E[f(x)]_{p(x)}E[f(x)]_{p(x)}^T,\qquad\mu=E[f(x)]_{p(x)},\qquad(5)$$
and where all averages are taken over $p(x|\theta^{MP})$. Similarly, for the prior we find
$$\log p(\theta)=\log p(\theta^{MP})+g^T\delta\theta+\frac{1}{2}\delta\theta^TH\,\delta\theta,\qquad(6)$$
where $g$ is the first derivative of $\log p$ evaluated at $\theta^{MP}$ and $H$ is the second derivative (or Hessian). The variables $\delta\theta$ represent fluctuations of the parameters around the MAP value $\theta^{MP}$. The marginal likelihood can now be approximated by integrating out the fluctuations $\delta\theta$, considering $\theta^{MP}$ as a hyper-parameter,
$$\log p(D)=\log\int d\delta\theta\;p(D|\delta\theta,\theta^{MP})\,p(\delta\theta|\theta^{MP}).\qquad(7)$$
Inserting the expansions in eqns. 4 and 6 into eqn. 7, we arrive at the standard expression for the Laplace approximation applied to MRFs,
$$\log p(D)\approx\sum_n\theta_{MP}^Tf(x_n)-N\log Z(\theta^{MP})+\log p(\theta^{MP})+\frac{F}{2}\log(2\pi)-\frac{F}{2}\log N-\frac{1}{2}\log\det\Big(C-\frac{H}{N}\Big),\qquad(8)$$
with $F$ the number of features. The difference with Laplace approximations for Bayesian networks is the fact that many terms in the expression above cannot be evaluated. First of all, determining $\theta^{MP}$ requires running gradient ascent or iterative scaling to maximize the penalized log-likelihood, which requires the computation of the average sufficient statistics $E[f(x)]_{p(x)}$. Secondly, the expression contains the log-partition function $Z(\theta^{MP})$ and the covariance matrix $C$, which are both intractable quantities.

2.1 The BP-Linear Response Approximation

To make further progress, we introduce a second layer of approximations based on belief propagation. In particular, we approximate the required marginals in the gradient for $\theta^{MP}$ with the ones obtained with BP. For fully observed MRFs the value of $\theta^{MP}$ will be very close to the solution obtained by pseudo-moment matching (PMM) [5]; the influence of the prior being the only difference between the two. Hence, we use $\theta^{PMM}$ to initialize gradient descent. The approximation incurred by PMM is not always small [10], in which case other approximations such as contrastive divergence may be substituted instead. The term $\log Z(\theta^{MP})$ will be approximated with the Bethe free energy. This will involve running belief propagation on a model with parameters $\theta^{MP}$ and inserting the beliefs at their fixed points into the expression for the Bethe free energy [16].

To compute the covariance matrix between the features $C$ (eqn. 5), we use the linear response algorithm of [15]. This approximation is based on the observation that $C$ is the Hessian of the log-partition function w.r.t. the parameters. This is approximated by the Hessian of the negative Bethe free energy w.r.t. the parameters, which in turn depends on the partial derivatives of the beliefs from BP w.r.t. the parameters:
$$C_{\alpha\beta}=\frac{\partial^2\log Z(\theta)}{\partial\theta_\alpha\partial\theta_\beta}\approx-\frac{\partial^2F_{\mathrm{Bethe}}(\theta)}{\partial\theta_\alpha\partial\theta_\beta}=\sum_{x_\alpha}f_\alpha(x_\alpha)\frac{\partial p_{BP}(x_\alpha|\theta)}{\partial\theta_\beta},\qquad(9)$$
evaluated at $\theta=\theta^{MP}$, where $p_{BP}$ is the marginal computed using belief propagation and $x_\alpha$ is the collection of variables in the argument of feature $f_\alpha$ (e.g., nodes or edges). This approximate $C$ is also guaranteed to be symmetric and positive semi-definite. In [15] two algorithms were discussed to compute $C$ in the linear response approximation, one based on a matrix inverse, the other a local propagation algorithm. The main idea is to perform a Taylor expansion of the beliefs and messages in the parameters $\delta\theta=\theta-\theta^{MP}$ and keep track of first-order terms in the belief propagation equations. One can show that the first-order terms carry the information needed to compute the covariance matrix. We refer to [15] for more information. In appendix A we provide explicit equations for the case of Boltzmann machines, which is what is needed to reproduce the experiments in section 4.
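Assembling eq. (8) once the approximate ingredients are available is mechanical. The following is a minimal sketch with hypothetical names: `theta_mp`, the BP-based `log_Z_bethe`, and the linear-response covariance `C` are assumed computed elsewhere, and the prior is assumed (for illustration only) to be a zero-mean Gaussian with precision `prior_prec`, so that $H=-\mathrm{prior\_prec}\cdot I$:

```python
import numpy as np

def bethe_laplace_score(feats, theta_mp, log_Z_bethe, C, prior_prec):
    """Approximate log p(D) via eq. (8).

    feats: (N, F) array of feature vectors f(x_n) for the N data cases.
    C: (F, F) feature covariance from linear response at theta_mp.
    """
    N, F = feats.shape
    log_lik = feats @ theta_mp - log_Z_bethe          # per-case theta^T f(x_n) - log Z
    # Gaussian prior: log p(theta_mp) and its Hessian H = -prior_prec * I
    log_prior = (-0.5 * prior_prec * theta_mp @ theta_mp
                 - 0.5 * F * np.log(2 * np.pi / prior_prec))
    H = -prior_prec * np.eye(F)
    sign, logdet = np.linalg.slogdet(C - H / N)
    return (log_lik.sum() + log_prior
            + 0.5 * F * np.log(2 * np.pi) - 0.5 * F * np.log(N)
            - 0.5 * logdet)
```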
[Figure 1: Comparison of various scores on synthetic data. Score per data case versus the number of edges in nested models around the true model (5 nodes, 6 edges), for (a) N = 50 and (b) N = 10000; curves: BIC-ML, MAP, BP-LR, AIS, BP-LR-ExactGrad, Laplace-Exact.]

3 Conditional Random Fields

Perhaps the most practical class of undirected graphical models are the conditional random field (CRF) models. Here we jointly model labels $t$ and input variables $x$. The most significant modification relative to MRFs is that the normalization term now depends on the input variable. The probability of the labels given the input is given as
$$p(t|x,\theta)=\frac{1}{Z(\theta,x)}\exp\big[\theta^Tf(t,x)\big].\qquad(10)$$
To approximate the log marginal evidence, we obtain an expression very similar to eqn. 8 with the following replacements:
$$C\;\rightarrow\;\frac{1}{N}\sum_{n=1}^NC_{x_n},\qquad(11)$$
$$\sum_n\theta_{MP}^Tf(x_n)-N\log Z(\theta^{MP})\;\rightarrow\;\sum_n\Big[\theta_{MP}^Tf(t_n,x_n)-\log Z(\theta^{MP},x_n)\Big],\qquad(12)$$
where
$$C_{x_n}=E[f(t,x_n)f(t,x_n)^T]_{p(t|x_n)}-E[f(t,x_n)]_{p(t|x_n)}E[f(t,x_n)]_{p(t|x_n)}^T,\qquad(13)$$
and where all averages are taken over the distributions $p(t|x_n,\theta^{MP})$ at the MAP value $\theta^{MP}$ of the conditional log-likelihood $\sum_n\log p(t_n|x_n,\theta)$.
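A sketch of how the CRF replacements (11)-(13) change the score of eq. (8), reusing the structure of the hypothetical `bethe_laplace_score` above; the per-case log partition functions and covariances are assumed precomputed (e.g., by BP on each chain), and all names are illustrative:

```python
import numpy as np

def bethe_laplace_score_crf(feats, theta_mp, log_Z_per_case, C_per_case,
                            log_prior, H):
    """CRF version of eq. (8): feats[n] = f(t_n, x_n),
    log_Z_per_case[n] = log Z(theta_mp, x_n), and C_per_case[n] is the
    covariance of f(t, x_n) under p(t | x_n, theta_mp) (eq. 13)."""
    N, F = feats.shape
    log_cond_lik = feats @ theta_mp - log_Z_per_case   # eq. (12)
    C_bar = np.mean(C_per_case, axis=0)                # eq. (11)
    sign, logdet = np.linalg.slogdet(C_bar - H / N)
    return (log_cond_lik.sum() + log_prior
            + 0.5 * F * np.log(2 * np.pi) - 0.5 * F * np.log(N)
            - 0.5 * logdet)
```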
U[?1, 1] 0.2 0.5 1.0 1.5 2.0 , , , , , ]. and varying the edge strength d in [ 0.1 We then generated N = 10000 4 4 4 4 4 4 samples from each of these (50 ? 6) models using exact sampling by exhaustive enumeration. In the first experiment we picked a random dataset/model with d = 0.5 4 (the true structure had 6 edges) and studied the variation of different scores with model complexity. We define an ordering on models based on complexity by using nested model sequences. These are such that a model appearing later in the sequence contains all edges from models appearing earlier. Figure (1) shows the results for two such random nested sequences around the true model, for the number of datacases N = 50 and N = 10000 respectively. The error-bars for AIS are over 10 parallel annealing runs which we see are very small. We repeated the plots for multiple such model sequences and the results were similar. Figure (2) shows the average absolute difference of each score with the AIS score over 50 sequences. From these one can see that BP-LR is very accurate at low N . As known in the literature, BIC-ML tends to over-penalize model complexity. At large N , the performance of all methods improve but BP-LR does slightly worse than the BIC-ML. In order to better understand the performance of various scores with N , we took the datasets at d = 0.5 4 and computed scores at various values of N . At each value, we find the absolute difference in the score assigned to the true structure with the corresponding AIS score. These are then averaged over the 50 datasets. The results are shown in figure (3). We note that all BP-LR methods are about two orders of magnitude more accurate than methods that ignore the O(1) term based on C. However, as we increase N , BP-LR based on ?MP computed using BP significantly deteriorates. This does not happen with both BP-LR methods based on ?MP computed using exact gradients (i.e. BPLR-ExactGrad and Laplace-Exact). Since the latter two methods perform identically, we conclude that it is not the approximation of C by linear response that breaks down, but rather that the bias in ?MP is the reason that the estimate of C becomes unreliable. We conjecture that this happens when the bias becomes of the order of the standard deviation of the posterior distribution. Since the bias is d=0.5/4 0 ?1 Mean Absolute score diff. with AIS Mean absolute score diff. with AIS BIC?ML MAP BP?LR BP?LR?ExactGrad Laplace?Exact 10 ?2 10 ?3 10 ?1 10 BIC?ML MAP BP?LR BP?LR?ExactGrad Laplace?Exact ?2 10 ?3 10 ?4 ?4 10 ?2000 N=10k 0 10 10 0 2000 4000 6000 N (# samples) 8000 10000 12000 Figure 3: Variation of score accuracy with N 10 0 0.5 1 1.5 d*4 (edge strength) 2 2.5 Figure 4: Variation of score accuracy with d constant but the variance in the posterior decreases as O(1/N ) this phenomenon is bound to happen for some value of N . Finally since our BP-LR method relies on the BP approximation which is known to break down at strong interactions, we investigated the performance of various scores with d. Again at each value of d we compute the average absolute difference in the scores assigned to the true structure by a method and AIS. We use N = 10000 to keep the effect of N minimal. Results are shown in figure (4). As expected all BP based methods deteriorate with increasing d. The exact methods show that one can improve performance by having a more accurate estimate of ?MP . 
4.2 Real-world Data To see the performance of the BP-LR on real world data, we implemented a linear chain CRF on the ?newsgroup FAQ dataset?2 [4]. This dataset contains 48 files where each line can be either a header, a question or an answer. The problem is binarized by only retaining the question/answer lines. For each line we use 24 binary features g a (x) = 0/1, a = 1, .., 24 as provided by [4]. These are used to define state and transition features using: fia (ti , xi ) = ti g a (xi ) and fia (ti , ti+1 , xi ) = ti ti+1 g a (xi ) where i denotes the line in a document and a indexes the 24 features. We generated a random sequence of models by incrementally adding some state features and then some transition features. We then score each model using MAP, BIC-MAP (which is same as BICML but with ?MP ), AIS and Laplace-Exact. Note that since the graph is a chain, BP-LR is equivalent to BP-LR-ExactGrad and Laplace-Exact. We use N = 2 files each truncated to 100 lines. The results are shown in figure (5). Here again, the Laplace-Exact agrees very closely with AIS compared to the other two methods. (Another less relevant observation is that the scores flatten out around the point where we stop adding the state features showing their importance compared to transition features). 5 Discussion The main conclusion from this study is that the Bethe-Laplace approximation can give an excellent approximation to the marginal likelihood for small datasets. We discovered an interesting phenomenon, namely that as N grows the error in the O(1) term based on the covariance between features increases. We found that this term can give an enormous boost in accuracy for small N (up to two orders of magnitude), but its effect can be detrimental for large N . We conjecture that this switch-over point takes place when the bias in ?MP becomes of the order of the standard deviation in the posterior (which decreases as 1/N ). At that point the second derivative of the log-likelihood in the Taylor expansion becomes unreliable. There are a number of ways to improve the accuracy of approximation. One approach is to use higher order Kikuchi approximations to replace the Bethe approximation. Linear response results are also 2 Downloaded from: http://www.cs.umass.edu/? mccallum/data/faqdata/ CRF N=2, Sequence Length=100 ?15 ?20 ?25 score/N ?30 ?35 ?40 ?45 MAP BIC_MAP Laplace?Exact AIS ?50 ?55 ?60 0 10 20 30 # features 40 50 Figure 5: Comparision of various scores on real-world dataset available for this case [12]. A second improvement could come from improving the estimate of ?MP using alternative learning techniques such as contrastive divergence or alternative sample-based approaches. As discussed above, less bias in ?MP will make the covariance term useful for larger N . Finally, the case of hidden variables needs to be addressed. It is not hard to imagine how to extend the techniques proposed in this paper to hidden variables in theory, but we haven?t run the experiments necessary to make claims about its performance. This, we leave for future study. A Computation of C for Boltzman Machines For binary variables and pairwise interactions we define the variables as ? = {?i , wij } where ?i is a parameter multiplying the node-feature xi and wij the parameter multiplying the edge feature xi xj . Moreover, we?ll define the following independent quantities qi = p(xi = 1) and ?ij = p(xi = 1, xj = 1). Note that all other quantities, e.g. p(xi = 1, xj = 0) are functions of {qi , ?ij }. 
In the following we will assume that {qi , ?ij } are computed using belief propagation (BP). At the fixed points of BP the following relations hold [14], ? wij = log ?ij (?ij + 1 ? qi ? qj ) (qi ? ?ij )(qj ? ?ij ) ? ? ?i = log (1 ? qi )zi ?1 zi ?1 Q qi Q j?N (i) (qi j?N (i) (?ij ? ?ij ) + 1 ? qi ? qj ) ! (14) where N (i) are neighboring nodes of node i in the graph and zi = |N (i)| is the number of neighbors of node i. ?1 To compute the = # covariance matrix we first compute its inverse from eqns.14 as follows, C " ?? ?q ?w ?q ?? ?? ?w ?? and subsequently take its inverse. The four terms in this matrix are given by, ? ? ? X ? 1 1 1 ? z ??i i ? ?ik + + =? ?qk qi (1 ? qi ) qi ? ?ij ?ij + 1 ? qi ? qj j?N (i) ? ? ? ? 1 1 ??i ?1 ?1 + + = ?ij + ?ik ??jk qi ? ?ik ?ik + 1 ? qi ? qk qi ? ?ij ?ij + 1 ? qi ? qj ? ? ? ? 1 1 ?Wij 1 1 ? ? = ?ik + ?jk ?qk qi ? ?ij ?ij + 1 ? qi ? qj qj ? ?ij ?ij + 1 ? qi ? qj ? ? 1 1 1 1 ?Wij + + ?ik ?jl = + ??kl ?ij ?ij + 1 ? qi ? qj qi ? ?ij qj ? ?ij (15) (16) (17) (18) Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. 0447903. References [1] M.J. Beal and Z. Ghahramani. The variational bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics, pages 453?464. Oxford University Press, 2003. [2] P. Green and S. Richardson. Hidden markov models and disease mapping. Journal of the American Statistical Association, 97(460):1055?1070, 2002. [3] D. Heckerman. A tutorial on learning with bayesian networks. pages 301?354, 1999. [4] A. McCallum and D. Freitag F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Int?l Conf. on Machine Learning, pages p.591?598, San Francisco, 2000. [5] T.S. Jaakkola M.J. Wainwright and A.S. Willsky. Tree-reweighted belief propagation algorithms and approximate ml estimation via pseudo-moment matching. In AISTATS, 2003. [6] J. M?ller, A. Pettitt, K. Berthelsen, and R. Reeves. An efficient Markov chain Monte Carlo method for distributions with intractable normalisation constants. Biometrica, 93, 2006. to appear. [7] I. Murray and Z. Ghahramani. Bayesian learning in undirected graphical models: approximate MCMC algorithms. In Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence (UAI-04), San Francisco, CA, 2004. [8] I. Murray, Z. Ghahramani, and D.J.C. MacKay. Mcmc for doubly-intractable distributions. In Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), Pittsburgh, PA, 2006. [9] R.M. Neal. Annealed importance sampling. In Statistics and Computing, pages 125?139, 2001. [10] S. Parise and M. Welling. Learning in markov random fields: An empirical study. In Proc. of the Joint Statistical Meeting ? JSM2005, 2005. [11] Y. Qi, M. Szummer, and T.P. Minka. Bayesian conditional random fields. In Artificial Intelligence and Statistics, 2005. [12] K. Tanaka. Probabilistic inference by means of cluster variation method and linear response theory. IEICE Transactions in Information and Systems, E86-D(7):1228?1242, 2003. [13] M. Welling and S. Parise. Bayesian random fields: The Bethe-Laplace approximation. In UAI, 2006. [14] M. Welling and Y.W. Teh. Approximate inference in boltzmann machines. Artificial Intelligence, 143:19?50, 2003. [15] M. Welling and Y.W. Teh. Linear response algorithms for approximate inference in graphical models. Neural Computation, 16 (1):197?221, 2004. [16] J.S. Yedidia, W. Freeman, and Y. Weiss. 
Constructing free energy approximations and generalized belief propagation algorithms. Technical report, MERL, 2002. Technical Report TR-2002-35.
3149 |@word seems:1 covariance:8 contrastive:3 tr:1 carry:1 moment:2 contains:3 score:28 uma:1 document:1 cxn:1 comparing:1 happen:3 partition:6 plot:1 alone:2 intelligence:4 mccallum:2 lr:24 node:10 firstly:1 direct:1 become:2 ik:6 freitag:1 doubly:1 introduce:1 deteriorate:1 pairwise:2 expected:2 multi:1 freeman:1 enumeration:1 considering:1 increasing:1 becomes:6 provided:1 estimating:1 moreover:2 what:1 pseudo:2 binarized:1 ti:6 exactly:2 grant:1 appear:1 positive:2 local:1 treat:1 tends:1 oxford:1 fluctuation:2 studied:1 limited:1 averaged:1 practical:3 acknowledgment:1 definite:1 procedure:3 empirical:2 significantly:1 matching:2 flatten:1 integrating:1 close:1 selection:2 influence:1 www:1 equivalent:1 optimize:1 map:14 logpartition:1 annealed:2 straightforward:1 variation:4 laplace:21 imagine:1 exact:16 us:1 pa:1 approximated:5 jk:2 observed:2 bren:2 pmm:3 ordering:1 decrease:3 disease:1 fbethe:1 complexity:3 parise:3 upon:1 joint:1 various:5 fast:1 monte:1 artificial:4 hyper:1 header:1 exhaustive:1 quite:1 larger:1 statistic:4 richardson:1 jointly:1 itself:1 beal:1 sequence:7 took:1 propose:2 interaction:4 instable:1 inserting:2 uci:2 relevant:2 neighboring:1 cluster:1 perfect:3 leave:1 kikuchi:1 illustrate:1 ij:22 school:2 progress:1 strong:1 implemented:1 c:1 involves:1 treewidth:1 come:1 met:1 closely:1 subsequently:1 material:1 generalization:1 secondly:1 hold:1 around:4 ic:2 ground:1 exp:2 mapping:1 claim:1 omitted:1 estimation:2 proc:1 label:2 agrees:1 successfully:1 gaussian:1 always:1 aim:1 rather:2 varying:1 jaakkola:1 focus:1 improvement:1 potts:1 likelihood:18 tional:1 inference:3 mrfs:3 hidden:4 relation:1 expand:1 reproduce:1 wij:5 issue:1 retaining:1 special:1 initialize:1 uc:2 marginal:13 field:7 construct:1 equal:1 having:1 extraction:1 sampling:4 mackay:1 represents:1 discrepancy:1 future:2 report:2 haven:1 employ:1 divergence:3 national:1 replacement:1 ab:2 acceptance:2 interest:1 message:1 normalisation:1 evaluation:1 chain:4 accurate:4 integral:1 edge:14 partial:1 necessary:1 tree:2 incomplete:1 taylor:2 minimal:1 merl:1 formalism:1 modeling:1 earlier:1 tp:2 deviation:2 successful:1 too:1 reported:1 answer:2 synthetic:5 retain:2 probabilistic:1 again:2 worse:1 conf:1 american:1 inefficient:2 derivative:5 int:1 mp:37 depends:3 later:1 break:2 picked:1 parallel:1 defer:1 accuracy:7 variance:4 qk:3 bayesian:14 carlo:1 multiplying:2 suffers:1 against:1 energy:5 minka:1 obvious:1 boil:1 irvine:4 stop:1 dataset:7 segmentation:1 higher:1 follow:1 response:10 wei:1 evaluated:2 generality:1 eqn:3 replacing:1 reversible:1 propagation:11 incrementally:1 perhaps:1 believe:1 ieice:1 grows:2 effect:3 true:8 hence:1 assigned:2 symmetric:1 neal:1 reweighted:1 ll:1 width:1 eqns:2 generalized:1 crf:3 tn:2 berthelsen:1 variational:1 recently:1 pbp:2 empirically:2 jl:1 discussed:2 extend:1 association:1 marginals:1 refer:1 significant:1 ai:15 reef:1 similarly:1 inclusion:1 had:1 gt:1 posterior:13 reverse:1 indispensable:1 binary:3 meeting:1 scoring:4 paradigm:2 ller:1 maximize:1 biometrica:1 semi:1 full:1 multiple:1 technical:2 calculation:2 offer:1 qi:23 mrf:4 expectation:1 normalization:4 represent:3 faq:1 penalize:1 condi:1 annealing:1 addressed:1 biased:1 ascent:1 file:2 elegant:1 undirected:5 e86:1 presence:2 identically:1 switch:1 xj:3 bic:12 zi:3 idea:1 det:1 qj:10 expression:5 hessian:3 useful:2 involve:1 http:1 exist:1 tutorial:1 deteriorates:3 estimated:1 track:1 four:1 nevertheless:1 enormous:1 changing:1 graph:3 run:2 inverse:3 uncertainty:2 arrive:1 place:1 draw:3 
appendix:2 scaling:1 bound:2 layer:1 guaranteed:1 annual:2 comparision:2 strength:2 bp:35 argument:1 conjecture:3 heckerman:1 slightly:1 em:1 making:1 modification:1 happens:1 explained:1 heart:1 taken:2 equation:3 turn:1 needed:2 fia:2 available:2 yedidia:1 probe:1 observe:1 appearing:2 alternative:2 altogether:1 denotes:2 remaining:1 running:2 graphical:7 ghahramani:3 murray:2 approximating:2 move:1 question:2 quantity:8 gradient:6 detrimental:1 separate:1 reason:3 willsky:1 length:1 index:1 ratio:1 unfortunately:1 implementation:1 boltzmann:1 perform:3 teh:2 observation:2 markov:7 datasets:4 descent:1 truncated:1 situation:2 langevin:1 discovered:1 arbitrary:1 namely:2 required:1 kl:1 plague:1 boost:1 tanaka:1 address:1 bar:2 challenge:2 max:1 including:1 green:1 belief:12 wainwright:1 natural:1 circumvent:1 normality:1 pettitt:1 improve:4 pare:1 prior:4 literature:2 determining:1 relative:1 loss:1 fully:1 interesting:1 foundation:1 integrate:2 incurred:1 downloaded:1 sufficient:1 summary:1 penalized:2 supported:1 last:1 free:5 bias:8 understand:1 neighbor:1 focussed:2 absolute:5 dimension:1 xn:13 evaluating:1 world:4 transition:3 collection:1 jump:1 commonly:1 san:2 employing:1 boltzman:3 welling:6 transaction:1 approximate:14 ignore:2 keep:2 unreliable:2 ml:12 overfitting:1 uai:3 pittsburgh:1 conclude:1 francisco:2 xi:9 iterative:1 bethe:10 ca:3 expanding:1 improving:1 expansion:3 investigated:1 excellent:1 constructing:1 substituted:1 aistats:1 main:4 repeated:1 representative:1 pereira:1 explicit:1 untypical:1 theorem:1 down:3 showing:1 explored:1 evidence:2 intractable:8 adding:2 effectively:1 importance:5 magnitude:4 entropy:1 cx:1 prevents:1 nested:6 truth:1 relies:1 conditional:3 replace:1 hard:3 except:1 diff:4 sampler:2 averaging:1 newsgroup:1 latter:1 szummer:1 evaluate:2 mcmc:5 phenomenon:4
2,369
315
A B-P ANN Commodity Trader

Joseph E. Collard
Martingale Research Corporation
100 Allentown Pkwy., Suite 211
Allen, Texas 75002

Abstract

An Artificial Neural Network (ANN) is trained to recognize a buy/sell (long/short) pattern for a particular commodity future contract. The Back-Propagation of Errors algorithm was used to encode the relationship between the long/short desired output and 18 fundamental variables plus 6 (or 18) technical variables into the ANN. Trained on one year of past data, the ANN is able to predict long/short market positions for 9 months in the future that would have made a $10,301 profit on an investment of less than $1,000.

1 INTRODUCTION

An Artificial Neural Network (ANN) is trained to recognize a long/short pattern for a particular commodity future contract. The Back-Propagation of Errors algorithm was used to encode the relationship between the long/short desired output and 18 fundamental variables plus 6 (or 18) technical variables into the ANN.

2 NETWORK ARCHITECTURE

The ANNs used were simple, feed-forward, single-hidden-layer networks with no input units, N hidden units and one output unit (see Figure 1). N varied from six (6) through sixteen (16) hidden units.

[Figure 1: The Network Architecture. An input layer (T1, T2, T3, ..., T6 or T18) feeds a hidden layer, which feeds a single output unit coding the desired buy/sell (long/short) position.]

3 TRAINING PROCEDURE

Back-Propagation of Errors Algorithm: The ANN was trained using the well-known training algorithm called Back-Propagation of Errors, which will not be elaborated on here.

A Few Mods to the Algorithm: We are using the algorithm above with three changes. The changes, when implemented and tested on the standard exclusive-or problem, resulted in a trained, one-hidden-unit network after 60-70 passes through the 4 pattern vectors. This compares to the 245 passes cited by Rumelhart [2]. Even with a 32-hidden-unit network, Yves found the average number of passes to be 120 [2].

The modifications to standard back-propagation are:
1. A minimum slope term in the derivative of the activation function [John Denker at Bell Labs].
2. Using the optional momentum term [2].
3. Weight change frequency [1].
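The sketch below illustrates the first two modifications on a single-hidden-layer network of the kind described above. It is a minimal reconstruction in NumPy, not the author's code; the learning rate, momentum value, slope floor, and once-per-pass weight-update frequency are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(a, min_slope=0.1):
    # Modification 1 (assumed form): floor the derivative a*(1-a) so
    # that learning does not stall when a unit saturates.
    return np.maximum(a * (1.0 - a), min_slope)

def train_bp(X, t, n_hidden=16, lr=0.25, momentum=0.9, epochs=500):
    """X: (m, n_inputs) pattern vectors; t: (m, 1) desired long/short in {0, 1}."""
    rng = np.random.default_rng(0)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1))
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        h = sigmoid(X @ W1)                # hidden activations
        o = sigmoid(h @ W2)                # long/short output
        delta_o = (t - o) * d_sigmoid(o)   # back-propagated errors
        delta_h = (delta_o @ W2.T) * d_sigmoid(h)
        # Modification 2: momentum term; weights changed once per pass,
        # one possible choice of "weight change frequency" (Modification 3).
        dW2 = lr * (h.T @ delta_o) / len(X) + momentum * dW2
        dW1 = lr * (X.T @ delta_h) / len(X) + momentum * dW1
        W2 += dW2
        W1 += dW1
    return W1, W2
```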
4 DATA

In all cases, the six market technical variables (Open, High, Low, Close, Open Interest and Volume) were that trading day's data for the "front month" commodity contract (roughly speaking, the most active month's commodity contract).

The first set of training data consisted of 105 or 143 "high confidence" trading days in 1988. Each trading day had associated with it a twenty-five-component pattern vector (25-vector) consisting of eighteen fundamental variables, such as weather indicators and seasonal indicators, plus the six market technical variables for the trading day, and finally the EXPERT's hindsight long/short position. The test data for these networks was all 253 25-vectors in 1988.

The next training data set consisted of all 253 trading days in 1988. Again each trading day had associated with it a 25-vector consisting of the same eighteen fundamental variables plus the six market technical variables and finally the EXPERT's long/short position. The test set for these networks consisted of 25-vectors from the first 205 trading days in 1989.

Finally, the last set of training data consisted of the last 251 trading days in 1988. For this set each trading day had associated with it a 37-component pattern vector (37-vector) consisting of the same eighteen fundamental variables plus six market technical variables for that trading day, six market technical variables for the previous trading day, six market technical variables for the day two days previous, and finally the EXPERT's long/short position. The test set for these networks consisted of 37-vectors from the first 205 trading days in 1989.

5 RESULTS

The results for 7 trained networks are summarized in Table 1.

Table 1. Study Results

#                   Size/In    Train.     % @ E-Xpt     Profit/RTs    Test set    Profit/RTs
005                 6-1/24     105-'88    100 @ .125                  253-'88     76%
006                 6-1/24     143-'88    99 @ .125-1                 253-'88     82%
Targets -> 253-'88                                      $25,296/10    205-'89     $14,596/6
009                 10-1/24    253-'88    98 @ .25-4    $24,173/14    205-'89     $7,272/6
010                 6-1/24     105-'88    100 @ .1      $17,534/13    253-'88     80%
Targets -> 251-'88                                      $24,819/10    205-'89     $14,596/6
011                 10-1/36    251-'88    98 @ .25-4    $23,370/14    205-'89     $7,272/6
012                 13-1/36    251-'88    97 @ .25-7    $22,965/12    205-'89     $6,554/14
013                 16-1/36    251-'88    99 @ .25-3    $22,495/12    205-'89     $10,301/19

The column headings for Table 1 have the following meanings:

#: The numerical designation of the network.
Size/In: The hidden-output layer dimensions and the number of inputs to the network.
Train.: The number of days and year of the training set.
% @ E-Xpt: The percent of the training data encoded in the network at less than E error - the number of days not encoded.
Profit/RTs: The profit computed for the training or test set and how many round turns (RTs) it required for that profit. Or, if the profit calculation was not yet available, the percent the network is in agreement with the EXPERT.
Test set: The number of trading days/year of the test set.

Figure 2 shows how well the 013 network agrees with its training set's long/short positions. The NET 19 INPUT curve is the commodities price curve for 1988's 251 trading days.

[Figure 2: Trained network 013. Plot of the network output, the teacher (desired long/short output), and the NET 19 INPUT price curve over rows 1 to 252.]

Figure 3 shows the corresponding profit plot on the training data for both the EXPERT and network 013.

[Figure 3: Network 013's and the EXPERT's profit for the 1988 training data.]

Figure 4 is the profit plot for the network when tested on the first 205 trading days in 1989. Two significant features should be noted in Figure 4: the network's profit step function is almost always increasing, and the profit is never negative.

[Figure 4: Network 013's and the EXPERT's profit for the 1989 test data.]

6 REFERENCES

1. Pao, Y.-H., Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, 1989.
2. Rumelhart, D., and McClelland, J., Parallel Distributed Processing, MIT Press, 1986.
Map-Reduce for Machine Learning on Multicore

Cheng-Tao Chu (chengtao@stanford.edu), Sang Kyun Kim (skkim38@stanford.edu), Yi-An Lin (ianl@stanford.edu), YuanYuan Yu (yuanyuan@stanford.edu), Gary Bradski (garybradski@gmail), Andrew Y. Ng (ang@cs.stanford.edu), Kunle Olukotun (kunle@cs.stanford.edu)
CS Department, Stanford University, 353 Serra Mall, Stanford, CA 94305-9025; Gary Bradski is also with Rexee Inc.

Abstract

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed-up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speed-up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.

1 Introduction

Frequency scaling on silicon (the ability to drive chips at ever higher clock rates) is beginning to hit a power limit as device geometries shrink, due to leakage and because CMOS consumes power every time it changes state [9, 10]. Yet Moore's law [20], the density of circuits doubling every generation, is projected to last between 10 and 20 more years for silicon-based circuits [10]. By keeping clock frequency fixed, but doubling the number of processing cores on a chip, one can maintain lower power while doubling the speed of many applications. This has forced an industry-wide shift to multicore.

We thus approach an era of increasing numbers of cores per chip, but there is as yet no good framework for machine learning to take advantage of massive numbers of cores. There are many parallel programming languages such as Orca, Occam, ABCL, SNOW, MPI and PARLOG, but none of these approaches makes it obvious how to parallelize a particular algorithm. There is a vast literature on distributed learning and data mining [18], but very little of this literature focuses on our goal: a general means of programming machine learning on multicore. Much of this literature contains a long and distinguished tradition of developing (often ingenious) ways to speed up or parallelize individual learning algorithms, for instance cascaded SVMs [11]. But these yield no general parallelization technique for machine learning and, more pragmatically, specialized implementations of popular algorithms rarely lead to widespread use. Some examples of more general papers are: Caragea et al. [5] give some general data distribution conditions for parallelizing machine learning, but restrict the focus to decision trees; Jin and Agrawal [14] give a general machine learning programming approach, but only for shared-memory machines.
This doesn't fit the architecture of cellular or grid-type multiprocessors, where cores have local cache, even if it can be dynamically reallocated. In this paper, we focus on developing a general and exact technique for parallel programming of a large class of machine learning algorithms for multicore processors. The central idea of this approach is to allow a future programmer or user to speed up machine learning applications by "throwing more cores" at the problem rather than searching for specialized optimizations.

This paper's contributions are: (i) We show that any algorithm fitting the Statistical Query Model may be written in a certain "summation form." This form does not change the underlying algorithm and so is not an approximation, but is instead an exact implementation. (ii) The summation form does not depend on, but can be easily expressed in, a map-reduce [7] framework which is easy to program in. (iii) This technique achieves basically linear speed-up with the number of cores.

We attempt to develop a pragmatic and general framework. What we do not claim: (i) We make no claim that our technique will necessarily run faster than a specialized, one-off solution. Here we achieve linear speedup, which in fact often does beat specific solutions such as cascaded SVM [11] (see Section 5; however, they do handle kernels, which we have not addressed). (ii) We make no claim that following our framework (for a specific algorithm) always leads to a novel parallelization undiscovered by others. What is novel is the larger, broadly applicable framework, together with a pragmatic programming paradigm, map-reduce. (iii) We focus here on exact implementations of machine learning algorithms, not on parallel approximations to algorithms (a worthy topic, but one which is beyond this paper's scope).

In Section 2 we discuss the Statistical Query Model, our summation-form framework and an example of its application. In Section 3 we describe how our framework may be implemented in a Google-like map-reduce paradigm. In Section 4 we choose 10 frequently used machine learning algorithms as examples of what can be coded in this framework. This is followed by experimental runs on 10 moderately large data sets in Section 5, where we show a good match to our theoretical computational complexity results. Basically, we often achieve linear speedup in the number of cores. Section 6 concludes the paper.

2 Statistical Query and Summation Form

For multicore systems, Sutter and Larus [25] point out that multicore mostly benefits concurrent applications, meaning ones where there is little communication between cores. The best match is thus if the data is subdivided and stays local to the cores. To achieve this, we look to Kearns' Statistical Query Model [15]. The Statistical Query Model is sometimes posed as a restriction on the Valiant PAC model [26], in which we permit the learning algorithm to access the learning problem only through a statistical query oracle. Given a function $f(x, y)$ over instances, the statistical query oracle returns an estimate of the expectation of $f(x, y)$ (averaged over the training/test distribution). Algorithms that calculate sufficient statistics or gradients fit this model, and since these calculations may be batched, they are expressible as a sum over data points. This class of algorithms is large; we show 10 popular algorithms in Section 4 below. An example that does not fit is that of learning an XOR over a subset of bits [16, 15].
However, when an algorithm does sum over the data, we can easily distribute the calculations over multiple cores: we just divide the data set into as many pieces as there are cores, give each core its share of the data to sum the equations over, and aggregate the results at the end. We call this form of the algorithm the "summation form."

As an example, consider ordinary least squares (linear regression), which fits a model of the form $y = \theta^T x$ by solving

$$\theta^* = \arg\min_\theta \sum_{i=1}^m (\theta^T x_i - y_i)^2.$$

The parameter $\theta$ is typically solved for by defining the design matrix $X \in \mathbb{R}^{m \times n}$ to be a matrix whose rows contain the training instances $x_1, \ldots, x_m$, letting $\vec{y} = [y_1, \ldots, y_m]^T$ be the vector of target labels, and solving the normal equations to obtain $\theta^* = (X^T X)^{-1} X^T \vec{y}$.

To put this computation into summation form, we reformulate it into a two-phase algorithm where we first compute sufficient statistics by summing over the data, and then aggregate those statistics and solve to get $\theta^* = A^{-1} b$. Concretely, we compute $A = X^T X$ and $b = X^T \vec{y}$ as follows: $A = \sum_{i=1}^m (x_i x_i^T)$ and $b = \sum_{i=1}^m (x_i y_i)$. The computation of $A$ and $b$ can now be divided into equal-size pieces and distributed among the cores. We next discuss an architecture that lends itself to the summation form: map-reduce.

3 Architecture

Many programming frameworks are possible for the summation form, but inspired by Google's success in adapting a functional programming construct, map-reduce [7], for widespread parallel programming use inside their company, we adapted this same construct for multicore use. Google's map-reduce is specialized for use over clusters that have unreliable communication and where individual computers may go down. These are issues that multicores do not have; thus, we were able to develop a much lighter-weight architecture for multicores, shown in Figure 1.

[Figure 1: Multicore map-reduce framework.]

Figure 1 shows a high-level view of our architecture and how it processes the data. In step 0, the map-reduce engine is responsible for splitting the data by training examples (rows). The engine then caches the split data for the subsequent map-reduce invocations. Every algorithm has its own engine instance, and every map-reduce task will be delegated to its engine (step 1). Similar to the original map-reduce architecture, the engine will run a master (step 1.1) which coordinates the mappers and the reducers. The master is responsible for assigning the split data to different mappers, and then collects the processed intermediate data from the mappers (steps 1.1.1 and 1.1.2). After the intermediate data is collected, the master will in turn invoke the reducer to process it (step 1.1.3) and return final results (step 1.1.4). Note that some mapper and reducer operations require additional scalar information from the algorithms. In order to support these operations, the mapper/reducer can obtain this information through the query_info interface, which can be customized for each different algorithm (steps 1.1.1.1 and 1.1.3.2).
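As a concrete illustration of the summation form on such an engine, the sketch below computes the sufficient statistics A and b for ordinary least squares with one mapper per core and a trivial reducer. It is a minimal stand-in written with Python's multiprocessing module, not the authors' engine; the function names and core count are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def map_stats(chunk):
    """Mapper: partial sufficient statistics for one slice of the data."""
    X, y = chunk
    return X.T @ X, X.T @ y          # local A and b

def ols_map_reduce(X, y, n_cores=4):
    # Step 0 / 1.1.1: split the rows of the design matrix across cores.
    chunks = list(zip(np.array_split(X, n_cores), np.array_split(y, n_cores)))
    with Pool(n_cores) as pool:
        partials = pool.map(map_stats, chunks)   # map phase
    A = sum(p[0] for p in partials)              # reduce phase:
    b = sum(p[1] for p in partials)              # aggregate the partial sums
    return np.linalg.solve(A, b)                 # theta* = A^-1 b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 10))
    y = X @ rng.normal(size=10) + 0.01 * rng.normal(size=100_000)
    print(ols_map_reduce(X, y))
```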
4 Adopted Algorithms

In this section, we will briefly discuss the algorithms we have implemented based on our framework. These algorithms were chosen partly by their popularity of use in NIPS papers, and our goal will be to illustrate how each algorithm can be expressed in summation form. We will defer the discussion of the theoretical improvement that can be achieved by this parallelization to Section 4.1. In the following, $x$ or $x_i$ denotes a training vector and $y$ or $y_i$ denotes a training label.

- Locally Weighted Linear Regression (LWLR). LWLR [28, 3] is solved by finding the solution of the normal equations $A\theta = b$, where $A = \sum_{i=1}^m w_i (x_i x_i^T)$ and $b = \sum_{i=1}^m w_i (x_i y_i)$. For the summation form, we divide the computation among different mappers. In this case, one set of mappers is used to compute $\sum_{\text{subgroup}} w_i (x_i x_i^T)$ and another set to compute $\sum_{\text{subgroup}} w_i (x_i y_i)$. Two reducers respectively sum up the partial values for $A$ and $b$, and the algorithm finally computes the solution $\theta = A^{-1} b$. Note that if $w_i = 1$, the algorithm reduces to the case of ordinary least squares (linear regression).

- Naive Bayes (NB). In NB [17, 21], we have to estimate $P(x_j = k \mid y = 1)$, $P(x_j = k \mid y = 0)$, and $P(y)$ from the training data. In order to do so, we need to sum over $x_j = k$ for each $y$ label in the training data to calculate $P(x \mid y)$. We specify different sets of mappers to calculate the following: $\sum_{\text{subgroup}} 1\{x_j = k, y = 1\}$, $\sum_{\text{subgroup}} 1\{x_j = k, y = 0\}$, $\sum_{\text{subgroup}} 1\{y = 1\}$ and $\sum_{\text{subgroup}} 1\{y = 0\}$. The reducer then sums up the intermediate results to get the final result for the parameters.

- Gaussian Discriminant Analysis (GDA). The classic GDA algorithm [13] needs to learn the following four statistics: $P(y)$, $\mu_0$, $\mu_1$ and $\Sigma$. For all the summation forms involved in these computations, we may leverage the map-reduce framework to parallelize the process. Each mapper will handle the summation (i.e., $\sum 1\{y_i = 1\}$, $\sum 1\{y_i = 0\}$, $\sum 1\{y_i = 0\} x_i$, etc.) for a subgroup of the training samples. Finally, the reducer will aggregate the intermediate sums and calculate the final result for the parameters.

- k-means. In k-means [12], it is clear that the operation of computing the Euclidean distance between the sample vectors and the centroids can be parallelized by splitting the data into individual subgroups and clustering the samples in each subgroup separately (by the mapper). In recalculating new centroid vectors, we divide the sample vectors into subgroups, compute the sum of the vectors in each subgroup in parallel, and finally the reducer will add up the partial sums and compute the new centroids (a minimal sketch of one such iteration appears at the end of this section).

- Logistic Regression (LR). For logistic regression [23], we choose the form of hypothesis $h_\theta(x) = g(\theta^T x) = 1/(1 + \exp(-\theta^T x))$. Learning is done by fitting $\theta$ to the training data, where the likelihood function can be optimized by using Newton-Raphson to update $\theta := \theta - H^{-1} \nabla_\theta \ell(\theta)$. The gradient $\nabla_\theta \ell(\theta)$ can be computed in parallel by mappers summing up $\sum_{\text{subgroup}} (y^{(i)} - h_\theta(x^{(i)})) x_j^{(i)}$ at each NR step. The computation of the Hessian matrix can also be written in summation form as $H(j,k) := H(j,k) + h_\theta(x^{(i)})(h_\theta(x^{(i)}) - 1)\, x_j^{(i)} x_k^{(i)}$ for the mappers. The reducer will then sum up the values for the gradient and the Hessian to perform the update for $\theta$.
- Neural Network (NN). We focus on backpropagation [6]. By defining a network structure (we use a three-layer network with two output neurons classifying the data into two categories), each mapper propagates its set of data through the network. For each training example, the error is back-propagated to calculate the partial gradient for each of the weights in the network. The reducer then sums the partial gradients from each mapper and does a batch gradient descent to update the weights of the network.

- Principal Components Analysis (PCA). PCA [29] computes the principal eigenvectors of the covariance matrix $\Sigma = \frac{1}{m} \sum_{i=1}^m x_i x_i^T - \mu \mu^T$ over the data. In this definition of the covariance matrix, the term $\sum_{i=1}^m x_i x_i^T$ is already expressed in summation form. Further, we can also express the mean vector $\mu$ as a sum, $\mu = \frac{1}{m} \sum_{i=1}^m x_i$. The sums can be mapped to separate cores, and then the reducer will sum up the partial results to produce the final empirical covariance matrix.

- Independent Component Analysis (ICA). ICA [1] tries to identify the independent source vectors based on the assumption that the observed data are linearly transformed from the source data. In ICA, the main goal is to compute the unmixing matrix $W$. We implement batch gradient ascent to optimize $W$'s likelihood. In this scheme, we can independently calculate the expression $[\,1 - 2g(w_1^T x^{(i)}),\, \ldots\,]^T x^{(i)T}$ in the mappers and sum it up in the reducer.

- Expectation Maximization (EM). For EM [8] we use a mixture of Gaussians as the underlying model, as per [19]. For parallelization: in the E-step, every mapper processes its subset of the training data and computes the corresponding $w_j^{(i)}$ (expected pseudo count). In the M-step, three sets of parameters need to be updated: $p(y)$, $\mu$, and $\Sigma$. For $p(y)$, every mapper will compute $\sum_{\text{subgroup}} w_j^{(i)}$, and the reducer will sum up the partial results and divide by $m$. For $\mu$, each mapper will compute $\sum_{\text{subgroup}} w_j^{(i)} x^{(i)}$ and $\sum_{\text{subgroup}} w_j^{(i)}$, and the reducer will sum up the partial results and divide them. For $\Sigma$, every mapper will compute $\sum_{\text{subgroup}} w_j^{(i)} (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T$ and $\sum_{\text{subgroup}} w_j^{(i)}$, and the reducer will again sum up the partial results and divide them.

- Support Vector Machine (SVM). Linear SVM's [27, 22] primary goal is to optimize the following primal problem: $\min_{w,b} \|w\|^2 + C \sum_{i : \xi_i > 0} \xi_i^p$ s.t. $y^{(i)}(w^T x^{(i)} + b) \ge 1 - \xi_i$, where $p$ is either 1 (hinge loss) or 2 (quadratic loss). [2] has shown that the primal problem for quadratic loss can be solved using the following formulas, where sv are the support vectors: gradient $\nabla = 2w + 2C \sum_{i \in \text{sv}} (w \cdot x_i - y_i) x_i$ and Hessian $H = I + C \sum_{i \in \text{sv}} x_i x_i^T$. We perform batch gradient descent to optimize the objective function. The mappers will calculate the partial gradient $\sum_{\text{subgroup}(i \in \text{sv})} (w \cdot x_i - y_i) x_i$ and the reducer will sum up the partial results to update the $w$ vector.

Some implementations of machine learning algorithms, such as ICA, are commonly done with stochastic gradient ascent, which poses a challenge to parallelization. The problem is that in every step of gradient ascent, the algorithm updates a common set of parameters (e.g., the unmixing matrix $W$ in ICA). When one gradient ascent step (involving one training sample) is updating $W$, it has to lock down this matrix, read it, compute the gradient, update $W$, and finally release the lock. This "lock-release" block creates a bottleneck for parallelization; thus, instead of stochastic gradient ascent, our algorithms above were implemented using batch gradient ascent.
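The following sketch makes the k-means case concrete: one map-reduce invocation performs a single centroid-update iteration, with each mapper returning per-cluster partial sums and counts for its slice of the data. It is an illustrative reconstruction in Python's multiprocessing, not the authors' implementation.

```python
import numpy as np
from multiprocessing import Pool

def kmeans_map(args):
    """Mapper: assign one slice of samples to the nearest centroid and
    return per-cluster partial sums and counts."""
    X, centroids = args
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for j in range(len(centroids)):
        members = X[labels == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

def kmeans_iteration(X, centroids, n_cores=4):
    chunks = [(c, centroids) for c in np.array_split(X, n_cores)]
    with Pool(n_cores) as pool:
        partials = pool.map(kmeans_map, chunks)   # map phase
    sums = sum(p[0] for p in partials)            # reduce phase
    counts = sum(p[1] for p in partials)
    # Empty clusters collapse to the origin in this toy version.
    return sums / np.maximum(counts, 1)[:, None]  # new centroids
```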
4.1 Algorithm Time Complexity Analysis

Table 1 shows the theoretical complexity analysis for the ten algorithms we implemented on top of our framework. We assume that the dimension of the inputs is $n$ (i.e., $x \in \mathbb{R}^n$), that we have $m$ training examples, and that there are $P$ cores. The complexity of iterative algorithms is analyzed for one iteration, and so their actual running time may be slower.[1] A few algorithms require matrix inversion or an eigen-decomposition of an n-by-n matrix; we did not parallelize these steps in our experiments, because for us $m \gg n$, and so their cost is small. However, there is extensive research in numerical linear algebra on parallelizing these numerical operations [4], and in the complexity analysis shown in the table, we have assumed that matrix inversion and eigen-decompositions can be sped up by a factor of $P'$ on $P$ cores. (In practice, we expect $P' \le P$.) In our own software implementation, we had $P' = 1$. Further, the reduce phase can minimize communication by combining data as it is passed back; this accounts for the $\log(P)$ factor. As an example of our running-time analysis, for single-core LWLR we have to compute $A = \sum_{i=1}^m w_i (x_i x_i^T)$, which gives us the $mn^2$ term. This matrix must be inverted, for $n^3$; also, the reduce step incurs a covariance-matrix communication cost of $n^2$.

[1] If, for example, the number of iterations required grows with m. However, this would affect single- and multi-core implementations equally.

Table 1: Time complexity analysis.

Algorithm   Single-core        Multicore (P cores)
LWLR        O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
LR          O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
NB          O(mn + nc)         O(mn/P + nc log P)
NN          O(mn + nc)         O(mn/P + nc log P)
GDA         O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
PCA         O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
ICA         O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
k-means     O(mnc)             O(mnc/P + mn log P)
EM          O(mn^2 + n^3)      O(mn^2/P + n^3/P' + n^2 log P)
SVM         O(m^2 n)           O(m^2 n/P + n^2 log P)

5 Experiments

To provide fair comparisons, each algorithm had two different versions: one running map-reduce, and the other a serial implementation without the framework. We conducted an extensive series of experiments to compare the speedup on data sets of various sizes (Table 2), on eight commonly used machine learning data sets from the UCI Machine Learning Repository and two other ones from a [anonymous] research group (Helicopter Control and sensor data). Note that not all the experiments make sense from an output view (e.g., regression on categorical data), but our purpose was to test speedup, so we ran every algorithm over all the data.

Table 2: Data set sizes and description.

Data set                 Samples (m)   Features (n)
Adult                    30,162        14
Helicopter Control       44,170        21
Corel Image Features     68,040        32
IPUMS Census             88,443        61
Synthetic Time Series    100,001       10
Census Income            199,523       40
ACIP Sensor              229,564       8
KDD Cup 99               494,021       41
Forest Cover Type        581,012       55
1990 US Census           2,458,285     68

The first environment we conducted experiments on was an Intel x86 PC with two Pentium-III 700 MHz CPUs and 1 GB of physical memory. The operating system was Linux RedHat 8.0, kernel 2.4.20-8smp. In addition, we also ran extensive comparison experiments on a 16-way Sun Enterprise 6000, running Solaris 10; here, we compared results using 1, 2, 4, 8, and 16 cores.

5.1 Results and Discussion

Table 3 shows the speedup on dual processors over all the algorithms on all the data sets. As can be seen from the table, most of the algorithms achieve more than a 1.9x performance improvement. For some of the experiments, e.g. gda/covertype, ica/ipums, nn/colorhistogram, etc., we obtain a greater than 2x speedup. This is because the original algorithms do not utilize all the CPU cycles efficiently, but do better when we distribute the tasks to separate threads/processes.
Figure 2 shows the speedup of the algorithms over all the data sets for 2, 4, 8 and 16 processing cores. In the figure, the thick lines show the average speedup, the error bars show the maximum and minimum speedups, and the dashed lines show the variance. Speedup is basically linear with the number of cores, but with a slope < 1.0. The reason for the sub-unity slope is increasing communication overhead. For simplicity, and because the number of data points $m$ typically dominates reduction-phase communication costs (typically a factor of $n^2$, but $n \ll m$), we did not parallelize the reduce phase, where we could have combined data on the way back. Even so, our simple SVM approach gets about 13.6% speed-up on average over 16 cores, whereas the specialized SVM cascade [11] averages only 4%.

Table 3: Speedups achieved on a dual-core processor, without load time. Numbers reported are dual-core time / single-core time. Super-linear speedup sometimes occurs due to a reduction in processor idle time with multiple threads.

Data set        lwlr    gda     nb      logistic  pca     ica     svm     nn      kmeans  em
Adult           1.922   1.801   1.844   1.962     1.809   1.857   1.643   1.825   1.947   1.854
Helicopter      1.93    2.155   1.924   1.92      1.791   1.856   1.744   1.847   1.857   1.86
Corel Image     1.96    1.876   2.002   1.929     1.97    1.936   1.754   2.018   1.921   1.832
IPUMS           1.963   2.23    1.965   1.938     1.965   2.025   1.799   1.974   1.957   1.984
Synthetic       1.909   1.964   1.972   1.92      1.842   1.907   1.76    1.902   1.888   1.804
Census Income   1.975   2.179   1.967   1.941     2.019   1.941   1.88    1.896   1.961   1.99
Sensor          1.927   1.853   2.01    1.913     1.955   1.893   1.803   1.914   1.953   1.949
KDD             1.969   2.216   1.848   1.927     2.012   1.998   1.946   1.899   1.973   1.979
Cover Type      1.961   2.232   1.951   1.935     2.007   2.029   1.906   1.887   1.963   1.991
Census          2.327   2.292   2.008   1.906     1.997   2.001   1.959   1.883   1.946   1.977
Average         1.985   2.080   1.950   1.930     1.937   1.944   1.819   1.905   1.937   1.922

[Figure 2: Panels (a)-(i) show the speedup from 1 to 16 processors of all the algorithms over all the data sets. The bold line is the average, the error bars are the max and min speedups, and the dashed lines are the variance.]

Finally, the above are runs on multiprocessor machines. We finish by reporting some confirming results and higher performance on a proprietary multicore simulator over the sensor dataset.[2] NN speedup was [16 cores, 15.5x], [32 cores, 29x], [64 cores, 54x]. LR speedup was [16 cores, 15x], [32 cores, 29.5x], [64 cores, 53x]. Multicore machines are generally faster than multiprocessor machines because communication internal to the chip is much less costly.

[2] This work was done in collaboration with Intel Corporation.

6 Conclusion

As the Intel and AMD product roadmaps indicate [24], the number of processing cores on a chip will be doubling several times over the next decade, even as individual cores cease to become significantly faster. For machine learning to continue reaping the bounty of Moore's law and apply to ever larger datasets and problems, it is important to adopt a programming architecture which takes advantage of multicore. In this paper, by taking advantage of the summation form in a map-reduce framework, we could parallelize a wide range of machine learning algorithms, achieving a 1.9x speedup on a dual processor and up to a 54x speedup on 64 cores. These results are in line with the complexity analysis in Table 1. We note that the speedups achieved here involved no special optimizations of the algorithms themselves. We have demonstrated a simple programming framework where in the future we can just "throw cores" at the problem of speeding up machine learning code.
Acknowledgments

We would like to thank Skip Macy from Intel for sharing his valuable experience in the VTune performance analyzer. Yirong Shen, Anya Petrovskaya, and Su-In Lee from Stanford University helped us in preparing various data sets used in our experiments. This research was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) under the ACIP program and grant number NBCH104009.

References

[1] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 1995.
[2] O. Chapelle. Training a support vector machine in the primal. Journal of Machine Learning Research (submitted), 2006.
[3] W. S. Cleveland and S. J. Devlin. Locally weighted regression: An approach to regression analysis by local fitting. J. Amer. Statist. Assoc., 83:596-610, 1988.
[4] L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Comput., 5(4):618-623, 1976.
[5] D. Caragea, A. Silvescu, and V. Honavar. A framework for learning from distributed data using sufficient statistics and its application to learning decision trees. International Journal of Hybrid Intelligent Systems, 2003.
[6] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
[7] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. Operating Systems Design and Implementation, pages 137-149, 2004.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39(1):1-38, 1977.
[9] D. J. Frank. Power-constrained CMOS scaling limits. IBM Journal of Research and Development, 46, 2002.
[10] P. Gelsinger. Microprocessors for the new millennium: Challenges, opportunities and new frontiers. In ISSCC Tech. Digest, pages 22-25, 2001.
[11] Hans Peter Graf, Eric Cosatto, Leon Bottou, Igor Durdanovic, and Vladimir Vapnik. Parallel support vector machines: The cascade SVM. In NIPS, 2004.
[12] J. Hartigan. Clustering Algorithms. Wiley, 1975.
[13] T. Hastie and R. Tibshirani. Discriminant analysis by Gaussian mixtures. Journal of the Royal Statistical Society B, pages 155-176, 1996.
[14] R. Jin and G. Agrawal. Shared memory parallelization of data mining algorithms: Techniques, programming interface, and performance. In Second SIAM International Conference on Data Mining, 2002.
[15] M. Kearns. Efficient noise-tolerant learning from statistical queries. Pages 392-401, 1999.
[16] Michael Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
[17] David Lewis. Naive (Bayes) at forty: The independence assumption in information retrieval. In ECML-98: Tenth European Conference on Machine Learning, 1998.
[18] Kun Liu and Hillol Kargupta. Distributed data mining bibliography. http://www.cs.umbc.edu/~hillol/DDMBIB/, 2006.
[19] T. K. Moon. The expectation-maximization algorithm. IEEE Trans. Signal Process., pages 47-59, 1996.
[20] G. Moore. Progress in digital integrated electronics. In IEDM Tech. Digest, pages 11-13, 1975.
[21] Pat Langley, Wayne Iba, and Kevin Thompson. An analysis of Bayesian classifiers. In AAAI, 1992.
[22] John C. Platt. Fast training of support vector machines using sequential minimal optimization. Pages 185-208, 1999.
[23] Daryl Pregibon. Logistic regression diagnostics. The Annals of Statistics, 9:705-724, 1981.
[24] T. Studt. There's a multicore in your future. http://tinyurl.com/ohd2m, 2006.
[25] Herb Sutter and James Larus. Software and the concurrency revolution. Queue, 3(7):54-62, 2005.
[26] L. G. Valiant. A theory of the learnable. Communications of the ACM, 3(11):1134-1142, 1984.
[27] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Verlag, 1982.
[28] R. E. Welsch and E. Kuh. Linear regression diagnostics. Working Paper 173, Nat. Bur. Econ. Res., Inc., 1977.
[29] S. Wold, K. Esbensen, and P. Geladi. Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 1987.
An Application of Reinforcement Learning to Aerobatic Helicopter Flight

Pieter Abbeel, Adam Coates, Morgan Quigley, Andrew Y. Ng
Computer Science Dept., Stanford University, Stanford, CA 94305

Abstract

Autonomous helicopter flight is widely regarded to be a highly challenging control problem. This paper presents the first successful autonomous completion on a real RC helicopter of the following four aerobatic maneuvers: forward flip and sideways roll at low speed, tail-in funnel, and nose-in funnel. Our experimental results significantly extend the state of the art in autonomous helicopter flight. We used the following approach: first we had a pilot fly the helicopter to help us find a helicopter dynamics model and a reward (cost) function. Then we used a reinforcement learning (optimal control) algorithm to find a controller that is optimized for the resulting model and reward function. More specifically, we used differential dynamic programming (DDP), an extension of the linear quadratic regulator (LQR).

1 Introduction

Autonomous helicopter flight represents a challenging control problem with high-dimensional, asymmetric, noisy, nonlinear, non-minimum phase dynamics. Helicopters are widely regarded to be significantly harder to control than fixed-wing aircraft. (See, e.g., [14, 20].) At the same time, helicopters provide unique capabilities, such as in-place hover and low-speed flight, important for many applications. The control of autonomous helicopters thus provides a challenging and important testbed for learning and control algorithms.

In the "upright flight regime" there has recently been considerable progress in autonomous helicopter flight. For example, Bagnell and Schneider [6] achieved sustained autonomous hover. Both LaCivita et al. [13] and Ng et al. [17] achieved sustained autonomous hover and accurate flight in regimes where the helicopter's orientation is fairly close to upright. Roberts et al. [18] and Saripalli et al. [19] achieved vision-based autonomous hover and landing. In contrast, autonomous flight achievements in other flight regimes have been very limited. Gavrilets et al. [9] achieved a split-S, a stall turn and a roll in forward flight. Ng et al. [16] achieved sustained autonomous inverted hover.

The results presented in this paper significantly expand the limited set of successfully completed aerobatic maneuvers. In particular, we present the first successful autonomous completion of the following four maneuvers: forward flip and axial roll at low speed, tail-in funnel, and nose-in funnel. Not only are we the first to autonomously complete such a single flip and roll, our controllers are also able to continuously repeat the flips and rolls without any pauses in between. Thus the controller has to provide continuous feedback during the maneuvers, and cannot, for example, use a period of hovering to correct errors of the first flip before performing the next flip. The number of flips and rolls and the duration of the funnel trajectories were chosen to be sufficiently large to demonstrate that the helicopter could continue the maneuvers indefinitely (assuming unlimited fuel and battery endurance).

The completed maneuvers are significantly more challenging than previously completed maneuvers. In the (forward) flip, the helicopter rotates 360 degrees forward around its lateral axis (the axis going from the right to the left of the helicopter).
To prevent altitude loss during the maneuver, the helicopter pushes itself back up by using the (inverted) main rotor thrust halfway through the flip. In the (right) axial roll the helicopter rotates 360 degrees around its longitudinal axis (the axis going from the back to the front of the helicopter). Similarly to the flip, the helicopter prevents altitude loss by pushing itself back up using the (inverted) main rotor thrust halfway through the roll. In the tail-in funnel, the helicopter repeatedly flies a circle sideways with the tail pointing to the center of the circle. For the trajectory to be a funnel maneuver, the helicopter speed and the circle radius are chosen such that the helicopter must pitch up steeply to stay in the circle. The nose-in funnel is similar to the tail-in funnel, the difference being that the nose points to the center of the circle throughout the maneuver.

The remainder of this paper is organized as follows: Section 2 explains how we learn a model from flight data. The section considers both the problem of data collection, for which we use an apprenticeship learning approach, as well as the problem of estimating the model from data. Section 3 explains our control design. We explain differential dynamic programming as applied to our helicopter. We discuss our apprenticeship learning approach to choosing the reward function, as well as other design decisions and lessons learned. Section 4 describes our helicopter platform and our experimental results. Section 5 concludes the paper. Movies of our autonomous helicopter flights are available at the following webpage: http://www.cs.stanford.edu/~pabbeel/heli-nips2006.

2 Learning a Helicopter Model from Flight Data

2.1 Data Collection

The E^3 family of algorithms [12] and its extensions [11, 7, 10] are the state-of-the-art RL algorithms for autonomous data collection. They proceed by generating "exploration" policies, which try to visit inaccurately modeled parts of the state space. Unfortunately, such exploration policies do not even try to fly the helicopter well, and thus would invariably lead to crashes. Thus, instead, we use the apprenticeship learning algorithm proposed in [3], which proceeds as follows:

1. Collect data from a human pilot flying the desired maneuvers with the helicopter. Learn a model from the data.
2. Find a controller that works in simulation based on the current model.
3. Test the controller on the helicopter. If it works, we are done. Otherwise, use the data from the test flight to learn a new (improved) model and go back to Step 2.

This procedure has similarities with model-based RL and with the common approach in control of first performing system identification and then finding a controller using the resulting model. However, the key insight from [3] is that this procedure is guaranteed to converge to expert performance in a polynomial number of iterations. In practice we have needed at most three iterations. Importantly, unlike the E^3 family of algorithms, this procedure never uses explicit exploration policies. We only have to test controllers that try to fly as well as possible (according to the current simulator).
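For concreteness, the loop above can be written in a few lines; the sketch below is only schematic, and learn_model, find_controller, and fly_on_helicopter are hypothetical callables standing in for the model estimation of Section 2.2, the controller design of Section 3, and a real test flight.

```python
def apprenticeship_learning(pilot_data, learn_model, find_controller,
                            fly_on_helicopter, max_iters=3):
    """Iterate: fit a model to all flight data so far, optimize a
    controller in simulation, and test it on the real helicopter."""
    data = list(pilot_data)                   # step 1: pilot demonstrations
    controller = None
    for _ in range(max_iters):
        model = learn_model(data)             # step 1 (cont.): fit dynamics
        controller = find_controller(model)   # step 2: optimize in simulation
        flight_log, good_enough = fly_on_helicopter(controller)  # step 3
        if good_enough:
            break
        data.extend(flight_log)               # reuse the test flight as data
    return controller
```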
2.2 Model Learning

The helicopter state $s$ comprises its position $(x, y, z)$, orientation (expressed as a unit quaternion), velocity $(\dot{x}, \dot{y}, \dot{z})$ and angular velocity $(\omega_x, \omega_y, \omega_z)$. The helicopter is controlled by a 4-dimensional action space $(u_1, u_2, u_3, u_4)$. By using the cyclic pitch ($u_1$, $u_2$) and tail rotor ($u_3$) controls, the pilot can rotate the helicopter around each of its main axes and bring the helicopter to any orientation. This allows the pilot to direct the thrust of the main rotor in any particular direction (and thus fly in any particular direction). By adjusting the collective pitch angle (control input $u_4$), the pilot can adjust the thrust generated by the main rotor. For a positive collective pitch angle the main rotor will blow air downward relative to the helicopter. For a negative collective pitch angle the main rotor will blow air upward relative to the helicopter. The latter allows for inverted flight.

Following [1] we learn a model from flight data that predicts accelerations as a function of the current state and inputs. Accelerations are then integrated to obtain the helicopter states over time. The key idea from [1] is that, after subtracting out the effects of gravity, the forces and moments acting on the helicopter are independent of position and orientation of the helicopter, when expressed in a "body coordinate frame," a coordinate frame attached to the body of the helicopter. This observation allows us to significantly reduce the dimensionality of the model learning problem. In particular, we use the following model:

$\ddot{x}^b = A_x \dot{x}^b + g_x^b + w_x,$
$\ddot{y}^b = A_y \dot{y}^b + g_y^b + D_0 + w_y,$
$\ddot{z}^b = A_z \dot{z}^b + g_z^b + C_4 u_4 + E_0 \|(\dot{x}^b, \dot{y}^b, \dot{z}^b)\|_2 + D_4 + w_z,$
$\dot{\omega}_x^b = B_x \omega_x^b + C_1 u_1 + D_1 + w_{\omega_x},$
$\dot{\omega}_y^b = B_y \omega_y^b + C_2 u_2 + C_{24} u_4 + D_2 + w_{\omega_y},$
$\dot{\omega}_z^b = B_z \omega_z^b + C_3 u_3 + C_{34} u_4 + D_3 + w_{\omega_z}.$

By our convention, the superscripts $b$ indicate that we are using a body coordinate frame with the x-axis pointing forwards, the y-axis pointing to the right and the z-axis pointing down with respect to the helicopter. We note our model explicitly encodes the dependence on the gravity vector $(g_x^b, g_y^b, g_z^b)$ and has a sparse dependence of the accelerations on the current velocities, angular rates and inputs. This sparse dependence was obtained by scoring different models by their simulation accuracy over time intervals of two seconds (similar to [4]). We estimate the coefficients $A$, $B$, $C$, $D$ and $E$ from helicopter flight data. First we obtain state and acceleration estimates using a highly optimized extended Kalman filter, then we use linear regression to estimate the coefficients. The terms $w_x, w_y, w_z, w_{\omega_x}, w_{\omega_y}, w_{\omega_z}$ are zero-mean Gaussian random variables, which represent the perturbations to the accelerations due to noise (or unmodeled effects). Their variances are estimated as the average squared prediction error on the flight data we collected.

The coefficient $D_0$ captures sideways acceleration of the helicopter due to thrust generated by the tail rotor. The term $E_0 \|(\dot{x}^b, \dot{y}^b, \dot{z}^b)\|_2$ models translational lift: the additional lift the helicopter gets when flying at higher speed. Specifically, during hover, the helicopter's rotor imparts a downward velocity on the air above and below it. This downward velocity reduces the effective pitch (angle of attack) of the rotor blades, causing less lift to be produced [14, 20]. As the helicopter transitions into faster flight, this region of altered airflow is left behind and the blades enter "clean" air. Thus, the angle of attack is higher and more lift is produced for a given choice of the collective control ($u_4$). The translational lift term was important for modeling the helicopter dynamics during the funnels. The coefficient $C_{24}$ captures the pitch acceleration due to main rotor thrust. This coefficient is nonzero since (after equipping our helicopter with our sensor packages) the center of gravity is further backward than the center of main rotor thrust.
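As an illustration of the coefficient estimation, the sketch below fits one row of the model (the roll-rate equation) by least squares. It is a simplified stand-in: in the paper the states and accelerations come from an extended Kalman filter, whereas here a finite difference of hypothetical logged signals is used, and the 0.1 s sample period is an assumption borrowed from the control time scale mentioned below.

```python
import numpy as np

def fit_roll_rate_model(omega_x, u1, dt=0.1):
    """Least-squares fit of  d(omega_x)/dt = Bx*omega_x + C1*u1 + D1 + w.
    omega_x, u1: 1-D arrays of logged roll rate and cyclic input."""
    domega = np.diff(omega_x) / dt                # finite-difference acceleration
    F = np.column_stack([omega_x[:-1], u1[:-1], np.ones(len(u1) - 1)])
    coeffs, *_ = np.linalg.lstsq(F, domega, rcond=None)
    Bx, C1, D1 = coeffs
    sigma2 = np.mean((F @ coeffs - domega) ** 2)  # noise variance estimate
    return Bx, C1, D1, sigma2
```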
There are two notable differences between our model and the most common previously proposed models (e.g., [15, 8]): (1) our model does not include the inertial coupling between different axes of rotation; (2) our model's state does not include the blade-flapping angles, which are the angles the rotor blades make with the helicopter body while sweeping through the air. Both inertial coupling and blade flapping have previously been shown to improve the accuracy of helicopter models for other RC helicopters. However, extensive attempts to incorporate them into our model have not led to improved simulation accuracy. We believe the effects of inertial coupling to be very limited, since the flight regimes considered do not include fast rotation around more than one main axis simultaneously. We believe that, at the 0.1 s time scale used for control, the blade-flapping angles' effects are sufficiently well captured by using a first-order model from cyclic inputs to roll and pitch rates. Such a first-order model maps cyclic inputs to angular accelerations (rather than the steady-state angular rate), effectively capturing the delay introduced by the blades reacting (moving) first before the helicopter body follows.

3 Controller Design

3.1 Reinforcement Learning Formalism and Differential Dynamic Programming (DDP)

A reinforcement learning problem (or optimal control problem) can be described by a Markov decision process (MDP), which comprises a sextuple $(S, A, T, H, s(0), R)$. Here $S$ is the set of states; $A$ is the set of actions or inputs; $T$ is the dynamics model, which is a set of probability distributions $\{P^t_{su}\}$ ($P^t_{su}(s' \mid s, u)$ is the probability of being in state $s'$ at time $t+1$ given that the state and action at time $t$ are $s$ and $u$); $H$ is the horizon or number of time steps of interest; $s(0) \in S$ is the initial state; $R : S \times A \to \mathbb{R}$ is the reward function.

A policy $\pi = (\pi_0, \pi_1, \ldots, \pi_H)$ is a tuple of mappings from the set of states $S$ to the set of actions $A$, one mapping for each time $t = 0, \ldots, H$. The expected sum of rewards when acting according to a policy $\pi$ is given by $E[\sum_{t=0}^{H} R(s(t), u(t)) \mid \pi]$. The optimal policy $\pi^*$ for an MDP $(S, A, T, H, s(0), R)$ is the policy that maximizes the expected sum of rewards. In particular, the optimal policy is given by $\pi^* = \arg\max_\pi E[\sum_{t=0}^{H} R(s(t), u(t)) \mid \pi]$.

The linear quadratic regulator (LQR) control problem is a special class of MDPs for which the optimal policy can be computed efficiently. In LQR the set of states is given by $S = \mathbb{R}^n$, the set of actions/inputs is given by $A = \mathbb{R}^p$, and the dynamics model is given by

$s(t+1) = A(t) s(t) + B(t) u(t) + w(t),$

where for all $t = 0, \ldots, H$ we have $A(t) \in \mathbb{R}^{n \times n}$, $B(t) \in \mathbb{R}^{n \times p}$, and $w(t)$ is a zero-mean random variable (with finite variance). The reward for being in state $s(t)$ and taking action/input $u(t)$ is given by

$-s(t)^T Q(t) s(t) - u(t)^T R(t) u(t),$

where $Q(t), R(t)$ are positive semi-definite matrices which parameterize the reward function. It is well known that the optimal policy for the LQR control problem is a linear feedback controller which can be efficiently computed using dynamic programming. Although the standard formulation presented above assumes the all-zeros state is the most desirable state, the formalism is easily extended to the task of tracking a desired trajectory $s^*_0, \ldots, s^*_H$.
The standard extension (which we use) expresses the dynamics and reward function as a function of the error state $e(t) = s(t) - s^*(t)$ rather than the actual state $s(t)$. (See, e.g., [5] for more details on linear quadratic methods.)

Differential dynamic programming (DDP) approximately solves general continuous state-space MDPs by iterating the following two steps:

1. Compute a linear approximation to the dynamics and a quadratic approximation to the reward function around the trajectory obtained when using the current policy.
2. Compute the optimal policy for the LQR problem obtained in Step 1 and set the current policy equal to the optimal policy for the LQR problem.

In our experiments we have a quadratic reward function, so the only approximation made in the first step is the linearization of the dynamics. To bootstrap the process, we linearized around the target trajectory in the first iteration.¹

3.2 DDP Design Choices

Error state. We use the following error state:

$$e = \big(\dot{x}^b - (\dot{x}^b)^*,\; \dot{y}^b - (\dot{y}^b)^*,\; \dot{z}^b - (\dot{z}^b)^*,\; x - x^*,\; y - y^*,\; z - z^*,\; \omega_x^b - (\omega_x^b)^*,\; \omega_y^b - (\omega_y^b)^*,\; \omega_z^b - (\omega_z^b)^*,\; \Delta q\big).$$

Here $\Delta q$ is the axis-angle representation of the rotation that transforms the coordinate frame of the target orientation into the coordinate frame of the actual state. This axis-angle representation results in the linearizations being more accurate approximations of the non-linear model, since the axis-angle representation maps more directly to the angular rates than naively differencing the quaternions or Euler angles.

Cost for change in inputs. Using DDP as thus far explained resulted in unstable controllers on the real helicopter: the controllers tended to rapidly switch between low and high values, which resulted in poor flight performance. Similar to frequency shaping for LQR controllers (see, e.g., [5]), we added a term to the reward function that penalizes the change in inputs over consecutive time steps.

Controller design in two phases. Adding the cost term for the change in inputs worked well for the funnels. However, flips and rolls do require some fast changes in inputs. To still allow aggressive maneuvering, we split our controller design into two phases. In the first phase, we used DDP to find the open-loop input sequence that would be optimal in the noise-free setting. (This can be seen as a planning phase and is similar to designing a feedforward controller in classical control.) In the second phase, we used DDP to design our actual flight controller, but we now redefine the inputs as the deviation from the nominal open-loop input sequence. Penalizing for changes in the new inputs penalizes only unplanned changes in the control inputs.

Integral control. Due to modeling error and wind, the controllers (so far described) have non-zero steady-state error. Each controller generated by DDP is designed using linearized dynamics. The orientation used for linearization greatly affects the resulting linear model. As a consequence, the linear model becomes a significantly worse approximation with increasing orientation error. This in turn results in the control inputs being less suited to the current state, which in turn results in larger orientation error, and so on.
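As an aside on the error state defined above, the axis-angle error $\Delta q$ can be computed from rotation matrices as in the following sketch. This is our own illustration (the paper does not spell out the computation), and the formula degenerates for relative rotations near 180 degrees.

```python
import numpy as np

def axis_angle_error(R_target, R_actual):
    """Axis-angle vector of the rotation taking the target orientation
    frame into the actual orientation frame.

    R_target, R_actual: 3x3 body-to-world rotation matrices.
    Returns a 3-vector whose direction is the rotation axis and whose
    norm is the rotation angle (zero vector when the orientations agree).
    """
    R_err = R_target.T @ R_actual                  # relative rotation
    cos_angle = (np.trace(R_err) - 1.0) / 2.0
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.zeros(3)
    # axis from the skew-symmetric part of R_err
    axis = np.array([R_err[2, 1] - R_err[1, 2],
                     R_err[0, 2] - R_err[2, 0],
                     R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
    return angle * axis
```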
To reduce the steady-state orientation errors, similar to the I term in PID control, we augment the state vector with integral terms for the orientation errors. More specifically, the state vector at time $t$ is augmented with $\sum_{\tau=0}^{t-1} 0.99^{t-\tau} \Delta q(\tau)$. Our funnel controllers performed significantly better with integral control. For the flips and rolls the integral control seemed to matter less.²

¹ For the flips and rolls this simple initialization did not work: due to the target trajectory being too far from feasible, the control policy obtained in the first iteration of DDP ended up following a trajectory for which the linearization is inaccurate. As a consequence, the first iteration's control policy (designed for the time-varying linearized models along the target trajectory) was unstable in the non-linear model and DDP failed to converge. To get DDP to converge to good policies we slowly changed the model from a model in which control is trivial to the actual model. In particular, we change the model such that the next state is $\alpha$ times the target state plus $(1 - \alpha)$ times the next state according to the true model. By slowly varying $\alpha$ from 0.999 to zero throughout the DDP iterations, the linearizations obtained throughout are good approximations and DDP converges to a good policy.

Factors affecting control performance. Our simulator included process noise (Gaussian noise on the accelerations as estimated when learning the model from data), measurement noise (Gaussian noise on the measurements as estimated from the Kalman filter residuals), as well as the Kalman filter and the low-pass filter, which is designed to remove the high-frequency noise from the IMU measurements.³ Simulator tests showed that the low-pass filter's latency and the noise in the state estimates affect the performance of our controllers most. Process noise, on the other hand, did not seem to affect performance very much.

3.3 Trade-offs in the Reward Function

Our reward function contained 24 features: the squared error state variables, the squared inputs, the squared change in inputs between consecutive time steps, and the squared integral of the error state variables. For the reinforcement learning algorithm to find a controller that flies "well," it is critical that the correct trade-off between these features is specified. To find the correct trade-off between the 24 features, we first recorded a pilot's flight. Then we used the apprenticeship learning via inverse reinforcement learning algorithm [2]. The inverse RL algorithm iteratively provides us with reward weights that result in policies that bring us closer to the expert. Unfortunately, the reward weights generated throughout the iterations of the algorithm are often unsafe to fly on the helicopter. Thus, rather than strictly following the inverse RL algorithm, we hand-chose reward weights that (iteratively) bring us closer to the expert human pilot by increasing/decreasing the weights for those features that stood out as most different from the expert (following the philosophy, but not the strict formulation, of the inverse RL algorithm). The algorithm still converged in a small number of iterations.
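The model-morphing trick of footnote 1 can be sketched in a few lines. The function names and the DDP loop interface here are hypothetical; only the blending rule itself comes from the text.

```python
import numpy as np

def make_blended_dynamics(f_true, target_traj, alpha):
    """Dynamics that interpolate between a trivially controllable model
    (next state = target state) and the true model f_true(s, u).

    With alpha near 1 the next state is dragged toward the target, so the
    linearizations along the target trajectory stay accurate; alpha is
    annealed toward 0 so the final policy is optimized against f_true.
    """
    def g(s, u, t):
        return alpha * target_traj[t + 1] + (1.0 - alpha) * f_true(s, u)
    return g

# Hypothetical usage inside the DDP loop (ddp_iteration is a placeholder):
# policy = initial_policy
# for alpha in np.linspace(0.999, 0.0, num_iterations):
#     g = make_blended_dynamics(f_true, target_traj, alpha)
#     policy = ddp_iteration(g, reward, policy)
```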
4 Experiments

Videos of all of our maneuvers are available at the URL provided in the introduction.

4.1 Experimental Platform

The helicopter used is an XCell Tempest, a competition-class aerobatic helicopter (length 54", height 19", weight 13 lbs), powered by a 0.91-size two-stroke engine. Figure 2(c) shows a close-up of the helicopter.

We instrumented the helicopter with a Microstrain 3DM-GX1 orientation sensor and a Novatel RT2 GPS receiver. The Microstrain package contains triaxial accelerometers, rate gyros, and magnetometers. The Novatel RT2 GPS receiver uses carrier-phase differential GPS to provide real-time position estimates with approximately 2 cm accuracy as long as its antenna is pointing at the sky. To maintain position estimates throughout the flips and rolls, we have used two different setups. Originally, we used a purpose-built cluster of four U-Blox LEA-4T GPS receivers/antennas for velocity sensing. The system provides velocity estimates with a standard deviation of approximately 1 cm/s (when stationary) and 10 cm/s (during our aerobatic maneuvers). Later, we used three PointGrey DragonFly2 cameras that track the helicopter from the ground. This setup gives us position measurements accurate to 25 cm. For extrinsic camera calibration we collect data from the Novatel RT2 GPS receiver while in view of the cameras. A computer on the ground uses a Kalman filter to estimate the state from the sensor readings. Our controllers generate control commands at 10 Hz.

4.2 Experimental Results

For each of the maneuvers, the initial model is learned by collecting data from a human pilot flying the helicopter. Our sensing setup is significantly less accurate when flying upside-down, so all data for model learning is collected from upright flight. The model used to design the flip and roll controllers is estimated from 5 minutes of flight data during which the pilot performs frequency sweeps on each of the four control inputs (which covers as similar a flight regime as possible without having to invert the helicopter). For the funnel controllers, we learn a model from the same frequency sweeps and from our pilot flying the funnels. For the rolls and flips the initial model was sufficiently accurate for control. For the funnels, our initial controllers did not perform as well, and we performed two iterations of the apprenticeship learning algorithm described in Section 2.1.

² When adding the integrated error in position to the cost we did not experience any benefits. Even worse, when increasing its weight in the cost function, the resulting controllers were often unstable.
³ The high-frequency noise on the IMU measurements is caused by the vibration of the helicopter. This vibration is mostly caused by the blades spinning at 25 Hz.

4.2.1 Flip

In the ideal forward flip, the helicopter rotates 360 degrees forward around its lateral axis (the axis going from the right to the left of the helicopter) while staying in place. The top row of Figure 1(a) shows a series of snapshots of our helicopter during an autonomous flip. In the first frame, the helicopter is hovering upright autonomously. Subsequently, it pitches forward, eventually becoming vertical. At this point, the helicopter does not have the ability to counter its descent, since it can only produce thrust in the direction of the main rotor. The flip continues until the helicopter is completely inverted. At this moment, the controller must apply negative collective to regain altitude lost during the half-flip, while continuing the flip and returning to the upright position. We chose the entries of the cost matrices Q and R by hand, spending about an hour to get a controller that could flip indefinitely in our simulator.
The initial controller oscillated in reality, whereas our human-piloted flips do not have any oscillation, so (in accordance with the inverse RL procedure, see Section 3.3) we increased the penalty for changes in inputs over consecutive time steps, resulting in our final controller.

4.2.2 Roll

In the ideal axial roll, the helicopter rotates 360 degrees around its longitudinal axis (the axis going from the back to the front of the helicopter) while staying in place. The bottom row of Figure 1(b) shows a series of snapshots of our helicopter during an autonomous roll. In the first frame, the helicopter is hovering upright autonomously. Subsequently it rolls to the right, eventually becoming inverted. When inverted, the helicopter applies negative collective to regain altitude lost during the first half of the roll, while continuing the roll and returning to the upright position. We used the same cost matrices as for the flips.

4.2.3 Tail-In Funnel

The tail-in funnel maneuver is essentially a medium- to high-speed circle flown sideways, with the tail of the helicopter pointed towards the center of the circle. Throughout, the helicopter is pitched backwards such that the main rotor thrust not only compensates for gravity but also provides the centripetal acceleration to stay in the circle. For a funnel of radius $r$ at velocity $v$ the centripetal acceleration is $v^2/r$, so, assuming the main rotor thrust only provides the centripetal acceleration and compensation for gravity, we obtain a pitch angle $\theta = \operatorname{atan}(v^2/(rg))$. The maneuver is named after the path followed by the length of the helicopter, which sweeps out a surface similar to that of an inverted cone (or funnel).⁴ For the funnel reported in this paper, we had $H = 80$ s, $r = 5$ m, and $v = 5.3$ m/s (which yields a 30-degree pitch angle during the funnel). Figure 1(c) shows an overlay of snapshots of the helicopter throughout a tail-in funnel.

The defining characteristic of the funnel is repeatability: the ability to pass consistently through the same points in space after multiple circuits. Our autonomous funnels are significantly more accurate than funnels flown by expert human pilots. Figure 2(a) shows a complete trajectory in (North, East) coordinates. In Figure 2(b) we superimpose the heading of the helicopter on a partial trajectory (showing the entire trajectory with heading superimposed gives a cluttered plot). Our autonomous funnels have an RMS position error of 1.5 m and an RMS heading error of 15 degrees throughout the twelve circuits flown. Expert human pilots can maintain this performance through at most one or two circuits.⁵

4.2.4 Nose-In Funnel

The nose-in funnel maneuver is very similar to the tail-in funnel maneuver, except that the nose points to the center of the circle rather than the tail. Our autonomous nose-in funnel controller results in highly repeatable trajectories (similar to the tail-in funnel), and it achieves a level of performance that is difficult for a human pilot to match. Figure 1(d) shows an overlay of snapshots throughout a nose-in funnel.
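As a quick check of the pitch-angle relation $\theta = \operatorname{atan}(v^2/(rg))$ from Section 4.2.3, the following snippet (our own worked example) reproduces the 30-degree figure quoted for $r = 5$ m and $v = 5.3$ m/s.

```python
import math

def funnel_pitch_angle(v, r, g=9.81):
    """Pitch angle (radians) at which main-rotor thrust supplies both
    gravity compensation and the centripetal acceleration v^2 / r."""
    return math.atan(v ** 2 / (r * g))

# Values from the paper: r = 5 m, v = 5.3 m/s.
print(math.degrees(funnel_pitch_angle(5.3, 5.0)))  # ~29.8 degrees
```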
⁴ The maneuver is actually broken into three parts: an accelerating leg, the funnel leg, and a decelerating leg. During the accelerating and decelerating legs, the helicopter accelerates at $a_{\max}$ (= 0.8 m/s²) along the circle.
⁵ Without the integral of heading error in the cost function we observed significantly larger heading errors of 20-40 degrees, which resulted in the linearization being so inaccurate that controllers often failed entirely.

Figure 1: (Best viewed in color.) (a) Series of snapshots throughout an autonomous flip. (b) Series of snapshots throughout an autonomous roll. (c) Overlay of snapshots of the helicopter throughout a tail-in funnel. (d) Overlay of snapshots of the helicopter throughout a nose-in funnel. (See text for details.)

Figure 2: (a) Trajectory followed by the helicopter during the tail-in funnel, plotted in North (m) vs. East (m) coordinates. (b) Partial tail-in funnel trajectory with heading marked. (c) Close-up of our helicopter. (See text for details.)

5 Conclusion

To summarize, we presented our successful DDP-based control design for four new aerobatic maneuvers: forward flip, sideways roll (at low speed), tail-in funnel, and nose-in funnel. The key design decisions for the DDP-based controller to fly our helicopter successfully are the following:

- We penalized for rapid changes in actions/inputs over consecutive time steps.
- We used apprenticeship learning algorithms, which take advantage of an expert demonstration, to determine the reward function and to learn the model.
- We used a two-phase control design: the first phase plans a feasible trajectory, the second phase designs the actual controller.
- Integral penalty terms were included to reduce steady-state error.

To the best of our knowledge, these are the most challenging autonomous flight maneuvers achieved to date.

Acknowledgments

We thank Ben Tse for piloting our helicopter and working on the electronics of our helicopter. We thank Mark Woodward for helping us with the vision system.

References

[1] P. Abbeel, V. Ganapathi, and A. Y. Ng. Learning vehicular dynamics with application to modeling helicopters. In NIPS 18, 2006.
[2] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[3] P. Abbeel and A. Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In Proc. ICML, 2005.
[4] P. Abbeel and A. Y. Ng. Learning first order Markov models for control. In NIPS 18, 2005.
[5] B. Anderson and J. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall, 1989.
[6] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In International Conference on Robotics and Automation. IEEE, 2001.
[7] R. I. Brafman and M. Tennenholtz. R-max, a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 2002.
[8] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron. Flight test and simulation results for an autonomous aerobatic helicopter. In AIAA/IEEE Digital Avionics Systems Conference, 2002.
[9] V. Gavrilets, B. Mettler, and E. Feron. Human-inspired control logic for automated maneuvering of miniature helicopter. Journal of Guidance, Control, and Dynamics, 27(5):752-759, 2004.
[10] S. Kakade, M. Kearns, and J. Langford. Exploration in metric state spaces. In Proc. ICML, 2003.
[11] M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In Proc. IJCAI, 1999.
[12] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning Journal, 2002.
[13] M. La Civita, G. Papageorgiou, W. C. Messner, and T. Kanade. Design and flight testing of a high-bandwidth H-infinity loop shaping controller for a robotic helicopter. Journal of Guidance, Control, and Dynamics, 29(2):485-494, March-April 2006.
[14] J. Leishman. Principles of Helicopter Aerodynamics. Cambridge University Press, 2000.
[15] B. Mettler, M. Tischler, and T. Kanade. System identification of small-size unmanned helicopter dynamics. In American Helicopter Society, 55th Forum, 1999.
[16] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In Int'l Symposium on Experimental Robotics, 2004.
[17] A. Y. Ng, H. J. Kim, M. Jordan, and S. Sastry. Autonomous helicopter flight via reinforcement learning. In NIPS 16, 2004.
[18] J. M. Roberts, P. I. Corke, and G. Buskey. Low-cost flight control system for a small autonomous helicopter. In IEEE Int'l Conf. on Robotics and Automation, 2003.
[19] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme. Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Autonomous Systems, 2003.
[20] J. Seddon. Basic Helicopter Aerodynamics. AIAA Education Series. American Institute of Aeronautics and Astronautics, 1990.
Isotonic Conditional Random Fields and Local Sentiment Flow

Yi Mao
School of Elec. and Computer Engineering
Purdue University - West Lafayette, IN
ymao@ecn.purdue.edu

Guy Lebanon
Department of Statistics, and School of Elec. and Computer Engineering
Purdue University - West Lafayette, IN
lebanon@stat.purdue.edu

Abstract

We examine the problem of predicting local sentiment flow in documents, and its application to several areas of text analysis. Formally, the problem is stated as predicting an ordinal sequence based on a sequence of word sets. In the spirit of isotonic regression, we develop a variant of conditional random fields that is well suited to handle this problem. Using the Möbius transform, we express the model as a simple convex optimization problem. Experiments demonstrate the model and its applications to sentiment prediction, style analysis, and text summarization.

1 Introduction

The World Wide Web and other textual databases provide a convenient platform for exchanging opinions. Many documents, such as reviews and blogs, are written with the purpose of conveying a particular opinion or sentiment. Other documents may not be written with the purpose of conveying an opinion, but nevertheless they contain one. Opinions, or sentiments, may be considered in several ways, the simplest of which is varying from positive opinion, through neutral, to negative opinion.

Most of the research in information retrieval has focused on predicting the topic of a document, or its relevance with respect to a query. Predicting the document's sentiment would allow matching the sentiment, as well as the topic, with the user's interests. It would also assist in document summarization and visualization. Sentiment prediction was first formulated as a binary classification problem to answer questions such as: "What is the review's polarity, positive or negative?" Pang et al. [1] demonstrated the difficulties in sentiment prediction using solely empirical rules (a subset of adjectives), which motivates the use of statistical learning techniques. The task was then refined to allow multiple sentiment levels, facilitating the use of standard text categorization techniques [2]. However, sentiment prediction differs from traditional text categorization: (1) in contrast to the categorical nature of topics, sentiments are ordinal variables; (2) several contradicting opinions might co-exist, which interact with each other to produce the global document sentiment; (3) context plays a vital role in determining the sentiment. Indeed, sentiment prediction is a much harder task than topic classification tasks such as Reuters or WebKB, and current models achieve lower accuracy. Rather than using a bag-of-words multiclass classifier, we model the sequential flow of sentiment throughout the document using a sequential conditional model. Furthermore, we treat the sentiment labels as ordinal variables by enforcing monotonicity constraints on the model's parameters.

2 Local and Global Sentiments

Previous research on sentiment prediction has generally focused on predicting the sentiment of the entire document. A commonly used application is the task of predicting the number of stars assigned to a movie, based on a review text. Typically, the problem is considered as standard multiclass classification or regression using the bag-of-words representation.
In addition to the sentiment of the entire document, which we call global sentiment, we define the concept of local sentiment as the sentiment associated with a particular part of the text. It is reasonable to assume that the global sentiment of a document is a function of the local sentiment, and that estimating the local sentiment is a key step in predicting the global sentiment. Moreover, the concept of local sentiment is useful in a wide range of text analysis applications, including document summarization and visualization.

Formally, we view local sentiment as a function on the words in a document taking values in a finite partially ordered set, or poset, $(O, \preceq)$. To determine the local sentiment at a particular word, it is necessary to take context into account. For example, due to context, the local sentiment at each of the words in "this is a horrible product" is low (in the sense of $(O, \preceq)$). Since sentences are natural components for segmenting document semantics, we view local sentiment as a piecewise constant function on sentences. Occasionally we encounter a sentence that violates this rule and conveys opposing sentiments in two different parts. In this situation we break the sentence into two parts and consider them as two sentences. We therefore formalize the problem as predicting a sequence of sentiments $y = (y_1, \ldots, y_n)$, $y_i \in O$, based on a sequence of sentences $x = (x_1, \ldots, x_n)$.

Modeling the local sentiment is challenging from several aspects. The sentence sequence $x$ is discrete-time and high-dimensional categorical valued, and the sentiment sequence $y$ is discrete-time and ordinal valued. Regression models can be applied locally, but they ignore the statistical dependencies across the time domain. Popular sequence models such as HMM or CRF, on the other hand, typically assume that $y$ is categorical valued. In this paper we demonstrate the prediction of local sentiment flow using an ordinal version of conditional random fields, and explore the relation between the local and global sentiment.

3 Isotonic Conditional Random Fields

Conditional random fields (CRF) [3] are parametric families of conditional distributions $p_\theta(y|x)$ that correspond to undirected graphical models or Markov random fields

$$p_\theta(y|x) = \frac{p_\theta(y, x)}{p_\theta(x)} = \frac{\prod_{c \in C} \phi_c(x|_c, y|_c)}{Z(\theta, x)} = \frac{\exp\left(\sum_{c \in C} \sum_k \theta_{c,k}\, f_{c,k}(x|_c, y|_c)\right)}{Z(\theta, x)}, \qquad \theta_{c,k} \in \mathbb{R} \quad (1)$$

where $C$ is the set of cliques in the graph and $x|_c$ and $y|_c$ are the restrictions of $x$ and $y$ to variables representing nodes in $c \in C$. It is assumed above that the potentials $\phi_c$ are exponential functions of features modulated by decay parameters: $\phi_c(x|_c, y|_c) = \exp(\sum_k \theta_{c,k} f_{c,k}(x|_c, y|_c))$.

CRF have been mostly applied to sequence annotation, where $x$ is a sequence of words and $y$ is a sequence of labels annotating the words, for example part-of-speech tags. The standard graphical structure in this case is a chain structure on $y$ with noisy observations $x$. In other words, the cliques are $C = \{\{y_{i-1}, y_i\}, \{y_i, x_i\} : i = 1, \ldots, n\}$ (see Figure 1, left), leading to the model

$$p_\theta(y|x) = \frac{1}{Z(x, \theta)} \exp\left( \sum_i \sum_k \lambda_k f_k(y_{i-1}, y_i) + \sum_i \sum_k \mu_k g_k(y_i, x_i) \right), \qquad \theta = (\lambda, \mu). \quad (2)$$

In sequence annotation a standard choice for the feature functions is $f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i) = \delta_{y_{i-1},\sigma}\,\delta_{y_i,\tau}$ and $g_{\langle\sigma,w\rangle}(y_i, x_i) = \delta_{y_i,\sigma}\,\delta_{x_i,w}$ (note that we index the feature functions using pairs rather than $k$ as in (2)). In our case, since the $x_i$ are sentences, we use instead the slightly modified feature functions $g_{\langle\sigma,w\rangle}(y_i, x_i) = 1$ if $y_i = \sigma, w \in x_i$, and 0 otherwise.
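A minimal sketch, under our own array-based parameterization, of the unnormalized log-score of Eq. (2) with the sentence-level features $g_{\langle\sigma,w\rangle}$; labels and words are represented by integer indices here, which is an implementation choice rather than anything from the paper.

```python
def log_score(y, x, lam, mu):
    """Unnormalized log-probability of a label sequence y for a sentence
    sequence x under the chain CRF of Eq. (2).

    y   : list of label indices (e.g., 0..4 for sentiments -2..2)
    x   : list of sentences, each a set of word indices
    lam : lam[s, t] is the transition parameter lambda_<s,t>
    mu  : mu[s, w] is the emission parameter mu_<s,w>
    """
    total = 0.0
    for i in range(len(y)):
        if i > 0:
            total += lam[y[i - 1], y[i]]   # f_<s,t>(y_{i-1}, y_i) fires
        for w in x[i]:                     # g_<s,w> fires once per word
            total += mu[y[i], w]           # present in sentence i
    return total
```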
Given a set of iid training samples, the parameters are typically estimated by maximum likelihood or MAP using standard numerical techniques such as conjugate gradient or quasi-Newton. Despite the great popularity of CRF in sequence labeling, they are not appropriate for ordinal data such as sentiments. The ordinal relation is ignored in (2), and in the case of limited training data the parameter estimates will possess high variance, resulting in poor predictive power. We therefore enforce a set of monotonicity constraints on the parameters that are consistent with the ordinal structure and domain knowledge. The resulting model is a restricted subset of the CRF (2) and, in accordance with isotonic regression [4], is named isotonic CRF.

Since ordinal variables express a progression of some sort, it is natural to expect some of the binary features in (2) to correlate more strongly with some ordinal values than others. In such cases, we should expect the presence of such binary features to increase (or decrease) the conditional probability in a manner consistent with the ordinal relation. Since the parameters $\mu_{\langle\sigma,w\rangle}$ represent the effectiveness of the appearance of $w$ with respect to increasing the probability of $\sigma \in O$, they are natural candidates for monotonicity constraints. More specifically, for words $w \in M_1$ that are identified as strongly associated with positive sentiment, we enforce

$$\sigma \preceq \sigma' \implies \mu_{\langle\sigma,w\rangle} \leq \mu_{\langle\sigma',w\rangle} \qquad \forall w \in M_1. \quad (3)$$

Similarly, for words $w \in M_2$ identified as strongly associated with negative sentiment, we enforce

$$\sigma \preceq \sigma' \implies \mu_{\langle\sigma,w\rangle} \geq \mu_{\langle\sigma',w\rangle} \qquad \forall w \in M_2. \quad (4)$$

The motivation behind the above restriction is immediate for non-conditional Markov random fields $p_\theta(x) = Z^{-1}\exp(\sum_i \theta_i f_i(x))$: the parameters $\theta_i$ are intimately tied to model probabilities through activation of the feature functions $f_i$. In the case of conditional random fields, things get more complicated due to the dependence of the normalization term on $x$. The following propositions motivate the above parameter restriction for the case of linear-structure CRF with binary features.

Proposition 1. Let $p(y|x)$ be a linear state-emission chain CRF with binary features $f_{\langle\sigma,\tau\rangle}, g_{\langle\sigma,w\rangle}$ as above, and $x$ a sentence sequence for which $v \notin x_j$. Then, denoting $x' = (x_1, \ldots, x_{j-1}, x_j \cup \{v\}, x_{j+1}, \ldots, x_n)$, we have

$$\frac{p(y|x)}{p(y|x')} = E_{p(y'|x)}\left[ e^{\mu_{\langle y'_j,v\rangle} - \mu_{\langle y_j,v\rangle}} \right] \qquad \forall y.$$

Proof. Since $x$ and $x'$ differ only in that $v \in x'_j$, the unnormalized scores of any fixed $y$ under $x$ and $x'$ differ by a factor of $e^{\mu_{\langle y_j,v\rangle}}$, and hence

$$\frac{p(y|x)}{p(y|x')} = \frac{Z(x')}{Z(x)}\, e^{-\mu_{\langle y_j,v\rangle}}.$$

Defining $\pi_r = \sum_{y' : y'_j = r} \exp\big(\sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y'_{i-1}, y'_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y'_i, x_i)\big)$, we have $Z(x) = \sum_{r \in O} \pi_r$ and $Z(x') = \sum_{r \in O} \pi_r\, e^{\mu_{\langle r,v\rangle}}$, so

$$\frac{p(y|x)}{p(y|x')} = e^{-\mu_{\langle y_j,v\rangle}} \frac{\sum_{r \in O} \pi_r\, e^{\mu_{\langle r,v\rangle}}}{\sum_{r' \in O} \pi_{r'}} = \sum_{r \in O} \frac{\pi_r}{\sum_{r' \in O} \pi_{r'}}\, e^{\mu_{\langle r,v\rangle} - \mu_{\langle y_j,v\rangle}} = \sum_{y'} p(y'|x)\, e^{\mu_{\langle y'_j,v\rangle} - \mu_{\langle y_j,v\rangle}}. \qquad \square$$

Note that the specific linear CRF structure (Figure 1, left) and binary features are essential for the above result. Proposition 1 connects the probability ratio $p(y|x)/p(y|x')$ to the model parameters in a relatively simple manner. Together with Proposition 2 below, it motivates the ordering of $\{\mu_{\langle r,v\rangle} : r \in O\}$ determined by the restrictions (3)-(4) in terms of the ordering of probability ratios of transformed sequences.
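Proposition 1 can be verified numerically by brute-force enumeration on a tiny chain. The following self-contained check is our own construction (exact up to floating point): it enumerates all label sequences for a two-sentence, three-label chain and asserts the identity for every $y$.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
O = [0, 1, 2]                      # label set
lam = rng.normal(size=(3, 3))      # transition parameters lambda_<s,t>
mu = rng.normal(size=(3, 2))      # emission parameters mu_<s,w>

def score(y, x):
    s = sum(lam[y[i - 1], y[i]] for i in range(1, len(y)))
    s += sum(mu[y[i], w] for i in range(len(y)) for w in x[i])
    return math.exp(s)

def p(y, x):
    Z = sum(score(yp, x) for yp in itertools.product(O, repeat=len(x)))
    return score(y, x) / Z

x = [{0}, {0}]                     # word v = 1 is absent from sentence j = 1
xp = [{0}, {0, 1}]                 # x' adds v to sentence j
j, v = 1, 1
for y in itertools.product(O, repeat=2):
    lhs = p(y, x) / p(y, xp)
    rhs = sum(p(yp, x) * math.exp(mu[yp[j], v] - mu[y[j], v])
              for yp in itertools.product(O, repeat=2))
    assert abs(lhs - rhs) < 1e-10
```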
Proposition 2. Let $p(y|x), x, x'$ be as in Proposition 1. For all label sequences $s, t$, we have

$$\mu_{\langle t_j,v\rangle} \leq \mu_{\langle s_j,v\rangle} \implies \frac{p(s|x)}{p(s|x')} \leq \frac{p(t|x)}{p(t|x')}. \quad (5)$$

Proof. Since $\mu_{\langle t_j,v\rangle} \leq \mu_{\langle s_j,v\rangle}$, we have $e^{z - \mu_{\langle s_j,v\rangle}} - e^{z - \mu_{\langle t_j,v\rangle}} \leq 0$ for all $z$, and therefore

$$E_{p(y'|x)}\left[ e^{\mu_{\langle y'_j,v\rangle} - \mu_{\langle s_j,v\rangle}} - e^{\mu_{\langle y'_j,v\rangle} - \mu_{\langle t_j,v\rangle}} \right] \leq 0.$$

By Proposition 1 the above expectation equals $\frac{p(s|x)}{p(s|x')} - \frac{p(t|x)}{p(t|x')}$, and (5) follows. $\square$

The restriction (3) may thus be interpreted as ensuring that adding a word $w \in M_1$ to transform $x \mapsto x'$ will increase the labeling probabilities associated with $\sigma$ no less than those associated with $\sigma'$ if $\sigma' \preceq \sigma$. Similarly, the restriction (4) may be interpreted in the opposite way. If these assumptions are correct, it is clear that they will lead to more accurate parameter estimates and better prediction accuracy. However, even if assumptions (3)-(4) are incorrect, enforcing them may improve prediction by trading off increased bias with lower variance.

Conceptually, the parameter estimates for isotonic CRF may be found by maximizing the likelihood or posterior subject to the monotonicity constraints (3)-(4). Since such a maximization is relatively difficult for large dimensionality, we propose a re-parameterization that leads to a much simpler optimization problem. The re-parameterization, in the case of a fully ordered set, is relatively straightforward. In the more general case of a partially ordered set we need the mechanism of Möbius inversions on finite partially ordered sets.

We introduce a new set of features $\{g^*_{\langle\sigma,w\rangle} : \sigma \in O\}$ for $w \in M_1 \cup M_2$, defined as

$$g^*_{\langle\sigma,w\rangle}(y_i, x_i) = \sum_{\tau : \sigma \preceq \tau} g_{\langle\tau,w\rangle}(y_i, x_i), \qquad w \in M_1 \cup M_2,$$

and a new set of corresponding parameters $\{\mu^*_{\langle\sigma,w\rangle} : \sigma \in O\}$. If $(O, \preceq)$ is fully ordered, $\mu^*_{\langle\sigma,w\rangle} = \mu_{\langle\sigma,w\rangle} - \mu_{\langle\sigma',w\rangle}$, where $\sigma'$ is the largest element smaller than $\sigma$ (with $\mu_{\langle\sigma',w\rangle}$ taken as 0 if $\sigma = \min(O)$). In the more general case, $\mu^*_{\langle\sigma,w\rangle}$ is the convolution of $\mu_{\langle\sigma,w\rangle}$ with the Möbius function of the poset $(O, \preceq)$ (see [5] for more details). By the Möbius inversion theorem [5], the $\mu^*_{\langle\sigma,w\rangle}$ satisfy

$$\mu_{\langle\sigma,w\rangle} = \sum_{\tau : \tau \preceq \sigma} \mu^*_{\langle\tau,w\rangle}, \qquad w \in M_1 \cup M_2, \quad (6)$$

and $\sum_\sigma \mu_{\langle\sigma,w\rangle}\, g_{\langle\sigma,w\rangle} = \sum_\sigma \mu^*_{\langle\sigma,w\rangle}\, g^*_{\langle\sigma,w\rangle}$, leading to the re-parameterization of isotonic CRF

$$p(y|x) = \frac{1}{Z(x)} \exp\Bigg( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i) + \sum_i \sum_{w \in M_1 \cup M_2} \sum_\sigma \mu^*_{\langle\sigma,w\rangle}\, g^*_{\langle\sigma,w\rangle}(y_i, x_i) + \sum_i \sum_{w \notin M_1 \cup M_2} \sum_\sigma \mu_{\langle\sigma,w\rangle}\, g_{\langle\sigma,w\rangle}(y_i, x_i) \Bigg)$$

with $\mu^*_{\langle\sigma,w\rangle} \geq 0$ for $w \in M_1$ and $\mu^*_{\langle\sigma,w\rangle} \leq 0$ for $w \in M_2$, for all $\sigma > \min(O)$. The re-parameterized model has the benefit of simple constraints, and its maximum likelihood estimates can be obtained by a trivial adaptation of conjugate gradient or quasi-Newton methods.
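For a fully ordered label set the Möbius re-parameterization reduces to cumulative sums and first differences, so the monotonicity of $\mu$ in the label becomes a simple sign constraint on the increments. A sketch of this special case (our own array convention):

```python
import numpy as np

def mu_from_mustar(mustar):
    """Eq. (6) for a fully ordered O: mu_<sigma,w> is the cumulative sum
    of mu*_<tau,w> over tau <= sigma (labels indexed along axis 0)."""
    return np.cumsum(mustar, axis=0)

def mustar_from_mu(mu):
    """Inverse map: first differences along the label axis; the first
    row (sigma = min O) is unconstrained."""
    return np.diff(mu, axis=0, prepend=np.zeros((1, mu.shape[1])))

mu = np.array([[0.1], [0.5], [0.7]])          # increasing in the label
assert np.allclose(mu_from_mustar(mustar_from_mu(mu)), mu)
# monotone mu  <=>  nonnegative increments for sigma > min(O)
assert (mustar_from_mu(mu)[1:] >= 0).all()
```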
3.1 Author-Dependent Models

Thus far, we have ignored the dependency of the labeling model $p(y|x)$ on the author, denoted here by the variable $a$. We now turn to account for different sentiment-authoring styles by incorporating this variable into the model. The word emissions $y_i \to x_i$ in the CRF structure are not expected to vary much across different authors. The sentiment transitions $y_{i-1} \to y_i$, on the other hand, typically vary across different authors as a consequence of their individual styles. For example, the review of an author who sticks to a list of self-ranked evaluation criteria is prone to strong sentiment variations. In contrast, the review of an author who likes to enumerate pros before he gets to cons (or vice versa) is likely to exhibit more local homogeneity in sentiment. Accounting for author-specific sentiment transition style leads to the graphical model in Figure 1, right. The corresponding author-dependent CRF model

$$p(y|x, a) = \frac{1}{Z(x, a)} \exp\Bigg( \sum_{i,a'} \sum_{\sigma,\tau} \big(\lambda_{\langle\sigma,\tau\rangle} + \lambda_{\langle\sigma,\tau,a'\rangle}\big)\, f_{\langle\sigma,\tau,a'\rangle}(y_{i-1}, y_i, a) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle}\, g_{\langle\sigma,w\rangle}(y_i, x_i) \Bigg)$$

uses features $f_{\langle\sigma,\tau,a'\rangle}(y_{i-1}, y_i, a) = f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i)\,\delta_{a,a'}$ and transition parameters that are author-dependent, $\lambda_{\langle\sigma,\tau,a\rangle}$, as well as author-independent, $\lambda_{\langle\sigma,\tau\rangle}$. Setting $\lambda_{\langle\sigma,\tau,a\rangle} = 0$ reduces the model to the standard CRF model. The author-independent parameters $\lambda_{\langle\sigma,\tau\rangle}$ allow parameter sharing across multiple authors in case the training data is too scarce for proper estimation of $\lambda_{\langle\sigma,\tau,a\rangle}$. For simplicity, the above ideas are described in the context of non-isotonic CRF. However, it is straightforward to combine author-specific models with isotonic restrictions. Experiments demonstrating author-specific isotonic models are described in Section 4.3.

Figure 1: Graphical models corresponding to CRF (left) and author-dependent CRF (right).

3.2 Sentiment Flows as Smooth Curves

The sentence-based definition of sentiment flow is problematic when we want to fit a model (for example, to predict global sentiment) that uses sentiment flows from multiple documents. Different documents have different numbers of sentences, and it is not clear how to compare them or how to build a model from a collection of discrete flows of different lengths. We therefore convert the sentence-based flow to a smooth, length-normalized flow that can meaningfully relate to other flows.

We assume from now on that the ordinal set $O$ is realized as a subset of $\mathbb{R}$ and that its ordering coincides with the standard ordering on $\mathbb{R}$. In order to account for different lengths, we consider the sentiment flow as a function $h : [0, 1] \to O \subseteq \mathbb{R}$ that is piecewise constant on the intervals $[0, l), [l, 2l), \ldots, [(k-1)l, 1]$, where $k$ is the number of sentences in the document and $l = 1/k$. Each of the intervals represents a sentence, and the function value on it is its sentiment. To create a more robust representation, we smooth out the discontinuous function by convolving it with a smoothing kernel. The resulting sentiment flow is a smooth curve $f : [0, 1] \to \mathbb{R}$ that can be easily related or compared to similar sentiment flows of other documents (see Figure 3 for an example). We can then define natural distances between two flows, for example the $L_p$ distance

$$d_p(f_1, f_2) = \left( \int_0^1 |f_1(r) - f_2(r)|^p \, dr \right)^{1/p} \quad (7)$$

for use in a k-nearest neighbor model for relating the local sentiment flow to the global sentiment.
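A sketch of the length normalization, kernel smoothing, and the discretized $L_p$ distance of Eq. (7). The grid size and the per-point kernel renormalization (which handles the truncation at the boundaries) are our own discretization choices; the paper only specifies a truncated, renormalized Gaussian kernel.

```python
import numpy as np

def smooth_flow(sentence_sentiments, grid_size=200, var=0.2):
    """Length-normalized, smoothed sentiment flow f : [0,1] -> R."""
    s = np.asarray(sentence_sentiments, dtype=float)
    t = np.linspace(0.0, 1.0, grid_size)
    # piecewise-constant flow h(r): sentiment of the sentence containing r
    idx = np.minimum((t * len(s)).astype(int), len(s) - 1)
    h = s[idx]
    # smooth with a Gaussian kernel, renormalized at each evaluation point
    f = np.empty(grid_size)
    for i in range(grid_size):
        w = np.exp(-0.5 * (t - t[i]) ** 2 / var)
        f[i] = np.dot(w, h) / w.sum()
    return f

def lp_distance(f1, f2, p=2):
    """Discretized L_p distance of Eq. (7) between two flows on [0,1]."""
    return float(np.mean(np.abs(f1 - f2) ** p) ** (1.0 / p))
```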
4 Experiments

To examine the ideas proposed in this paper we implemented isotonic CRF, and the normalization and smoothing procedure, and experimented with a small dataset of 249 movie reviews, randomly selected from the Cornell sentence polarity dataset v1.0¹, all written by the same author. The code for isotonic CRF is a modified version of the quasi-Newton implementation in the Mallet toolkit. In order to check the accuracy and benefit of the local sentiment predictor, we hand-labeled the local sentiments of each of these reviews. We assigned each sentence one of the following values in $O \subseteq \mathbb{R}$: 2 (highly praised), 1 (something good), 0 (objective description), -1 (something that needs improvement), and -2 (strong aversion).

¹ Available at http://www.cs.cornell.edu/People/pabo/movie-review-data

4.1 Sentence Level Prediction

To evaluate the prediction quality of the local sentiment we compared the performance of naive Bayes, SVM (using the default parameters of SVMlight), CRF, and isotonic CRF. Figure 2 displays the testing accuracy and distance of predicting the sentiment of sentences as a function of the training data size, averaged over 20 cross-validation train-test splits. The dataset presents one particular difficulty: more than 75% of the sentences are labeled objective (or 0). As a result, the prediction accuracy for objective sentences is over-emphasized. To correct for this fact, we report our test-set performance over a balanced (equal number of sentences for different labels) sample of labeled sentences. Note that since there are 5 labels, random guessing yields a baseline of 0.2 accuracy, and always guessing 0 yields a baseline of 1.2 distance.

Figure 2: Local sentiment prediction: balanced test results for naive Bayes, SVM, CRF, and isotonic CRF (balanced testing accuracy, left panel; balanced testing distance, right panel), plotted against training set sizes from 25 to 175.

As described in Section 3, for isotonic CRF we obtained 300 words on which to enforce monotonicity constraints. The 150 words that achieved the highest correlation with the sentiment were chosen for positivity constraints. Similarly, the 150 words that achieved the lowest correlation were chosen for negativity constraints. Table 1 displays the top 15 words of the two lists.

Table 1: Lists of 15 words with the largest positive (top) and negative (bottom) correlations.
Positive: great, perfection, considerable, superb, outstanding, wonderfully, memorable, performance, worth, enjoyable, enjoyed, beautifully, mood, certain, delightfully
Negative: too, couldnt, wasnt, didnt, i, uninspired, just, no, lacked, failed, satire, boring, unnecessary, contrived, tended

The results in Figure 2 indicate that, by incorporating the sequential information, the two versions of CRF perform consistently better than SVM and naive Bayes. The advantage of setting the monotonicity constraints in CRF is elucidated by the average absolute distance performance criterion (Figure 2, right). This criterion is based on the observation that in sentiment prediction the cost of misprediction is influenced by the ordinal relation on the labels, rather than the 0-1 error rate.

4.2 Global Sentiment Prediction

We also evaluated the contribution of the local sentiment analysis in helping to predict the global sentiment of documents. We compared a nearest neighbor classifier for the global sentiment, where the representation varied from bag of words to the smoothed, length-normalized local sentiment representation (with and without objective sentences). The smoothing kernel was a bounded Gaussian density (truncated and renormalized) with $\sigma^2 = 0.2$. Figure 3 displays discrete and smoothed local sentiment labels, and the smoothed sentiment flow predicted by isotonic CRF. Figure 4 and Table 2 display test-set accuracy of global sentiments as a function of the train set size. The distance in the nearest neighbor classifier was either $L_1$ or $L_2$ for the bag-of-words representation, or their continuous version (7) for the smoothed sentiment curve representation. The results indicate that the classification performance of the local sentiment representation is better than the bag-of-words representation. In accordance with the conclusion of [6], removing objective sentences (which correspond to sentiment 0) increased the local sentiment analysis performance by 20.7%.
We can thus conclude that, for the purpose of global sentiment prediction, the local sentiment flow of the non-objective sentences holds most of the relevant information. Performing local sentiment analysis on non-objective sentences improves performance, as the model estimates possess lower variance.

Figure 3: Sentiment flow and its smoothed curve representation (y-axis: sentiment value from -2 to 2; x-axis: normalized document position from 0 to 1). The blue circles indicate the labeled sentiment of each sentence. The blue solid curve and red dashed curve are smoothed representations of the labeled and predicted sentiment flows. Only non-objective labels are kept in generating the two curves. The numberings correspond to sentences displayed in Section 4.4.

Figure 4: Accuracy of global sentiment prediction (4-class labeling) as a function of train set size (25 to 175), for nearest neighbor classifiers with $L_1$ (left) and $L_2$ (right) distances, comparing the vocabulary (bag of words) representation against the sentiment flow representations with and without objective sentences.

4.3 Measuring the Rate of Sentiment Change

We examine the rate of sentiment change as a characterization of the author's writing style, using the isotonic author-dependent model of Section 3.1. We assume that the CRF process is a discrete sampling of a corresponding continuous-time Markov jump process. A consequence of this assumption is that the time $T$ the author stays in sentiment $\sigma$ before leaving is modeled by the exponential distribution $p_\sigma(T > t) = e^{-q_\sigma (t-1)}$, $t > 1$. Here we assume $T > 1$, and $q_\sigma$ is interpreted as the rate of change of the sentiment $\sigma \in O$: the larger the value, the more likely the author will switch to other sentiments in the near future.

To estimate the rate of change $q_\sigma$ of an author we need to compute $p_\sigma(T > t)$ based on the marginal probabilities $p(s|a)$ of sentiment sequences $s$ of length $l$. The probability $p(s|a)$ may be approximated by

$$p(s|a) = \sum_x p(x|a)\, p(s|x, a) \approx \sum_x \tilde{p}_0(x|a)\, \frac{1}{n-l+1} \sum_{i=1}^{n-l+1} \frac{\alpha_i(s_1|x, a) \prod_{j=i+1}^{i+(l-1)} M_j(s_{j-i}, s_{j-i+1}|x, a)\, \beta_{i+(l-1)}(s_l|x, a)}{Z(x, a)} \quad (8)$$

where $\tilde{p}_0$ is the empirical probability function $\tilde{p}_0(x|a) = \frac{1}{|C|} \sum_{x' \in C} \delta_{x,x'}$ for the set $C$ of documents written by author $a$ of length no less than $l$, and $\alpha, M, \beta$ are the forward, transition, and backward probabilities analogous to the dynamic programming method in [3].

Using the model $p(s|a)$ we can compute $p_\sigma(T > t)$ for different authors at integer values of $t$, which leads to the quantity $q_\sigma$ associated with each author. However, since (8) is based on an approximation, the calculated values of $p_\sigma(T > t)$ will be noisy, resulting in slightly different values of $q_\sigma$ for different time points $t$ and cross-validation iterations. A linear regression fit for $q_\sigma$ based on the approximated values of $p_\sigma(T > t)$ for two authors using 10-fold cross validation is displayed in Figure 5. The data was the 249 movie reviews from the previous experiments, written by one author, and an additional 201 movie reviews from a second author. Interestingly, the author associated with the red dashed line has a consistently lower $q_\sigma$ value in all those figures, and thus is considered as more "static" and less prone to quick sentiment variations.
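Since $p_\sigma(T > t) = e^{-q_\sigma (t-1)}$, the rate $q_\sigma$ is the slope of $-\log p_\sigma(T > t)$ regressed against $(t - 1)$, which is what the linear fits in Figure 5 estimate. A minimal least-squares sketch (our own illustration):

```python
import numpy as np

def estimate_rate(ts, p_survival):
    """Least-squares estimate of q_sigma from noisy values of
    p(T > t) = exp(-q (t - 1)): regress -log p on (t - 1) through the
    origin, so the slope is q."""
    x = np.asarray(ts, dtype=float) - 1.0
    y = -np.log(np.asarray(p_survival, dtype=float))
    return float(np.dot(x, y) / np.dot(x, x))

# Example with synthetic survival probabilities generated for q = 1.7:
ts = np.array([2.0, 3.0, 4.0, 5.0])
print(estimate_rate(ts, np.exp(-1.7 * (ts - 1.0))))  # ~1.7
```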
Table 2: Accuracy results and relative improvement over the vocabulary representation when the training size equals 175.

Representation | L1 | L2
vocabulary | 0.3095 | 0.3068
sentiment flow with objective sentences | 0.3189 (+3.0%) | 0.3128 (+1.95%)
sentiment flow without objective sentences | 0.3736 (+20.7%) | 0.3655 (+19.1%)

Figure 5: Linear regression fit for $q_\sigma$, $\sigma = 2, 1, -1, -2$ (left to right), based on approximated values of $p_\sigma(T > t)$ for two different authors. X-axis: time $t$; Y-axis: negative log-probability of $T > t$.

4.4 Text Summarization

We demonstrate the potential usage of sentiment flow for text summarization with a very simple example. The text below shows the result of summarizing the movie review in Figure 3 by keeping only the sentences associated with the start, the end, the top, and the bottom of the predicted sentiment curve. The number before each sentence refers to the circled numbers in Figure 3.

(1) What makes this film mesmerizing is not the plot, but the virtuoso performance of Lucy Berliner (Ally Sheedy), as a wily photographer, retired from her professional duties for the last ten years and living with a has-been German actress, Greta (Clarkson). (2) The less interesting story line involves the ambitions of an attractive, baby-faced assistant editor at the magazine, Syd (Radha Mitchell), who lives with a boyfriend (Mann) in an emotionally chilling relationship. (3) We just lost interest in the characters; the film began to look like a commercial for a magazine that wouldn't stop and get to the main article. (4) Which left the film only somewhat satisfying; it did create a proper atmosphere for us to view these lost characters, and it did have something to say about how their lives are being emotionally torn apart. (5) It would have been wiser to develop more depth for the main characters and show them to be more than the superficial beings they seemed to be on screen.

Alternative schemes for extracting specific sentences may be used to achieve different effects, depending on the needs of the user. We plan to experiment further in this area by combining local sentiment flow and standard summarization techniques.
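The extraction rule just described (start, end, peak, and trough of the predicted curve) amounts to a few lines; a sketch with hypothetical names, assuming the flow has already been evaluated at each sentence's normalized position:

```python
import numpy as np

def summarize(sentences, flow_values):
    """Toy extractive summary: keep the sentences at the start, end,
    peak, and trough of the (smoothed) sentiment flow, in document order.

    flow_values[i] is the predicted flow at sentence i's position."""
    f = np.asarray(flow_values, dtype=float)
    keep = sorted({0, len(sentences) - 1, int(f.argmax()), int(f.argmin())})
    return [sentences[i] for i in keep]
```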
5 Discussion

In this paper we address the prediction and application of the local sentiment flow concept. As existing models are inadequate for a variety of reasons, we introduce the isotonic CRF model, which is well suited to predicting the local sentiment flow. This model achieves better performance than the standard CRF, as well as non-sequential models such as SVM. We also demonstrate the usefulness of the local sentiment representation for global sentiment prediction, style analysis, and text summarization.

References

[1] B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP, 2002.
[2] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, 2005.
[3] J. Lafferty, F. Pereira, and A. McCallum. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning, 2001.
[4] R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk. Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley, 1972.
[5] R. P. Stanley. Enumerative Combinatorics. Wadsworth & Brooks/Cole Mathematics Series, 1986.
[6] B. Pang and L. Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL, 2004.
2,373
3,153
The Neurodynamics of Belief Propagation on Binary Markov Random Fields

Ruedi Stoop, Institute of Neuroinformatics ETH/UNIZH, Zurich, Switzerland, ruedi@ini.phys.ethz.ch
Thomas Ott, Institute of Neuroinformatics ETH/UNIZH, Zurich, Switzerland, tott@ini.phys.ethz.ch

Abstract

We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation starting from message initialisations that prevent running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.

1 Introduction

Real brain structures employ inference algorithms as a basis of decision making. Belief Propagation (BeP) is a popular, widely applicable inference algorithm that seems particularly suited for a neural implementation. The algorithm is based on message passing between distributed elements that resembles the signal transduction within a neural network. The analogy between BeP and neural networks is emphasised if BeP is formulated within the framework of Markov random fields (MRF). MRF are related to spin models [1] that are often used as abstract models of neural networks with symmetric synaptic weights. If a neural implementation of BeP can be realised on the basis of MRF, each neuron corresponds to a message passing element (hidden node of a MRF) and the synaptic weights reflect their pairwise dependencies. The neural activity then would encode the messages that are passed between connected nodes. Due to the highly recurrent nature of biological neural networks, MRF obtained in this correspondence to a neural network are naturally very "loopy". Convergence of BeP on loopy structures is, however, a delicate matter [1]-[2]. Here, we show that BeP on binary MRF can be reformulated as continuous Hopfield networks along the lines of the sketched correspondence. More precisely, the equations of a continuous Hopfield network are derived from the equations of BeP on a binary MRF, if there are many, but weak, connections per neuron. As a central result in this case, attractive fixed points of the Hopfield network provide very good approximations of BeP fixed points of the corresponding MRF. In the Hopfield case a Lyapunov function guarantees the convergence towards these fixed points. As a consequence, Hopfield networks implement BeP with guaranteed convergence. The result of the inference is directly represented by the activity of the neurons in the steady state. To illustrate this mechanism, we compare the magnetisations obtained in the original BeP framework to those from the Hopfield network framework, for a symmetric ferromagnetic model. Hopfield networks may also serve as a guideline for the implementation or the detection of BeP in more realistic, e.g., spiking, neural networks. By giving up the symmetric synaptic weights constraint, we may generalise the original BeP inference algorithm towards capturing neurally inspired message passing.

2 A Quick Review on Belief Propagation in Markov Random Fields

MRF have been used to formulate inference problems, e.g.
in Boltzmann machines (which actually are MRF [3]) or in the field of computer vision [4], and are related to Bayesian networks. In fact, both concepts are equivalent variants of graphical models [1]. Typically, from a given set of observations $\mathbf{y} = \{y_i\}$ that, in our case, take on either of the two values $\pm 1$, we want to infer some hidden quantities represented by $\mathbf{x} = \{x_i\}$, $x_i \in \{-1, +1\}$. For instance, the $y_i$ may be the pixel values of a grey-scaled image, whereas a variable $x_i$ describes whether pixel $i$ belongs to an object ($x_i = 1$) or to the background ($x_i = -1$). The natural question that emerges in this context is: given the observations $\mathbf{y}$, what is the probability for $x_i = 1$? The relation between $\mathbf{x}$ and $\mathbf{y}$ is usually given by a joint probability, written in the factorised form

$$p(\mathbf{x} \mid \mathbf{y}) = \frac{1}{Z} \prod_{(i,j)} \psi_{ij}(x_i, x_j) \prod_i \psi_i(x_i, y_i), \qquad (1)$$

where the functions $\psi_{ij}$ describe the pairwise dependencies of the hidden variables and the functions $\psi_i$ give the evidences from $\mathbf{y}$. $Z$ is the normalisation constant [1]. (1) can directly be reformulated as an Ising system with the energy

$$E(\mathbf{x}) = -\sum_{(i,j)} J_{ij}\, x_i x_j - \sum_i h_i x_i, \qquad (2)$$

where the Boltzmann distribution provides the probability of a spin configuration $\mathbf{x}$,

$$p(\mathbf{x}) = \frac{1}{Z}\, e^{-\beta E(\mathbf{x})}. \qquad (3)$$

A comparison with (1) yields $\psi_{ij}(x_i, x_j) = \exp(\beta J_{ij}\, x_i x_j)$ and $\psi_i(x_i) = \exp(\beta h_i x_i)$. In many cases, it is reasonable to assume that the $J_{ij}$ and $h_i$ are real-valued constants, so that (2) transforms into the familiar Ising Hamiltonian [5]. For convenience, we set $\beta := 1$ and write $\theta_i$ for the external field acting on node $i$. The inference task inherent to MRF amounts to extracting marginal probabilities

$$p(x_i \mid \mathbf{y}) = \sum_{\mathbf{x} \setminus x_i} p(\mathbf{x} \mid \mathbf{y}). \qquad (4)$$

An exact evaluation of $p(x_i \mid \mathbf{y})$ according to Eq. (4) is generally very time-consuming. BeP provides us with approximated marginals within a reasonable time. This approach is based on the idea that connected elements (where a connection is given by $J_{ij} \neq 0$) interchange messages that contain a recommendation about what state the other elements should be in [1]. Given the set of messages $\{m_{ij}(x_j)\}$ at time $t$, the messages at time $t+1$ are determined by

$$m^{t+1}_{ij}(x_j) = \frac{1}{Z} \sum_{x_i} \psi_{ij}(x_i, x_j)\, \psi_i(x_i) \prod_{k \in N(i)\setminus j} m^{t}_{ki}(x_i). \qquad (5)$$

Here, $m_{ij}$ denotes the message sent from the hidden variable (or node) $i$ to node $j$, and $N(i)\setminus j$ denotes the set of all neighbours of node $i$ without $j$. Usually, the messages are normalised at every time step, i.e., $m_{ij}(1) + m_{ij}(-1) = 1$. After (5) has converged, the marginals $p(x_i \mid \mathbf{y})$ are approximated by the so-called beliefs $b_i(x_i)$ that are calculated according to

$$b_i(x_i) = \frac{1}{Z}\, \psi_i(x_i) \prod_{j \in N(i)} m_{ji}(x_i), \qquad (6)$$

where $Z$ is a normalisation constant. In connection with Ising systems, one is primarily interested in the quantity $m_i := b_i(1) - b_i(-1)$, the so-called local magnetisation. For a detailed introduction to BeP on MRF we refer to [1].
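As a concrete illustration of the update rules (5) and (6), the following is a minimal sketch of synchronous sum-product BeP on a small binary MRF, assuming the Ising parameterisation $\psi_{ij}(x_i, x_j) = \exp(J_{ij} x_i x_j)$ and $\psi_i(x_i) = \exp(\theta_i x_i)$ derived above; the function and variable names are illustrative and not part of the original paper.

```python
import numpy as np

def belief_propagation(J, theta, n_iters=100):
    """Synchronous sum-product BeP, Eqs. (5)-(6), on a binary pairwise MRF.
    J: symmetric (n, n) coupling matrix (J[i, j] = 0 means no edge);
    theta: length-n external fields. State index 0 <-> x = -1, 1 <-> x = +1."""
    n = len(theta)
    states = np.array([-1.0, 1.0])
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0.0]
    m = {e: np.full(2, 0.5) for e in edges}            # normalised messages
    for _ in range(n_iters):
        new_m = {}
        for (i, j) in edges:
            msg = np.zeros(2)
            for sj, xj in enumerate(states):
                for si, xi in enumerate(states):
                    prod = np.exp(theta[i] * xi)        # psi_i(x_i)
                    for (k, l) in edges:                # incoming messages m_ki,
                        if l == i and k != j:           # excluding the target j
                            prod *= m[(k, i)][si]
                    msg[sj] += np.exp(J[i, j] * xi * xj) * prod
            new_m[(i, j)] = msg / msg.sum()             # Eq. (5), normalised
        m = new_m
    beliefs = np.zeros((n, 2))
    for i in range(n):
        for si, xi in enumerate(states):
            beliefs[i, si] = np.exp(theta[i] * xi) * np.prod(
                [m[(k, l)][si] for (k, l) in edges if l == i] or [1.0])
        beliefs[i] /= beliefs[i].sum()                  # Eq. (6)
    return beliefs
```

The local magnetisation of node $i$ is then `beliefs[i, 1] - beliefs[i, 0]`, matching the definition below Eq. (6).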
3 BeP and the Neurodynamics of Hopfield Networks

The goal of this section is to establish the relationship between the update rule (5) and the dynamical equation of a continuous Hopfield network,

$$\frac{dv_i(t)}{dt} = -v_i(t) + g\Big(\sum_j w_{ij}\, v_j(t) + I_i(t)\Big). \qquad (7)$$

Here $v_i$ is some quantity describing the activity of neuron $i$ (e.g., the membrane potential) and $g$ is the activation function, typically implemented in a sigmoid form, such as $g(u) = \tanh(u)$. The $w_{ij}$ are the connection (synaptic) weights, which need to be symmetric in the Hopfield model. The connectivity might be all-to-all or sparse. $I_i(t)$ is an external signal or bias (see, e.g., [6] for a general introduction to Hopfield networks). According to the sketched picture, each neuron represents a node $i$, whereas the messages are encoded in the variables $v_i$ and $w_{ij}$. The exact nature of this encoding will be worked out below. The Hopfield architecture implements the point attractor paradigm, i.e., by means of the dynamics the network is driven into a fixed point. At the fixed point, the beliefs $b_i$ can be read out. In the MRF picture, this corresponds to (5) and (6). We will now realise the translation from MRF into Hopfield networks as follows: (1) reduction of the number of messages per connection from two, $m_{ij}(1)$ and $m_{ij}(-1)$, to one reparameterised variable $\nu_{ij}$; (2) translation into a continuous system; (3) translation of the obtained equations into the equations of a Hopfield network, where we find the encoding of the variables $\nu_{ij}$ in terms of $v_i$ and $w_{ij}$. This will establish the exact relationship between Hopfield networks and BeP.

3.1 Reparametrisation of the messages

In the case of binary variables $x_i \in \{-1, +1\}$, the messages $m_{ij}(x_j)$ can be reparameterised [2] according to

$$\nu_{ij} := \tanh^{-1}\big(m_{ij}(1) - m_{ij}(-1)\big). \qquad (8)$$

By this, the update rules (5) transform into update rules for the new "messages" $\nu_{ij}$,

$$\nu^{t+1}_{ij} = \tanh^{-1}\Big[\tanh(J_{ij}) \tanh\Big(\theta_i + \sum_{k \in N(i)\setminus j} \nu^{t}_{ki}\Big)\Big]. \qquad (9)$$

For each connection $i \to j$ we obtain one single message $\nu_{ij}$. We can now directly calculate the local magnetisation according to $m_i = \tanh\big(\theta_i + \sum_{k \in N(i)} \nu_{ki}\big)$ [2]. The Jacobian of (9) in a point $\boldsymbol{\nu}$ is denoted by $\Lambda_d(\boldsymbol{\nu})$. The used reparametrisation translates the update rules into an additive form ("log domain"), which is a basic assumption of most models of neural networks.

3.2 Translation into a time-continuous system

Eq. (9) can be translated into the equivalent time-continuous system

$$\frac{d\nu_{ij}(t)}{dt} = -\nu_{ij}(t) + \tanh^{-1}\Big[\tanh(J_{ij}) \tanh\Big(\theta_i(t) + \sum_{k \in N(i)\setminus j} \nu_{ki}(t)\Big)\Big], \qquad (10)$$

where $\tanh(J_{ij})$ is time-independent. The corresponding Jacobian in a point $\boldsymbol{\nu}$ is denoted by $\Lambda_c(\boldsymbol{\nu}) = \Lambda_d(\boldsymbol{\nu}) - \mathbf{1}$, where $\mathbf{1}$ is the $K$-dimensional identity matrix ($K$ is the number of messages $\nu_{ij}$). Obviously, (9) and (10) have the same fixed points $\boldsymbol{\nu}^*$, which are given by

$$\nu^{*}_{ij} = \tanh^{-1}\Big[\tanh(J_{ij}) \tanh\Big(\theta_i + \sum_{k \in N(i)\setminus j} \nu^{*}_{ki}\Big)\Big], \qquad (11)$$

with identical stability properties in both frameworks: for stability of (9) it is required that the real part of the largest eigenvalue of the Jacobian $\Lambda_d(\boldsymbol{\nu}^*)$ be smaller than 1, whereas for (10) the condition is that the real part of the largest eigenvalue of $\Lambda_c(\boldsymbol{\nu}^*) = \Lambda_d(\boldsymbol{\nu}^*) - \mathbf{1}$ be smaller than 0. It is obvious that both conditions are identically satisfied.
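To make these dynamics concrete, here is a minimal sketch of the synchronous update (9), an explicit Euler step of the continuous system (10), and the magnetisation read-out; the couplings J, fields theta, and the directed edge list are illustrative inputs, as in the sketch of Section 2.

```python
import numpy as np

def nu_update(nu, J, theta, edges):
    """One synchronous application of Eq. (9); nu maps directed edges to reals.
    arctanh is well-defined since |tanh(J_ij) * tanh(s)| < 1."""
    new_nu = {}
    for (i, j) in edges:
        s = theta[i] + sum(nu[(k, l)] for (k, l) in edges if l == i and k != j)
        new_nu[(i, j)] = np.arctanh(np.tanh(J[i, j]) * np.tanh(s))
    return new_nu

def nu_euler_step(nu, J, theta, edges, dt=0.1):
    """One explicit Euler step of the time-continuous system, Eq. (10)."""
    target = nu_update(nu, J, theta, edges)
    return {e: nu[e] + dt * (target[e] - nu[e]) for e in edges}

def magnetisation(nu, theta, edges, i):
    """Read-out m_i = tanh(theta_i + sum over incoming messages nu_ki)."""
    return np.tanh(theta[i] + sum(nu[(k, l)] for (k, l) in edges if l == i))
```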
3.3 Translation into a Hopfield network

The comparison between Eq. (7) and Eq. (10) does not lead to a direct identification of $v_i$ with $\nu_{ij}$. Rather, under certain conditions, we can identify $\nu_{ij}$ with $w_{ij} v_i$. That is, a message corresponds to the presynaptic neural activity weighted by the synaptic strength. Formally, we may define a variable $v^{ij}_i$ by $\nu_{ij} = w_{ij}\, v^{ij}_i$ and rewrite Eq. (10) as

$$w_{ij}\, \frac{dv^{ij}_i(t)}{dt} = -w_{ij}\, v^{ij}_i(t) + \tanh^{-1}\Big[\tanh(J_{ij}) \tanh\Big(\sum_{k \in N(i)\setminus j} w_{ki}\, v^{ki}_k(t) + \theta_i(t)\Big)\Big], \qquad (12)$$

where we set $w_{ij} := \tanh(J_{ij})$.¹ In the following, we assume that the synaptic weights $w_{ij}$ are relatively small, i.e. $|w_{ij}| \ll 1$, hence $\tanh^{-1}(w_{ij}\, x)$ can be approximated by $w_{ij}\, x$. Moreover, if a neuron receives many inputs (number of connections $q \gg 1$), then the single contribution $w_{ji}\, v^{ji}_j$ can be neglected, so that the sum may be extended over all of $N(i)$. Thus, (12) simplifies to

$$w_{ij}\, \frac{dv^{ij}_i(t)}{dt} = -w_{ij}\, v^{ij}_i(t) + w_{ij} \tanh\Big(\sum_{k \in N(i)} w_{ki}\, v^{ki}_k(t) + \theta_i(t)\Big). \qquad (13)$$

Upon a division by $w_{ij}$, we arrive at the equation

$$\frac{dv^{ij}_i(t)}{dt} = -v^{ij}_i(t) + \tanh\Big(\sum_{k \in N(i)} w_{ki}\, v^{ki}_k(t) + \theta_i(t)\Big), \qquad (14)$$

which for a uniform initialisation $v^{ij_1}_i(0) = v^{ij_2}_i(0) = \ldots$ for all $i$ preserves this uniformity through time, i.e., $v^{ij_1}_i(t) = v^{ij_2}_i(t) = \ldots$ In other words, the subset defined by this uniformity is invariant under the dynamics of (14). For such an initialisation we can therefore replace all $v^{ij}_i(t)$ by a single variable $v_i(t)$, which leads to the equation

$$\frac{dv_i(t)}{dt} = -v_i(t) + \tanh\Big(\sum_{k \in N(i)} w_{ki}\, v_k(t) + \theta_i(t)\Big). \qquad (15)$$

With $g = \tanh$ and $I_i(t) = \theta_i(t)$, we end up with the postulated equation (7). After the convergence to an attractor fixed point, the local magnetisation is simply the activity $v_i$. This is because the fixed point and the read-out equations collapse under the approximation $\tanh^{-1}(w_{ki}\, x) \approx w_{ki}\, x$, i.e., $m_i = \tanh(\theta_i + \sum_k \nu_{ki}) = \tanh(\theta_i + \sum_k w_{ki}\, v_k) = v_i$ at the fixed point.

In summary, we can emulate the original BeP procedure by a continuous Hopfield network provided that (I) the single weights $w_{ij}$ and the external fields $\theta_i(t)$ are relatively weak, (II) each neuron receives many inputs, and (III) the original messages have been initialised according to $\nu_{ki}(0) = w_{ki}\, v_k(0)$, i.e., uniformly across the outgoing connections of each node. From a biological point of view, the first two points seem reasonable. The effect of a single synapse is typically small compared to the totality of the numerous synaptic inputs of a cell [7]-[8]. In this sense, single weights are considered weak. In order to establish a firm biological correspondence, particular consideration will be required for the last point. In the next section, we show that Hopfield networks are guaranteed to converge and thus, the required initialisation can be considered a natural choice for BeP on MRF with the properties (I) and (II).

3.4 Guarantee of convergence

A basic Hopfield model of the form

$$\tau\, \frac{du_i(t)}{dt} = -u_i(t) + \sum_j w_{ij}\, g\big(u_j(t)\big) + I_i, \qquad (16)$$

with $g = \tanh$, has the same attractor structure as the model (7) described above (see [6] and references therein). For the former model, an explicit Lyapunov function has been constructed [9], which assures that these networks, and with them the networks considered by us, are globally asymptotically stable [6]. Moreover, the time-continuous model (7) can be translated back into a time-discrete model, yielding

$$v_i(t+1) = \tanh\Big(\sum_j w_{ij}\, v_j(t) + \theta_i(t)\Big). \qquad (17)$$

This equation is the proper analogue of Eq. (9).

¹ Hence the synaptic weights $w_{ij}$ are automatically restricted to the interval $(-1, 1)$.

Figure 1: The magnetisation $m$ as a function of (a) the quasi-temperature $T$ and (b) the synaptic weight $w$ for the symmetric ferromagnetic model. The results for the original BeP (grey stars) and for the Hopfield network (black circles) are compared.
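Before evaluating the ferromagnetic model, here is a minimal sketch of the resulting emulation, iterating the time-discrete model (17) with $w_{ij} = \tanh(J_{ij})$; under conditions (I)-(III), the converged activities approximate the local magnetisations. The small random start is only a symmetry-breaking convenience for the zero-field case, not part of the original construction.

```python
import numpy as np

def hopfield_bp(J, theta, n_iters=500, seed=0):
    """Iterate v(t+1) = tanh(W v(t) + theta), Eq. (17), with W = tanh(J)."""
    W = np.tanh(J)                               # synaptic weights, inside (-1, 1)
    rng = np.random.default_rng(seed)
    v = 1e-3 * rng.standard_normal(len(theta))   # near-uniform initialisation
    for _ in range(n_iters):
        v = np.tanh(W @ v + theta)
    return v                                     # v[i] approximates m_i
```

In the regime of weak weights and many connections, comparing `hopfield_bp(J, theta)` with the beliefs returned by the BeP sketch of Section 2 reproduces the agreement shown in Fig. 1.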
4 Results for the Ferromagnetic Model

In this section, we evaluate the Hopfield-based inference solution for networks with a simple connectivity structure: we assume constant positive synaptic weights $w_{ij} = w$ (ferromagnetic couplings) and a constant number of connections $q$ per neuron. We furthermore abstain from an external field and set $\theta_i = 0$. To realise this symmetric model, we may either think of an infinitely extended network or of a network with some spatial periodicity, e.g., a torus. According to the last section, $w$ is related to $J$ in a spin model via $w = \tanh(J/T)$, where, for convenience, we reintroduced a quasi-temperature $T$ as a scaling parameter.

From Eq. (7), it is clear that $v_1 = v_2 = \ldots = v^*$ is a fixed point of the system if $v^* = \tanh(q\, w\, v^*)$. This equation always has the solution $v^* = 0$. However, the stability of $v^* = 0$ is restricted to $T > T_c^{Hopf}$, where the bifurcation point is given by

$$T_c^{Hopf} = \frac{J}{\tanh^{-1}(1/q)}. \qquad (18)$$

This follows from the critical condition $q\, w = 1$. For $T < T_c^{Hopf}$, two additional stable fixed points $\pm v^*$ emerge which are symmetric with respect to the origin. The magnetisation $m = v^*$ obtained after the convergence to a stable fixed point is shown in dependence of $T$ in Fig. 1a (black circles), where the critical point is found at the temperature predicted by (18).

The result is compared to the result obtained on the basis of the original BeP equations (5) (grey stars in Fig. 1a). We see that the critical point is slightly lower in the original BeP case. This can be understood from Eq. (9), for which the point given by the messages $\nu_{ij} = 0$ loses stability at the critical temperature

$$T_c^{BeP} = \frac{J}{\tanh^{-1}\big(1/(q-1)\big)}. \qquad (19)$$

$T_c^{BeP}$ is in fact the critical temperature for Ising grids obtained in the Bethe-Peierls approximation (for $q = 4$, we get $T_c^{BeP} \approx 2.885\, J$ [5]). In this way, we casually come across the deep relationship of BeP and Bethe-Peierls, which has been established by the theorem stating that stable BeP fixed points are local minima of the Bethe free energy functional [1],[10].

In the limit of small weights, i.e. large $T$, the results for Hopfield networks and BeP must be identical. This, in fact, is certainly true for $T \gg T_c^{Hopf}$, where $m = 0$ in both cases. For very large weights, i.e., small $T$, the results are also identical in the case of the ferromagnetic couplings studied here, as $m \to \pm 1$. It is only around the critical values where the two results seem to differ. A comparison of the results against the synaptic weight $w$, however, shows an almost perfect agreement for all $w$ (Fig. 1b). The differences can be made arbitrarily small for larger $q$.
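As a quick numerical check of Eqs. (18) and (19) (a sketch; the connectivities chosen below are illustrative):

```python
import numpy as np

def Tc_hopfield(q, J=1.0):
    """Bifurcation temperature of the Hopfield emulation, Eq. (18)."""
    return J / np.arctanh(1.0 / q)

def Tc_bep(q, J=1.0):
    """Critical temperature of the original BeP update, Eq. (19)."""
    return J / np.arctanh(1.0 / (q - 1))

print(Tc_bep(4))                     # ~2.885, the Bethe-Peierls value for q = 4
print(Tc_hopfield(4))                # ~3.915, slightly above, as in Fig. 1a
print(Tc_bep(10), Tc_hopfield(10))   # ~8.96 vs ~9.97: the gap shrinks with q
```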
5 Discussion and Outlook

In this report, we outlined the general structural affinity between belief propagation on binary Markov random fields and continuous Hopfield networks. According to this analogy, synaptic weights correspond to the pairwise dependencies in the MRF and the neuronal signal transduction corresponds to the message exchange. In the limit of many synaptic connections per neuron, but comparatively small individual synaptic weights, the dynamics of the Hopfield network is an exact mirror of the BeP dynamics in its time-continuous form. To achieve the agreement, the choice of initial messages needs to be confined. From this we can conclude that Hopfield network attractors are also BeP attractors (whereas the opposite does not necessarily hold). Unlike BeP, Hopfield networks are guaranteed to converge to a fixed point. We may thus argue that Hopfield networks naturally implement useful message initialisations that prevent trapping into a limit cycle.

As a further benefit, the local magnetisations, as the result of the inference process, are just reflected in the asymptotic neural activity. The binary basis of the implementation is not necessarily a drawback, but could simply reflect the fact that many decisions have a yes-or-no character. Our work so far has a preliminary character. The Hopfield network model is still a crude simplification of biological neural networks and the relevance of our results for such real-world structures remains somewhat open. However, the search for a possible neural implementation of BeP is appealing and different concepts have been outlined [11]. This approach shares our guiding idea that the neural activity should directly be interpreted as a message passing process. Whereas our approach is a mathematically rigorous intermediate step towards more realistic models, the approach chosen in [11] tries to directly implement BeP with spiking neurons. In accordance with the guiding idea, our future work will comprise three major steps. First, we take the step from Hopfield networks to networks with spiking elements. Here, the question is to what extent the concepts of message passing can be adapted or reinterpreted so that a BeP implementation is possible. Second, we will give up the artificial requirement of symmetric synaptic weights. To do this, we might have to modify the original BeP concept, while we still may want to stick to the message passing idea. After all, there is no obvious reason why the brain should implement exactly the BeP algorithm. It rather seems plausible that the brain employs inference algorithms that are conceptually close to BeP. Third, the context and the tasks for which such algorithms can actually be used must be elaborated. Furthermore, we need to explore how the underlying structure could actually be learnt by a neural system. Message passing-based inference algorithms offer an attractive alternative to traditional notions of computation inspired by computer science, paving the way towards a more profound understanding of natural computation [12]. To judge its eligibility, there is - ultimately - one question: how can the usefulness (or inappropriateness) of the message passing concept in connection with biological networks be verified or challenged experimentally?

Acknowledgements

This research has been supported by a ZNZ grant (Neuroscience Center Zurich).

References

[1] Yedidia, J.S., Freeman, W.T., Weiss, Y. (2003) Understanding belief propagation and its generalizations. In G. Lakemeyer and B. Nebel (eds.) Exploring Artificial Intelligence in the New Millenium, Morgan Kaufmann, San Francisco.
[2] Mooij, J.M., Kappen, H.J. (2005) On the properties of the Bethe approximation and loopy belief propagation on binary networks. J. Stat. Mech., doi:10.1088/1742-5468/2005/11/P11012.
[3] Welling, M., Teh, W.T. (2003) Approximate inference in Boltzmann machines. Artificial Intelligence 143:19-50.
[4] Geman, S., Geman, D. (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE-PAMI 6(6):721-741.
[5] Huang, K. (1987) Statistical mechanics. Second edition, John Wiley & Sons, New York, Chapter 13.
[6] Haykin, S. (1999) Neural networks - a comprehensive foundation. Second edition, Prentice-Hall, Inc., Chapter 14.
[7] Koch, C. (1999) Biophysics of computation. Oxford University Press, Inc., New York.
[8] Douglas, R.J., Mahowald, M., Martin, K.A.C., Stratford, K.J. (1996) The role of synapses in cortical computation.
Journal of Neurocytology 25: 893-911. [9] Hopfield, J.J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. PNAS 81:3088-3092. [10] Heskes, T. (2004) On the uniqueness of loopy belief propagation fixed points. Neural Comput. 16:2379-2413. [11] Shon, A.P., Rao, R.P.N. (2005) Implementing belief propagation in neural circuits. Neurocomputing 65-66:877-884. [12] Stoop, R., Stoop, N. (2004) Natural computation measured as a reduction of complexity. Chaos 14(3):675-679.
Boosting Structured Prediction for Imitation Learning

Nathan Ratliff, David Bradley, J. Andrew Bagnell, Joel Chestnutt
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
{ndr, dbradley, dbagnell, joel.chestnutt}@ri.cmu.edu

Abstract

The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a loss-scaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBoost, based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by "boosting" in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems (Taskar et al., 2005). Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.

1 Introduction

"Imitation learning" of control or navigational behaviors is important in many application areas. Recently, (Ratliff et al., 2006) demonstrated that imitation learning of long horizon and goal-directed behavior can be naturally formulated as a structured prediction problem over a space of policies or system trajectories. In this work, the authors demonstrate that efficient planning algorithms (e.g. for deterministic systems or general Markov Decision Problems) can be taught to generalize a set of examples provided by a supervisor. In essence, the algorithm attempts to linearly combine features into costs so that the resulting cost functions make demonstrated example policies appear optimal by a margin over all other policies. The technique utilizes the idea that while a desired behavior or control strategy is often quite clear to a human expert, hand designing cost functions that induce this behavior may be difficult. Unfortunately, this Maximum Margin Planning (MMP) approach, as well as related techniques for maximum margin structured learning developed in (Taskar et al., 2005) and (Taskar et al., 2003), depend on linearly combining a prespecified set of features.¹

Adopting a new variant of the general AnyBoost algorithm described in (Mason et al., 1999) or similarly (Friedman, 1999a), we propose an alternate extension to Maximum Margin Planning specifically, and maximum margin structured learning generally, in which we perform subgradient descent in the space of cost functions rather than within any fixed parameterization. In this way, we show that we can "boost in" new features using simple classification that help us solve a more difficult structured prediction problem. The application of boosting to structured learning techniques was first explored in (Dietterich et al., 2004), within the context of boosting Conditional Random Fields. This paper extends that result to maximum-margin techniques, and provides a more general functional gradient derivation. We then demonstrate three applications of our technique.

¹ Alternatively, all of these methods admit straightforward kernelization allowing implicit learning within a Reproducing Kernel Hilbert space, but these kernel versions can be extremely memory and computationally intensive.
First, using only smoothed versions of an overhead image as input, we show that MMPBoost is able to match human navigation performance well on the task of navigating outdoor terrain. Using the same input, linear MMP, by contrast, performs almost no better than straight-line paths. Next we demonstrate that we can develop a local obstacle detection/avoidance control system for an autonomous outdoor robot by observing an expert teleoperator drive. Finally, we demonstrate on legged locomotion problems the use of a slow but highly accurate planner to train a fast, approximate planner using MMPBoost.

2 Preliminaries

We model, as in (Ratliff et al., 2006), planning problems as discrete Markov Decision Processes. Let $s$ and $a$ index the state and action spaces $\mathcal{S}$ and $\mathcal{A}$, respectively, and let $p_i(s' \mid s, a)$ denote the transition probabilities for example $i$. A discount factor on rewards (if any) is absorbed into the transition probabilities. Our cost (negative reward) functions are learned from supervised trajectories to produce policies that mimic the demonstrated behavior. Policies are described by $\mu \in \mathcal{G}$, where $\mathcal{G}$ is the space of all state-action frequency counts. In the case of deterministic planning, $\mu$ is simply an indicator variable denoting whether the state-action pair $s, a$ is encountered in the optimal policy. In the following, we use $\mathcal{M}$ both to denote a particular MDP, as well as to refer to the set of all state-action pairs in that MDP.

We hypothesize the existence of a base feature space $\mathcal{X}$ from which all other features are derived. A cost function over an MDP $\mathcal{M}$ is defined through this space as $c(f_{\mathcal{M}})$, where $f_{\mathcal{M}} : \mathcal{M} \to \mathcal{X}$ denotes a mapping from state-action pairs to points in base feature space, and $c$ is a cost function over $\mathcal{X}$. Intuitively, each state-action pair in the MDP has an associated feature vector, and the cost of that state-action pair is a function of that vector.

The input to the linear MMP algorithm is a set of training instances $\mathcal{D} = \{(\mathcal{M}_i, p_i, F_i, \mu_i, l_i)\}_{i=1}^n$. Each training instance consists of an MDP with transition probabilities $p_i$ and state-action pairs $(s_i, a_i) \in \mathcal{M}_i$ over which $d$-dimensional vectors of features mapped from the base feature space $\mathcal{X}$ are placed in the form of a $d \times |\mathcal{M}_i|$ feature matrix $F_i$. In linear MMP, $F_i$ is related to $c$ above by $[w^T F_i]_{s,a} = c(f_{\mathcal{M}_i}(s, a))$. $\mu_i$ denotes the desired trajectory (or full policy) that exemplifies behavior we hope to match. The loss vector $l_i$ is a vector on the state-action pairs that indicates the loss for failing to match the demonstrated trajectory $\mu_i$. Typically, in this work we use a simple loss function that is 0 on all states occupied in the example trajectory and 1 elsewhere. We use subscripts to denote indexing by training instance, and reserve superscripts for indexing into vectors. (E.g. $\mu_i^{s,a}$ is the expected state-action frequency for state $s$ and action $a$ of example $i$.) It is useful for some problems, such as robot path planning, to imagine representing the features as a set of maps and example paths through those maps. For instance, one feature map might indicate the elevation at each state, another the slope, and a third the presence of vegetation.

3 Theory

We discuss briefly the linear MMP regularized risk function as derived in (Ratliff et al., 2006) and provide the subgradient formula. We then present an intuitive and algorithmic exposition on the boosted version of this algorithm we use to learn a nonlinear cost function.
The precise derivation of this algorithm is available as an appendix to the extended version of the paper, which can be found on the authors' website.

3.1 The Maximum Margin Planning risk function

Crucial to the Maximum Margin Planning (MMP) approach is the development of a convex, but non-differentiable, regularized risk function for the general margin- or slack-scaled (Tsochantaridis et al., 2005) maximum margin structured prediction problem. In (Ratliff et al., 2006), the authors show that a subgradient descent procedure on this objective function can utilize efficient inference techniques, resulting in an algorithm that is tractable in both computation and memory for large problems. The risk function under this framework is

$$R(w) = \frac{1}{n} \sum_{i=1}^n \Big( w^T F_i \mu_i - \min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu \Big) + \frac{\lambda}{2} \|w\|^2,$$

which gives the following subgradient with respect to $w$:

$$g_w = \frac{1}{n} \sum_{i=1}^n F_i\, \Delta\mu_i + \lambda w.$$

Here $F_i$ is the current set of learned features over example $i$, $\mu^* = \arg\min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu$, and $\Delta\mu_i = \mu^* - \mu_i$. This latter expression points out that, intuitively, the subgradient compares the state-action visitation frequency counts between the example policy and the optimal policy with respect to the current reward function $w^T F_i$. The algorithm in its most basic form is given by the update rule $w_{t+1} \leftarrow w_t - \alpha_t g_t$, where $\{\alpha_t\}_{t=1}^\infty$ is a prespecified stepsize sequence and $g_t$ is a subgradient at the current timestep $t$. Note that computing the subgradient requires solving the problem $\mu^* = \arg\min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu$ for each MDP. This is precisely the problem of solving the particular MDP with the cost function $w^T F_i - l_i^T$, and it can be implemented efficiently via a myriad of specialized algorithms, such as A* in the context of planning.
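A minimal sketch of this update, assuming a planner `plan(costs)` that returns the state-action visitation frequencies of the minimum-cost policy (e.g. via A* for deterministic problems); all names here are illustrative placeholders rather than the authors' implementation:

```python
import numpy as np

def mmp_subgradient_step(w, F, mu_ex, l, plan, alpha, lam):
    """One step of w_{t+1} <- w_t - alpha_t g_t for the linear MMP risk.
    F[i]: d x |M_i| feature matrix; mu_ex[i]: example frequencies mu_i;
    l[i]: loss vector l_i; plan: MDP solver returning argmin visitation counts."""
    n = len(F)
    g = lam * w                                  # gradient of the regularizer
    for i in range(n):
        costs = w @ F[i] - l[i]                  # loss-augmented costs on (s, a)
        mu_star = plan(costs)                    # loss-augmented optimal policy
        g += F[i] @ (mu_star - mu_ex[i]) / n     # F_i * Delta(mu_i)
    return w - alpha * g
```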
3.2 Structured boosting of MMP

Maximum margin planning in its original formulation assumed the cost map is a linear function of a set of prespecified features. This is arguably the most restrictive assumption made in this framework. Similar to many machine learning algorithms, we find in practice that substantial effort is put into choosing these features well. In this section, we describe at an intuitive and algorithmic level a boosting procedure for learning a nonlinear function of our base features. For clarity of exposition, a full derivation in terms of the functional gradient descent view of boosting (Mason et al., 1999) is postponed to the appendix of the extended version of this paper (available from the authors' website). We encourage the reader to review this derivation as it differs in flavor from those previously seen in the literature in ways important to its application to general structured prediction problems. This gradient boosting framework serves as a reduction (Beygelzimer et al., 2005) from the problem of finding good features for structured prediction to a problem of simple classification. At a high level, this algorithm learns a new feature by learning a classifier that is best correlated with the changes we would like to have made to locally decrease the loss had we an infinite number of parameters at our disposal. In the case of MMPBoost, this forms the following algorithm, which is iterated (a code sketch is given at the end of this subsection):

- Fit the current model (using the current features) and compute the resulting loss-augmented cost map.
- Run the planner over this loss-augmented cost map to get the best loss-augmented path. Presumably, when the current feature set is not yet expressive enough, this path will differ significantly from the example path.
- Form positive examples by gathering the feature vectors encountered along this loss-augmented path, $\{(x^{(i)}_{\text{planned}}, 1)\}$, and form negative examples by gathering the feature vectors encountered along the example path, $\{(x^{(j)}_{\text{example}}, -1)\}$.
- Learn a classifier using this data set to generalize these suggestions to other points on the map.
- Apply this classifier to every cell of all example maps and add the result as a new feature to the feature matrix.

Figure 1: The four subimages to the left show (clockwise from upper left) a grayscale image used as base features for a hold-out region, the first boosted feature learned by boosted MMP for this region, the results of boosted MMP on an example over this region (example red, learned path green), and the best linear fit of this limited feature set. The plot on the right compares the boosting objective function value (red) and the loss on a hold-out set (blue) per boosting iteration between linear MMP (dashed) and boosted MMP (solid).

This simple procedure forms the MMPBoost algorithm. If the original set of features cannot correctly represent as a linear function the cost variation necessary to explain the decisions made by the trainer, this algorithm tries to find a new feature, as a nonlinear function of the original base set of features, that can best simultaneously raise the cost of the current erroneous path and lower the cost of the example path. Importantly, this function takes the form of a classifier that can generalize this information to each cell of every map. Adding this feature to the current feature set provides an incremental step toward explaining the decisions made in the example paths.
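A minimal sketch of one such round, under the same illustrative interfaces as before (`fit_mmp` for the linear solver, `plan` for the planner, and `train_classifier` for the weak learner, e.g. a small regression tree); this is a paraphrase of the steps above, not the authors' code:

```python
import numpy as np

def mmpboost_round(examples, base_feats, fit_mmp, plan, train_classifier):
    """One boosting round over examples = [(F_i, example_path_i, loss_i), ...];
    base_feats[s] is the base feature vector x of map cell s."""
    w = fit_mmp(examples)                           # 1. fit the current linear model
    X, y = [], []
    for F, path_ex, loss in examples:
        costs = w @ F - loss                        # loss-augmented cost map
        path_planned = plan(costs)                  # 2. best loss-augmented path
        X += [base_feats[s] for s in path_planned]  # 3. positives from planned path...
        y += [+1] * len(path_planned)
        X += [base_feats[s] for s in path_ex]       #    ...negatives from example path
        y += [-1] * len(path_ex)
    h = train_classifier(np.array(X), np.array(y))  # 4. generalize to the whole map
    return h  # 5. evaluate h on every cell and append it as a new row of each F_i
```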
4 Applications

In this section we demonstrate on three diverse problems how MMPBoost improves performance in navigation and planning tasks.

4.1 Imitation Learning for Path Planning

We first consider a problem of learning to imitate example paths drawn by humans on publicly available overhead imagery. In this experiment, a teacher demonstrates optimal paths between a set of start and goal points on the image, and we compare the performance of MMPBoost to that of a linear MMP algorithm in learning to imitate the behavior. The base features for this experiment consisted of the raw grayscale image, 5 Gaussian convolutions of it with standard deviations 1, 3, 5, 7, and 9, and a constant feature. Cost maps were created as a linear combination of these features in the case of MMP, and as a nonlinear function of these features in the case of MMPBoost. The planner being trained was an 8-connected implementation of A*. The results of these experiments are shown in Figure 1. The upper right panel on the left side of that figure shows the grayscale overhead image of the holdout region used for testing. The training region was similar in nature, but taken over a different location. The features are particularly difficult for MMP since the space of cost maps it considers for this problem consists of only linear combinations of the same image at different resolutions, e.g. imagine taking various blurred versions of an image and trying to combine them to make any reasonable cost map. The lower left panel on the left side of Figure 1 shows that the best cost map MMP was able to find within this space was largely just a map with uniformly high cost everywhere. The learned cost map was largely uninformative, causing the planner to choose the straight-line path between endpoints.

The lower right panel on the left side of Figure 1 shows the result of MMPBoost on this problem on a holdout image of an area similar to that on which we trained. In this instance, we used regression trees with 10 terminal nodes as our dictionary $\mathcal{H}$, and trained them on the base features to match the functional gradient as described in Section 3.2. Since MMPBoost searches through a space of nonlinear cost functions, it is able to perform significantly better than the linear MMP. Interestingly, the first feature it learned to explain the supervised behavior was to a large extent a road detection classifier. The right panel of Figure 1 compares plots of the objective value (red) and the loss on the holdout set (blue) per iteration between the linear MMP (dashed) and MMPBoost (solid). The first feature shown in Figure 1 is interesting in that it largely represents the result of a path detector. The boosting algorithm chooses positive examples along the example path, and negative examples along the loss-augmented path, which are largely disjoint from the example paths. Surprisingly, MMPBoost also outperformed linear MMP applied to additional features that were hand-engineered for this imagery. In principle, given example plans, MMPBoost can act as a sophisticated image processing technique to transform any overhead (e.g. satellite) image directly to a cost map with no human intervention and feature engineering.

Figure 2: Left: An example learned cost map for a narrow path through trees, showing the engineered system (blue line) wanting to take a shortcut to the goal by veering off into dense woods instead of staying on the path as the human demonstrated (green line). In the learned cost map, several boosted features combine to make the lowest-cost path (red line) match the human's preference of staying on the path. The robot is currently located in the center of the cost map. Right: A graph of the average A* path loss over the examples in each round of boosting. In just a few rounds the learned system exceeds the performance of the carefully engineered system.

4.2 Learning from Human Driving Demonstration

We next consider the problem of learning to mimic human driving of an autonomous robot in complex outdoor and off-road terrain. We assume that a coarse global planning problem has been solved using overhead imagery and the MMPBoost application presented above. Instead, we use MMPBoost to learn a local obstacle detection/avoidance system. We consider the local region around the vehicle's position at time t separately from the larger global environment. Our goal is to use the vehicle's onboard sensors to detect obstacles which were not visible in the overhead imagery or were not present when the imagery was collected. The onboard sensor suite used consists of ladar scanners to provide structural information about the environment, and color and NIR cameras to provide appearance information. From these sensors we compute a set of base features for each cell in a discretized 2-D map of the local area. These base features include quantities such as the estimated elevation and slope of the ground plane in the cell, and the average color and density of ladar points in the cell for various height ranges above the estimated ground plane.
As training data we use logged sensor data from several kilometers of teleoperation of a large mobile robot through a challenging outdoor environment by an experienced operator. In the previous example, the algorithm had access to both the complete path demonstrated by the teacher and the same input data (overhead image) the teacher used while generating the path. However, in this example not only is the input data different (since the teacher generally controls the robot from behind and to the side using their own prior knowledge of the environment and highly capable vision system), but we face the additional challenge of estimating the path planned by the teacher at a particular time step from the vehicle motion we observe in future time steps, when the teacher is using additional data. For this experiment we assume that the next 10 m of the path driven by the vehicle after time t matches the operator's intended path at time t, and only compute loss over that section of the path. In practice this means that we create a set of local examples from each teleoperated path by sampling the internal state of the robot at discrete points in time. At each time t we record the feature map generated by the robot's onboard sensors of the local 10 m radius area surrounding it, as well as the path the robot followed to the boundary of that area. Additionally, we model the operator's prior knowledge of the environment and their sensing of obstacles beyond the 10 m range by using our global planning solution to generate the minimum path costs from a set of points on the boundary of each local map to the global goal. The operator also attempted to match the range at which he reacted to obstacles not visible in the overhead data (such as vehicles that were placed in the robot's path) with the 10 m radius of the local map. An 8-connected variant of A* then chooses a path to one of the points on the boundary of the local map that minimizes the sum of costs accumulated along the path to the boundary point with the cost-to-goal from the boundary point to the goal. Using 8-terminal-node classification trees as our dictionary $\mathcal{H}$, we then apply the MMPBoost algorithm to determine transformations from base features to local costs so that the local trajectories executed by the human are chosen by the planner with large margin over all the other possible local trajectories.

The results of running MMPBoost on the 301 examples in our data set are compared in Figure 2 to the results given by the current human-engineered cost production system used on the robot. The engineered system is the result of many man-hours of parameter tuning over weeks of field testing. The learned system started with the engineered feature maps, and then boosted in additional features as necessary. After just a few iterations of boosting, the learned system displays significantly lower average loss than the engineered system, and corrects important navigational errors such as the one shown.

4.3 Learning a Fast Planner from a Slower One

Legged robots have unique capabilities not found in many mobile robots. In particular, they can step over or onto obstacles in their environment, allowing them to traverse complicated terrain. Algorithms have been developed which plan for foot placement in these environments, and have been successfully used on several biped robots (Chestnutt et al., 2005).
In these cases, the planner evaluates various steps the robot can execute to find a sequence of steps that is safe and is within the robot's capabilities. Another approach to legged robot navigation uses local techniques to reactively adjust foot placement while following a predefined path (Yagi & Lumelsky, 1999). This approach can fall into local minima or become stuck if the predefined path does not have valid footholds along its entire length. Footstep planners have been shown to produce very good footstep sequences, allowing legged robots to efficiently traverse a wide variety of terrain. This approach uses much of the robot's unique abilities, but is more computationally expensive than traditional mobile robot planners. Footstep planning occurs in a high-dimensional state space and therefore is often too computationally burdensome to be used for real-time replanning, limiting its scope of application to largely static environments. For most applications, the footstep planner implicitly solves a low-dimensional navigational problem simultaneously with the footstep placement problem. Using MMPBoost, we use body trajectories produced by the footstep planner to learn the nuances of this navigational problem in the form of a 2.5-dimensional navigational planner that can reproduce these trajectories. We are training a simple navigational planner to effectively reproduce the body trajectories that typically result from a sophisticated footstep planner. We could use the resulting navigation planner in combination with a reactive solution (as in (Yagi & Lumelsky, 1999)). Instead, we pursue a hybrid approach of using the resulting simple planner as a heuristic to guide the footstep planner. Using a 2-dimensional robot planner as a heuristic has been shown previously (Chestnutt et al., 2005) to dramatically improve planning performance, but the planner must be manually tuned to provide costs that serve as reasonable approximations of the true cost.

To combat these computational problems we focus on the heuristic, which largely defines the behavior of the A* planner. Poorly informed admissible heuristics can cause the planner to erroneously attempt numerous dead ends before happening upon the optimal solution. On the other hand, well-informed inadmissible heuristics can pull the planner quickly toward a solution whose cost, though suboptimal, is very close to the minimum. This lower-dimensional planner is then used in the heuristic to efficiently and intelligently guide the footstep planner toward the goal, effectively displacing a large portion of the computational burden.

Figure 3: Left is an image of the robot used for the quadruped experiments. The center pair of images shows a typical height map (top), and the corresponding learned cost map (bottom) from a holdout set of the biped planning experiments. Notice how platform-like regions are given low costs toward the center but higher costs toward the edges, and the learned features interact to form low-cost chutes that direct the planner through complicated regions. Right are two histograms showing the ratio distribution of the speed of both the admissible Euclidean (top) and the engineered heuristic (bottom) over an uninflated MMPBoost heuristic on a holdout set of 90 examples from the biped experiment. In both cases, the MMPBoost heuristic was uniformly better in terms of speed.
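A minimal sketch of this hybrid use, assuming an illustrative 4-connected grid and a projection `state_to_cell` from a footstep-planner state onto the 2-D map (both placeholders, not the authors' implementation): the learned cost map is swept once from the goal, and the resulting cost-to-go, optionally inflated, serves as the footstep planner's A* heuristic.

```python
import heapq
import numpy as np

def dijkstra_cost_to_go(cost_map, goal):
    """Cost-to-go over the learned 2-D cost map: one Dijkstra sweep from the goal."""
    h_map = np.full(cost_map.shape, np.inf)
    h_map[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        c, (x, y) = heapq.heappop(pq)
        if c > h_map[x, y]:
            continue                      # stale queue entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < cost_map.shape[0] and 0 <= ny < cost_map.shape[1]:
                nc = c + cost_map[nx, ny]
                if nc < h_map[nx, ny]:
                    h_map[nx, ny] = nc
                    heapq.heappush(pq, (nc, (nx, ny)))
    return h_map

def make_learned_heuristic(cost_map, goal, state_to_cell, inflation=2.5):
    """Wrap the cost-to-go as a heuristic h(s) for the footstep planner's A*;
    inflation=2.5 mirrors the 'inflated' configuration tested below."""
    h_map = dijkstra_cost_to_go(cost_map, goal)
    return lambda state: inflation * h_map[state_to_cell(state)]
```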
Figure 4: Statistics comparing the MMPBoost heuristic to both a Euclidean and a discrete navigational heuristic. See the text for descriptions of the values.

                               cost diff (mean, std)    speedup (mean, std)
  biped admissible
    MMPBoost vs Euclidean          0.91,   10.08          123.39,  270.97
    MMPBoost vs Engineered        -0.69,    6.70           20.31,   33.11
  biped inflated
    MMPBoost vs Euclidean          9.82,   11.78           10.55,   17.51
    MMPBoost vs Engineered         2.55,    6.82           11.26,   32.07
  biped best-first
    MMPBoost vs Euclidean       -609.66, 5315.03          272.99, 1601.62
    MMPBoost vs Engineered         3.42,   37.97            6.40,   17.85
  quadruped inflated
    MMPBoost vs Euclidean          3.69,    7.39            2.19,    2.24
    MMPBoost vs Engineered        -4.34,    8.93            3.51,    4.11

We demonstrate our results in both simulations and real-world experiments. Our procedure is to run a footstep planner over a series of randomly drawn two-dimensional terrain height maps that describe the world the robot is to traverse. The footstep planner produces trajectories of the robot from start to goal over the terrain map. We then apply MMPBoost, again using regression trees with 10 terminal nodes as the base classifier, to learn cost features and weights that turn height maps into cost functions so that a 2-dimensional planner over the cost map mimics the body trajectory. We apply the planner to two robots: first the HRP-2 biped robot and second the LittleDog² quadruped robot. The quadruped tests were demonstrated on the robot.³

Figure 4 shows the resulting computational speedups (and the performance gains) of planning with the learned MMPBoost heuristic over two previously implemented heuristics: a simple Euclidean heuristic that estimates the cost-to-go as the straight-line distance from the current state to the goal, and an alternative 2-dimensional navigational planner whose cost map was hand-engineered. We tested three different versions of the planning configuration: (1) no inflation, in which the heuristic is expected to give its best approximation of the exact cost, so that the heuristics are close to admissible (Euclidean is the only one that is truly admissible); (2) inflated, in which the heuristics are inflated by approximately 2.5 (this is the setting commonly used in practice for these planners); and (3) best-first search, in which search nodes are expanded solely based on their heuristic value. The cost diff column relates on average the extent to which the cost of planning under the MMPBoost heuristic is above or below the opposing heuristic; loosely speaking, this indicates how many more footsteps are taken under the MMPBoost heuristic, i.e. negative values support MMPBoost. The speedup column relates the average ratio of total nodes searched between the heuristics; in this case, large values are better, indicating the factor by which MMPBoost outperforms its competition.

² Boston Dynamics designed the robot and provided the motion capture system used in the tests.
³ A video demonstrating the robot walking across a terrain board is provided with this paper.

The most direct measure of heuristic performance arguably comes from the best-first search results. In this case, both the biped and quadruped planners using the learned heuristic significantly outperform their counterparts under a Euclidean heuristic.⁴ While Euclidean often gets stuck for long periods of time in local minima, both the learned heuristic and, to a lesser extent, the engineered heuristic are able to navigate efficiently around these pitfalls.
We note that A* biped performance gains were considerably higher: we believe this is because orientation plays a large role in planning for the quadruped.

4 The best-first quadruped planner under the MMPBOOST heuristic is on average approximately 1100 times faster than under the Euclidean heuristic in terms of the number of nodes searched.

5 Conclusions and Future Work

MMPBOOST combines the powerful ideas of structured prediction and functional gradient descent, enabling learning by demonstration for a wide variety of applications. Future work will include extending the learning of mobile robot path planning to more complex configuration spaces that allow for modeling of vehicle dynamics. Further, we will pursue applications of the gradient boosting approach to other problems of structured prediction.

Acknowledgments
The authors gratefully acknowledge the partial support of this research by the DARPA Learning for Locomotion and UPI contracts, and thank John Langford for enlightening conversations on reduction of structured learning problems.

References
Beygelzimer, A., Dani, V., Hayes, T., Langford, J., & Zadrozny, B. (2005). Error limiting reductions between classification tasks. ICML '05. New York, NY.
Chestnutt, J., Lau, M., Cheng, G., Kuffner, J., Hodgins, J., & Kanade, T. (2005). Footstep planning for the Honda ASIMO humanoid. Proceedings of the IEEE International Conference on Robotics and Automation.
Dietterich, T. G., Ashenfelter, A., & Bulatov, Y. (2004). Training conditional random fields via gradient tree boosting. ICML '04.
Friedman, J. H. (1999a). Greedy function approximation: A gradient boosting machine. Annals of Statistics.
Hassani, S. (1998). Mathematical physics. Springer.
Mason, L., Baxter, J., Bartlett, P., & Frean, M. (1999). Functional gradient techniques for combining hypotheses. Advances in Large Margin Classifiers. MIT Press.
Ratliff, N., Bagnell, J. A., & Zinkevich, M. (2006). Maximum margin planning. Twenty-Third International Conference on Machine Learning (ICML '06).
Taskar, B., Chatalbashev, V., Guestrin, C., & Koller, D. (2005). Learning structured prediction models: A large margin approach. Twenty-Second International Conference on Machine Learning (ICML '05).
Taskar, B., Guestrin, C., & Koller, D. (2003). Max-margin Markov networks. Advances in Neural Information Processing Systems (NIPS-14).
Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005). Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 1453–1484.
Yagi, M., & Lumelsky, V. (1999). Biped robot locomotion in scenes with unknown obstacles. Proceedings of the IEEE International Conference on Robotics and Automation (pp. 375–380). Detroit, MI.
Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation

Gavin C. Cawley
School of Computing Sciences
University of East Anglia
Norwich, Norfolk, NR4 7TJ, U.K.
gcc@cmp.uea.ac.uk

Nicola L. C. Talbot
School of Computing Sciences
University of East Anglia
Norwich, Norfolk, NR4 7TJ, U.K.
nlct@cmp.uea.ac.uk

Mark Girolami
Department of Computing Science
University of Glasgow
Glasgow, Scotland, G12 8QQ, U.K.
girolami@dcs.gla.ac.uk

Abstract
Multinomial logistic regression provides the standard penalised maximum-likelihood solution to multi-class pattern recognition problems. More recently, the development of sparse multinomial logistic regression models has found application in text processing and microarray classification, where explicit identification of the most informative features is of value. In this paper, we propose a sparse multinomial logistic regression method, in which the sparsity arises from the use of a Laplace prior, but where the usual regularisation parameter is integrated out analytically. Evaluation over a range of benchmark datasets reveals that this approach results in similar generalisation performance to that obtained using cross-validation, but at greatly reduced computational expense.

1 Introduction

Multinomial logistic and probit regression are perhaps the classic statistical methods for multi-class pattern recognition problems (for a detailed introduction, see e.g. [1, 2]). The output of a multinomial logistic regression model can be interpreted as an a-posteriori estimate of the probability that a pattern belongs to each of c disjoint classes. The probabilistic nature of the multinomial logistic regression model affords many practical advantages, such as the ability to set rejection thresholds [3], to accommodate unequal relative class frequencies in the training set and in operation [4], or to apply an appropriate loss matrix in making predictions that minimise the expected risk [5]. As a result, these models have been adopted in a diverse range of applications, including cancer classification [6, 7], text categorisation [8], analysis of DNA binding sites [9] and call routing. More recently, the focus of research has been on methods for inducing sparsity in (multinomial) logistic or probit regression models. In some applications, the identification of salient input features is of itself a valuable activity; for instance in cancer classification from micro-array gene expression data, the identification of biomarker genes, the pattern of expression of which is diagnostic of a particular form of cancer, may provide insight into the aetiology of the condition. In other applications, these methods are used to select a small number of basis functions to form a compact non-parametric classifier, from a set that may contain many thousands of candidate functions. In this case the sparsity is desirable for the purposes of computational expediency, rather than as an aid to understanding the data. A variety of methods have been explored that aim to introduce sparsity in non-parametric regression models through the incorporation of a penalty or regularisation term within the training criterion. In the context of least-squares regression using Radial Basis Function (RBF) networks, Orr [10] proposes the use of local regularisation, in which a weight-decay regularisation term is used with distinct regularisation parameters for each weight.
The optimisation of the Generalised Cross-Validation (GCV) score typically leads to the regularisation parameters for redundant basis functions achieving very high values, allowing them to be identified and pruned from the network (c.f. [11, 12]). The computational efficiency of this approach can be further improved via the use of Recursive Orthogonal Least Squares (ROLS). The relevance vector machine (RVM) [13] implements a form of Bayesian automatic relevance determination (ARD), using a separable Gaussian prior. In this case, the regularisation parameter for each weight is adjusted so as to maximise the marginal likelihood, also known as the Bayesian evidence for the model. An efficient component-wise training algorithm is given in [14]. An alternative approach, known as the LASSO [15], seeks to minimise the negative log-likelihood of the sample, subject to an upper bound on the sum of the absolute values of the weights (see also [16] for a practical training procedure). This strategy is equivalent to the use of a Laplace prior over the model parameters [17], which has been demonstrated to control over-fitting and induce sparsity in the weights of multi-layer perceptron networks [18]. The equivalence of the Laplace prior and a separable Gaussian prior (with appropriate choice of regularisation parameters) has been established by Grandvalet [11, 12], unifying these strands of research. In this paper, we demonstrate that, in the case of the Laplace prior, the regularisation parameters can be integrated out analytically, obviating the need for a lengthy cross-validation based model selection stage. The resulting sparse multinomial logistic regression algorithm with Bayesian regularisation (SBMLR) is then fully automated and, having storage requirements that scale only linearly with the number of model parameters, is well suited to relatively large-scale applications. The remainder of this paper is set out as follows: the sparse multinomial logistic regression procedure with Bayesian regularisation is presented in Section 2. The proposed algorithm is then evaluated against competing approaches over a range of benchmark learning problems in Section 3. Finally, the work is summarised in Section 5 and conclusions drawn.

2 Method

Let D = {(x_n, t_n)}_{n=1}^ℓ represent the training sample, where x_n ∈ X ⊂ R^d is the vector of input features for the nth example, and t_n ∈ T = { t | t ∈ {0,1}^c, ‖t‖_1 = 1 } is the corresponding vector of desired outputs, using the usual 1-of-c coding scheme. Multinomial logistic regression constructs a generalised linear model [1] with a softmax inverse link function [19], allowing the outputs to be interpreted as a-posteriori estimates of the probabilities of class membership,

p(t_i^n | x_n) = y_i^n = exp{a_i^n} / Σ_{j=1}^c exp{a_j^n},  where  a_i^n = Σ_{j=1}^d w_ij x_j^n.   (1)

Assuming that D represents an i.i.d. sample from a conditional multinomial distribution, the negative log-likelihood, used as a measure of the data misfit, can be written as

E_D = Σ_{n=1}^ℓ E_D^n = − Σ_{n=1}^ℓ Σ_{i=1}^c t_i^n log{y_i^n}.

The parameters w of the multinomial logistic regression model are given by the minimiser of a penalised maximum-likelihood training criterion,

L = E_D + λ E_W,  where  E_W = Σ_{i=1}^c Σ_{j=1}^d |w_ij|   (2)

and λ is a regularisation parameter [20] controlling the bias-variance trade-off [21].
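To make the model and training criterion concrete, the following sketch (Python with NumPy; the function and variable names are ours, not the paper's) evaluates the softmax outputs of equation (1) and the penalised criterion L of equation (2) for a given weight matrix.

```python
import numpy as np

def softmax_outputs(W, X):
    """Class-membership probabilities y_i^n of equation (1).

    W : (c, d) weight matrix;  X : (ell, d) inputs, one row per example.
    Returns an (ell, c) matrix of posterior probabilities.
    """
    A = X @ W.T                               # activations a_i^n
    A -= A.max(axis=1, keepdims=True)         # stabilise the exponentials
    expA = np.exp(A)
    return expA / expA.sum(axis=1, keepdims=True)

def penalised_criterion(W, X, T, lam):
    """L = E_D + lambda * E_W of equation (2); T is 1-of-c coded, shape (ell, c)."""
    Y = softmax_outputs(W, X)
    E_D = -np.sum(T * np.log(Y + 1e-12))      # negative log-likelihood
    E_W = np.abs(W).sum()                     # L1 penalty over all weights
    return E_D + lam * E_W
```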
At a minimum of L, the partial derivatives of L with respect to the model parameters will be uniformly zero, giving

|∂E_D/∂w_ij| = λ  if |w_ij| > 0  and  |∂E_D/∂w_ij| < λ  if |w_ij| = 0.

This implies that if the sensitivity of the negative log-likelihood with respect to a model parameter, w_ij, falls below λ, then the value of that parameter will be set exactly to zero and the corresponding input feature can be pruned from the model.

2.1 Eliminating the Regularisation Parameters

Minimisation of (2) has a straight-forward Bayesian interpretation; the posterior distribution for w, the parameters of the model given by (1), can be written as p(w|D) ∝ P(D|w)P(w). L is then, up to an additive constant, the negative logarithm of the posterior density. The prior over model parameters, w, is then given by a separable Laplace distribution,

P(w) = (λ/2)^W exp{−λ E_W} = Π_{i=1}^W (λ/2) exp{−λ |w_i|},   (3)

where W is the number of active (non-zero) model parameters. A good value for the regularisation parameter λ can be estimated, within a Bayesian framework, by maximising the evidence [22]; alternatively it may be integrated out analytically [17, 23]. Here we take the latter approach, where the prior distribution over model parameters is given by marginalising over λ,

p(w) = ∫ p(w|λ) p(λ) dλ.

As λ is a scale parameter, an appropriate ignorance prior is given by the improper Jeffreys prior, p(λ) ∝ 1/λ, corresponding to a uniform prior over log λ. Substituting equation (3) and noting that λ is strictly positive,

p(w) = (1/2^W) ∫_0^∞ λ^{W−1} exp{−λ E_W} dλ.

Using the Gamma integral, ∫_0^∞ x^{ν−1} e^{−μx} dx = Γ(ν)/μ^ν [24, equation 3.384], we obtain

p(w) = Γ(W) / (2^W E_W^W)  ⟹  −log p(w) ∝ W log E_W,

giving a revised optimisation criterion for sparse logistic regression with Bayesian regularisation,

M = E_D + W log E_W,   (4)

in which the regularisation parameter has been eliminated; for further details and theoretical justification, see [17]. Note that we integrate out the regularisation parameter and optimise the model parameters, which is unusual in that most Bayesian approaches, such as the relevance vector machine [13], optimise the regularisation parameters and integrate over the weights.

2.1.1 Practical Implementation

The training criterion incorporating a fully Bayesian regularisation term can be minimised via a simple modification of existing cyclic co-ordinate descent algorithms for sparse regression using a Laplace prior (e.g. [25, 26]). Differentiating the original and modified training criteria, (2) and (4) respectively, we have that

∇L = ∇E_D + λ ∇E_W  and  ∇M = ∇E_D + λ̃ ∇E_W,  where  1/λ̃ = (1/W) Σ_{i=1}^W |w_i|.   (5)

From a gradient descent perspective, minimising M effectively becomes equivalent to minimising L, assuming that the regularisation parameter is continuously updated according to (5) following every change in the vector of model parameters, w [17]. This requires only a very minor modification of the existing training algorithm, whilst eliminating the only training parameter and hence the need for a model selection procedure in fitting the model.
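A minimal sketch of the integrate-out update, under the same definitions as above: after every change to the weight vector, the effective regularisation parameter of equation (5) is recomputed from the active weights, so that gradient steps on L mimic steps on M. The helper names are ours.

```python
import numpy as np

def effective_lambda(W):
    """Equation (5): 1/lambda = (1/W) * sum_i |w_i|, over active (non-zero) weights."""
    active = np.abs(W[np.abs(W) > 0.0])
    if active.size == 0:
        return 0.0                     # degenerate case: no active weights
    return active.size / active.sum()

def bayesian_criterion(E_D, W):
    """M = E_D + W log E_W of equation (4); W (count) is the number of active weights."""
    active = np.abs(W[np.abs(W) > 0.0])
    if active.size == 0:
        return E_D
    return E_D + active.size * np.log(active.sum())
```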
2.1.2 Equivalence of Marginalisation and Optimisation under the Evidence Framework

Williams [17] notes that, at least in the case of the Laplace prior, integrating out the regularisation parameter analytically is equivalent to its optimisation under the evidence framework of MacKay [22]. The argument provided by Williams can be summarised as follows. The evidence framework sets the value of the regularisation parameter so as to optimise the marginal likelihood,

P(D) = ∫ P(D|w) P(w) dw,

also known as the evidence for the model. The Bayesian interpretation of the regularised objective function gives

P(D) = (1/Z_W) ∫ exp{−L} dw,

where Z_W is a normalising constant for the prior over the model parameters; for the Laplace prior, Z_W = (2/λ)^W. In the case of multinomial logistic regression, E_D represents the negative logarithm of a normalised distribution, and so the corresponding normalising constant for the data-misfit term is redundant. Unfortunately this integral is analytically intractable, and so we adopt the Laplace approximation, corresponding to a Gaussian posterior distribution for the model parameters, centred on their most probable value, w_MP,

L(w) ≈ L(w_MP) + (1/2)(w − w_MP)ᵀ A (w − w_MP),

where A = ∇∇L is the Hessian of the regularised objective function. The regulariser corresponding to the Laplace prior is locally a hyper-plane, and so does not contribute to the Hessian; hence A = ∇∇E_D. The negative logarithm of the evidence can then be written as

−log P(D) = E_D + λ E_W + (1/2) log|A| + log Z_W + constant.

Setting the derivative of the evidence with respect to λ to zero gives rise to a simple update rule for the regularisation parameter,

1/λ = (1/W) Σ_{j=1}^W |w_j|,

which is equivalent to the update rule obtained using the integrate-out approach. Maximising the evidence for the model also provides a convenient means for model selection. Using the Laplace approximation, the evidence for a multinomial logistic regression model under the proposed Bayesian regularisation scheme is given by

−log P(D) = E_D + W log E_W − log( Γ(W) / 2^W ) + (1/2) log|A| + constant,

where A = ∇∇L.

2.2 A Simple but Efficient Training Algorithm

In this study, we adopt a simplified version of the efficient component-wise training algorithm of Shevade and Keerthi [25], adapted for multinomial, rather than binomial, logistic regression. The principal advantage of a component-wise optimisation algorithm is that the Hessian matrix is not required, but only the first and second partial derivatives of the regularised training criterion. The first partial derivatives of the data-misfit term are given by

∂E_D^n/∂a_j^n = Σ_{i=1}^c (∂E_D^n/∂y_i^n)(∂y_i^n/∂a_j^n),  where  ∂E_D^n/∂y_i^n = −t_i^n / y_i^n  and  ∂y_i^n/∂a_j^n = y_i δ_ij − y_i y_j,

with δ_ij = 1 if i = j and otherwise δ_ij = 0. Substituting, we obtain

∂E_D/∂a_i = Σ_{n=1}^ℓ [y_i^n − t_i^n]  ⟹  ∂E_D/∂w_ij = Σ_{n=1}^ℓ [y_i^n − t_i^n] x_j^n = Σ_{n=1}^ℓ y_i^n x_j^n − Σ_{n=1}^ℓ t_i^n x_j^n.

Similarly, the second partial derivatives are given by

∂²E_D/∂w_ij² = Σ_{n=1}^ℓ (∂y_i^n/∂w_ij) x_j^n = Σ_{n=1}^ℓ y_i^n (1 − y_i^n) (x_j^n)².

The Laplace regulariser is locally a hyperplane, with the magnitude of the gradient given by the regularisation parameter, λ,

∂E_W/∂w_ij = sign{w_ij}  and  ∂²E_W/∂w_ij² = 0.

The partial derivatives of the regularisation term are not defined at the origin, and so we define the effective gradient of the regularised loss function as follows:

∂L/∂w_ij =
  ∂E_D/∂w_ij + λ   if w_ij > 0
  ∂E_D/∂w_ij − λ   if w_ij < 0
  ∂E_D/∂w_ij + λ   if w_ij = 0 and ∂E_D/∂w_ij + λ < 0
  ∂E_D/∂w_ij − λ   if w_ij = 0 and ∂E_D/∂w_ij − λ > 0
  0                 otherwise

Note that the value of a weight may be stable at zero if the derivative of the regularisation term dominates the derivative of the data misfit. The parameters of the model may then be optimised using Newton's method, i.e.

w_ij ← w_ij − [∂²E_D/∂w_ij²]⁻¹ ∂E_D/∂w_ij.

Any step that causes a change of sign in a model parameter is truncated, and that parameter is set to zero.
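The single-coordinate update can be sketched as follows; this is our simplified rendering, not the authors' code, and it folds the regularisation term into the Newton numerator via the effective gradient, which is one common way to realise the truncated step just described.

```python
import numpy as np

def effective_gradient(w, g, lam):
    """Piecewise effective gradient of L for a single weight.

    w   : current weight value
    g   : dE_D/dw at the current point
    lam : regularisation parameter lambda
    """
    if w > 0.0:
        return g + lam
    if w < 0.0:
        return g - lam
    if g + lam < 0.0:              # weight at zero but pulled positive
        return g + lam
    if g - lam > 0.0:              # weight at zero but pulled negative
        return g - lam
    return 0.0                     # stable at zero

def coordinate_step(w, g, h, lam):
    """One truncated Newton step on weight w; h is d^2 E_D / dw^2 (assumed > 0)."""
    eg = effective_gradient(w, g, lam)
    if eg == 0.0:
        return w
    w_new = w - eg / h
    if w != 0.0 and np.sign(w_new) != np.sign(w):
        return 0.0                 # truncate any step that crosses the origin
    return w_new
```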
All that remains is to decide on a heuristic used to select the parameter to be optimised in each step. In this study, we adopt the heuristic chosen by Shevade and Keerthi, in which the parameter having the steepest gradient is selected in each iteration. The optimisation proceeds using two nested loops: in the inner loop, only active parameters are considered; if no further progress can be made by optimising active parameters, the search is extended to parameters that are currently set to zero. An optimisation strategy based on scaled conjugate gradient descent [27] has also been found to be effective.

3 Results

The proposed sparse multinomial logistic regression method incorporating Bayesian regularisation using a Laplace prior (SBMLR) was evaluated over a suite of well-known benchmark datasets, against sparse multinomial logistic regression with five-fold cross-validation based optimisation of the regularisation parameter using a simple line search (SMLR). Table 1 shows the test error rate and cross-entropy statistics for the SMLR and SBMLR methods over these datasets. Clearly, there is little reason to prefer either model over the other in terms of generalisation performance, as neither consistently dominates the other, either in terms of error rate or cross-entropy. Table 1 also shows that the Bayesian regularisation scheme results in models with a slightly higher degree of sparsity (i.e. the proportion of weights pruned from the model). However, the most striking aspect of the comparison is that the Bayesian regularisation scheme is typically around two orders of magnitude faster than the cross-validation based approach, with SBMLR being approximately five times faster in the worst case (COVTYPE).

Table 1: Evaluation of linear sparse multinomial logistic regression methods over a set of nine benchmark datasets. The best results for each statistic are shown in bold. The final column shows the logarithm of the ratio of the training times for SMLR and SBMLR, such that a value of 2 would indicate that SBMLR is 100 times faster than SMLR for a given benchmark dataset.

Benchmark    Error Rate        Cross-Entropy     Sparsity          log10(T_SMLR / T_SBMLR)
             SBMLR    SMLR     SBMLR    SMLR     SBMLR    SMLR
Covtype      0.4051   0.4041   0.9590   0.9733   0.4312   0.3069   0.6965
Crabs        0.0350   0.0500   0.1075   0.0891   0.2708   0.0635   2.7949
Glass        0.3318   0.3224   0.9398   0.9912   0.4400   0.4700   1.9445
Iris         0.0267   0.0267   0.0792   0.0867   0.4067   0.4067   1.9802
Isolet       0.0475   0.0513   0.1858   0.2641   0.9311   0.8598   1.3110
Satimage     0.1610   0.1600   0.3717   0.3708   0.3694   0.2747   1.3083
Viruses      0.0328   0.0328   0.1670   0.1168   0.8118   0.7632   2.1118
Waveform     0.1290   0.1302   0.3124   0.3131   0.3712   0.3939   1.8133
Wine         0.0225   0.0281   0.0827   0.0825   0.6071   0.5524   2.5541

3.1 The Value of Probabilistic Classification

Probabilistic classifiers, i.e. those that provide an a-posteriori estimate of the probability of class membership, can be used in minimum-risk classification, using an appropriate loss matrix to account for the relative costs of different types of error. Probabilistic classifiers also allow rejection thresholds to be set in a straightforward manner. This is particularly useful in a medical setting, where it may be prudent to refer a patient for further tests if the diagnosis is uncertain. Finally, the output of a probabilistic classifier can be adjusted after training to compensate for a difference between the relative class frequencies in the training set and those observed in operation. Saerens [4] provides a simple expectation-maximisation (EM) based procedure for estimating unknown operational a-priori probabilities from the output of a probabilistic classifier (c.f. [28]).
Let p_t(C_i) represent the a-priori probability of class C_i in the training set and p_t(C_i | x^n) represent the raw output of the classifier for the nth pattern of the test data (representing operational conditions). The operational a-priori probabilities, p_o(C_i), can then be updated iteratively via

p_o^(s)(ω_i | x^n) = [ (p_o^(s)(ω_i) / p_t(ω_i)) p_t(ω_i | x^n) ] / [ Σ_{j=1}^c (p_o^(s)(ω_j) / p_t(ω_j)) p_t(ω_j | x^n) ]

and

p_o^(s+1)(ω_i) = (1/N) Σ_{n=1}^N p_o^(s)(ω_i | x^n),   (6)

beginning with p_o^(0)(C_i) = p_t(C_i). Note that the labels of the test examples are not required for this procedure. The adjusted estimates of a-posteriori probability are then given by the first part of equation (6). The training and validation sets of the COVTYPE benchmark have been artificially balanced, by random sampling, so that each class is represented by the same number of examples. The test set consists of the unused patterns, and so the test-set a-priori probabilities are both highly disparate and very different from the training-set a-priori probabilities. Figure 1 and Table 2 summarise the results obtained using the raw and corrected outputs of a linear SBMLR model on this dataset, clearly demonstrating a key advantage of probabilistic classifiers over purely discriminative methods, for example the support vector machine (note that the same procedure could be applied to the SMLR model with similar results).

Table 2: Error rate and average cross-entropy score for linear SBMLR models of the COVTYPE benchmark, using the raw and corrected outputs.

Statistic        Raw       Corrected
Error Rate       40.51%    28.57%
Cross-Entropy    0.9590    0.6567

Figure 1: Training set, test set and estimated a-priori probabilities for the COVTYPE benchmark.
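Equation (6) is straightforward to implement; the sketch below (our own rendering, with argument names of our choosing) iterates the two updates on a matrix of raw test-set posteriors.

```python
import numpy as np

def correct_priors(P_t, pri_train, n_iter=100):
    """EM estimate of operational priors from raw posteriors (equation (6)).

    P_t       : (N, c) raw posteriors p_t(omega_i | x^n) on the test data
    pri_train : (c,) training-set priors p_t(omega_i)
    Returns (corrected posteriors, estimated operational priors).
    """
    pri_op = pri_train.copy()                  # p_o^(0) = training-set priors
    P_o = P_t.copy()
    for _ in range(n_iter):
        ratio = pri_op / pri_train             # p_o^(s)(omega_i) / p_t(omega_i)
        P_o = P_t * ratio                      # unnormalised corrected posteriors
        P_o /= P_o.sum(axis=1, keepdims=True)  # normalise over the c classes
        pri_op = P_o.mean(axis=0)              # p_o^(s+1)(omega_i)
    return P_o, pri_op
```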
4 Relationship to Existing Work

The sparsity-inducing Laplace density has been utilised previously in [15, 25, 26, 29–31] and emerges as the marginal of a scale-mixture-of-Gaussians where the corresponding prior is an Exponential, such that

∫ N_w(0, τ) E_τ(γ) dτ = (λ/2) exp(−λ|w|),

where E_τ(γ) is an Exponential distribution over τ with parameter γ and λ = √(2γ). In [29] this hierarchical representation of the Laplace prior is utilised to develop an EM-style sparse binomial probit regression algorithm. The hyper-parameter γ is selected via cross-validation, but in an attempt to circumvent this requirement a Jeffreys prior is placed on τ and is used to replace the Exponential distribution in the above integral. This yields an improper, parameter-free prior distribution over w which removes the explicit requirement to perform any cross-validation. However, the method developed in [29] is restricted to binary classification and has compute scaling O(d³), which prohibits its use on moderately high-dimensional problems. Likewise, in [13] the RVM employs a similar scale mixture for the prior, where now the Exponential distribution is replaced by a Gamma distribution whose marginal yields a Student prior distribution. No attempt is made to estimate the associated hyper-parameters and these are typically set to zero, producing, as in [29], a sparsity-inducing improper prior. As with [29], the original scaling of [13] is, at worst, O(d³), though more efficient methods have been developed in [14]. However, the analysis holds only for a binary classifier and it would be non-trivial to extend this to the multi-class domain. A similar multinomial logistic regression model to the one proposed in this paper is employed in [26], where the algorithm is applied to large-scale classification problems, and yet the authors, as with [25], have to resort to cross-validation in obtaining a value for the hyper-parameters of the Laplace prior.

5 Summary

In this paper we have demonstrated that the regularisation parameter used in sparse multinomial logistic regression with a Laplace prior can be integrated out analytically, giving similar performance in terms of generalisation as is obtained using extensive cross-validation based model selection, but at a greatly reduced computational expense. It is interesting to note that SBMLR implements a strategy that is exactly the opposite of the relevance vector machine (RVM) [13], in that it integrates over the hyper-parameter and optimises the weights, rather than marginalising the model parameters and optimising the hyper-parameters. It seems reasonable to suggest that this approach is feasible in the case of the Laplace prior as the pruning action of this prior ensures that the values of all of the weights are strongly determined by the data-misfit term. A similar strategy has already proved effective in cancer classification based on gene expression microarray data in a binomial setting [32], and we plan to extend this work to multi-class cancer classification in the near future.

Acknowledgements
The authors thank the anonymous reviewers for their helpful and constructive comments. MG is supported by EPSRC grant EP/C010620/1.

References
[1] P. McCullagh and J. A. Nelder. Generalized linear models, volume 37 of Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, second edition, 1989.
[2] D. W. Hosmer and S. Lemeshow. Applied logistic regression. Wiley, second edition, 2000.
[3] C. K. Chow. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41–46, January 1970.
[4] M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1):21–41, 2001.
[5] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer Series in Statistics. Springer, second edition, 1985.
[6] J. Zhu and T. Hastie. Classification of gene microarrays by penalized logistic regression. Biostatistics, 5(3):427–443, 2004.
[7] X. Zhou, X. Wang, and E. R. Dougherty. Multi-class cancer classification using multinomial probit regression with Bayesian gene selection. IEE Proceedings - Systems Biology, 153(2):70–76, March 2006.
[8] T. Zhang and F. J. Oles. Text categorization based on regularised linear classification methods. Information Retrieval, 4(1):5–31, April 2001.
[9] L. Narlikar and A. J. Hartemink. Sequence features of DNA binding sites reveal structural class of associated transcription factor. Bioinformatics, 22(2):157–163, 2006.
[10] M. J. L. Orr. Regularisation in the selection of radial basis function centres. Neural Computation, 7(3):606–623, 1995.
[11] Y. Grandvalet. Least absolute shrinkage is equivalent to quadratic penalisation. In L. Niklasson, M. Bodén, and T. Ziemske, editors, Proceedings of the International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 201–206, Skövde, Sweden, September 2–4 1998. Springer.
[12] Y. Grandvalet and S. Canu. Outcomes of the equivalence of adaptive ridge with least absolute shrinkage. In Advances in Neural Information Processing Systems, volume 11. MIT Press, 1999.
[13] M. E. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. Journal of Machine Learning Research, 1:211–244, 2001.
[14] A. C. Faul and M. E. Tipping. Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003.
[15] R. Tibshirani. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society - Series B, 58:267–288, 1996.
[16] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[17] P. M. Williams. Bayesian regularization and pruning using a Laplace prior. Neural Computation, 7(1):117–143, 1995.
[18] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.
[19] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman Soulié and J. Hérault, editors, Neurocomputing: Algorithms, architectures and applications, pages 227–236. Springer-Verlag, New York, 1990.
[20] A. N. Tikhonov and V. Y. Arsenin. Solutions of ill-posed problems. John Wiley, New York, 1977.
[21] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
[22] D. J. C. MacKay. The evidence framework applied to classification networks. Neural Computation, 4(5):720–736, 1992.
[23] W. L. Buntine and A. S. Weigend. Bayesian back-propagation. Complex Systems, 5:603–643, 1991.
[24] I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series and Products. Academic Press, fifth edition, 1994.
[25] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246–2253, 2003.
[26] D. Madigan, A. Genkin, D. D. Lewis, and D. Fradkin. Bayesian multinomial logistic regression for author identification. In AIP Conference Proceedings, volume 803, pages 509–516, 2005.
[27] P. M. Williams. A Marquardt algorithm for choosing the step size in backpropagation learning with conjugate gradients. Technical Report CSRP-229, University of Sussex, February 1991.
[28] G. J. McLachlan. Discriminant analysis and statistical pattern recognition. Wiley, 1992.
[29] M. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1150–1159, September 2003.
[30] B. Krishnapuram, L. Carin, M. A. T. Figueiredo, and A. J. Hartemink. Sparse multinomial logistic regression: Fast algorithms and generalisation bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6):957–968, June 2005.
[31] J. M. Bioucas-Dias, M. A. T. Figueiredo, and J. P. Oliveira. Adaptive total variation image deconvolution: A majorization-minimization approach. In Proceedings of the European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, September 2006.
[32] G. C. Cawley and N. L. C. Talbot. Gene selection in cancer classification using sparse logistic regression with Bayesian regularisation. Bioinformatics, 22(19):2348–2355, October 2006.
In-Network PCA and Anomaly Detection

Ling Huang
University of California
Berkeley, CA 94720
hling@cs.berkeley.edu

Michael I. Jordan
University of California
Berkeley, CA 94720
jordan@cs.berkeley.edu

XuanLong Nguyen
University of California
Berkeley, CA 94720
xuanlong@cs.berkeley.edu

Anthony Joseph
University of California
Berkeley, CA 94720
adj@cs.berkeley.edu

Minos Garofalakis
Intel Research
Berkeley, CA 94704
minos.garofalakis@intel.com

Nina Taft
Intel Research
Berkeley, CA 94704
nina.taft@intel.com

Abstract
We consider the problem of network anomaly detection in large distributed systems. In this setting, Principal Component Analysis (PCA) has been proposed as a method for discovering anomalies by continuously tracking the projection of the data onto a residual subspace. This method was shown to work well empirically in highly aggregated networks, that is, those with a limited number of large nodes and at coarse time scales. This approach, however, has scalability limitations. To overcome these limitations, we develop a PCA-based anomaly detector in which adaptive local data filters send to a coordinator just enough data to enable accurate global detection. Our method is based on a stochastic matrix perturbation analysis that characterizes the tradeoff between the accuracy of anomaly detection and the amount of data communicated over the network.

1 Introduction

The area of distributed computing systems provides a promising domain for applications of machine learning methods. One of the most interesting aspects of such applications is that learning algorithms that are embedded in a distributed computing infrastructure are themselves part of that infrastructure and must respect its inherent local computing constraints (e.g., constraints on bandwidth, latency, reliability, etc.), while attempting to aggregate information across the infrastructure so as to improve system performance (or availability) in a global sense. Consider, for example, the problem of detecting anomalies in a wide-area network. While it is straightforward to embed learning algorithms at local nodes to attempt to detect node-level anomalies, these anomalies may not be indicative of network-level problems. Indeed, in recent work, [8] demonstrated a useful role for Principal Component Analysis (PCA) in detecting network anomalies. They showed that the minor components of PCA (the subspace obtained after removing the components with the largest eigenvalues) revealed anomalies that were not detectable in any single node-level trace. This work assumed an environment in which all the data is continuously pushed to a central site for off-line analysis. Such a solution can scale neither to networks with a large number of monitors nor to networks seeking to track and detect anomalies at very small time scales. Designing scalable solutions presents several challenges. Viable solutions need to process data "in-network" to intelligently control the frequency and size of data communications. The key underlying problem is that of developing a mathematical understanding of how to trade off quantization arising from local data filtering against fidelity of the detection analysis. We also need to understand how this tradeoff impacts overall detection accuracy. Finally, the implementation needs to be simple if it is to have impact on developers.
In this paper, we present a simple algorithmic framework for network-wide anomaly detection that relies on distributed tracking combined with approximate PCA analysis, together with supporting theoretical analysis. In brief, the architecture involves a set of local monitors that maintain parameterized sliding filters. These sliding filters yield quantized data streams that are sent to a coordinator. The coordinator makes global decisions based on these quantized data streams. We use stochastic matrix perturbation theory both to assess the impact of quantization on the accuracy of anomaly detection, and to design a method that selects filter parameters in a way that bounds the detection error. The combination of our theoretical tools and local filtering strategies results in an in-network tracking algorithm that can achieve high detection accuracy with low communication overhead; for instance, our experiments show that, by choosing a relative eigen-error of 1.5% (yielding, approximately, a 4% missed detection rate and a 6% false alarm rate), we can filter out more than 90% of the traffic from the original signal.

Prior Work. The original work on a PCA-based method by Lakhina et al. [8] has been extended by [17], who show how to infer network anomalies in both spatial and temporal domains. As with [8], this work is completely centralized. [14] and [1] propose distributed PCA algorithms distributed across blocks of rows or columns of the data matrix; however, these methods are not applicable to our case. Furthermore, neither [14] nor [1] addresses the issue of continuously tracking principal components within a given error tolerance or the issue of implementing a communication/accuracy tradeoff; these issues are the main focus of our work. Other initiatives in distributed monitoring, profiling and anomaly detection aim to share information and foster collaboration between widely distributed monitoring boxes to offer improvements over isolated systems [12, 16]. Work in [2, 10] posits the need for scalable detection of network attacks and intrusions. In the setting of simpler statistics such as sums and counts, in-network detection methods related to ours have been explored by [6]. Finally, recent work in the machine learning literature considers distributed constraints in learning algorithms such as kernel-based classification [11] and graphical model inference [7] (see [13] for a survey).

2 Problem description and background

We consider a monitoring system comprising a set of local monitor nodes M_1, ..., M_n, each of which collects a locally-observed time-series data stream (Fig. 1(a)). For instance, the monitors may collect information on the number of TCP connection requests per second, the number of DNS transactions per minute, or the volume of traffic at port 80 per second. A central coordinator node aims to continuously monitor the global collection of time series, and make global decisions such as those concerning matters of network-wide health. Although our methodology is generally applicable, in this paper we focus on the particular application of detecting volume anomalies. A volume anomaly refers to unusual traffic load levels in a network that are caused by events such as worms, distributed denial-of-service attacks, device failures, misconfigurations, and so on. Each monitor collects a new data point at every time step and, assuming a naive "continuous push" protocol, sends the new point to the coordinator.
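Under this naive protocol, the coordinator's bookkeeping, described in detail in the next paragraph, amounts to a sliding m × n window of measurements; a minimal sketch, with class and method names of our choosing:

```python
from collections import deque
import numpy as np

class Coordinator:
    """Keeps the m most recent n-dimensional measurement vectors as rows of Y."""

    def __init__(self, m, n):
        self.window = deque(maxlen=m)   # oldest rows fall off automatically
        self.n = n

    def push(self, y):
        """Receive one time step's measurements y = (y_1, ..., y_n)."""
        assert len(y) == self.n
        self.window.append(np.asarray(y, dtype=float))

    def data_matrix(self):
        """Assemble the (up to) m x n matrix Y used for global decisions."""
        if not self.window:
            return np.empty((0, self.n))
        return np.vstack(self.window)
```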
Based on these updates, the coordinator keeps track of a sliding time window of size m (i.e., the m most recent data points) for each monitor time series, organized into a matrix Y of size m × n (where the ith column Y_i captures the data from monitor i; see Fig. 1(a)). The coordinator then makes its decisions based solely on this (global) Y matrix. In the network-wide volume anomaly detection algorithm of [8], the local monitors measure the total volume of traffic (in bytes) on each network link, and periodically (e.g., every 5 minutes) centralize the data by pushing all recent measurements to the coordinator. The coordinator then performs PCA on the assembled Y matrix to detect volume anomalies. This method has been shown to work remarkably well, presumably due to the inherently low-dimensional nature of the underlying data [9]. However, such a "periodic push" approach suffers from inherent limitations: to ensure fast detection, the update periods should be relatively small; unfortunately, small periods also imply increased monitoring communication overheads, which may very well be unnecessary (e.g., if there are no significant local changes across periods). Instead, in our work, we study how the monitors can effectively filter their time-series updates, sending as little data as possible, yet enough so as to allow the coordinator to make global decisions accurately. We provide analytical bounds on the errors that occur because decisions are made with incomplete data, and explore the tradeoff between reducing data transmissions (communication overhead) and decision accuracy.

Figure 1: (a) The distributed monitoring system; (b) a data sample (‖y‖²) collected over one week (top) and its projection onto the residual subspace (bottom). The dashed line represents a threshold for anomaly detection.

Using PCA for centralized volume anomaly detection. As observed by Lakhina et al. [8], due to the high level of traffic aggregation on ISP backbone links, volume anomalies can often go unnoticed by being "buried" within normal traffic patterns (e.g., the circled points shown in the top plot of Fig. 1(b)). On the other hand, they observe that, although the measured data is of seemingly high dimensionality (n = number of links), normal traffic patterns actually lie in a very low-dimensional subspace; furthermore, separating out this normal traffic subspace using PCA (to find the principal traffic components) makes it much easier to identify volume anomalies in the remaining subspace (bottom plot of Fig. 1(b)). As before, let Y be the global m × n time-series data matrix, centered to have zero mean, and let y = y(t) denote an n-dimensional vector of measurements (for all links) from a single time step t. Formally, PCA is a projection method that maps a given set of data points onto principal components ordered by the amount of data variance that they capture. The set of n principal components, {v_i}_{i=1}^n, is defined as

v_i = arg max_{‖x‖=1} ‖( Y − Σ_{j=1}^{i−1} Y v_j v_jᵀ ) x‖,

and these are the n eigenvectors of the estimated covariance matrix A := (1/m) Yᵀ Y.
As shown in [9], PCA reveals that the Origin-Destination (OD) flow matrices of ISP backbones have low intrinsic dimensionality: for the Abilene network with 41 links, most data variance can be captured by the first k = 4 principal components. Thus, the underlying normal OD flows effectively reside in a (low) k-dimensional subspace of Rⁿ. This subspace is referred to as the normal traffic subspace S^no. The remaining (n − k) principal components constitute the abnormal traffic subspace S^ab. Detecting volume anomalies relies on the decomposition of link traffic y = y(t) at any time step into normal and abnormal components, y = y^no + y^ab, such that (a) y^no corresponds to modeled normal traffic (the projection of y onto S^no), and (b) y^ab corresponds to residual traffic (the projection of y onto S^ab). Mathematically, y^no(t) and y^ab(t) can be computed as

y^no(t) = P Pᵀ y(t) = C^no y(t)  and  y^ab(t) = (I − P Pᵀ) y(t) = C^ab y(t),

where P = [v_1, v_2, ..., v_k] is formed by the first k principal components, which capture the dominant variance in the data. The matrix C^no = P Pᵀ represents the linear operator that performs projection onto the normal subspace S^no, and C^ab projects onto the abnormal subspace S^ab. As observed in [8], a volume anomaly typically results in a large change to y^ab; thus, a useful metric for detecting abnormal traffic patterns is the squared prediction error (SPE): SPE ≡ ‖y^ab‖² = ‖C^ab y‖² (essentially, a quadratic residual function). More formally, their proposed algorithm signals a volume anomaly if SPE > Q_α, where Q_α denotes the threshold statistic for the SPE residual function at the 1 − α confidence level. Such a statistical test for the SPE residual function, known as the Q-statistic [4], can be computed as a function Q_α = Q_α(λ_{k+1}, ..., λ_n) of the (n − k) non-principal eigenvalues of the covariance matrix A.

Figure 2: Our in-network tracking and detection framework. [The diagram shows distributed monitors applying Filter/Predict modules with slacks δ_1, ..., δ_n to their input streams Y_1(t), ..., Y_n(t), sending filtered updates R_1(t), ..., R_n(t) to the coordinator, which runs the subspace method to flag anomalies and uses perturbation analysis to adaptively set δ_1, ..., δ_n.]
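For reference, the centralized subspace method just described fits in a few lines; in this sketch (ours, not the authors' code) the threshold Q_alpha is passed in rather than derived from the Q-statistic formula.

```python
import numpy as np

def residual_projector(Y, k):
    """C^ab = I - P P^T from the top-k eigenvectors of A = (1/m) Y^T Y."""
    m, n = Y.shape
    Yc = Y - Y.mean(axis=0)                 # center each column (monitor)
    A = (Yc.T @ Yc) / m                     # estimated covariance matrix
    eigvals, eigvecs = np.linalg.eigh(A)    # eigenvalues in ascending order
    P = eigvecs[:, -k:]                     # normal traffic subspace basis
    return np.eye(n) - P @ P.T

def is_anomaly(C_ab, y, Q_alpha):
    """Signal a volume anomaly if SPE = ||C^ab y||^2 exceeds the threshold.

    y should be centered with the same column means used to form C_ab.
    """
    residual = C_ab @ y
    return float(residual @ residual) > Q_alpha
```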
[Figure 2: Our in-network tracking and detection framework.]

3 In-network PCA for anomaly detection

We now describe our version of an anomaly detector that uses distributed tracking and approximate PCA analysis. A key idea is to curtail the amount of data each monitor sends to the coordinator. Because our job is to catch anomalies, rather than to track ongoing state, the coordinator only needs a good approximation of the state when an anomaly is near; it need not track the global state very precisely when conditions are normal. This observation makes it intuitive that a reduction in data sharing between the monitors and the coordinator should be possible. We curtail the amount of data flowing from the monitors to the coordinator by installing local filters at each monitor. These filters maintain a local constraint, and a monitor only sends the coordinator an update of its data when the constraint is violated. The coordinator thus receives an approximate, or "perturbed," view of the data stream at each monitor and hence of the global state. We use stochastic matrix perturbation theory to analyze the effect of using a perturbed global matrix on our PCA-based anomaly detector. Based on this, we can choose the filtering parameters (i.e., the local constraints) so as to limit the effect of the perturbation on the PCA analysis and thereby limit any deterioration in the anomaly detector's performance. All of these ideas are combined into a simple, adaptive distributed protocol.

3.1 Overview of our approach

Fig. 2 illustrates the overall architecture of our system. We now describe the functionality at the monitors and the coordinator. The goal of a monitor is to track its local raw time-series data and to decide when the coordinator needs an update. Intuitively, if the time series does not change much, or does not change in a way that affects the global condition being tracked, then the monitor does not send anything to the coordinator. The coordinator assumes that the most recently received update is still approximately valid. The update message can be the current value of the time series, a summary of the most recent values, or any other function of the time series. The update serves as a prediction of the future data: should the monitor send nothing in subsequent time intervals, the coordinator uses the most recently received update to predict the missing values. For our anomaly detection application, we filter as follows. Each monitor i maintains a filtering window F_i(t) of size 2δ_i centered at a value R_i (i.e., F_i(t) = [R_i(t) − δ_i, R_i(t) + δ_i]). At each time t, the monitor sends both Y_i(t) and R_i(t) to the coordinator only if Y_i(t) ∉ F_i; otherwise it sends nothing. The window parameter δ_i is called the slack; it captures the amount by which the time series can drift before an update to the coordinator needs to be sent. The center parameter R_i(t) denotes the approximate representation, or summary, of Y_i(t). In our implementation, we set R_i(t) equal to the average of the last five signal values observed locally at monitor i. Let t* denote the time of the most recent update. The monitor sends both Y_i(t*) and R_i(t*) to the coordinator when it does an update, because the coordinator will use Y_i(t*) at time t* and R_i(t*) for all t > t*, until the next update arrives. For any subsequent t > t* at which the coordinator receives no update from that monitor, it will use R_i(t*) as the prediction for Y_i(t). The role of the coordinator is twofold. First, it makes global anomaly-detection decisions based upon the received updates from the monitors. Secondly, it computes the filtering parameters (i.e., the slacks δ_i) for all the monitors based on its view of the global state and the condition for triggering an anomaly. It gives the monitors their slacks initially and updates the values of their slack parameters when needed. Our protocol is thus adaptive. Due to lack of space we do not discuss here the method for deciding when slack updates are needed. The global detection task is the same as in the centralized scheme. In contrast to the centralized setting, however, the coordinator does not have an exact version of the raw data matrix Y; it has the approximation Ỹ instead. The PCA analysis, including the computation of S_ab, is done on the perturbed covariance matrix Ã := A + Δ. The magnitude of the perturbation matrix Δ is determined by the slack variables δ_i (i = 1, …, n).
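A minimal sketch of the monitor-side filter just described; the class structure and the update format are illustrative assumptions.

```python
class MonitorFilter:
    """Local filter at monitor i: push an update only when the new value
    leaves the window [R_i - delta_i, R_i + delta_i]."""

    def __init__(self, delta, history=5):
        self.delta = delta        # slack assigned by the coordinator
        self.history = history    # number of recent values averaged into R_i
        self.recent = []          # recent local signal values
        self.center = None        # current window center R_i

    def observe(self, y):
        """Process one local measurement; return an update (Y_i, R_i) to push
        to the coordinator, or None if the value is filtered out."""
        self.recent = (self.recent + [y])[-self.history:]
        if self.center is None or abs(y - self.center) > self.delta:
            self.center = sum(self.recent) / len(self.recent)  # summary R_i
            return (y, self.center)   # constraint violated: send Y_i(t), R_i(t)
        return None                   # within window: send nothing
```

The coordinator substitutes the last received R_i for the missing entries of column i when assembling its perturbed window Ỹ.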
3.2 Selection of filtering parameters

A key ingredient of our framework is a practical method for choosing the slack parameters δ_i. This choice is critical because these parameters balance the tradeoff between the savings in data communication and the loss of detection accuracy. Clearly, the larger the slack, the less the monitor needs to send, leading to both a greater reduction in communication overhead and potentially more information loss at the coordinator. We employ stochastic matrix perturbation theory to quantify the effects of the perturbation of a matrix on key quantities such as the eigenvalues and the eigen-subspaces, which in turn affect the detection accuracy. Our approach is as follows. We measure the size of a perturbation using a norm on Δ. We derive upper bounds on the changes to the eigenvalues λ_i and to the residual subspace C_ab as functions of ‖Δ‖. We choose the δ_i to ensure that an approximation to this upper bound on ‖Δ‖ is not exceeded. This in turn ensures that the perturbations of the λ_i and of C_ab do not exceed their upper bounds. Controlling these latter terms, we are able to bound the false alarm probability. Recall that the coordinator's view of the global data matrix is the perturbed matrix Ỹ = Y + W, where all elements of the column vector W_i are bounded within the interval [−δ_i, δ_i]. Let λ_i and λ̃_i (i = 1, …, n) denote the eigenvalues of the covariance matrix A = (1/m) Yᵀ Y and of its perturbed version Ã := (1/m) Ỹᵀ Ỹ, respectively. Applying the classical theorems of Mirsky and Weyl [15], we obtain bounds on the eigenvalue perturbation in terms of the Frobenius norm ‖·‖_F and the spectral norm ‖·‖₂ of Δ := Ã − A:

    ε_eig := sqrt( (1/n) Σ_{i=1}^n (λ̃_i − λ_i)² ) ≤ ‖Δ‖_F / √n   and   max_i |λ̃_i − λ_i| ≤ ‖Δ‖₂.   (1)

Applying the sin Θ theorem and results on bounding the angle between projections onto subspaces [15] (see [3] for more details), we can bound the perturbation of the residual subspace C_ab in terms of the Frobenius norm of Δ:

    ‖C_ab − C̃_ab‖_F ≤ √2 ‖Δ‖_F / δ,   (2)

where δ denotes the eigengap between the kth and (k+1)th eigenvalues of the estimated covariance matrix Ã. To obtain a practical (i.e., computable) bound on the norms of Δ, we derive expectation bounds instead of worst-case bounds. We make the following assumptions on the error matrix W:

1. The column vectors W_1, …, W_n are independent and radially symmetric m-vectors.
2. For each i = 1, …, n, all elements of the column vector W_i are i.i.d. random variables with mean 0, variance σ_i² := σ_i²(δ_i), and fourth moment β_i⁴ := β_i⁴(δ_i).

Note that the independence assumption is imposed only on the error; this by no means implies that the signals received by different monitors are statistically independent. Under the above assumptions, we can show that ‖Δ‖_F / √n is upper bounded in expectation by the following quantity:

    TolF = sqrt( (2/(mn)) Σ_{i=1}^n λ_i σ_i² + (1/(mn)) ( Σ_{i=1}^n σ_i² )² ) + sqrt( (1/n) Σ_{i=1}^n σ_i⁴ + (1/(mn)) Σ_{i=1}^n (β_i⁴ − σ_i⁴) ).   (3)

Similar results can be obtained for the spectral norm as well. In practice, these upper bounds are very tight because σ_1, …, σ_n tend to be small compared to the top eigenvalues. Given the tolerable perturbation TolF, we can use Eqn. (3) to select the slack variables. For example, we can divide the overall tolerance across monitors either uniformly or in proportion to their observed local variance.
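A sketch of how a coordinator might turn a tolerable perturbation into a uniform slack, assuming filtering errors uniform on [−δ, δ] (so σ² = δ²/3) and inverting only the dominant first term of Eqn. (3), which the experiments below observe to dominate the others; this simplified inversion is an illustrative approximation.

```python
import numpy as np

def uniform_slack(tol_f, eigvals, m, n):
    """Uniform slack delta from a tolerable perturbation TolF.

    tol_f   : tolerable expected perturbation ||Delta||_F / sqrt(n).
    eigvals : eigenvalues lambda_1..lambda_n of the covariance matrix A.
    m, n    : window length and number of monitors.
    """
    # Dominant term of Eqn. (3): tol_f^2 ~ (2 / (m n)) * sum_i lambda_i * sigma^2
    sigma_sq = tol_f**2 * m * n / (2.0 * np.sum(eigvals))
    return np.sqrt(3.0 * sigma_sq)   # uniform errors on [-delta, delta]: sigma^2 = delta^2 / 3
```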
3.3 Guarantee on false alarm probability

Because our approximation perturbs the eigenvalues, it also impacts the accuracy with which the trigger is fired. Since the trigger condition is ‖C_ab y‖² > Q_α, we must assess the impact on both of these terms. We can compute an upper bound on the perturbation of the SPE statistic, SPE = ‖C_ab y‖², as follows. First, note that

    | ‖C̃_ab ỹ‖ − ‖C_ab y‖ | ≤ ‖(C̃_ab − C_ab) ỹ‖ + ‖C_ab (y − ỹ)‖
                             ≤ (√2 ‖Δ‖_F / δ) ‖ỹ‖ + ‖C_ab‖₂ sqrt( Σ_{i=1}^n δ_i² )
                             ≤ (√2 ‖Δ‖_F ‖ỹ‖) / δ + ( ‖C̃_ab‖ + √2 ‖Δ‖_F / δ ) sqrt( Σ_{i=1}^n δ_i² ) =: ε₁(ỹ).

Consequently,

    | ‖C̃_ab ỹ‖² − ‖C_ab y‖² | ≤ ε₁(ỹ) ( 2 ‖C̃_ab ỹ‖ + ε₁(ỹ) ) =: ε₂(ỹ).   (4)

The dependency of the threshold Q_α on the eigenvalues λ_{k+1}, …, λ_n can be expressed as [4]:

    Q_α = φ₁ [ c_α sqrt(2 φ₂ h₀²) / φ₁ + 1 + φ₂ h₀ (h₀ − 1) / φ₁² ]^{1/h₀},   (5)

where c_α is the (1 − α)-percentile of the standard normal distribution, h₀ = 1 − 2φ₁φ₃ / (3φ₂²), and φ_i = Σ_{j=k+1}^n λ_j^i for i = 1, 2, 3. To assess the perturbation in the false alarm probability, we start by considering the following random variable c derived from Eqn. (5):

    c = φ₁ [ (SPE/φ₁)^{h₀} − 1 − φ₂ h₀ (h₀ − 1) / φ₁² ] / sqrt(2 φ₂ h₀²).   (6)

The random variable c essentially normalizes the random quantity ‖C_ab y‖² and is known to approximately follow a standard normal distribution [5]. The false alarm probability in the centralized system is expressed as

    Pr[ ‖C_ab y‖² > Q_α ] = Pr[ c > c_α ] = α,

where the left-hand term of this equation is conditioned upon the SPE statistic being inside the normal range. In our distributed setting, the anomaly detector fires a trigger if ‖C̃_ab ỹ‖² > Q̃_α. We thus only observe a perturbed version c̃ of the random variable c. Let ε_c denote the bound on |c̃ − c|. The deviation of the false alarm probability in our approximate detection scheme can then be approximated as P(c_α − ε_c < U < c_α + ε_c), where U is a standard normal random variable.
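Eqn. (5) translates directly into code. The following sketch is a straightforward transcription under the assumption that the eigenvalues are sorted in decreasing order; scipy supplies the normal percentile c_α.

```python
import numpy as np
from scipy.stats import norm

def q_statistic(eigvals, k, alpha=0.005):
    """Detection threshold Q_alpha from the (n-k) non-principal eigenvalues
    (Jackson & Mudholkar [4]), cf. Eqn. (5).

    eigvals : covariance eigenvalues sorted in decreasing order.
    k       : number of principal components in the normal subspace.
    """
    residual = np.asarray(eigvals)[k:]            # lambda_{k+1}, ..., lambda_n
    phi = [np.sum(residual**i) for i in (1, 2, 3)]
    h0 = 1.0 - 2.0 * phi[0] * phi[2] / (3.0 * phi[1]**2)
    c_alpha = norm.ppf(1.0 - alpha)               # (1 - alpha)-percentile
    term = (c_alpha * np.sqrt(2.0 * phi[1] * h0**2) / phi[0]
            + 1.0 + phi[1] * h0 * (h0 - 1.0) / phi[0]**2)
    return phi[0] * term**(1.0 / h0)
```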
4 Evaluation

We implemented our algorithm and developed a trace-driven simulator to validate our methods. We used a one-week trace collected from the Abilene network. (Abilene is an Internet2 high-performance backbone network that interconnects a large number of universities as well as a few other research institutes.) The trace contains per-link traffic loads measured every 10 minutes for all 41 links of the Abilene network. With a time unit of 10 minutes, data was collected for 1008 time units. This data was used to feed the simulator. There are 7 anomalies in the data that were detected by the centralized algorithm (and verified by hand to be true anomalies). We also injected 70 synthetic anomalies into this dataset using the method described in [8], so that we would have sufficient data to compute error rates. We used a threshold Q_α corresponding to a 1 − α = 99.5% confidence level. Due to space limitations, we present results only for the case of uniform monitor slack, δ_i = δ. The input parameter for our algorithm is the tolerable relative error of the eigenvalues ("relative eigen-error" for short), which acts as a tuning knob. (Precisely, it is TolF / sqrt( (1/n) Σ_i λ_i² ), where TolF is defined in Eqn. (3).) Given this parameter and the input data, we can compute the filtering slack δ for the monitors using Eqn. (3). We then feed the data to our protocol in the simulator with the computed δ.

[Figure 3: In all plots the x-axis is the relative eigen-error. (a) The filtering slack. (b) Actual accrued eigen-error. (c) Relative error of the detection threshold. (d) False alarm rates. (e) Missed detection rates. (f) Communication overhead.]

The simulator outputs a set of results including: 1) the actual relative eigen-errors and the relative errors on the detection threshold Q_α; 2) the missed detection rate, false alarm rate, and communication cost achieved by our method. The missed-detection rate is defined as the fraction of missed detections over the total number of real anomalies, and the false-alarm rate as the fraction of false alarms over the total number of anomalies detected by our protocol, which is α (defined in Sec. 3.3) rescaled as a rate rather than a probability. The communication cost is computed as the fraction of messages that actually get through the filtering window to the coordinator. The results are shown in Fig. 3; in all plots, the x-axis is the relative eigen-error. In Fig. 3(a) we plot the relationship between the relative eigen-error and the filtering slack δ when assuming filtering errors are uniformly distributed on the interval [−δ, δ]. With this model, the relationship between the relative eigen-error and the slack is determined by a simplified version of Eqn. (3) (with all σ_i² = δ²/3). The results make intuitive sense: as we increase our error tolerance, we can filter more at the monitor and send less to the coordinator. The slack increases almost linearly with the relative eigen-error because the first term on the right-hand side of Eqn. (3) dominates all other terms. In Fig. 3(b) we compare the relative eigen-error to the actual accrued relative eigen-error (defined as ε_eig / sqrt( (1/n) Σ_i λ_i² ), where ε_eig is defined in Eqn. (1)). These were computed using the slack parameters δ as computed by our coordinator. We can see that the real accrued eigen-errors are always less than the tolerable eigen-errors. The plot shows a tight upper bound, indicating that it is safe to use our model's derived filtering slack δ. In other words, the achieved eigen-error always remains below the requested tolerable error specified as input, and the slack chosen for the tolerable error is close to being optimal. Fig. 3(c) shows the relationship between the relative eigen-error and the relative error of the detection threshold Q_α (precisely, 1 − Q̃_α/Q_α, where Q̃_α is computed from λ̃_{k+1}, …, λ̃_n). We see that the threshold for detecting anomalies decreases as we tolerate more and more eigen-error. In these experiments, an error of 2% in the eigenvalues leads to an error of approximately 6% in our estimate of the appropriate cutoff threshold. We now examine the false alarm rates achieved. In Fig. 3(d) the curve with triangles represents the upper bound on the false alarm rate as estimated by the coordinator. The curve with circles is the actual accrued false alarm rate achieved by our scheme. Note that the upper bound on the false alarm rate is fairly close to the true values, especially when the slack is small. The false alarm rate increases with increasing eigen-error because, as the eigen-error increases, the corresponding detection threshold Q_α decreases, which in turn causes the protocol to raise an alarm more often. (If we had plotted Q̃_α rather than the relative threshold difference, we would obviously see a decreasing Q̃_α with increasing eigen-error.) We see in Fig. 3(e) that the missed detection rates remain below 4% for various levels of communication overhead.
The communication overhead is depicted in Fig. 3(f). Clearly, the larger the errors we can tolerate, the more the overhead can be reduced. Considering these last three plots (d, e, f) together, we observe several tradeoffs. For example, when the relative eigen-error is 1.5%, our algorithm reduces the data sent through the network by more than 90%. This gain is achieved at the cost of approximately a 4% missed detection rate and a 6% false alarm rate. This is a large reduction in communication for a small increase in detection error. These initial results illustrate that our in-network solution can dramatically lower the communication overhead while still achieving high detection accuracy.

5 Conclusion

We have presented a new algorithmic framework for network anomaly detection that combines distributed tracking with PCA analysis to detect anomalies with far less data than previous methods. The distributed tracking consists of local filters, installed at each monitoring site, whose parameters are selected based upon global criteria. The idea is to track the local monitoring data only closely enough to enable accurate detection. The local filtering reduces the amount of data transmitted through the network, but also means that anomaly detection must be done with limited or partial views of the global state. Using methods from stochastic matrix perturbation theory, we provided an analysis of the tradeoff between detection accuracy and data communication overhead. We were able to control the amount of data overhead using the relative eigen-error as a tuning knob. To the best of our knowledge, this is the first result in the literature that provides upper bounds on the false alarm rate of network anomaly detection.

References

[1] Bai, Z.-J., Chan, R. and Luk, F. Principal component analysis for distributed data sets with updating. In Proceedings of the International Workshop on Advanced Parallel Processing Technologies (APPT), 2005.
[2] Dreger, H., Feldmann, A., Paxson, V. and Sommer, R. Operational experiences with high-volume network intrusion detection. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2004.
[3] Huang, L., Nguyen, X., Garofalakis, M., Jordan, M., Joseph, A. and Taft, N. In-network PCA and anomaly detection. Technical Report No. UCB/EECS-2007-10, EECS Department, UC Berkeley.
[4] Jackson, J. E. and Mudholkar, G. S. Control procedures for residuals associated with principal component analysis. Technometrics, 21(3):341-349, 1979.
[5] Jensen, D. R. and Solomon, H. A Gaussian approximation for the distribution of definite quadratic forms. Journal of the American Statistical Association, 67(340):898-902, 1972.
[6] Keralapura, R., Cormode, G. and Ramamirtham, J. Communication-efficient distributed monitoring of thresholded counts. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 2006.
[7] Kreidl, P. O. and Willsky, A. Inference with minimal communication: a decision-theoretic variational approach. In Proceedings of Neural Information Processing Systems (NIPS), 2006.
[8] Lakhina, A., Crovella, M. and Diot, C. Diagnosing network-wide traffic anomalies. In Proceedings of the ACM Conference of the Special Interest Group on Data Communication (SIGCOMM), 2004.
[9] Lakhina, A., Papagiannaki, K., Crovella, M., Diot, C., Kolaczyk, E. D. and Taft, N. Structural analysis of network traffic flows.
In Proceedings of the International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), 2004.
[10] Levchenko, K., Paturi, R. and Varghese, G. On the difficulty of scalably detecting network attacks. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2004.
[11] Nguyen, X., Wainwright, M. and Jordan, M. Nonparametric decentralized detection using kernel methods. IEEE Transactions on Signal Processing, 53(11):4053-4066, 2005.
[12] Padmanabhan, V. N., Ramabhadran, S. and Padhye, J. NetProfiler: profiling wide-area networks using peer cooperation. In Proceedings of the International Workshop on Peer-to-Peer Systems, 2005.
[13] Predd, J. B., Kulkarni, S. B. and Poor, H. V. Distributed learning in wireless sensor networks. IEEE Signal Processing Magazine, 23(4):56-69, 2006.
[14] Qu, Y., Ostrouchov, G., Samatova, N. and Geist, A. Principal component analysis for dimension reduction in massive distributed data sets. In Proceedings of the IEEE International Conference on Data Mining (ICDM), 2002.
[15] Stewart, G. W. and Sun, J.-G. Matrix Perturbation Theory. Academic Press, 1990.
[16] Yegneswaran, V., Barford, P. and Jha, S. Global intrusion detection in the DOMINO overlay system. In Proceedings of the Network and Distributed System Security Symposium (NDSS), 2004.
[17] Zhang, Y., Ge, Z.-H., Greenberg, A. and Roughan, M. Network anomography. In Proceedings of the Internet Measurement Conference (IMC), 2005.
Aggregating Classification Accuracy across Time: Application to Single Trial EEG

Steven Lemm* (Intelligent Data Analysis Group, Fraunhofer Institute FIRST, Kekulestr. 7, 12489 Berlin, Germany), Christin Schäfer (Intelligent Data Analysis Group, Fraunhofer Institute FIRST, Kekulestr. 7, 12489 Berlin, Germany), Gabriel Curio (Neurophysics Group, Dept. of Neurology, Campus Benjamin Franklin, Charité University Medicine Berlin, Hindenburgdamm 20, 12200 Berlin, Germany)

*steven.lemm@first.fhg.de

Abstract

We present a method for binary on-line classification of triggered but temporally blurred events that are embedded in noisy time series, in the context of on-line discrimination between left and right imaginary hand movement. In particular, the goal of the binary classification problem is to obtain the decision, as fast and as reliably as possible, from the recorded EEG single trials. To provide a probabilistic decision at every time point t, the presented method gathers information from two distinct sequences of features across time. In order to incorporate decisions from prior time points, we suggest an appropriate weighting scheme that emphasizes time instances providing a higher discriminatory power between the instantaneous class distributions of each feature, where the discriminatory power is quantified in terms of the Bayes error of misclassification. The effectiveness of this procedure is verified by its successful application in the 3rd BCI competition. Disclosure of the data after the competition revealed this approach to be superior, with single trial error rates as low as 10.7, 11.5 and 16.7% for the three different subjects under study.

1 Introduction

The ultimate goal of brain-computer interfacing (BCI) is to translate human intentions into a control signal for a device, such as a computer application, a wheelchair or a neuroprosthesis (e.g., [20]). Most pursued approaches utilize the accompanying EEG-rhythm perturbation in order to distinguish between single trials (STs) of left and right hand imaginary movements, e.g., [8, 11, 14, 21]. Up to now there are just a few published approaches utilizing additional features, such as slow cortical potentials, e.g., [3, 4, 9]. This paper describes the algorithm that was successfully applied in the 2005 international data analysis competition on BCI tasks [2] (data set IIIb) for the on-line discrimination between imagined left and right hand movement. The objective of the competition was to detect the respective motor intention as early and as reliably as possible. Consequently, the competing algorithms have to solve the on-line discrimination task based on information on the event onset. Thus it is not within the scope of the competition to solve the problem of detecting the event onset itself. We approach this problem by applying an algorithm that combines the different characteristics of two features: the modulations of the ongoing rhythmic activity and the slow cortical movement-related potential (MRP). Both features are differently pronounced over time, exhibit a large trial-to-trial variability, and can therefore be considered as temporally blurred. Consequently, the proposed method on the one hand combines the MRP with the oscillatory feature, and on the other hand gathers information across time, as introduced in [8, 16]. More precisely, at each time point we estimate probabilistic models on the labeled training data - one for each class and feature - yielding a sequence of weak instantaneous classifiers, i.e., posterior class distributions.
The classification of an unlabeled ST is then derived by a weighted combination of these weak probabilistic classifiers, using a linear combination according to their instantaneous discriminatory power. The paper is organized as follows: Section 2 describes the features and their extraction; Section 3 introduces the probabilistic model as well as the combinatorial framework to gather information from the different features across time; Section 4 presents the results on the competition data, followed by a brief conclusion.

2 Feature

2.1 Neurophysiology

The human perirolandic sensorimotor cortices show rhythmic macroscopic EEG oscillations (μ-rhythm) [6], with spectral peak energies around 10 Hz (localized predominantly over the postcentral somatosensory cortex) and 20 Hz (over the precentral motor cortex). Modulations of the μ-rhythm have been reported for different physiological manipulations, e.g., by motor activity, both actual and imagined [7, 13, 18], as well as by somatosensory stimulation [12]. Standard trial averages of μ-rhythm power show a sequence of attenuation, termed event-related desynchronization (ERD) [13], followed by a rebound (event-related synchronization: ERS) which often overshoots the pre-event baseline level [15]. In the case of sensorimotor cortical processes accompanying finger movements, Babiloni et al. [1] demonstrated that movement-related potentials (MRPs) and ERD indeed show up with different spatio-temporal activation patterns across the primary (sensori-)motor cortex (M1), the supplementary motor area (SMA) and the posterior parietal cortex (PP). Most importantly, the ERD response magnitude did not correlate with the amplitude of the negative MRP slope. In the following we combine both features. In order to extract the rhythmic information we map the EEG to the time-frequency domain by means of Morlet wavelets [19], whereas the slow cortical MRPs are extracted by the application of a low pass filter, in the form of a simple moving average filter.

2.2 Extraction

Let X = [x[1], …, x[T]] denote the EEG signal of one single trial (ST) of length T, recorded from the two bipolar channels C3 and C4, i.e., x[t] = [C3[t], C4[t]]ᵀ. The label information about the corresponding motor intention of a ST is denoted by Y ∈ {L, R}. For information obtained from observations up to time s ≤ T, we will use the subscript |s throughout this paper; e.g., X_{|s} refers to [x[1], …, x[s]]. This observational horizon becomes important with respect to the causality of the feature extraction process; in particular, to ensure the causality of filter operations we have to restrict the algorithm to a certain observational horizon. Note that X_{|T} denotes a completely observed ST; for notational convenience we will omit the index |T in the case of complete observations. Considering ERD as a feature for ST classification, we model the hand-specific time course of absolute μ-rhythm amplitudes over both sensorimotor cortices. We therefore utilize the time-frequency representations of the ST at two different frequency bands (α, β), obtained by convolution of the EEG signal with complex Morlet wavelets [19]. Using the notation ψ_α and ψ_β for wavelets centered at the individual spectral peaks in the alpha (8-12 Hz) and beta (16-24 Hz) frequency domains, the ERD feature of a ST observed up to time s is calculated as

    ERD_{|s} = [ erd_{|s}[1], …, erd_{|s}[s] ],  with
    erd_{|s}[t] = ( |(C3_{|s} ∗ ψ_α)[t]|, |(C4_{|s} ∗ ψ_α)[t]|, |(C3_{|s} ∗ ψ_β)[t]|, |(C4_{|s} ∗ ψ_β)[t]| )ᵀ.   (1)
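A sketch of the causal feature extraction in Eqn. (1); the Morlet helper, the wavelet width, and the convolution mode are illustrative choices rather than the exact settings used in the competition.

```python
import numpy as np

def morlet(freq, fs, width=7.0):
    """Complex Morlet wavelet centered at `freq` Hz (illustrative helper)."""
    sigma_t = width / (2.0 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def erd_feature(c3, c4, fs, f_alpha=10.0, f_beta=20.0):
    """ERD feature of Eqn. (1): band-limited envelopes of C3/C4.
    Returns a 4 x s array; column t is erd[t] = (C3-alpha, C4-alpha,
    C3-beta, C4-beta)."""
    rows = []
    for f in (f_alpha, f_beta):
        w = morlet(f, fs)
        for ch in (c3, c4):
            # 'same'-mode convolution for brevity; strict causality would
            # require truncating the wavelet to past samples only.
            rows.append(np.abs(np.convolve(ch, w, mode="same")))
    return np.vstack(rows)
```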
In a similar manner we define the ST feature for the MRP by convolution with a moving average filter of length 11, abbreviated as MA(11):

    MRP_{|s} = [ mrp_{|s}[1], …, mrp_{|s}[s] ],  with
    mrp_{|s}[t] = ( (C3_{|s} ∗ MA(11))[t], (C4_{|s} ∗ MA(11))[t] )ᵀ.   (2)

According to (1) and (2), the k-th labeled, observed training ST, i.e., (X^(k), Y^(k)), maps to a ST in feature space, namely (MRP^(k), ERD^(k)).

3 Probabilistic Classification Model

Before we start with the model description, we briefly introduce two concepts from Bayesian decision theory. Let p(x|μ_y, Σ_y), y ∈ {L, R}, denote the PDFs of two multivariate Gaussian distributions with different means and covariance matrices (μ_y, Σ_y) for the two classes, denoted by L and R. Given the two class-conditional distribution models, under the assumption of a class prior P(y) = 1/2, y ∈ {L, R}, and given an observation x, the posterior class distribution according to Bayes' formula is

    p(y | x, μ_L, Σ_L, μ_R, Σ_R) = p(x|μ_y, Σ_y) / ( p(x|μ_L, Σ_L) + p(x|μ_R, Σ_R) ).   (3)

Furthermore, the discriminative power between these two distributions can be estimated using the Bayes error of misclassification [5]. In the case of distinct class covariance matrices, the Bayes error cannot be calculated directly. However, by using the Chernoff bound [5] we can derive an upper bound and finally approximate the discriminative power w between the two distributions by

    w := 1 − min_{0≤α≤1} ∫ p(x|μ_L, Σ_L)^α p(x|μ_R, Σ_R)^{1−α} dx.   (4)

In the case of Gaussian distributions the above integral can be expressed in closed form [5], such that the minimizing solution can be easily obtained (see also [16]). Based on these two concepts, we now introduce our probabilistic classification method. We first model the class-conditional distribution of each feature at each time instance as a multivariate Gaussian distribution. Hence, at each time instance we estimate the class means and the class covariance matrices in feature space, based on the mapped training STs, i.e., (ERD^(k), MRP^(k)). Thus from erd^(k)[t] we obtain the following two class-conditional sets of parameters:

    μ_y[t] = E[ erd^(k)[t] ]_{Y^(k)=y},   (5)
    Σ_y[t] = Cov[ erd^(k)[t] ]_{Y^(k)=y},   y ∈ {L, R}.   (6)

For convenience we summarize the estimated model parameters for the ERD feature as θ[t] := (μ_L[t], Σ_L[t], μ_R[t], Σ_R[t]), whereas ϑ[t] denotes the class means and covariance matrices obtained in the same manner from mrp^(k)[t]. Given an arbitrary observation x from the appropriate domain, applying Bayes' formula as introduced in (3) yields a posterior distribution for each feature:

    p( y | erd, θ[t] ),  erd ∈ R⁴,   (7)
    p( y | mrp, ϑ[t] ),  mrp ∈ R².   (8)

Additionally, according to (4) we get approximations of the discriminative power w[t] and v[t] of the ERD and MRP features, respectively, at every time instance.
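A sketch of the per-time-instance model fit and the Chernoff-bound weight of Eqn. (4). For Gaussians the integral has the closed form exp(−k(α)), so the exponent α is found by a one-dimensional minimization; scipy is used here for brevity.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_gaussians(feat, labels):
    """feat: K x d feature vectors at one time instance; labels: 'L'/'R'.
    Returns {class: (mean, covariance)}, i.e., the parameters of Eqns. (5)-(6)."""
    return {y: (feat[labels == y].mean(axis=0),
                np.cov(feat[labels == y], rowvar=False))
            for y in ('L', 'R')}

def chernoff_weight(mu_l, cov_l, mu_r, cov_r):
    """Discriminative power w = 1 - min_a int p_L^a p_R^(1-a) dx, cf. Eqn. (4)."""
    d = mu_r - mu_l
    def k(a):  # closed-form Chernoff distance for two Gaussians
        s = a * cov_l + (1 - a) * cov_r
        maha = 0.5 * a * (1 - a) * d @ np.linalg.solve(s, d)
        logdet = 0.5 * (np.linalg.slogdet(s)[1]
                        - a * np.linalg.slogdet(cov_l)[1]
                        - (1 - a) * np.linalg.slogdet(cov_r)[1])
        return maha + logdet
    res = minimize_scalar(lambda a: -k(a), bounds=(0.0, 1.0), method="bounded")
    return 1.0 - np.exp(res.fun)   # res.fun = -max_a k(a)
```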
In order to finally derive the classification of an unlabeled single trial at a certain time s ≤ T, we incorporate knowledge from all preceding samples t ≤ s, i.e., we make the classification based on the causally extracted features ERD_{|s} and MRP_{|s}. We first apply (7) and (8) to the observations erd_{|s}[t] and mrp_{|s}[t], respectively, in order to obtain the class posteriors based on observations up to time s ≤ T. Secondly, we combine these class posteriors with one another across time by taking the expectation under the distributions w and v, i.e.,

    c(y, s) = Σ_{t≤s} [ w[t] · p(y | erd_{|s}[t], θ[t]) + v[t] · p(y | mrp_{|s}[t], ϑ[t]) ] / Σ_{t≤s} ( w[t] + v[t] ).   (9)

As described in [16], this yields an evidence accumulation over time about the decision process. Strictly speaking, Eq. (9) gives the expectation value that the ST, observed until time s, is generated by either one of the class models (L or R) up to time s. Due to the submission requirements of the competition, the final decision at time s is

    C[s] = 1 − 2 · c(L, s),   (10)

where a positive or negative sign refers to right or left movement, while the magnitude indicates the confidence in the decision on a scale between 0 and 1.
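A sketch of the evidence accumulation of Eqns. (9)-(10), assuming the per-time posteriors and weights have already been computed with the helpers above.

```python
import numpy as np

def decision(post_erd_L, post_mrp_L, w, v):
    """Combine instantaneous posteriors into the decision C[s] of Eqn. (10).

    post_erd_L[t], post_mrp_L[t] : p(L | erd[t]) and p(L | mrp[t]) for t <= s.
    w[t], v[t]                   : discriminative weights of the two features.
    Returns C[s] in [-1, 1]; positive sign = right, negative = left,
    magnitude = confidence.
    """
    c_L = np.sum(w * post_erd_L + v * post_mrp_L) / np.sum(w + v)  # Eqn. (9)
    return 1.0 - 2.0 * c_L                                          # Eqn. (10)
```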
4 Application

4.1 Competition data

The EEG from two bipolar channels (C3, C4) was provided with band-filter settings of 0.5 to 30 Hz and sampled at 128 Hz. The data consist of recordings from three different healthy subjects. Except for the first data set, each contains 540 labeled (training) and 540 unlabeled (competition) trials of imaginary hand movements, with an equal number of left and right hand trials (the first data set provides just 320 trials each). Each trial has a duration of 7 s: after a 3 s preparation period, a visual cue is presented for one second, indicating the demanded motor intention. This is followed by another 3 s for performing the imagination task (for details see [2]). This particular competition data was provided by the Dept. of Med. Informatics, Inst. for Biomed. Eng., Univ. of Techn. Graz. The specific competition task is to provide an on-line discrimination between left and right movements for the unlabeled STs of each subject, based on the information obtained from the labeled trials. More precisely, at every time instance in the interval from 3 to 7 seconds, a strictly causal decision about the intended motor action and its confidence must be supplied. After the competition deadline, based on the disclosure of the labels Y^(k) for the previously unlabeled STs, the outputs C^(k)[t] of the methods were evaluated using the time course of the mutual information (MI) [17], i.e.,

    MI[t] = (1/2) log₂( SNR[t] + 1 ),   (11)
    SNR[t] = ( E[C^(k)[t]]_{Y^(k)=L} − E[C^(k)[t]]_{Y^(k)=R} )² / ( Var[C^(k)[t]]_{Y^(k)=L} + Var[C^(k)[t]]_{Y^(k)=R} ).   (12)

More precisely, since the general objective of the competition was to obtain the single trial classification as fast and as accurately as possible, the maximum steepness of the MI was considered as the final evaluation criterion, i.e.,

    max_{t ≥ 3.5s} MI[t] / (t − 3s).   (13)

Note that the feature extraction relies on a few hyperparameters, i.e., the center frequencies and widths of the wavelets, as well as the length of the MA filter. All these parameters were obtained by model selection using a leave-one-out cross-validation scheme on the classification performance on the training data.

4.2 Results and Discussion

As proposed in Section 3, we estimated the class-conditional Gaussian distributions, cf. (5)-(8). The resulting posterior distributions were then combined according to (9) in order to obtain the final classification of the unlabeled STs. After disclosure of the label information, our method turned out to succeed with an MI steepness (cf. (13)) of 0.17, 0.44 and 0.35 for the individual subjects. Table 1 summarizes the results in terms of the achieved minimum binary classification error, the maximum MI, and the maximum steepness of the MI for each subject and each competitor in the competition.

           min. error rate [%]       max. MI [bit]              max. MI/t [bit/s]
           O3      S4      X11       O3      S4      X11        O3      S4      X11
    1.     10.69   11.48   16.67     0.6027  0.6079  0.4861     0.1698  0.4382  0.3489
    2.     14.47   22.96   22.22     0.4470  0.2316  0.3074     0.1626  0.4174  0.1719
    3.     13.21   17.59   16.48     0.5509  0.3752  0.4675     0.2030  0.0936  0.1173
    4.     23.90   24.44   24.07     0.2177  0.2387  0.2173     0.1153  0.1218  0.1181
    5.     11.95   21.48   18.70     0.4319  0.3497  0.3854     0.1039  0.1490  0.0948
    6.     10.69   13.52   25.19     0.5975  0.5668  0.2437     0.1184  0.1516  0.0612
    7.     34.28   38.52   28.70     0.0431  0.0464  0.1571     0.0704  0.0229  0.0489

Table 1: Overall ranked results of the competing algorithms (the first row corresponds to the proposed method) on the competition test data. For the three subjects (O3, S4 and X11) the table states different measures of classification accuracy (min. error rate, max. MI, steepness of the MI); the steepness of the MI was used as the objective in the competition. For a description of algorithms 2-7 please refer to [2].

The resulting time courses of the MI and of the steepness of the MI are presented in the left panel of Fig. 1. For subjects two and three, during the first 3.5 seconds (0.5 seconds after cue presentation) the classification is essentially at chance level; after 3.5 seconds a steep ascent in the classification accuracy can be observed, reflected by the rising MI. The maximum steepness for these two subjects is obtained quite early, between 3.6 and 3.8 s. In contrast, for subject one the maximum is achieved at 4.9 seconds, yielding a low steepness value. However, a low value is also found for the submissions of all other competitors. Nevertheless, the MI constantly increases up to 0.64 bit per trial at 7 seconds, which might indicate a delayed performance of subject one. The right panel of Fig. 1 provides the weights w[t] and v[t], reflecting the Bayes error of misclassification (cf. (4)), that were used for the temporal integration process. For subject two one can clearly observe a switch in regime between the ERD and the MRP features at 5 seconds, as indicated by a crossing of the two weighting functions. From this we conclude that the steep increase in MI for this subject between 3 and 5 seconds is mainly due to the MRP feature, whereas the further improvement of the MI relies primarily on the ERD feature. Subject one provides nearly no discriminative MRP, and the classification is almost exclusively based on the ERD feature. For subject three, the constant low weights at all time instances reveal the weak discriminative power of the estimated class-conditional distributions. However, the advantage of the integration process across time can clearly be observed in Fig. 1, as the MI is continuously increasing and the steepness of the MI is surprisingly high even for this subject.

[Figure 1: Left panel: time courses of the mutual information (light, dashed) and of the competition criterion, the steepness of the mutual information (thin solid), cf. (13), for the classification of the unlabeled STs. Right panel: time courses of the weights reflecting the discriminative power (cf. (4)) at every time instance for the two features (ERD: dark, solid; MRP: light, dashed). In each panel the subjects O3, S4, X11 are arranged top down.]

A comprehensive comparison of all techniques submitted to solve the specific task of data set IIIb of the BCI competition is provided in [2] or available on the web (ida.first.fhg.de/projects/bci/competition_iii/). Basically, this evaluation reveals that the proposed algorithm outperforms all competing approaches.

5 Conclusion

We proposed a general Bayesian framework for the temporal combination of sets of simple classifiers based on different features, which is applicable to any kind of sequential data posing a binary classification problem.
Moreover, an arbitrary number of features can be combined in the proposed way of temporal weighting, by utilizing the estimated discriminative power over time. Furthermore, the estimation of the Bayes error of misclassification is not strictly linked to the chosen parametric form of the class-conditional distributions. For arbitrary distributions the Bayes error can be obtained, for instance, by statistical resampling approaches such as Monte Carlo methods. However, for the successful application in the BCI competition 2005 we chose Gaussian distributions for the sake of simplicity concerning two issues: estimating their parameters and obtaining their Bayes error. Note that although the combination of the classifiers across time is linear, the final classification model is non-linear, as the individual classifiers at each time instance are non-linear. For a discussion of linear vs. non-linear methods in the context of BCI see [10]. More precisely, due to the distinct covariance matrices of the Gaussian distributions, the individual decision boundaries are of quadratic form. In particular, to solve the competition task we combined classifiers based on the temporal evolution of different neurophysiological features, i.e., ERD and MRP. The resulting on-line classification model finally turned out to succeed for the single trial on-line classification of imagined hand movement in the BCI competition 2005.

Acknowledgement: This work was supported in part by the Bundesministerium für Bildung und Forschung (BMBF) under grant FKZ 01GQ0415 and by the DFG under grant SFB 618-B4. S. Lemm thanks Stefan Harmeling for valuable discussions.

References

[1] C. Babiloni, F. Carducci, F. Cincotti, P. M. Rossini, C. Neuper, G. Pfurtscheller, and F. Babiloni. Human movement-related potentials vs desynchronization of EEG alpha rhythm: a high-resolution EEG study. NeuroImage, 10:658-665, 1999.
[2] B. Blankertz, K.-R. Müller, D. Krusienski, G. Schalk, J. R. Wolpaw, A. Schlögl, G. Pfurtscheller, J. del R. Millán, M. Schröder, and N. Birbaumer. The BCI competition III: validating alternative approaches to actual BCI problems. IEEE Trans. Neural Sys. Rehab. Eng., 14(2):153-159, 2006.
[3] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller. Combining features for BCI. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Inf. Proc. Systems (NIPS 02), volume 15, pages 1115-1122, 2003.
[4] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller. Increase information transfer rates in BCI by CSP extension to multi-class. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems, volume 16, pages 733-740. MIT Press, Cambridge, MA, 2004.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, New York, 2nd edition, 2001.
[6] R. Hari and R. Salmelin. Human cortical oscillations: a neuromagnetic view through the skull. Trends in Neuroscience, 20:44-49, 1997.
[7] H. Jasper and W. Penfield. Electrocorticograms in man: effect of voluntary movement upon the electrical activity of the precentral gyrus. Arch. Psychiatrie Zeitschrift Neurol., 183:163-174, 1949.
[8] S. Lemm, C. Schäfer, and G. Curio. Probabilistic modeling of sensorimotor μ rhythms for classification of imaginary hand movements. IEEE Trans. Biomed. Eng., 51(6):1077-1080, 2004.
[9] B. D. Mensh, J.
Werfer, and H. S. Seung. Combining gamma-band power with slow cortical potentials to improve single-trial classification of electroencephalographic signals. IEEE Trans. Biomed. Eng., 51(6):1052-1056, 2004.
[10] K.-R. Müller, C. W. Anderson, and G. E. Birch. Linear and nonlinear methods for brain-computer interfaces. IEEE Trans. Neural Sys. Rehab. Eng., 11(2):165-169, 2003.
[11] C. Neuper, A. Schlögl, and G. Pfurtscheller. Enhancement of left-right sensorimotor EEG differences during feedback-regulated motor imagery. Journal Clin. Neurophysiol., 16:373-382, 1999.
[12] V. Nikouline, K. Linkenkaer-Hansen, H. Wikström, M. Kesäniemi, E. Antonova, R. Ilmoniemi, and J. Huttunen. Dynamics of mu-rhythm suppression caused by median nerve stimulation: a magnetoencephalographic study in human subjects. Neurosci. Lett., 294, 2000.
[13] G. Pfurtscheller and A. Aranibar. Evaluation of event-related desynchronization preceding and following voluntary self-paced movement. Electroencephalogr. Clin. Neurophysiol., 46:138-146, 1979.
[14] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer. EEG-based discrimination between imagination of right and left hand movement. Electroenceph. Clin. Neurophysiol., 103:642-651, 1997.
[15] S. Salenius, A. Schnitzler, R. Salmelin, V. Jousmäki, and R. Hari. Modulation of human cortical rolandic rhythms during natural sensorimotor tasks. NeuroImage, 5:221-228, 1997.
[16] C. Schäfer, S. Lemm, and G. Curio. Binary on-line classification based on temporally integrated information. In C. Weihs and W. Gaul, editors, Proceedings of the 28th Annual Conference of the Gesellschaft für Klassifikation, pages 216-223, 2005.
[17] A. Schlögl, R. Scherer, C. Keinrath, and G. Pfurtscheller. Information transfer of an EEG-based brain-computer interface. In Proc. First Int. IEEE EMBS Conference on Neural Engineering, pages 641-644, 2003.
[18] A. Schnitzler, S. Salenius, R. Salmelin, V. Jousmäki, and R. Hari. Involvement of primary motor cortex in motor imagery: a neuromagnetic study. NeuroImage, 6:201-208, 1997.
[19] C. Torrence and G. P. Compo. A practical guide to wavelet analysis. Bull. Am. Meteorol. Soc., 79:61-78, 1998.
[20] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan. Brain-computer interfaces for communication and control. Clin. Neurophysiol., 113:767-791, 2002.
[21] J. R. Wolpaw and D. J. McFarland. Multichannel EEG-based brain-computer communication. Electroenceph. Clin. Neurophysiol., 90:444-449, 1994.
Dirichlet-Enhanced Spam Filtering based on Biased Samples

Steffen Bickel and Tobias Scheffer
Max-Planck-Institut für Informatik, Saarbrücken, Germany
{bickel, scheffer}@mpi-inf.mpg.de

Abstract

We study a setting that is motivated by the problem of filtering spam messages for many users. Each user receives messages according to an individual, unknown distribution, reflected only in the unlabeled inbox. The spam filter for a user is required to perform well with respect to this distribution. Labeled messages from publicly available sources can be utilized, but they are governed by a distinct distribution, not adequately representing most inboxes. We devise a method that minimizes a loss function with respect to a user's personal distribution based on the available biased sample. A nonparametric hierarchical Bayesian model furthermore generalizes across users by learning a common prior which is imposed on new email accounts. Empirically, we observe that bias-corrected learning outperforms naive reliance on the assumption of independent and identically distributed data; Dirichlet-enhanced generalization across users outperforms a single ("one size fits all") filter as well as independent filters for all users.

1 Introduction

Design and analysis of most machine learning algorithms are based on the assumption that the training data be drawn independently and from the same stationary distribution that the resulting model will be exposed to. In many application scenarios, however, control over the data generation process is less perfect, and so this iid assumption is often a naive over-simplification. In econometrics, learning from biased samples is a common phenomenon, where the willingness to respond to surveys is known to depend on several characteristics of the person queried; work that led to a method for correcting sample selection bias for a class of regression problems has been distinguished by a Nobel Prize [6]. In machine learning, the case of training data that is only biased with respect to the ratio of class labels has been studied [4, 7]. Zadrozny [14] has derived a bias correction theorem that applies when the bias is conditionally independent of the class label given the instance, and when every instance has a nonzero probability of being drawn into the sample. Sample bias correction for maximum entropy density estimation [3] and the analysis of the generalization error under covariate shift [12] follow the same intuition.

In our email spam filtering setting, a server handles many email accounts (in case of our industrial partner, several millions), and delivers millions of emails per day. A multitude of spam and "ham" (i.e., non-spam) sources are publicly available. They include collections of emails caught in "spam traps" (email addresses that are published on the web in an invisible font and are harvested by spammers [11]), the Enron corpus that was disclosed in the course of the Enron trial [8], and SpamAssassin data. These collections have diverse properties and none of them represents the global distribution of all emails, let alone the distribution received by some particular user. The resulting bias does not only hinder learning, but also leads to skewed accuracy estimates, since individuals may receive a larger proportion of emails that a filter classifies less confidently.

The following data generation model is paramount to our problem setting. An unknown process, characterized by a distribution p(θ_i | Θ), generates parameters θ_i.
The θ_i parameterize distributions p(x, y | θ_i) over instances x (emails) and class labels y. Each p(x, y | θ_i) corresponds to the i-th user's distribution of incoming spam (y = +1) or ham (y = −1) messages x. The goal is to obtain a classifier f_i : x ↦ y for each θ_i that minimizes the expectation of some loss function E_{(x,y)~θ_i}[ℓ(f(x), y)], defined with respect to the (unknown) distribution θ_i. Labeled training data L are drawn from a blend of data sources (public email archives), resulting in a density p(x, y | λ) = p(x | λ) p(y | x, λ) with parameter λ that governs L. The relation between the θ_i and λ is such that (a) any x that has nonzero probability density p(x | λ) of being drawn into the sample L also has a nonzero probability p(x | θ_i) under the target distributions θ_i; and (b) the concept of spam is consensual for all users and the labeled data; i.e., p(y | x, λ) = p(y | x, θ_i) for all users i. In addition to the (nonempty) labeled sample, zero or more unlabeled data U_i are available for each θ_i and are drawn according to θ_i. The unlabeled sample U_i is the inbox of user i. The inbox is empty for a newly established account and grows from there on.

Our problem setting corresponds to an application scenario in which users are not prepared to manually tag spam messages in their inbox. Due to privacy and legal constraints, we are not allowed to personally read (or label) any single personal email; but the unlabeled messages may be used as input to an automated procedure. The individual distributions θ_i are neither independent (identical spam messages are sent to many users), nor are they likely to be identical: distributions of inbound messages vary greatly between (professional, recreational, American, Chinese, ...) email users. We develop a nonparametric hierarchical Bayesian model that allows us to impose a common prior on new θ_i. Such generalization may be particularly helpful for users with little or no available data U_i. The desired outcome of the learning process is an array of personalized spam filters for all users.

The rest of this paper is structured as follows. We devise our solution in Section 2. In Section 3, we study the effectiveness of correcting sample bias for spam, and of using a Dirichlet process to generalize across users, experimentally. Section 4 concludes.

2 Learning from Biased Data

The available labeled data L are governed by p(x | λ); directly training a classifier on L would therefore minimize the expected loss E_{(x,y)~λ}[ℓ(f(x), y)] with respect to p(x | λ). By contrast, the task is to find classifiers f_i that minimize, for user i, the expected loss E_{(x,y)~θ_i}[ℓ(f(x), y)] with respect to p(x | θ_i). We can minimize the loss with respect to θ_i from a sample L whose instances are governed by λ when each instance is re-weighted. The weights have to be chosen such that minimizing the loss on the weighted sample L amounts to minimizing the loss with respect to θ_i.

In order to derive weighting factors with this property, consider the following model of the process that selects the labeled sample L. After drawing an instance x according to p(x | θ_i), a coin s is tossed with probability p(s | x, θ_i, λ). We move x into the labeled sample (and add the proper class label) if s = 1; otherwise, x is discarded. Our previous assumption that any x with positive p(x | λ) also has a positive p(x | θ_i) implies that there exists a p(s | x, θ_i, λ) such that

p(x | λ) ∝ p(x | θ_i) p(s = 1 | x, θ_i, λ).   (1)

That is, repeatedly executing the above process with an appropriate p(s | x, θ_i, λ) will create a sample of instances governed by p(x | λ).
Equation 1 defines p(s | x, θ_i, λ); the succeeding subsections will be dedicated to estimating it from the available data. Since p(s | x, θ_i, λ) describes the discrepancy between the sample distribution p(x | λ) and the target p(x | θ_i), we refer to it as the sample bias. But let us first show that minimizing the loss on L with instances re-weighted by p(s | x, θ_i, λ)^{−1} in fact minimizes the expected loss with respect to θ_i. The rationale behind this claim deviates only in minor points from the proof of the bias correction theorem of [14]. Proposition 1 introduces a normalizing constant p(s = 1 | θ_i, λ). Its value can be easily obtained as it normalizes Equation 1.

Proposition 1. The expected loss with respect to

p(x, y | θ̃_i) = p(x, y | λ) · p(s = 1 | θ_i, λ) / p(s = 1 | x, θ_i, λ)

equals the expected loss with respect to p(x, y | θ_i), when p(s | x, θ_i, λ) satisfies Equation 1.

Proof. Equation 2 expands the expected value and the definition of p(x, y | θ̃_i) in Proposition 1. Equation 3 splits p(x, y | λ). We apply the definition of p(s | x, θ_i, λ) (Equation 1) and obtain Equation 4. Equation 4 is rewritten as an expected value.

E_{(x,y)~θ̃_i}[ℓ(f(x), y)] = ∫ ℓ(f(x), y) [p(s = 1 | θ_i, λ) / p(s = 1 | x, θ_i, λ)] p(x, y | λ) d(x, y)   (2)
= ∫ ℓ(f(x), y) [p(s = 1 | θ_i, λ) / p(s = 1 | x, θ_i, λ)] p(y | x, λ) p(x | λ) d(x, y)   (3)
= ∫ ℓ(f(x), y) p(y | x, θ_i) p(x | θ_i) d(x, y) = E_{(x,y)~θ_i}[ℓ(f(x), y)]   (4)

2.1 Individualized Bias Estimation

Equation 1 says that there is an unknown p(s | x, θ_i, λ) with p(x | λ) ∝ p(x | θ_i) p(s = 1 | x, θ_i, λ), which we call the sample bias. We will now discuss how to obtain an estimate p̂_I(s | x, θ_i, λ). The individualized empirical sample bias is an estimate of the unknown true bias, conditioned on a user's unlabeled inbox U_i and labeled data L; hence, p̂_I(s | x, θ_i, λ) = p(s | x, U_i, L). Equation 1 immediately implies

p(s = 1 | x, θ_i, λ) ∝ p(x | λ) / p(x | θ_i),   (5)

but neither p(x | λ) nor p(x | θ_i) are known. However, distribution p(x | λ) is reflected in the labeled sample L, and distribution p(x | θ_i) in the unlabeled inbox U_i. Instances in L are examples that have been selected into the labeled sample; i.e., s = 1 for x ∈ L. Instances in U_i have not been selected into the labeled sample; i.e., s = 0 for x ∈ U_i. We define s_{U_i,L} to be the vector of selection decisions for all instances in U_i and L. That is, s_{U_i,L} contains |U_i| elements that are 0, and |L| elements that are 1. A density estimator p̂(s | x, λ, θ_i) can be trained on the instances in L and U_i, using vector s_{U_i,L} as target variable. We use a regularized logistic regression density estimator parameterized with w_i:

p̂_I(s = 1 | x, λ, θ_i) = p(s = 1 | x; w_i) = 1 / (1 + e^{⟨w_i, x⟩}).   (6)

The likelihood of the density estimator is

P(s_{U_i,L} | w, U_i, L) = ∏_{x_u ∈ U_i} p(s = 0 | x_u, w) · ∏_{x_ℓ ∈ L} p(s = 1 | x_ℓ, w).   (7)

We train parameters w_i = argmax_w log P(s_{U_i,L} | w, U_i, L) + log Φ(w) (we write Φ(w) for the regularizer) [15] using the fast implementation of regularized logistic regression of [9].
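As a concrete illustration of this individualized estimator, the following minimal sketch trains a regularized logistic regression to discriminate the labeled sample L (s = 1) from a user's inbox U_i (s = 0) and converts its predictions into per-example re-sampling weights. It is an assumed reading of Equations 5-7 using scikit-learn rather than the fast solver of [9]; all function and variable names are illustrative.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.linear_model import LogisticRegression

def individualized_bias_weights(X_labeled, X_inbox, C=1.0):
    """Estimate p(s=1 | x, U_i, L) and return one weight s(x) per labeled example.

    X_labeled: feature matrix of the labeled sample L (selected, s = 1).
    X_inbox:   feature matrix of the user's unlabeled inbox U_i (not selected, s = 0).
    """
    X = vstack([X_labeled, X_inbox])
    s = np.concatenate([np.ones(X_labeled.shape[0]), np.zeros(X_inbox.shape[0])])

    # Regularized logistic density estimator p(s=1 | x; w_i), Equations 6 and 7.
    model = LogisticRegression(C=C, max_iter=1000)
    model.fit(X, s)

    # Weights are inversely proportional to the selection probability (Proposition 1).
    p_selected = model.predict_proba(X_labeled)[:, 1]
    weights = 1.0 / np.clip(p_selected, 1e-6, None)

    # Normalize so the weights sum to |L|, absorbing p(s=1 | theta_i, lambda).
    return weights * (len(weights) / weights.sum())
```

Labeled examples that look much more like the public sources than like the user's inbox receive small weights, so the downstream classifier concentrates on the region of the feature space that the inbox actually occupies.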
2.2 Dirichlet-Enhanced Bias Estimation

This section addresses estimation of the sample bias p(s | x, θ_{n+1}, λ) for a new user n+1 by generalizing across existing users U_1, ..., U_n. The resulting estimate p̂_D(s | x, θ_{n+1}, λ) will be conditioned on the new user's inbox U_{n+1} and the labeled data L, but also on all other users' inboxes. We write p̂_D(s | x, θ_{n+1}, λ) = p(s | x, U_{n+1}; L, U_1, ..., U_n) for the Dirichlet-enhanced empirical sample bias. Equation 1 says that there is a p(s = 1 | x, θ_{n+1}, λ) for user n+1 that satisfies Equation 5.

Let us assume a parametric form (we employ a logistic model), and let w_{n+1} be the parameters that satisfy p(s = 1 | x, θ_{n+1}, λ) = p(s = 1 | x; w_{n+1}) ∝ p(x | λ) / p(x | θ_{n+1}). We resort to a Dirichlet process (DP) [5] G(w) as a model for the prior belief on w_{n+1} given w_1, ..., w_n. A Dirichlet process G | {α, G_0} ~ DP(α, G_0) with concentration parameter α and base distribution G_0 generates parameters w_i: the first element w_1 is drawn according to G_0, in our case the uninformed prior. It generates w_{n+1} according to Equation 8, where δ(w_i) is a point distribution centered at w_i.

w_{n+1} | w_1, ..., w_n ~ (α G_0 + Σ_{i=1}^n δ(w_i)) / (α + n)   (8)

Equation 9 integrates over the parameter of the bias for new user n+1. Equation 10 splits the posterior into the likelihood of the sample selection coin tosses and the common prior, which is modeled as a Dirichlet process.

p(s | x, U_{n+1}; L, U_1, ..., U_n) = ∫ p(s | x; w) p(w | U_{n+1}; L, U_1, ..., U_n) dw   (9)
p(w | U_{n+1}, L, U_1, ..., U_n) ∝ P(s_{U_{n+1},L} | w, U_{n+1}, L) G(w | L, U_1, ..., U_n)   (10)

Likelihood P(s_{U_{n+1},L} | w, U_{n+1}, L) is resolved in Equation 7 for a logistic model of the bias.

2.3 Estimation of the Dirichlet Process

The parameters of previous users' bias w_1, ..., w_n constitute the prior w_{n+1} | {w_i}_{i=1}^n ~ G for user n+1. Since the parameters w_i are not observable, an estimate w_{n+1} | L, {U_i}_{i=1}^n ~ Ĝ has to be based on the available data. Exact calculation of this prior requires integrating over the w_1, ..., w_n; since this is not feasible, MCMC sampling [10] or variational approximation [1] can be used. In our application, the model of p(s | x, θ_i, λ) involves a regularized logistic regression in a space of more than 800,000 dimensions. In each iteration of the MCMC process or the variational inference of [1], logistic density estimators for all users would need to be trained, which is prohibitive. We therefore follow [13] and approximate the Dirichlet process as

Ĝ(w) ≈ (α G_0 + Σ_{i=1}^n π_i δ(w*_i)) / (α + n).   (11)

Compared to the original Equation 8, the sum of point distributions at true parameters w_i is replaced by a weighted sum over point distributions at pivotal w*_i. Parameter estimation is divided in two steps. First, pivotal models of the sample bias are trained for each user i, solely based on a user's inbox and the labeled data. Secondly, parameters π_i are estimated using variational EM; they express correlations between, and allow for generalization across, multiple users. Tresp and Yu [13] suggest to use a maximum likelihood estimate w*_i; we implement w*_i by training logistic regression models

p(s = 1 | x; w*_i) = 1 / (1 + e^{⟨w*_i, x⟩})   (12)

with w*_i = argmax_w log P(s_{U_i,L} | w, U_i, L) + log Φ(w). Algorithmically, the pivotal models are obtained analogously to the individualized estimation of the selection bias for each user described in Section 2.1. After the pivotal models have been identified, an EM algorithm maximizes the likelihood over the parameters π_i. For the E step we rely on the assumption that the posterior is a weighted sum over point distributions at the pivotal density estimates (Equation 13). With this assumption, the posterior is no longer a continuous distribution and the E step resolves to the computation of a discrete number of variational parameters ξ_{ij} (Equation 14).

p̂(w | U_j, L) = Σ_{i=1}^n ξ_{ij} δ(w*_i)   (13)
ξ_{ij} ∝ P(s_{U_j,L} | w*_i, U_j, L) Ĝ(w*_i)   (14)

Equation 11 yields the M step with π_i = Σ_{j=1}^n ξ_{ij}. Likelihood P(s_{U_j,L} | w*_i, U_j, L) is calculated as in Equation 7.
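Once the log-likelihoods log P(s_{U_j,L} | w*_i, U_j, L) have been computed from the pivotal models, the E and M steps of Equations 13 and 14 reduce to simple matrix operations. The sketch below is one possible reading of that loop, assuming those log-likelihoods are given as an n×n matrix; it folds the α G_0 term of Equation 11 into the initialization for brevity, and the names are illustrative.

```python
import numpy as np

def fit_dp_prior(loglik, n_iter=50, tol=1e-8):
    """Variational EM over xi (Eq. 14) and pi (Eq. 11).

    loglik[i, j] = log P(s_{U_j,L} | w*_i, U_j, L): the log-likelihood of user j's
    selection decisions under user i's pivotal bias model (Equation 7).
    Returns (xi, pi); the alpha*G0 summand of Eq. 11 is omitted for brevity.
    """
    n = loglik.shape[0]
    pi = np.ones(n)                                   # step 2 of Table 1: pi_i = 1
    for _ in range(n_iter):
        # E step: xi_ij proportional to P(s_{U_j,L} | w*_i, U_j, L) * G_hat(w*_i).
        log_xi = loglik + np.log(pi)[:, None]
        log_xi -= log_xi.max(axis=0, keepdims=True)   # stabilize before exponentiating
        xi = np.exp(log_xi)
        xi /= xi.sum(axis=0, keepdims=True)           # normalize over i for each user j
        # M step: pi_i = sum_j xi_ij.
        new_pi = xi.sum(axis=1)
        if np.abs(new_pi - pi).max() < tol:
            pi = new_pi
            break
        pi = new_pi
    return xi, pi
```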
The entire estimation procedure is detailed in Table 1, steps 1 through 3.

2.4 Inference

Having obtained pivotal models p(s | x; w*_i) and parameters π_i, we need to infer the Dirichlet-enhanced empirical sample bias p(s | x, U_i; L, U_1, ..., U_n). During the training procedure, i is one of the known users from U_1, ..., U_n. At application time, we may furthermore experience a message bound for user n+1. Without loss of generality, we discuss the inference problem for a new user n+1. Inserting Ĝ(w) into Equations 9 and 10 leads to Equation 15. Expanding Ĝ(w) according to Equation 11 yields Equation 16.

p(s | x, U_{n+1}; L, U_1, ..., U_n) ∝ ∫ p(s | x; w) P(s_{U_{n+1},L} | w, U_{n+1}, L) Ĝ(w) dw   (15)
∝ ∫ p(s | x; w) P(s_{U_{n+1},L} | w, U_{n+1}, L) G_0(w) dw + Σ_{i=1}^n p(s | x; w*_i) P(s_{U_{n+1},L} | w*_i, U_{n+1}, L) π_i   (16)

The second summand in Equation 16 is determined by summing over the pivotal models p(s | x; w*_i). The first summand can be determined by applying Bayes' rule in Equation 17; G_0 is the uninformed prior; the resulting term p(s | x, U_{n+1}, L) = p(s | x; w*_{n+1}) is the outcome of a new pivotal density estimator, trained to discriminate L against U_{n+1}. It is determined as in Equation 12.

∫ p(s | x; w) P(s_{U_{n+1},L} | w, U_{n+1}, L) G_0(w) dw ∝ ∫ p(s | x; w) p(w | U_{n+1}, L) dw   (17)
= p(s | x, U_{n+1}, L)   (18)

The Dirichlet-enhanced empirical sample bias p(s | x, U_{n+1}; L, U_1, ..., U_n) for user n+1 is a weighted sum of the pivotal density estimate p(s | x; w*_{n+1}) for user n+1 and the models p(s | x; w*_i) of all users i; the latter are weighted according to their likelihood P(s_{U_{n+1},L} | w*_i, U_{n+1}, L) of observing the messages of user n+1. Inference for the users that are available at training time is carried out in step 4(a) of the training procedure (Table 1).

Table 1: Dirichlet-enhanced, bias-corrected spam filtering.
Input: Labeled data L, unlabeled inboxes U_1, ..., U_n.
1. For all users i = 1 ... n: Train a pivotal density estimator p̂(s = 1 | x, w*_i) as in Equation 12.
2. Initialize Ĝ^0(w*) by setting π_i = 1 for i = 1 ... n.
3. For t = 1, ..., until convergence:
   (a) E-step: For all i, j, estimate ξ^t_{ij} from Equation 14 using Ĝ^{t−1} and the density estimators p(s | x, w*_i).
   (b) M-step: Estimate Ĝ^t(w*) according to Equation 11 using π_i = Σ_{j=1}^n ξ^t_{ij}.
4. For all users i:
   (a) For all x ∈ L: determine the empirical sample bias p(s | x, U_i; L, U_1, ..., U_n), conditioned on the observables, according to Equation 16.
   (b) Train SVM classifier f_i : X → {spam, ham} by solving Optimization Problem 1.
Return classifiers f_i for all users i.

2.5 Training a Bias-Corrected Support Vector Machine

Given the requirement of high accuracy and the need to handle many attributes, SVMs are widely acknowledged to be a good learning mechanism for spam filtering [2]. The final bias-corrected SVM f_{n+1} can be trained by re-sampling or re-weighting L according to s(x) = p(s = 1 | θ_i, λ) / p(s = 1 | x, U_{n+1}; L, U_1, ..., U_n), where p(s | x, U_{n+1}; L, U_1, ..., U_n) is the empirical sample bias and p(s = 1 | θ_i, λ) is the normalizer that assures Σ_{x∈L} s(x) = |L|. Let x_k ∈ L be an example that incurs a margin violation (i.e., slack term) of ξ_k. The expected contribution of x_k to the SVM criterion is s(x_k) ξ_k because x_k will be drawn s(x_k) times on average into each re-sampled data set. Therefore, training the SVM on the re-sampled data or optimizing with re-scaled slack terms leads to identical optimization problems.

Optimization Problem 1. Given labeled data L, re-sampling weights s(x), and regularization parameter C; over all v, b, ξ_1, ..., ξ_m, minimize

(1/2) |v|² + C Σ_{k=1}^m s(x_k) ξ_k   (19)

subject to

∀_{k=1}^m: y_k (⟨v, x_k⟩ + b) ≥ 1 − ξ_k;   ∀_{k=1}^m: ξ_k ≥ 0.   (20)

The bias-corrected spam filter is trained in step 4(b) of the algorithm (Table 1).
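Optimization Problem 1 is an ordinary soft-margin SVM whose slack terms are rescaled by the weights s(x); most SVM packages expose this through per-example weights. The following hedged sketch uses scikit-learn's LinearSVC, whose sample_weight argument multiplies each example's contribution to the hinge loss, matching the rescaled-slack formulation; the helper name is illustrative.

```python
from sklearn.svm import LinearSVC

def train_bias_corrected_svm(X_labeled, y_labeled, weights, C=1.0):
    """Train the bias-corrected SVM of Optimization Problem 1.

    weights: re-sampling weights s(x) for the labeled examples,
             normalized so that weights.sum() == len(weights).
    """
    svm = LinearSVC(C=C)
    # sample_weight rescales each example's slack penalty, i.e. C * s(x_k) * xi_k.
    svm.fit(X_labeled, y_labeled, sample_weight=weights)
    return svm
```

As the text notes, an equivalent alternative is to re-sample L with probabilities proportional to s(x) and train an unweighted SVM.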
2.6 Incremental Update

The Dirichlet-enhanced bias correction procedure is intrinsically incremental, which fits into the typical application scenario. When a new user n+1 subscribes to the email service, the prior w_{n+1} | L, {U_i}_{i=1}^n ~ Ĝ is already available. A pivotal model p(s | x, U_{n+1}; L) can be trained; when U_{n+1} is still empty (the new user has not yet received emails), then the regularizer of the density estimate p(s | x, U_{n+1}, L) resolves to the uniform distribution. Inference of p(s | x, U_{n+1}; L, U_1, ..., U_n) for the new user proceeds as discussed in Section 2.4. When data U_{n+1} becomes available, the prior can be updated. This update is exercised by invoking the EM estimation procedure with additional parameters π_{n+1} and ξ_{(n+1),j}. The estimates of P(s_{U_j,L} | w*_i, U_j, L) for all pairs of existing users i and j do not change and can be reused. The EM procedure returns the updated prior w_{n+2} | L, {U_i}_{i=1}^{n+1} ~ Ĝ for the next new user n+2.

3 Experiments

In our experiments, we study the relative benefit of the following filters. The baseline is constituted by a filter that is trained under the iid assumption from the labeled data. The second candidate is a "one size fits all" bias-corrected filter. Here, all users' messages are pooled as unlabeled data and the bias p(s | x, θ_{n+1}, λ) is modeled by an estimator p̂_O(s | x, θ_{n+1}, λ) = p(s | x, ∪_{i=1}^{n+1} U_i, L). An individually bias-corrected filter uses estimators p̂_I(s | x, θ_{n+1}, λ) = p(s | x, U_{n+1}, L). Finally, we assess the Dirichlet-enhanced bias-corrected filter. It uses the hierarchical Bayesian model to determine the empirical bias p̂_D(s | x, θ_{n+1}, λ) = p(s | x, U_{n+1}; L, U_1, ..., U_n) conditioned on the new user's messages, the labeled data, and all previous users' messages.

Evaluating the filters with respect to the personal distributions of messages requires labeled emails from distinct users. We construct nine accounts using real but disclosed messages. Seven of them contain ham emails received by distinct Enron employees from the Enron corpus [8]; we use the individuals with the largest numbers of messages from a set of mails that have been cleaned from spam. We simulate two foreign users: the "German traveler" receives postings to a moderated German traveling newsgroup, the "German architect" postings to a newsgroup on architecture. Each account is augmented with between 2551 and 6530 spam messages from a distinct source, see Table 2. The number of ham emails varies between 1189 and 5983, reflecting about natural ham-to-spam ratios.

Table 2: Email accounts used for experimentation.
- Williams: ham from Enron/Williams; spam from the Dornbos spam trap (www.dornbos.com), part 1.
- Beck: ham from Enron/Beck; spam from the spam trap of Bruce Guenter (www.em.ca/~bruceg/spam).
- Farmer: ham from Enron/Farmer; spam is personal spam of Paul Wouters (www.xtdnet.nl/paul/spam).
- Kaminski: ham from Enron/Kaminski; spam from the collection of SpamArchive.org, part 1.
- Kitchen: ham from Enron/Kitchen; spam is personal spam of the second author.
- Lokay: ham from Enron/Lokay; spam from the collection of SpamAssassin (www.spamassassin.org).
- Sanders: ham from Enron/Sanders; spam is personal spam of Richard Jones (www.annexia.org/spam).
- German traveler: ham from Usenet/de.rec.reisen.misc; spam from the Dornbos spam trap (www.dornbos.com), part 2.
- German architect: ham from Usenet/de.sci.architektur; spam from the collection of SpamArchive.org, part 2.
The ham section of the labeled data L contains 4000 ham emails from the SpamAssassin corpus, 1000 newsletters, and 500 emails from Enron employee Taylor. The labeled data contain 5000 spam emails relayed by blacklisted servers. The data are available from the authors. The total of 76,214 messages are transformed into binary term occurrence vectors with a total of 834,661 attributes; charset and base64 decoding are applied, email headers are discarded, and tokens occurring less than 4 times are removed. SVM parameter C, concentration parameter α, and the regularization parameter of the logistic regression are adjusted on a small reserved tuning set.

We iterate over all users and let each one play the role of the new user n+1. We then iterate over the size of the new user's inbox and average 10 repetitions of the evaluation process, sampling U_{n+1} from the inbox and using the remaining messages as hold-out data for performance evaluation. We train the different filters on identical samples and measure the area under the ROC curve (AUC).

Figure 1 shows the AUC performance of the iid baseline and the three bias-corrected filters for the first two Enron and one of the German users. Error bars indicate standard error of the difference to the iid filter.

Figure 1: AUC of the iid baseline and the three bias-corrected filters (Dirichlet, "one size fits all", individual) versus size of the user's unlabeled inbox |U_{n+1}|, for the users Williams, Beck, and German traveler.

Figure 2: Average reduction of 1−AUC risk over all nine users (left); reduction of 1−AUC risk dependent on strength of iid violation (center); number of existing users vs. training time (right).

Figure 2 (left) aggregates the results over all nine users by averaging the rate by which the risk 1−AUC is reduced. We compute this reduction as 1 − (1 − AUC_corrected) / (1 − AUC_baseline), where AUC_corrected is the AUC of one of the bias-corrected filters and AUC_baseline is the AUC of the iid filter. The benefit of the individualized bias correction depends on the number of emails available for that user; the 1−AUC risk is reduced by 35-40% when many emails are available. The "one size fits all" filter is almost independent of the number of emails of the new user. On average, the Dirichlet-enhanced filter reduces the risk 1−AUC by about 35% for a newly created account and by almost 40% when many personal emails have arrived. It outperforms the "one size fits all" filter even for an empty U_{n+1} because fringe accounts (e.g., the German users) can receive a lower weight in the common prior.
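The risk-reduction figure is a simple transformation of the two AUC values. For instance, assuming a baseline AUC of 0.99 and a corrected AUC of 0.994, the reduction is 1 − 0.006/0.010 = 40%. A one-line helper makes the convention explicit:

```python
def risk_reduction(auc_corrected, auc_baseline):
    """Reduction of the 1-AUC risk relative to the iid baseline."""
    return 1.0 - (1.0 - auc_corrected) / (1.0 - auc_baseline)

assert abs(risk_reduction(0.994, 0.99) - 0.4) < 1e-9
```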
The baseline AUC of over 0.99 is typical for server-sided spam filtering; a 40% risk reduction that yields an AUC of 0.994 is still a very significant improvement of the filter that can be spent on a substantial reduction of the false positive rate, or on a higher rate of spam recognition.

The question arises how strong a violation of the iid assumption the bias correction techniques can compensate. In order to investigate, we control the violation of the iid property of the labeled data as follows. We create a strongly biased sample by using only Enron users as test accounts θ_i, and not using any Enron emails in the labeled data. We vary the proportion of strongly biased data versus randomly drawn Enron mails in the labeled training data (no email occurs in the training and testing data at the same time). When this proportion is zero, the labeled sample is drawn iid from the testing distributions; when it reaches 1, the sample is strongly biased. In Figure 2 (center) we observe that, averaged over all users, bias correction is effective when the iid violation lies in a mid-range. It becomes less effective when the sample violates the iid assumption too strongly. In this case, "gaps" occur in λ; i.e., there are regions that have zero probability in the labeled data L ~ λ but nonzero probability in the testing data U_i ~ θ_i. Such gaps render schemes that aim at reconstructing p(x | θ_i) by weighting data drawn according to p(x | λ) ineffective.

Figure 2 (right) displays the total training time over the number of users. We fix |U_{n+1}| to 16 and vary the number of users that influence the prior. The iid baseline and the individually corrected filter scale constantly. The Dirichlet-enhanced filter scales linearly in the number of users that constitute the common prior; the EM algorithm, with a quadratic complexity in the number of users, contributes only marginally to the training time. The training time is dominated by the training of the pivotal models (linear complexity). The Dirichlet-enhanced filter with incremental update scales favorably compared to the "one size fits all" filter. Figure 2 is limited to the 9 accounts that we have engineered; the execution time is in the order of minutes and allows handling larger numbers of accounts.

4 Conclusion

It is most natural to define the quality criterion of an email spam filter with respect to the distribution that governs the personal emails of its user. It is desirable to utilize available labeled email data, but assuming that these data were governed by the same distribution unduly over-simplifies the problem setting. Training a density estimator to characterize the difference between the labeled training data and the unlabeled inbox of a user, and using this estimator to compensate for this discrepancy, improves the performance of a personalized spam filter, provided that the inbox contains sufficiently many messages. Pooling the unlabeled inboxes of a group of users, training a density estimator on this pooled data, and using this estimator to compensate for the bias outperforms the individualized bias correction only when very few unlabeled data for the new user are available. We developed a hierarchical Bayesian framework which uses a Dirichlet process to model the common prior for a group of users. The Dirichlet-enhanced bias correction method estimates, and compensates for, the discrepancy between labeled training and unlabeled personal messages, learning from the new user's unlabeled inbox as well as from data of other users.
Empirically, with a 35% reduction of the 1−AUC risk for a newly created account, the Dirichlet-enhanced filter outperforms all other methods. When many unlabeled personal emails are available, both individualized and Dirichlet-enhanced bias correction reduce the 1−AUC risk by nearly 40% on average.

Acknowledgment

This work has been supported by Strato Rechenzentrum AG and by the German Science Foundation DFG under grant SCHE540/10-2.

References
[1] D. Blei and M. Jordan. Variational methods for the Dirichlet process. In Proceedings of the International Conference on Machine Learning, 2004.
[2] H. Drucker, D. Wu, and V. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048-1055, 1999.
[3] M. Dudik, R. Schapire, and S. Phillips. Correcting sample selection bias in maximum entropy density estimation. In Advances in Neural Information Processing Systems, 2005.
[4] C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 2001.
[5] T. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209-230, 1973.
[6] J. Heckman. Sample selection bias as a specification error. Econometrica, 47:153-161, 1979.
[7] N. Japkowicz and S. Stephen. The class imbalance problem: A systematic study. Intelligent Data Analysis, 6:429-449, 2002.
[8] B. Klimt and Y. Yang. The Enron corpus: A new dataset for email classification research. In Proceedings of the European Conference on Machine Learning, 2004.
[9] P. Komarek. Logistic Regression for Data Mining and High-Dimensional Classification. Doctoral dissertation, Carnegie Mellon University, 2004.
[10] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249-265, 2000.
[11] M. Prince, B. Dahl, L. Holloway, A. Keller, and E. Langheinrich. Understanding how spammers steal your e-mail address: An analysis of the first six months of data from Project Honey Pot. In Proceedings of the Conference on Email and Anti-Spam, 2005.
[12] M. Sugiyama and K.-R. Müller. Model selection under covariate shift. In Proceedings of the International Conference on Artificial Neural Networks, 2005.
[13] V. Tresp and K. Yu. An introduction to nonparametric hierarchical Bayesian modelling with a focus on multi-agent learning. In Switching and Learning in Feedback Systems, volume 3355 of Lecture Notes in Computer Science, pages 290-312. Springer, 2004.
[14] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the International Conference on Machine Learning, 2004.
[15] T. Zhang and F. Oles. Text categorization based on regularized linear classifiers. Information Retrieval, 4(1):5-31, 2001.
An EM Algorithm for Localizing Multiple Sound Sources in Reverberant Environments

Michael I. Mandel, Daniel P. W. Ellis
LabROSA, Dept. of Electrical Engineering, Columbia University, New York, NY
{mim,dpwe}@ee.columbia.edu

Tony Jebara
Dept. of Computer Science, Columbia University, New York, NY
jebara@cs.columbia.edu

Abstract

We present a method for localizing and separating sound sources in stereo recordings that is robust to reverberation and does not make any assumptions about the source statistics. The method consists of a probabilistic model of binaural multisource recordings and an expectation maximization algorithm for finding the maximum likelihood parameters of that model. These parameters include distributions over delays and assignments of time-frequency regions to sources. We evaluate this method against two comparable algorithms on simulations of simultaneous speech from two or three sources. Our method outperforms the others in anechoic conditions and performs as well as the better of the two in the presence of reverberation.

1 Introduction

Determining the direction from which a sound originated using only two microphones is a difficult problem. It is exacerbated by the presence of sounds from other sources and by realistic reverberations, as would be found in a classroom. A related and equally difficult problem is determining in which regions of a spectrogram a sound is observable, the so-called time-frequency mask, useful for source separation [1]. While humans can solve these problems well enough to carry on conversations in the canonical "cocktail party", current computational solutions are less robust. Either they assume sound sources are statistically stationary, they assume anechoic conditions, or they require an array with at least as many microphones as there are sources to be localized.

The method proposed in this paper takes a probabilistic approach to localization, using the psychoacoustic cue of interaural phase difference (IPD). Unlike previous approaches, this EM algorithm estimates true probability distributions over both the direction from which sounds originate and the regions of the time-frequency plane associated with each sound source. The basic assumptions that make this possible are that a single source dominates each time-frequency point and that a single delay and amplification cause the difference in the ears' signals at a particular point. By modelling the observed IPD in this way, this method overcomes many of the limitations of other systems. It is able to localize more sources than it has observations, even in reverberant environments. It makes no assumptions about the statistics of the source signal, making it well suited to localizing speech, a highly non-Gaussian and non-stationary signal. Its probabilistic nature also facilitates the incorporation of other probabilistic cues for source separation such as those obtained from single-microphone computational auditory scene analysis.

Many comparable methods are also based on IPD, but they first convert it into interaural time difference. Because of the inherent 2π ambiguity in phase differences, this mapping is one-to-one only up to a certain frequency. Our system, however, is able to use observations across the entire frequency range, because even though the same phase difference can correspond to multiple delays, a particular delay corresponds unambiguously to a specific phase difference at every frequency.
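This asymmetry between the two mappings is easy to verify numerically: a fixed delay predicts a unique wrapped phase difference at every frequency, while a measured phase difference is consistent with an entire family of delays. A small illustrative sketch (the particular frequencies and delay are arbitrary choices, not taken from the paper):

```python
import numpy as np

tau = 0.5e-3                                             # a fixed interaural delay of 0.5 ms
omega = 2 * np.pi * np.array([500.0, 2000.0, 5000.0])    # frequencies in rad/s

# Delay -> phase: one unique wrapped phase difference per frequency.
ipd = np.angle(np.exp(-1j * omega * tau))

# Phase -> delay: at 5 kHz, every tau + k/5000 s yields the same wrapped IPD.
aliases = tau + np.arange(-2, 3) / 5000.0
print(ipd)      # one IPD per frequency
print(aliases)  # five delays indistinguishable from their IPD at 5 kHz alone
```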
We evaluate our system on the localization and separation of two and three simultaneous speakers in simulated anechoic and reverberant environments. The speech comes from the TIMIT acoustic-phonetic continuous speech corpus [2], the anechoic simulations use the head-related transfer functions described in [3], and the reverberant simulations use the binaural classroom impulse responses described in [4]. We use four metrics to evaluate our system: the root mean square localization error, the mutual information between the estimated mask and a ground truth mask, the signal to noise ratio of separated speech from [5], and the W-disjoint orthogonality metric of [1]. Our EM approach outperformed Yilmaz and Rickard's DUET algorithm [1] and Aarabi's PHAT-histogram [6] in anechoic situations, and performed comparably to PHAT-histogram in reverberation.

1.1 Previous work

Many systems exist for localizing sounds using a microphone array, e.g. [7]. These systems can be quite accurate, but this accuracy requires physical bulk, special hardware to synchronize many recordings, and tedious calibration procedures. They isolate signals in reverberation using directional filtering, which becomes more selective only through the addition of further microphones. Because of the structure and abilities of the human auditory system, researchers have paid particular attention to the two-microphone case. Roman et al. [5] make empirical models of the timing and level differences for combinations of two sources in known positions synthesized with anechoic head-related transfer functions (HRTFs). They then classify each time-frequency cell in their auditory-based representation, creating a binary time-frequency mask which contains the cells that appear to be dominated by the target source.

Yilmaz and Rickard [1] studied the interaction of speech signals in the time-frequency plane, concluding that multiple speech signals generally do not overlap much in both time and frequency. They also conclude, as in [5], that the best ground truth mask includes only points in which the signal to noise ratio is 0 dB or greater. They propose a method for localization that maps IPD to delay before aggregating information and thus cannot use information at higher frequencies. It is designed for anechoic and noise-free situations and subsequently its accuracy suffers in more realistic settings. Aarabi [6] and Rennie [8] focus on localizing sounds. Aarabi's method, while quite simple, is still one of the most accurate methods for localizing many simultaneous sound sources, even in reverberation. Rennie [8] refined this approach with an EM algorithm for performing the same process probabilistically. A limitation of both algorithms, however, is the assumption that a single source dominates each analysis window, as compared to time-frequency masking algorithms which allow different sources to dominate different frequencies in the same analysis window.

2 Framework

For the purposes of deriving this model we will examine the situation where one sound source arrives at two spatially distinct microphones or ears. This will generalize to the assumption that only a single source arrives at each time-frequency point in a spectrogram, but that different points can contain different sources. Denote the sound source as s(t), and the signals received at the left and right ears as ℓ(t) and r(t), respectively. The two received signals will have some delay and some gain relative to the source, in addition to a disruption due to noise.
For this model, we assume a convolutive noise process, because it fits our empirical observations, it is easy to analyze, and in general it is very similar to the additive noise processes that other authors assume. The various signals are then related by

ℓ(t) = a_ℓ s(t − τ_ℓ) ∗ n_ℓ(t),   r(t) = a_r s(t − τ_r) ∗ n_r(t).   (1)

The ratio of the short-time Fourier transforms, F{·}, of both equations is the interaural spectrogram,

X_IS(ω, t) ≜ L(ω, t) / R(ω, t) = α(ω, t) e^{jφ(ω, t)} = e^{a − jωτ} N(ω, t),   (2)

where τ = τ_ℓ − τ_r, N(ω, t) = N_ℓ(ω, t) / N_r(ω, t) = F{n_ℓ(t)} / F{n_r(t)}, and a = log(a_ℓ / a_r). This equivalence assumes that τ is much smaller than the length of the window over which the Fourier transform is taken, a condition easily met for dummy head recordings with moderately sized Fourier transform windows. For example, in our experiments the maximum delay was 0.75 ms, and the window length was 64 ms.

As observed in [9], N(ω, t), the noise in the interaural spectrogram of a single source, is unimodal and approximately identically distributed for all frequencies and times. Using the standard rectangular-to-polar change of coordinates, the noise can be separated into independent magnitude and phase components. The magnitude noise is approximately log-normal, while the phase noise has a circular distribution with tails heavier than the von Mises distribution. In this work, we ignore the magnitude noise and approximate the phase noise with a mixture of Gaussians, all with the same mean. This approximation includes the distribution's heavy-tailed characteristic, but ignores the circularity, meaning that the variance of the noise is generally underestimated. A true circular distribution is avoided because its maximum likelihood parameters cannot be found in closed form.
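A direct way to compute the interaural spectrogram of Equation 2 is to take the ratio of the two channels' STFTs. The sketch below uses scipy's STFT with the 1024-point window mentioned in Section 4; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import stft

def interaural_spectrogram(left, right, fs=16000, nperseg=1024):
    """Interaural spectrogram X_IS(omega, t) = L(omega, t) / R(omega, t), Eq. 2."""
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    # Guard against near-zero denominators before taking the ratio.
    R_safe = np.where(np.abs(R) > 1e-12, R, 1e-12)
    X = L / R_safe
    phi = np.angle(X)                       # observed IPD phi(omega, t), in (-pi, pi]
    log_ratio = np.log(np.abs(X) + 1e-12)   # magnitude part, ignored by the model
    return X, phi, log_ratio
```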
3 Derivation of EM algorithm

The only observed variable in our model is φ(ω, t), the phase difference between the left and right channels at frequency ω and time t. While 2π ambiguities complicate the calculation of this quantity, we use φ(ω, t) = arg(L(ω, t) / R(ω, t)) so that it stays within (−π, π]. For similar reasons, we define φ̂(ω, t; τ) = arg((L(ω, t) / R(ω, t)) e^{jωτ}) as a function of the observation.

Our model of the interaural phase difference is a mixture over sources, delays, and Gaussians. In particular, we have I sources, indexed by i, each of which has a distribution over delays, τ. For this model, the delays are discretized to a grid and probabilities over them are computed as a multinomial. This discretization gives the most flexible possible distribution over τ for each source, but since we expect sources to be compact in τ, a unimodal parametric distribution could work. Experiments approximating Laplacian and Gaussian distributions over τ, however, did not perform as well at localizing sources or creating masks as the more flexible multinomial. For a particular source, then, the probability of an observed delay is

p(φ(ω, t) | i, τ) = p_N(φ̂(ω, t; τ)),   (3)

where p_N(·) is the probability density function of the phase noise, N(ω, t), described above. We approximate this distribution as a mixture of J Gaussians, indexed by j and centered at 0,

p(φ(ω, t) | i, τ) = Σ_{j=1}^J p(j | i) N(φ̂(ω, t; τ) | 0, σ²_{ij}).   (4)

In order to allow parameter estimation, we define hidden indicator variables z^{ωt}_{ijτ} such that z^{ωt}_{ijτ} = 1 if φ(ω, t) comes from Gaussian j in source i at delay τ, and 0 otherwise. There is one indicator for each observation, so Σ_{ijτ} z^{ωt}_{ijτ} = 1 and z^{ωt}_{ijτ} ≥ 0. The estimated parameters of our model are thus ψ_{ijτ} ≡ p(i, j, τ), a third-order tensor of discrete probabilities, and σ_{ij}, the variances of the various Gaussians. For convenience, we define Θ ≡ {ψ_{ijτ}, σ_{ij} ∀i, j, τ}. Thus, the total log-likelihood of our data, including marginalization over the hidden variables, is

log p(φ(ω, t) | Θ) = Σ_{ωt} log Σ_{ijτ} ψ_{ijτ} N(φ̂(ω, t; τ) | 0, σ²_{ij}).   (5)

This log likelihood allows us to derive the E and M steps of our algorithm. For the E step, we compute the expected value of z^{ωt}_{ijτ} given the data and our current parameter estimates,

ν_{ijτ}(ω, t) ≡ E{z^{ωt}_{ijτ} | φ(ω, t), Θ} = p(z^{ωt}_{ijτ} = 1 | φ(ω, t), Θ) = p(z^{ωt}_{ijτ} = 1, φ(ω, t) | Θ) / p(φ(ω, t) | Θ)   (6)
= ψ_{ijτ} N(φ̂(ω, t; τ) | 0, σ²_{ij}) / Σ_{ijτ} ψ_{ijτ} N(φ̂(ω, t; τ) | 0, σ²_{ij}).   (7)

Figure 1: Example parameters estimated for two speakers located at 0° and 45° in a reverberant classroom. (a) Ground truth mask for speaker 1, (b) mask estimated by Yilmaz's algorithm, (c) mask estimated by Aarabi's algorithm, (d) mask estimated by the EM algorithm, (e) probability distribution over τ for each speaker, p(τ | i), estimated by the EM algorithm, (f) probability distribution over phase error for each speaker, p(φ(ω, t) | i, τ), estimated by the EM algorithm.

For the M step, we first compute the auxiliary function Q(Θ | Θ^s), where Θ is the set of parameters over which we wish to maximize the likelihood, and Θ^s is the estimate of the parameters after s iterations of the algorithm:

Q(Θ | Θ^s) = c + Σ_{ωt} Σ_{ijτ} ν_{ijτ}(ω, t) log p(φ(ω, t), z^{ωt}_{ijτ} | Θ),   (8)

where c does not depend on Θ. Since Q is concave in Θ, we can maximize it by taking derivatives with respect to Θ and setting them equal to zero, while also including a Lagrange multiplier to enforce the constraint that Σ_{ijτ} ψ_{ijτ} = 1. This results in the update rules

ψ_{ijτ} = (1 / ΩT) Σ_{ωt} ν_{ijτ}(ω, t),   σ²_{ij} = Σ_{ωt} Σ_τ ν_{ijτ}(ω, t) φ̂(ω, t; τ)² / Σ_{ωt} Σ_τ ν_{ijτ}(ω, t).   (9)

Note that we are less interested in the joint distribution ψ_{ijτ} = p(i, j, τ) than other distributions derived from it. Specifically, we are interested in the marginal probability of a point's coming from source i, p(i), the distributions over delays and Gaussians conditioned on the source, p(τ | i) and p(j | i), and the probability of each time-frequency point's coming from each source, M_i(ω, t). To calculate these masks, we marginalize p(z^{ωt}_{ijτ} | φ(ω, t), Θ) over τ and j to get

M_i(ω, t) ≡ p(z^{ωt}_i | φ(ω, t), Θ) = Σ_{jτ} p(z^{ωt}_{ijτ} | φ(ω, t), Θ) = Σ_{jτ} ν_{ijτ}(ω, t).   (10)

See Figure 1(d)-(f) for an example of the parameters estimated for two speakers located at 0° and 45° in a reverberant classroom.
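One EM iteration over Equations 6 through 10 can be written compactly with array broadcasting. The sketch below treats the delay grid, the observed IPD matrix, and initial parameters as given; it is a minimal illustration of the updates under those assumptions, not a full reimplementation (it omits, for example, the cross-correlation initialization described in Section 4).

```python
import numpy as np

def em_step(phi, omega, tau, psi, sigma2):
    """One EM iteration for the mixture of Eq. 5.

    phi:    observed IPD, shape (F, T); omega: frequencies in rad/s, shape (F,)
    tau:    delay grid, shape (K,); psi: p(i, j, tau), shape (I, J, K)
    sigma2: per-source, per-Gaussian variances, shape (I, J)
    Returns updated (psi, sigma2) and the per-source masks M_i (Eq. 10).
    """
    # phi_hat(omega, t; tau): residual phase after undoing each candidate delay.
    phase = phi[None, :, :] + omega[None, :, None] * tau[:, None, None]   # (K, F, T)
    phi_hat = np.angle(np.exp(1j * phase))

    # E step (Eqs. 6-7): responsibilities nu_{ij tau}(omega, t).
    var = sigma2[:, :, None, None, None]                                  # (I, J, 1, 1, 1)
    lik = np.exp(-phi_hat[None, None] ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    nu = psi[:, :, :, None, None] * lik                                   # (I, J, K, F, T)
    nu /= nu.sum(axis=(0, 1, 2), keepdims=True)

    # M step (Eq. 9).
    psi_new = nu.sum(axis=(3, 4)) / (phi.shape[0] * phi.shape[1])
    sigma2_new = (nu * phi_hat[None, None] ** 2).sum(axis=(2, 3, 4)) / nu.sum(axis=(2, 3, 4))

    masks = nu.sum(axis=(1, 2))                                           # (I, F, T), Eq. 10
    return psi_new, sigma2_new, masks
```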
4 Experiments

In order to evaluate our system, we simulated speech in anechoic and reverberant noise situations by convolving anechoic speech samples with binaural impulse responses. We used speech from the TIMIT acoustic-phonetic continuous speech corpus [2], a dataset of utterances spoken by 630 native American English speakers. Of the 6300 utterances in the database, we chose 15 at random to use in our evaluation. To allow the speakers to be equally represented in each mixture, we normalized all of the signals by their average energies before convolving them with the binaural impulse responses.

The anechoic binaural impulse responses came from Algazi et al. [3], a large effort to record head-related transfer functions for many different individuals. Impulse response measurements were taken over the sphere surrounding subjects' heads at 25 different azimuths and 50 different elevations. The measurements we used were for the KEMAR dummy head with small ears, although the dataset contains impulse responses for around 50 individuals. The reverberant binaural impulse responses we used were recorded by Shinn-Cunningham et al. in a real classroom [4]. These measurements were also made with a KEMAR dummy head, although a different actual unit was used. Measurements were taken at four different positions in the classroom, three distances from the subject, seven directions, and three repetitions of each measurement. We used the measurements taken in the middle of the classroom with the sources at a distance of 1 m from the subject.

Our method has a number of parameters that need to be set. Perhaps the most important part of running the algorithm is the initialization. We initialized it by setting p(τ | i) to discrete approximations of Gaussians centered at the I largest peaks in the average cross-correlation. The other parameters are numerical. Following [1], we use a 1024-point window, which corresponds to 64 ms at 16 kHz. We chose J, the number of Gaussians in the noise GMM, to be 2, striking a balance between model flexibility and computational cost. Since the log likelihood increases monotonically with each EM iteration, we chose to stop after 10 iterations, when improvements in log likelihood generally became insignificant. Finally, we discretized τ to 31 values linearly spaced between −0.9375 ms and 0.9375 ms.

4.1 Comparison algorithms

We compare the performance of the time-frequency masks and the localization accuracy of our algorithm with those of two other algorithms. The first is Yilmaz and Rickard's DUET algorithm from [1], although it had to be modified slightly to accommodate our recordings. In order to estimate the interaural time and level differences of the signals in a mixture, DUET creates a two-dimensional histogram of them at every point in the interaural spectrogram. It then smooths the histogram and finds the I largest peaks, which should correspond to the I sources.

The interaural parameter calculation of DUET requires that the interaural phase of a measurement unambiguously translates to a delay. The maximum frequency at which this is possible is c/(2d), where c is the speed of sound and d is the distance between the two microphones. The authors in [1] choose a fixed sampling rate and adjust the distance between their free-standing microphones to prevent ambiguity. In the case of our KEMAR recordings, however, the distance between the two ears is fixed at approximately 0.15 m, and since the speed of sound is approximately 340 m/s, we must lower the maximum frequency from 8000 to 1150 Hz. Even though the frequencies used to estimate the interaural parameters are limited, a time-frequency mask can still be computed for all frequencies. See Figure 1(b) for an example of such a mask estimated by DUET.

We also implemented Aarabi's PHAT-histogram technique from [6], augmented to create time-frequency masks.
The algorithm localizes multiple simultaneous sources by cross-correlating the left and right channels using the Phase Transform (PHAT) for each frame of the interaural spectrogram. This gives point estimates of the delay at each frame, which are pooled over all of the frames of the signal into a histogram. The I largest peaks in this histogram are assumed to be the interaural delays of the I sources. While not designed to create time-frequency masks, one can be constructed that simply assigns an entire frame to the source from which its delay originates. See Figure 1(c) for an example mask estimated by PHAT-histogram.

As discussed in the next section, we compare these algorithms using a number of metrics, some of which admit baseline masks. For power-based metrics, we include ground truth and random masks in the comparison as baselines. The ground truth, or 0 dB, mask is the collection of all time-frequency points in which a particular source is louder than the mixture of all other sources; it is included to measure the maximum improvement achievable by an algorithmically created mask. The random mask is created by assigning each time-frequency point to one of the I sources at random; it is included to measure the performance of the simplest possible masking algorithm.

4.2 Performance measurement

Measuring performance of localization results is straightforward: we use the root-mean-square error. Measuring the performance of time-frequency masks is more complicated, but the problem has been well studied in other papers [5, 1, 10]. There are two extremes possible in these evaluations. The first is to place equal value on every time-frequency point, regardless of the power it contains, e.g. the mutual information metric. The second is to measure the performance in terms of the amount of energy allowed through the mask or blocked by it, e.g. the SNR and WDO metrics.

To measure performance valuing all points equally, we compute the mutual information between the ground truth mask and the predicted mask. Each mask point is treated as a binary random variable, so the mutual information can easily be calculated from their individual and joint entropies. In order to avoid including results with very little energy, the points in the lowest energy decile in each band are thrown out before calculating the mutual information. One potential drawback to using the mutual information as a performance metric is that it has no fixed maximum: it is bounded below by 0, but above by the entropy of the ground truth mask, which varies with each particular mixture. Fortunately, the entropy of the ground truth mask was close to 1 for almost all of the mixtures in this evaluation.

To measure the signal to noise ratio (SNR), we follow [5] and take the ratio of the amount of energy in the original signal that is passed through the mask to the amount of energy in the mixed signal minus the original signal that is passed through the mask. Since the experimental mixtures are simulated, we have access to the original signal. This metric penalizes masks that eliminate signal as well as masks that pass noise. A similar metric is described in [1], the W-disjoint orthogonality (WDO). This is the signal to noise ratio in the mixture the mask passes through, multiplied by a (possibly negative) penalty term for eliminating signal energy.

When evaluated on speech, energy-based metrics tend to favor systems with better performance at frequencies below 500 Hz, where the energy is concentrated. Frequencies up to 3000 Hz, however, are still important for the intelligibility of speech. In order to more evenly distribute the energy across frequencies and thus include the higher frequencies more equally in the energy-based metrics, we apply a mild high-pass pre-emphasis filter to all of the speech segments. The experimental results were quite similar without this filtering, but the pre-emphasis provides more informative scoring.
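The two energy-based scores reduce to ratios of masked spectrogram energies. A sketch under the definitions just given, where S is the target source's spectrogram, Y is the mixture's, and M is the mask being evaluated; the WDO expression is one common formulation of the metric in [1], not necessarily the exact scoring code used here.

```python
import numpy as np

def snr_db(S, Y, M):
    """SNR of the masked target: energy of the original signal passed by the
    mask over energy of (mixture - original) passed by the mask."""
    signal = np.sum(np.abs(M * S) ** 2)
    noise = np.sum(np.abs(M * (Y - S)) ** 2)
    return 10 * np.log10(signal / noise)

def wdo(S, Y, M):
    """W-disjoint orthogonality: masked target energy minus masked interference
    energy, relative to the total target energy (penalizes discarded signal)."""
    preserved = np.sum(np.abs(M * S) ** 2)
    leaked = np.sum(np.abs(M * (Y - S)) ** 2)
    return (preserved - leaked) / np.sum(np.abs(S) ** 2)
```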
Energy-based metrics thus emphasize the lowest frequencies, yet frequencies up to 3000 Hz are still important for the intelligibility of speech. In order to distribute the energy more evenly across frequencies, and thus include the higher frequencies more equally in the energy-based metrics, we apply a mild high-pass pre-emphasis filter to all of the speech segments. The experimental results were quite similar without this filtering, but the pre-emphasis provides more informative scoring.

4.3 Results

We evaluated the performance of these algorithms in four different conditions, using two and three simultaneous speakers in reverberant and anechoic conditions. In the two-source experiments, the target source was held at 0° while the distracter was moved from 5° to 90°. In the three-source experiments, the target source was held at 0° and distracters were located symmetrically on either side of the target, from 5° to 90°. The experiment was repeated five times for each separation, using different utterances each time to average over any interaction peculiarities. See Figure 2 for plots of the results of all of the experiments.

Our EM algorithm performs quite well at localization. Its root-mean-square error is particularly low for two-speaker and anechoic tests, and only slightly higher for three speakers in reverberation. It does not localize well when the sources are very close together, i.e. within 5°, most likely because of problems with its automatic initialization: at such a separation, two cross-correlation peaks are difficult to discern. Performance also suffers slightly for larger separations, most likely as a result of greater head shadowing. Head shadowing causes interaural intensity differences at high frequencies, which change the distribution of IPDs and violate our model's assumption that phase noise is identically distributed across frequencies.

The algorithm also performs well at time-frequency masking, more so for anechoic simulations than reverberant ones. See Figure 1(d) for an example time-frequency mask in reverberation. Notice that the major features follow the ground truth, but much detail is lost. Notice also the lower-contrast bands in this figure at 0, 2.7, and 5.4 kHz, corresponding to the frequencies at which the sources have the same IPD, modulo 2π. For any particular relative delay between sources, there are frequencies which provide no information to distinguish one source from the other. Our EM algorithm, however, can distinguish between the two because the soft assignment in τ uses information from many relative delays.
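To make the last point concrete, the ambiguous frequencies can be computed directly: two sources whose interaural delays differ by Δτ have identical IPDs, modulo 2π, at integer multiples of 1/Δτ. The value of Δτ below is an assumption chosen to match the roughly 2.7 kHz band spacing visible in Figure 1(d).

```python
import numpy as np

delta_tau = 0.37e-3                            # assumed relative delay, seconds
ambiguous = np.arange(0, 8001, 1 / delta_tau)  # Hz, up to the 8 kHz Nyquist
print(ambiguous)                               # -> [0., 2702.7, 5405.4]
```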
[Figure 2: Experimental results for four conditions (rows) compared using four metrics (columns). First row: two sources, anechoic; second row: three sources, anechoic; third row: two sources, reverberant; fourth row: three sources, reverberant. Columns: mean-square localization error (ms), mutual information (bits), SNR (dB), and W-disjoint orthogonality, each plotted against source separation in degrees, for the ground-truth, EM, Aarabi, Yilmaz, and random masks.]

The EM approach always performs as well as the better of the other two algorithms and outperforms them both in many situations. Its localization performance is comparable to PHAT-histogram in two-speaker conditions and slightly worse in three-speaker conditions. DUET suffers even in anechoic, two-source situations, possibly because it was designed for free-standing microphones as opposed to dummy-head recordings. Its performance decreases further as the tasks become more difficult. The advantage of our method for masking, however, is particularly clear in anechoic conditions, where it has the highest mutual information at all angles and the highest SNR and WDO at lower angles. In reverberant conditions, the mutual information between estimated masks and ground truth masks becomes quite low, but PHAT-histogram comes out slightly ahead. Comparing SNR measurements in reverberation, PHAT-histogram and the EM approach perform similarly, with DUET trailing. In WDO, however, PHAT-histogram performs best, with EM and DUET performing similarly to the random mask.

5 Conclusions and Future Work

We have derived and demonstrated an expectation-maximization algorithm for probabilistic source separation and time-frequency masking. Using the interaural phase delay, it is able to localize more sources than microphones, even in the reverberation found in a typical classroom. It does not depend on any assumptions about sound source statistics, making it well suited for such non-stationary signals as speech and music. Because it is probabilistic, it is straightforward to augment the feature representation with other monaural or binaural cues. There are many directions to take this project in the future. Perhaps the largest gain in signal separation accuracy could come from the combination of this method with other computational auditory scene analysis techniques [11, 12]. A system using both monaural and binaural cues should surpass the performance of either approach alone. Another binaural cue that would be easy to add is the IID caused by head shadowing and pinna filtering, allowing localization in both azimuth and elevation.
This EM algorithm could also be expanded in a number of ways by itself. A minimum entropy prior [13] could be included to keep the distributions of the various sources separate from one another. In addition, a parametric, heavy-tailed model could be used instead of the current discrete model to ensure unimodality of the distributions and enforce the separation of different sources. Along the same lines, a variational Bayes model could be used with a slightly different parameterization to treat all of the parameters probabilistically, as in [14]. Finally, we could relax the independence constraints between adjacent time-frequency points, making a Markov random field. Since sources tend to dominate regions of adjacent points in both time and frequency, the information at a point's neighbors could help that point localize itself.

Acknowledgments

The authors would like to thank Barbara Shinn-Cunningham for sharing her lab's binaural room impulse response data with us and Richard Duda for making his lab's head-related transfer functions available on the web. This work is supported in part by the National Science Foundation (NSF) under Grants No. IIS-0238301, IIS-05-35168, CCR-0312690, and IIS-0347499. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

References
[1] Ozgur Yilmaz and Scott Rickard. Blind separation of speech mixtures via time-frequency masking. IEEE Transactions on Signal Processing, 52(7):1830-1847, July 2004.
[2] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren. DARPA TIMIT acoustic-phonetic continuous speech corpus CDROM, 1993.
[3] V. R. Algazi, R. O. Duda, D. M. Thompson, and C. Avendano. The CIPIC HRTF database. In Proceedings of the 2001 IEEE Workshop on Applications of Signal Processing to Audio and Electroacoustics, pages 99-102, October 2001.
[4] Barbara Shinn-Cunningham, Norbert Kopco, and Tara J. Martin. Localizing nearby sound sources in a classroom: Binaural room impulse responses. Journal of the Acoustical Society of America, 117:3100-3115, 2005.
[5] Nicoleta Roman, DeLiang Wang, and Guy J. Brown. A classification-based cocktail party processor. In Proceedings of Neural Information Processing Systems, 2003.
[6] Parham Aarabi. Self-localizing dynamic microphone arrays. IEEE Transactions on Systems, Man, and Cybernetics, 32(4), November 2002.
[7] M. Brandstein and H. Silverman. A practical methodology for speech source localization with microphone arrays. Computer, Speech, and Language, 11(2):91-126, April 1997.
[8] Steven J. Rennie. Robust probabilistic TDOA estimation in reverberant environments. Technical Report PS1-TR-2005-011, University of Toronto, February 2005.
[9] Michael I. Mandel and Daniel P. W. Ellis. A probability model for interaural phase difference. Workshop on Statistical and Perceptual Audio Processing (SAPA), 2006.
[10] Ron Weiss and Daniel P. W. Ellis. Estimating single-channel source separation masks: relevance vector machine classifiers vs pitch-based masking. Workshop on Statistical and Perceptual Audio Processing (SAPA), 2006.
[11] Martin Cooke and Daniel P. W. Ellis. The auditory organization of speech and other sources in listeners and computational models. Speech Communication, 35(3-4):141-177, 2001.
[12] Sam Roweis. One microphone source separation. In Proceedings of Neural Information Processing Systems 13, pages 793-799, 2000.
[13] Matthew Brand. Pattern discovery via entropy minimization.
In Proceedings of Artificial Intelligence and Statistics, 1999.
[14] Matthew J. Beal, Hagai Attias, and Nebojsa Jojic. Audio-video sensor fusion with probabilistic graphical models. In ECCV (1), pages 736-752, 2002.
Learning Trajectory and Force Control of an Artificial Muscle Arm by Parallel-Hierarchical Neural Network Model
Masazumi Katayama and Mitsuo Kawato
Cognitive Processes Department, ATR Auditory and Visual Perception Research Laboratories, Seika-cho, Soraku-gun, Kyoto 619-02, Japan

Abstract

We propose a new parallel-hierarchical neural network model to enable motor learning for simultaneous control of both trajectory and force, by integrating Hogan's control method and our previous neural network control model using a feedback-error-learning scheme. Furthermore, two hierarchical control laws which apply to the model are derived by using the Moore-Penrose pseudo-inverse matrix. One is related to the minimum muscle-tension-change trajectory and the other to the minimum motor-command-change trajectory. The human arm is redundant at the dynamics level, since joint torque is generated by agonist and antagonist muscles; therefore, acquisition of the inverse model is an ill-posed problem. However, the combination of these control laws and feedback-error-learning resolves the ill-posed problem. Finally, the efficiency of the parallel-hierarchical neural network model is shown by learning experiments using an artificial muscle arm and by computer simulations.

1 INTRODUCTION

For humans to properly interact with the environment using their arms, both arm posture and exerted force must be skillfully controlled. The hierarchical neural network model which we previously proposed was successfully applied to trajectory control of an industrial manipulator (Kawato et al., 1987). However, this model could not directly be applied to force control, because the manipulator mechanism was essentially different from the musculo-skeletal system of a human arm. Hogan proposed a biologically motivated control method which specifies both the virtual trajectory and the mechanical impedance of a musculo-skeletal system (Hogan, 1984, 1985). One of its advantages is that both trajectory and force can be simultaneously controlled. However, this control method does not explain motor learning.

In this paper, by integrating these two previous studies, we propose a new Parallel-Hierarchical Neural network Model (PHNM) using a feedback-error-learning scheme we previously proposed (Kawato et al., 1987), as shown in Fig. 1. PHNM explains the biological motor learning for simultaneous control of both trajectory and force. Arm movement depends on the static and dynamic properties of a musculo-skeletal system. From this viewpoint, its inverse model, which computes a motor command from a desired trajectory and force, consists of two parallel inverse models: the Inverse Statics Model (ISM) and the Inverse Dynamics Model (IDM) (see Fig. 1). The human arm is redundant at the dynamics level, since joint torque is generated by agonist and antagonist muscles. Therefore, acquisition of the inverse model is an ill-posed problem in the sense that the muscle tensions cannot be uniquely determined from the prescribed trajectory and force. The central nervous system can resolve the ill-posed problem by applying suitable constraints. Based on behavioral data of human multi-joint arm movement, Uno et al. (1989) found that the trajectory was generated on the criterion that the time integral of the squared sum of the rate of change of muscle tension is minimized.
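For clarity, that criterion can be written out; the following formalization is our paraphrase of the sentence above rather than an equation reproduced from this paper:

$$\min \int_{0}^{t_f} \sum_{i} \left( \frac{dT_i}{dt} \right)^{2} dt,$$

where $T_i$ is the tension of muscle $i$ and $t_f$ is the movement duration.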
From this point of view, we assume that the central nervous system controls the arm by using two hierarchical objective functions. One objective function is related to the minimum muscle-tension-change trajectory; the other is related to the minimum motor-command-change trajectory. From this viewpoint, we propose two hierarchical control laws which apply to the feedback controller shown in Fig. 1. These control laws are calculated with the Moore-Penrose pseudo-inverse of the Jacobian matrix from muscle tensions or motor commands to joint torque. The combination of these control laws and the feedback-error-learning resolves the ill-posed problem. As a result, the inverse model related to the hierarchical objective functions can be acquired by PHNM. We ascertained the efficiency of PHNM by performing learning-control experiments using an artificial-muscle arm with agonist and antagonist muscle-like rubber actuators, as shown in Fig. 2 (Katayama et al., 1990).

2 PARALLEL-HIERARCHICAL NEURAL NETWORK MODEL

In a simple case, the dynamics equation of a human multi-joint arm is described as follows:

$$R(\theta)\ddot{\theta} + B(\theta,\dot{\theta})\dot{\theta} = \tau + G(\theta), \qquad (1a)$$
$$\tau = a_f(\theta)\,T_f(M_f,\theta,\dot{\theta}) - a_e(\theta)\,T_e(M_e,\theta,\dot{\theta}). \qquad (1b)$$

Here, $R(\theta)$ is the inertia matrix, $B(\theta,\dot{\theta})$ expresses a matrix of centrifugal, Coriolis and friction forces, and $G(\theta)$ is the vector of joint torque due to gravity. $M_f$ and $M_e$ are agonist and antagonist motor commands, $T_f$ and $T_e$ are agonist and antagonist muscle tensions, $\theta$ is the joint angle, $\tau$ is the joint torque generated from the tensions of a pair of muscles, and $a_f(\theta)$ and $a_e(\theta)$ are moment arms. If the arm is static ($\dot{\theta} = \ddot{\theta} = 0$), (1a) and (1b) reduce to the following:

$$0 = a_f(\theta)\,T_f(M_f,\theta,0) - a_e(\theta)\,T_e(M_e,\theta,0) + G(\theta). \qquad (2)$$

Therefore, (2) is a statics equation. The problem of calculating the motor commands from joint angles based on (2) is called the inverse statics. There are two difficulties: first, (2), which includes the nonlinear functions $a_f$, $a_e$, $T_f$, $T_e$ and $G$, must be solved; second, the inverse statics is an ill-posed problem, as mentioned above. These difficulties are resolved by the ISM. The problem of computing the dynamic torque other than (2) is called the inverse dynamics, and it is resolved by the IDM.

[Figure 1: Parallel-Hierarchical Neural Network Model. The ISM and IDM receive the desired trajectory and force, and their outputs M_ism and M_idm are summed with the feedback command M_fe to form the motor command driving the arm toward the realized trajectory and force.]

The main role of the ISM is to control the equilibrium posture and mechanical stiffness (Hogan, 1984), and that of the IDM is to compensate for the dynamic properties of the arm in fast movements. PHNM, in addition to a feedback controller, hierarchically arranges these parallel inverse models. The motor command is the sum of three outputs ($M_{ism}$, $M_{idm}$ and $M_{fe}$) calculated by the ISM, the IDM and the feedback controller, respectively, as shown in Fig. 1. The outputs of the ISM and IDM are calculated by feedforward neural networks with synaptic weights $w$ from the desired trajectory $\theta_d$ and desired force $F_d$. These neural network models can be described as a mapping from the inputs $\theta_d$ and $F_d$ to motor commands. In order to acquire the parallel inverse model, the synaptic weights change according to the following feedback-error-learning algorithm:

$$\frac{dw}{dt} = \left(\frac{\partial M}{\partial w}\right)^{T} M_{fe}. \qquad (3)$$

The ISM learns when the arm is static and the IDM learns when it is moving: the feedback motor command $M_{fe}$ is fed as an error signal for synaptic modification only to the ISM when $\dot{\theta} = 0$ and only to the IDM when $\dot{\theta} \neq 0$.
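A minimal sketch of this gated feedback-error-learning step follows. It is our own illustration, not the original implementation: the network is abstracted as a generic differentiable function, the Jacobian is estimated by finite differences, and the learning rate and velocity threshold are assumed values.

```python
import numpy as np

def numeric_jacobian(f, w, x, h=1e-6):
    """Finite-difference Jacobian dM/dw of a network output M = f(w, x)."""
    M0 = np.atleast_1d(f(w, x))
    J = np.zeros((M0.size, w.size))
    for k in range(w.size):
        dw = np.zeros_like(w)
        dw[k] = h
        J[:, k] = (np.atleast_1d(f(w + dw, x)) - M0) / h
    return J

def fel_step(f, w, theta_d, F_d, M_fe, theta_dot, lr=1e-3, is_ism=True, eps=1e-3):
    """One Euler step of Eq. 3, dw/dt = (dM/dw)^T M_fe, with the velocity
    gating described above: the ISM adapts only when the arm is (nearly)
    static, the IDM only when it is moving."""
    moving = abs(float(theta_dot)) > eps
    if moving == is_ism:
        return w                      # wrong regime for this module: no update
    x = np.r_[np.atleast_1d(theta_d), np.atleast_1d(F_d)]
    J = numeric_jacobian(f, w, x)
    return w + lr * (J.T @ np.atleast_1d(M_fe))
```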
The arm is mainly controlled by the feedback controller before learning, whereas after learning control is basically performed only by the feedforward parallel inverse model, because the output $M_{fe}$ of the feedback controller is minimized by learning. Two control laws which apply to the feedback controller are derived below.

3 HIERARCHICAL CONTROL MECHANISM

In order to acquire the parallel inverse models related to the hierarchical objective functions, we propose two control laws that reduce the redundancy at the dynamics level and apply to the feedback controller in the PHNM.

3.1 MATHEMATICAL MUSCLE MODEL

The tensions ($T_f$, $T_e$) of agonist and antagonist muscles are generally modeled as follows:

$$T_f = K(M_f)\,\{\theta_{0,f}(M_f) - \theta\} - B(M_f)\,\dot{\theta}, \qquad (4a)$$
$$T_e = K(M_e)\,\{\theta - \theta_{0,e}(M_e)\} + B(M_e)\,\dot{\theta}. \qquad (4b)$$

Here, $M$ consists of $M_f$ and $M_e$ for the agonist and antagonist muscles, respectively. The mechanical impedance of a human arm can be manipulated through the stiffness $K(M)$ and viscosity $B(M)$ of the muscle itself, which depend on the motor commands. $\theta_{0,f}(M_f)$ and $\theta_{0,e}(M_e)$ are the joint angles at the equilibrium position. $K(M)$, $B(M)$, $\theta_{0,f}(M_f)$ and $\theta_{0,e}(M_e)$ are approximately given as $K(M) \approx k_0 + kM$, $B(M) \approx b_0 + bM$, $\theta_{0,f}(M_f) \approx \theta_0 + cM_f$ and $\theta_{0,e}(M_e) \approx -\theta_0 - cM_e$, respectively. $k$ and $b$ are coefficients which determine elasticity and viscosity, respectively; $k_0$ and $b_0$ are the intrinsic elasticity and viscosity; $\theta_0$ is the intrinsic equilibrium angle; and $c$ is a constant. Small changes in joint torque are expressed using the Jacobian matrix $A$ from small changes in motor command to small changes in joint torque. Therefore, by using the Moore-Penrose pseudo-inverse matrix $A^{\#}$, small changes in motor command are calculated as follows:

$$\begin{pmatrix}\Delta M_f\\ \Delta M_e\end{pmatrix} = A^{\#}\Delta\tau = \frac{\Delta\tau}{a_f(\theta)^2 (C+T_f)^2 + a_e(\theta)^2 (C-T_e)^2}\begin{pmatrix}a_f(\theta)\,(C+T_f)\\ -a_e(\theta)\,(C-T_e)\end{pmatrix}, \qquad (5)$$

where $C = -(k\theta + b\dot{\theta})$ and $A^{\#} = A^{T}(AA^{T})^{-1}$.

3.2 HIERARCHICAL CONTROL LAWS

Two feedback control laws that apply to the feedback controller shown in Fig. 1 are explained below. First, $\Delta T_f = \Delta M_f$ and $\Delta T_e = \Delta M_e$ are obtained from (4a) and (4b) by assuming $k = b = 0$, $c \neq 0$, $a_f(\theta) = a_e(\theta) = a$ and $g_f = g_e = 1$ in the simplest case. The solution $A^{\#}\Delta\tau$, in which the norm $(\Delta T_f^2 + \Delta T_e^2)^{1/2}$ of the vector $\Delta T$ is minimized, is selected by using the pseudo-inverse matrix $A^{\#}$. Therefore, the control law related to the minimum muscle-tension-change trajectory is derived from (5). The feedback control law is then obtained by using $\Delta\tau = K_p(\theta_d - \theta_r) + K_d(\dot{\theta}_d - \dot{\theta}_r) + K_f(F_d - F_r)$, where $K_p$, $K_d$ and $K_f$ are feedback gains. Learning is performed by applying the motor commands calculated by this feedback control law to the learning algorithm of (3). As a result, the inverse model is acquired by the PHNM after learning. Only when $a_f(\theta) = a_e(\theta) = a$ does the inverse model strictly give the optimal solution based on the minimum muscle-tension-change trajectory during movement, where $a$ is a constant moment arm.

Next, another control law is derived from (5) in a similar way, by assuming $k, b \neq 0$, $c = 0$, $a_f(\theta) = a_e(\theta) = a$ and $g_f = g_e = 1$. In this case, the control law is related to the minimum motor-command-change trajectory, because the norm $(\Delta M_f^2 + \Delta M_e^2)^{1/2}$ of the vector $\Delta M$ is minimized by using the pseudo-inverse matrix $A^{\#}$. This control law explains the behavioral data of rapid arm movement, during which the mechanical impedance is increased by coactivation of agonist and antagonist muscles (Kurauchi et al., 1980). The mechanical impedance of the muscles increases when $C$ increases.
Therefore, $C$ explains the coactivation, because $C$ increases when the arm moves rapidly. Thus, rapid arm movement can be executed stably through such coactivation. Note that this control law directly takes account of the variable stiffness and viscosity of the muscle itself. Learning is performed by the same algorithm as above. As a result, the inverse model acquired by the PHNM gives an approximate solution related to the minimum motor-command-change trajectory, because $A^{\#}$ depends on the joint angle in this case. Furthermore, the stiffness and virtual trajectory are uniquely determined from the mathematical muscle model using the outputs of the trained inverse models.

4 EFFICIENCY OF PHNM

The efficiency of the PHNM is shown by the experimental results obtained with the two hierarchical control laws.

4.1 ARTIFICIAL MUSCLE ARM

The artificial muscle arm used in our experiments is the rubber-actuator arm (5 degrees of freedom, 16 rubber actuators, made by Bridgestone Co.) shown in Fig. 2, a manipulator with agonist and antagonist muscle-like actuators. The actuators are made of rubber and driven by air; in our experiments, the motor command is air pressure.

[Figure 2: Artificial Muscle Arm.]

The mechanical structure of the artificial arm is basically the same as that of the human arm. Moreover, the properties of the actuator are similar to those of muscle. The actuator has a variable mechanical impedance, which consists of stiffness and viscosity; the stiffness, which is realized mechanically, expresses the spring-like behavior of muscle. This property acts as a simple mechanical feedback system whose time delay is "zero". Furthermore, the ratio of output torque to the weight of the arm is extremely high. Therefore, we hope it will be easy to control the force and trajectory at the end-effector or joints. However, it is difficult to control the trajectory of the arm because the artificial arm, like the human arm, is a very nonlinear system. We note that feedforward control using the trained ISM and IDM is necessary to control the arm.

4.2 TRAJECTORY CONTROL OF ARTIFICIAL MUSCLE ARM

Learning control experiments using the artificial muscle arm were performed with the feedback control law related to the minimum muscle-tension-change trajectory. The ISM and IDM each use a 3-layer perceptron. The results shown in Fig. 3 indicate that the conventional feedback control method cannot realize accurate trajectory control, because the realized trajectory lagged behind the desired trajectory.

[Figure 3: Feedback control using a conventional feedback controller; the realized trajectory lags the desired trajectory.]

The results shown in Fig. 4a indicate that accurate and smooth trajectory control of a slow movement can be realized by feedforward control alone, using the trained ISM and IDM after learning, because the realized trajectory fits the desired trajectory. Moreover, this result indicates that the PHNM can resolve the ill-posed inverse problem. The results shown in Fig. 4b indicate that learning of the ISM and IDM is finished after about 2,000 iterations, because the output of
the feedback controller is minimized. Note also that the output of the ISM is greater than the other outputs. Furthermore, using an untrained trajectory, we confirmed that the generalization capability of the trained parallel inverse models is good.

[Figure 4: Trajectory control using the control law related to the minimum muscle-tension-change criterion (slow movement, artificial muscle arm). (a) Feedforward control using the trained ISM and IDM after learning; (b) output to the agonist actuator after learning.]

4.3 TRAJECTORY CONTROL IN FAST MOVEMENT

One of the advantages of the control law related to the minimum motor-command-change criterion is shown by a trajectory control experiment in fast movement. We confirmed that this feedback control law allows stable trajectory control in fast movement. The control experiments were performed by computer simulation. The results shown in Fig. 5a indicate that the PHNM, applying this feedback control law, realizes stable trajectory control in rapid movement, because no oscillatory behavior is observed when the arm reaches the desired position. This is because the mechanical impedance of the joint increases when the pair of muscles is coactivated (see Fig. 5b). Moreover, the results also explain behavioral data on fast arm movement (Kurauchi et al., 1980).

[Figure 5: Feedback trajectory control using the control law related to the minimum motor-command-change criterion (fast movement, computer simulation). (a) Trajectory; (b) motor commands for the agonist muscle M_f and antagonist muscle M_e.]

4.4 FORCE CONTROL

We confirmed that the feedback control law related to the minimum motor-command-change criterion succeeded in accurate force control. The results shown in Fig. 6 indicate that accurate force control can be performed by combining the trained IDM and ISM with the PHNM using this feedback control law.

[Figure 6: Force control using the trained ISM and IDM with the control law related to the minimum motor-command-change criterion; the realized force tracks the desired force.]

5 DISCUSSION

The ISM we proposed in this paper has two advantages. The first is that it is easy to train the inverse model of the controlled object, because the inverse model is separated into the ISM and IDM. The second is that control using the ISM explains Bizzi's experimental results with a deafferented rhesus monkey (Bizzi et al., 1984). Furthermore, control using the ISM relates to Hogan's control method using the virtual trajectory (Hogan, 1984, 1985). The Parallel-Hierarchical Neural Network Model proposed in this paper integrates Hogan's impedance control and our previous model, and hence can explain motor learning for simultaneous control of both trajectory and force. There is an infinite number of possible combinations of mechanical impedance and virtual trajectory that can produce the same torque and force. Thus, the problem of determining the impedance and the virtual trajectory was ill-posed in Hogan's framework. In the present paper, they were uniquely determined from (5).

References
[1] Bizzi, E., Accornero, N., Chapple, W. & Hogan, N. (1984) Posture Control and Trajectory Formation During Arm Movement. The Journal of Neuroscience, 4, 11, 2738-2744.
[2] Hogan, N. (1984) An Organizing Principle for a Class of Voluntary Movements. The Journal of Neuroscience, 4, 11, 2745-2754.
[3] Hogan, N. (1985) Impedance Control: An Approach to Manipulation, Parts I-III. Journal of Dynamic Systems, Measurement, and Control, 107, 1-24.
[4] Katayama, M. & Kawato, M. (1990) Parallel-Hierarchical Neural Network Model for Motor Control of Musculo-Skeletal System. The Transactions of The Institute of Electronics, Information and Communication Engineers, J73-D-II, 8, 1328-1335. In Japanese.
[5] Kawato, M., Furukawa, K. & Suzuki, R. (1987) A Hierarchical Neural-Network Model for Control and Learning of Voluntary Movement. Biological Cybernetics, 57, 169-185.
[6] Kurauchi, S., Mishima, K. & Kurokawa, T. (1980) Characteristics of Rapid Positional Movements of Forearm. The Japanese Journal of Ergonomics, 16, 5, 263-270. In Japanese.
[7] Uno, Y., Suzuki, R. & Kawato, M. (1989) Minimum Muscle-Tension-Change Model which Reproduces Human Arm Movement. Proceedings of the 4th Symposium on Biological and Physiological Engineering, 299-302. In Japanese.
Context Effects in Category Learning: An Investigation of Four Probabilistic Models
Michael C. Mozer+, Michael Jones‡, Michael Shettel+
+Dept. of Computer Science, ‡Dept. of Psychology, and ∗Institute of Cognitive Science
University of Colorado, Boulder, CO 80309-0430
{mozer,mike.jones,shettel}@colorado.edu

Abstract

Categorization is a central activity of human cognition. When an individual is asked to categorize a sequence of items, context effects arise: categorization of one item influences category decisions for subsequent items. Specifically, when experimental subjects are shown an exemplar of some target category, the category prototype appears to be pulled toward the exemplar, and the prototypes of all nontarget categories appear to be pushed away. These push and pull effects diminish with experience, and likely reflect long-term learning of category boundaries. We propose and evaluate four principled probabilistic (Bayesian) accounts of context effects in categorization. In all four accounts, the probability of an exemplar given a category is encoded as a Gaussian density in feature space, and categorization involves computing category posteriors given an exemplar. The models differ in how the uncertainty distribution of category prototypes is represented (localist or distributed), and how it is updated following each experience (using a maximum likelihood gradient ascent, or a Kalman filter update). We find that the distributed maximum-likelihood model can explain the key experimental phenomena. Further, the model predicts other phenomena that were confirmed via reanalysis of the experimental data.

Categorization is a key cognitive activity. We continually make decisions about characteristics of objects and individuals: Is the fruit ripe? Does your friend seem unhappy? Is your car tire flat? When an individual is asked to categorize a sequence of items, context effects arise: categorization of one item influences category decisions for subsequent items. Intuitive naturalistic scenarios in which context effects occur are easy to imagine. For example, if one lifts a medium-weight object after lifting a light-weight or heavy-weight object, the medium weight feels heavier following the light weight than following the heavy weight. Although the object-contrast effect might be due to fatigue of sensory-motor systems, many context effects in categorization are purely cognitive and cannot easily be attributed to neural habituation. For example, if you are reviewing a set of conference papers, and the first three in the set are dreadful, then even a mediocre paper seems like it might be above threshold for acceptance. Another example of a category boundary shift due to context is the following. Suppose you move from San Diego to Pittsburgh and notice that your neighbors repeatedly describe muggy, somewhat overcast days as "lovely." Eventually, your notion of what constitutes a lovely day accommodates to your new surroundings. As we describe shortly, experimental studies have shown a fundamental link between context effects in categorization and long-term learning of category boundaries. We believe that context effects can be viewed as a reflection of trial-to-trial learning, and that the cumulative effect of these trial-to-trial modulations corresponds to what we classically consider to be category learning. Consequently, any compelling model of category learning should also be capable of explaining context effects.
1 Experimental Studies of Context Effects in Categorization

Consider a set of stimuli that vary along a single continuous dimension. Throughout this paper, we use as an illustration circles of varying diameters, and assume four categories of circles defined by ranges of diameters; call them A, B, C, and D, in order from smallest to largest diameter. In a classification paradigm, experimental subjects are given an exemplar drawn from one category and are asked to respond with the correct category label (Zotov, Jones, & Mewhort, 2003). After making their response, subjects receive feedback as to the correct label, which we'll refer to as the target. In a production paradigm, subjects are given a target category label and asked to produce an exemplar of that category, e.g., using a computer mouse to indicate the circle diameter (Jones & Mewhort, 2003). Once a response is made, subjects receive feedback as to the correct or true category label for the exemplar they produced. Neither the classification nor the production task has sequential structure, because the order of trials is random in both experiments. The production task provides direct information about the subjects' internal representations, because subjects are producing exemplars that they consider to be prototypes of a category, whereas the categorization task requires indirect inferences to be made about internal representations from reaction time and accuracy data. Nonetheless, the findings in the production and classification tasks mirror one another nicely, providing converging evidence as to the nature of learning. The production task reveals how mental representations shift as a function of trial-to-trial sequences, and these shifts cause the sequential pattern of errors and response times typically observed in the classification task. We focus on the production task in this paper because it provides a richer source of data. However, we address the categorization task with our models as well.

Figure 1 provides a schematic depiction of the key sequential effects in categorization. The horizontal line represents the stimulus dimension, e.g., circle diameter. The dimension is cut into four regions labeled with the corresponding category. The category center, which we'll refer to as the prototype, is indicated by a vertical dashed line. The long solid vertical line marks the current exemplar, whether it is an exemplar presented to subjects in the classification task or an exemplar generated by subjects in the production task. Following an experimental trial with this exemplar, category prototypes appear to shift: the target-category prototype moves toward the exemplar, which we refer to as a pull effect, and all nontarget-category prototypes move away from the exemplar, which we refer to as a push effect. Push and pull effects are assessed in the production task by examining the exemplar produced on the following trial, and in the categorization task by examining the likelihood of an error response near category boundaries. The set of phenomena to be explained are as follows, described in terms of the production task. All numerical results referred to are from Jones and Mewhort (2003). This experiment consisted of 12 blocks of 40 trials, with each category label given as target 10 times within a block.

• Within-category pull: When a target category is repeated on successive trials, the exemplar generated on the second trial moves toward the exemplar generated on the first trial, with respect to the true category prototype.
Across the experiment, a correlation coefficient of 0.524 is obtained, and remains fairly constant over trials.

• Between-category push: When the target category changes from one trial to the next, the exemplar generated on the second trial moves away from the exemplar generated on the first trial (or equivalently, from the prototype of the target category on the first trial). Figure 2a summarizes the sequential push effects from Jones and Mewhort. The diameter of the circle produced on trial t is plotted as a function of the target category on trial t - 1, with one line for each of the four trial t targets. The mean diameter for each target category is subtracted out, so the absolute vertical offset of each line is unimportant. The main feature of the data to note is that all four curves have a negative slope, which has the following meaning: the smaller that target t - 1 is (i.e., the further to the left on the x axis in Figure 1), the larger the response to target t is (further to the right in Figure 1), and vice versa, reflecting a push away from target t - 1. Interestingly and importantly, the magnitude of the push increases with the ordinal distance between targets t - 1 and t. Figure 2a is based on data from only eight subjects and is therefore noisy, though the effect is statistically reliable. As further evidence, Figure 2b shows data from a categorization task (Zotov et al., 2003), where the y-axis is a different dependent measure, but the negative slope has the same interpretation as in Figure 2a.

[Figure 1: Schematic depiction of sequential effects in categorization. The stimulus dimension is divided into four category regions, A-D; following a trial, the target-category prototype is pulled toward the exemplar and the nontarget prototypes are pushed away.]

[Figure 2: Push effect data from (a) the production task of Jones and Mewhort (2003), (b) the classification task of Zotov et al. (2003), and (c)-(f) the models proposed in this paper: (c) KFU-local, (d) KFU-distrib, (e) MLGA-local, (f) MLGA-distrib. The y axis is the deviation of the response from the mean, as a proportion of the total category width, plotted against the previous category label. The response to category A is solid red, B is dashed magenta, C is dash-dotted blue, and D is dotted green.]

• Push and pull effects are not solely a consequence of errors or experimenter feedback. In quantitative estimation of push and pull effects, trial t is included in the data only if the response on trial t - 1 is correct. Thus, the effects follow trials in which no error feedback is given to the subjects, and therefore the adjustments are not due to explicit error correction.

• Push and pull effects diminish over the course of the experiment. The magnitude of push effects can be measured by the slope of the regression lines fit to the data in Figure 2a. The slopes get shallower over successive trial blocks. The magnitude of pull effects can be measured by the standard deviation (SD) of the produced exemplars, which also decreases over successive trial blocks.
• Accuracy increases steadily over the course of the experiment, from 78% correct responses in the first block to 91% in the final block. This improvement occurs despite the fact that error feedback is relatively infrequent and becomes even less frequent as performance improves.

2 Four Models

In this paper, we explore four probabilistic (Bayesian) models to explain the data described in the previous section. The key phenomenon to explain turns out to be the push effect, for which three of the four models fail to account. Modelers typically discard the models that they reject and present only their pet model. In this work, we find it useful to report on the rejected models for three reasons. First, they help to set up and motivate the one successful model. Second, they include several obvious candidates, and we therefore have the imperative to address them. Third, in order to evaluate a model that can explain certain data, one needs to know the degree to which the data constrain the space of models. If many models exist that are consistent with the data, one has little reason to prefer our pet candidate.

Underlying all of the models is a generative probabilistic framework in which a category i is represented by a prototype value, $d_i$, on the dimension that discriminates among the categories. In the example used throughout this paper, the dimension is the diameter of a circle (hence the notation d for the prototype). An exemplar, E, of category i is drawn from a Gaussian distribution with mean $d_i$ and variance $v_i$, denoted $E \sim N(d_i, v_i)$. Category learning involves determining $d \equiv \{d_i\}$. In this work, we assume that the $\{v_i\}$ are fixed and given. Because d is unknown at the start of the experiment, it is treated as the value of a random vector, $D \equiv \{D_i\}$. Figure 3a shows a simple graphical model representing the generative framework, in which E is the exemplar and C the category label. To formalize our discussion so far, we adopt the following notation:

$$P(E \mid C = c, D = d) \sim N(h_c d,\; v_c), \qquad (1)$$

where, for the time being, $h_c$ is a unary column vector all of whose elements are zero except for element c, which has value 1. (Subscripts may indicate either an index over elements of a vector or an index over vectors. Boldface is used for vectors and matrices.)

[Figure 3: (a) Graphical model depicting selection of an exemplar, E, of a category, C, based on the prototype vector, D; (b) dynamic version of the model indexed by trials, t.]

We assume that the prototype representation, D, is multivariate Gaussian, $D \sim N(\mu, \Sigma)$, where $\mu$ and $\Sigma$ encode knowledge, and uncertainty in that knowledge, of the category prototype structure. Given this formulation, the uncertainty in D can be integrated out:

$$P(E \mid C) \sim N(h_c \mu,\; h_c \Sigma h_c^{T} + v_c). \qquad (2)$$

For the categorization task, a category label can be assigned by evaluating the category posterior, P(C|E), via Bayes rule, Equation 1, and the category priors, P(C). In this framework, learning takes place via trial-to-trial adaptation of the category prototype distribution, D. In Figure 3b, we add the subscript t to each random variable to denote the trial, yielding a dynamic graphical model for the sequential updating of the prototype vector, $D_t$. (The reader should be attentive to the fact that we use subscripted indices to denote both trials and category labels. We generally use the index t to denote trial, and c or i to denote a category label.)
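For concreteness, the posterior computation just described can be sketched in a few lines. This is our illustration, not the authors' code; the uniform prior and the numpy usage are assumptions.

```python
import numpy as np

def category_posterior(e, mu, Sigma, v, H, priors=None):
    """P(C | E = e) under Eq. 2: E | C=c ~ N(h_c mu, h_c Sigma h_c^T + v_c).
    H holds the h_c vectors as rows, so the same routine serves any
    prototype representation; v holds the per-category variances v_c."""
    means = H @ mu                                     # per-category means
    vars_ = np.einsum('ci,ij,cj->c', H, Sigma, H) + v  # h_c Sigma h_c^T + v_c
    like = np.exp(-0.5 * (e - means) ** 2 / vars_) / np.sqrt(2 * np.pi * vars_)
    if priors is None:
        priors = np.full(len(v), 1.0 / len(v))         # assumed uniform priors
    post = like * priors
    return post / post.sum()
```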
The goal of our modeling work is to show that the sequential updating process leads to context effects, such as the push and pull effects discussed earlier. We propose four alternative models to explore within this framework. The four models are obtained via the Cartesian product of two binary choices: the learning rule and the prototype representation.

2.1 Learning rule

The first learning rule, maximum likelihood gradient ascent (MLGA), attempts to adjust the prototype representation so as to maximize the log posterior of the category given the exemplar. (The category, C = c, is the true label associated with the exemplar, i.e., either the target label the subject was asked to produce, or, if an error was made, the actual category label the subject did produce.) Gradient ascent is performed in all parameters of $\mu$ and $\Sigma$:

$$\Delta\mu_i = \alpha\,\frac{\partial}{\partial \mu_i}\log P(c \mid e) \quad\text{and}\quad \Delta\Sigma_{ij} = \varepsilon\,\frac{\partial}{\partial \Sigma_{ij}}\log P(c \mid e), \qquad (3)$$

where $\alpha$ and $\varepsilon$ are step sizes. To ensure that $\Sigma$ remains a covariance matrix, constrained gradient steps are applied. The constraints are: (1) diagonal terms are nonnegative, i.e., $\sigma_i^2 \geq 0$; (2) off-diagonal terms are symmetric, i.e., $\Sigma_{ij} = \Sigma_{ji}$; and (3) the matrix remains positive definite, ensured by $-1 \leq \Sigma_{ij}/(\sigma_i \sigma_j) \leq 1$.

The second learning rule, a Kalman filter update (KFU), re-estimates the uncertainty distribution of the prototypes given the evidence provided by the current exemplar and category label. To draw the correspondence between our framework and a Kalman filter: the exemplar is a scalar measurement that pops out of the filter, the category prototypes are the hidden state of the filter, the measurement noise is $v_c$, and the linear mapping from state to measurement is achieved by $h_c$. Technically, the model is a measurement-switched Kalman filter, where the switching is determined by the category label c, i.e., the measurement function, $h_c$, and noise, $v_c$, are conditioned on c. The Kalman filter also allows temporal dynamics via the update equation $d_t = A d_{t-1}$, as well as internal process noise, whose covariance matrix is often denoted Q in standard Kalman filter notation. We investigated the choice of A and Q, but because they did not impact the qualitative outcome of the simulations, we used A = I and Q = 0. Given the correspondence we've established, the KFU equations, which specify $\mu_{t+1}$ and $\Sigma_{t+1}$ as a function of $c_t$, $e_t$, $\mu_t$, and $\Sigma_t$, can be found in an introductory text (e.g., Maybeck, 1979).

[Figure 4: Change to each category prototype following a trial of a given category. Each panel corresponds to a trial t - 1 target (A-D); within a panel, bars give the movement of each prototype A-D on trial t. Solid (open) bars indicate trials in which the exemplar is larger (smaller) than the prototype.]

2.2 Representation of the prototype

The prototype representation that we described is localist: there is a one-to-one correspondence between the prototype for each category i and the random variable $D_i$. To select the appropriate prototype given a current category c, we defined the unary vector $h_c$ and applied $h_c$ as a linear transform on D. The identical operations can be performed in conjunction with a distributed representation of the prototype. But we step back momentarily to motivate the distributed representation. The localist representation suffers from a key weakness: it does not exploit interrelatedness constraints on category structure.
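Before turning to the distributed representation, here is a sketch of one MLGA step (Equation 3) in the representation-agnostic form above. It is our own illustration: gradients are estimated by finite differences rather than analytically, a single $\varepsilon$ is used for all covariance terms for brevity (the simulations use a smaller step for the diagonal), and the constraint handling is reduced to a simple projection.

```python
import numpy as np

def mlga_step(e, c, mu, Sigma, v, H, alpha=0.0075, eps=1.5e-6, h=1e-6):
    """One constrained gradient-ascent step on log P(c | e)."""
    def logpost(mu_, Sigma_):
        m = H @ mu_
        s = np.einsum('ci,ij,cj->c', H, Sigma_, H) + v
        ll = -0.5 * np.log(2 * np.pi * s) - 0.5 * (e - m) ** 2 / s
        return ll[c] - np.log(np.sum(np.exp(ll)))   # uniform-prior posterior

    base = logpost(mu, Sigma)
    g_mu = np.array([(logpost(mu + h * np.eye(len(mu))[i], Sigma) - base) / h
                     for i in range(len(mu))])
    g_S = np.zeros_like(Sigma)
    for i in range(Sigma.shape[0]):
        for j in range(Sigma.shape[1]):
            dS = np.zeros_like(Sigma)
            dS[i, j] = h
            g_S[i, j] = (logpost(mu, Sigma + dS) - base) / h
    mu_new = mu + alpha * g_mu
    S_new = Sigma + eps * g_S
    S_new = 0.5 * (S_new + S_new.T)                  # enforce symmetry
    sd = np.sqrt(np.clip(np.diag(S_new), 1e-12, None))
    corr = np.clip(S_new / np.outer(sd, sd), -1, 1)  # |correlations| <= 1
    return mu_new, corr * np.outer(sd, sd)
```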
The task given to experimental subjects specifies that there are four categories, and they have an ordering; the circle diameters associated with category A are smaller than the diameters associated with B, etc. Consequently, $d_A < d_B < d_C < d_D$. One might make a further assumption that the category prototypes are equally spaced. Exploiting these two sources of domain knowledge leads to the distributed representation of category structure. A simple sort of distributed representation involves defining the prototype for category i not as $d_i$ but as a linear function of an underlying two-dimensional state-space representation of structure. In this state space, $d_1$ indicates the distance between categories and $d_2$ an offset for all categories. This representation of state can be achieved by applying Equation 1 and defining $h_c = (n_c, 1)$, where $n_c$ is the ordinal position of the category ($n_A = 1$, $n_B = 2$, etc.). We augment this representation with a bit of redundancy by incorporating not only the ordinal positions but also the reverse ordinal positions; this addition yields a symmetry in the representation between the two ends of the ordinal category scale. As a result of this augmentation, d becomes a three-dimensional state space, and $h_c = (n_c, N + 1 - n_c, 1)$, where N is the number of categories. To summarize, both the localist and distributed representations posit the existence of a hidden-state space (unknown at the start of learning) that specifies category prototypes. The localist model assumes one dimension in the state space per prototype, whereas the distributed model assumes fewer dimensions in the state space (three, in our proposal) than there are prototypes, and computes the prototype location as a function of the state. Both localist and distributed representations assume a fixed, known $\{h_c\}$ that specifies the interpretation of the state space, or, in the case of the distributed model, the subject's domain knowledge about category structure.

3 Simulation Methodology

We defined a one-dimensional feature space in which categories A-D corresponded to the ranges [1, 2), [2, 3), [3, 4), and [4, 5), respectively. In the human experiment, responses were considered incorrect if they were smaller than A or larger than D; we call these two cases out-of-bounds-low (OOBL) and out-of-bounds-high (OOBH). OOBL and OOBH were treated as two additional categories, resulting in 6 categories altogether for the simulation. Subjects and the model were never asked to produce exemplars of OOBL or OOBH, but feedback was given if a response fell into these categories. As in the human experiment, our simulation involved 480 trials. We performed 100 replications of each simulation with identical initial conditions but different trial sequences, and averaged results over replications. All prototypes were initialized to have the same mean, 3.0, at the start of the simulation. Because subjects had some initial practice on the task before the start of the experimental trials, we provided the models with 12 initial trials of a categorization (not production) task, two for each of the 6 categories. (For the MLGA models, it was necessary to use a large step size on these trials to move the prototypes to roughly the correct neighborhood.) To perform the production task, the models must generate an exemplar given a category. It seems natural to draw an exemplar from the distribution in Equation 2 for P(E|C).
3 Simulation Methodology

We defined a one-dimensional feature space in which categories A–D corresponded to the ranges [1, 2), [2, 3), [3, 4), and [4, 5), respectively. In the human experiment, responses were considered incorrect if they were smaller than A or larger than D; we call these two cases out-of-bounds-low (OOBL) and out-of-bounds-high (OOBH). OOBL and OOBH were treated as two additional categories, resulting in 6 categories altogether for the simulation. Subjects and the model were never asked to produce exemplars of OOBL or OOBH, but feedback was given if a response fell into these categories.

As in the human experiment, our simulation involved 480 trials. We performed 100 replications of each simulation with identical initial conditions but different trial sequences, and averaged results over replications. All prototypes were initialized to have the same mean, 3.0, at the start of the simulation. Because subjects had some initial practice on the task before the start of the experimental trials, we provided the models with 12 initial trials of a categorization (not production) task, two for each of the 6 categories. (For the MLGA models, it was necessary to use a large step size on these trials to move the prototypes to roughly the correct neighborhood.)

To perform the production task, the models must generate an exemplar given a category. It seems natural to draw an exemplar from the distribution in Equation 2 for P(E|C). However, this distribution reflects the full range of exemplars that lie within the category boundaries, and presumably in the production task, subjects attempt to produce a prototypical exemplar. Consequently, we exclude the intrinsic category variance, vc, from Equation 2 in generating exemplars, leaving variance only via uncertainty about the prototype.

Each model involved selection of various parameters and initial conditions. We searched the parameter space by hand, attempting to find parameters that satisfied basic properties of the data: the accuracy and response variance in the first and second halves of the experiment. We report only parameters for the one model that was successful, the MLGA-Distrib: λμ = 0.0075, λΣ = 1.5 × 10⁻⁶ for off-diagonal terms and 1.5 × 10⁻⁷ for diagonal terms (the gradient for the diagonal terms was relatively steep), Σ0 = 0.01I, and for all categories c, vc = 0.4².

4 Results

4.1 Push effect

The phenomenon that most clearly distinguishes the models is the push effect. The push effect is manifested in sequential-dependency functions, which plot the (relative) response on trial t as a function of trial t − 1. As we explained using Figures 2a,b, the signature of the push effect is a negatively sloped line for each of the different trial t target categories. The sequential-dependency functions for the four models are presented in Figures 2c–f; the sketch below makes the diagnostic concrete.
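A minimal sketch of the diagnostic, assuming the relative response is the deviation from the trial-t prototype; the synthetic, push-free data are only to exercise the function:

```python
import numpy as np

def sequential_dependency(categories, responses, prototypes):
    """Mean relative response on trial t (deviation from the trial-t
    prototype) as a function of the trial t-1 category. Reading down a
    column (one trial-t target), a negative slope across the trial t-1
    categories is the signature of the push effect."""
    n = len(prototypes)
    dev = responses - prototypes[categories]
    table = np.full((n, n), np.nan)        # rows: t-1 category, cols: t
    for prev in range(n):
        for cur in range(n):
            m = (categories[:-1] == prev) & (categories[1:] == cur)
            if m.any():
                table[prev, cur] = dev[1:][m].mean()
    return table

rng = np.random.default_rng(0)
cats = rng.integers(0, 4, size=480)        # 480 trials, as in the experiment
protos = np.array([1.5, 2.5, 3.5, 4.5])
resp = protos[cats] + 0.4 * rng.standard_normal(480)   # no push: flat rows
print(np.round(sequential_dependency(cats, resp, protos), 2))
```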
KFU-Local (Figure 2c) produces a flat line, indicating no push whatsoever. The explanation for this result is straightforward: the Kalman filter update alters only the variable that is responsible for the measurement (exemplar) obtained on that trial. That variable is the prototype of the target class c, Dc. We thought the lack of an interaction among the category prototypes might be overcome with KFU-Distrib, because with a distributed prototype representation, all of the state variables jointly determine the target category prototype. However, our intuition turned out to be incorrect. We experimented with many different representations and parameter settings, but KFU-Distrib consistently obtained flat or shallow positively sloped lines (Figure 2d).

MLGA-Local (Figure 2e) obtains a push effect for neighboring classes, but not distant classes. For example, examining the dashed magenta line, note that B is pushed away by A and C, but is not affected by D. MLGA-Local maximizes the likelihood of the target category both by pulling the class-conditional density of the target category toward the exemplar and by pushing the class-conditional densities of the other categories away from the exemplar. However, if a category has little probability mass at the location of the exemplar, the increase in likelihood that results from pushing it further away is negligible, and consequently, so is the push effect.

MLGA-Distrib obtains a lovely result (Figure 2f): a negatively sloped line, diagnostic of the push effect. The effect magnitude matches that in the human data (Figure 2a), and captures the key property that the push effect increases with the ordinal distance of the categories. We did not build a mechanism into MLGA-Distrib to produce the push effect; it is somewhat of an emergent property of the model. The state representation of MLGA-Distrib has three components: d1, the weight of the ordinal position of a category prototype, d2, the weight of the reverse ordinal position, and d3, an offset. The last term, d3, cannot be responsible for a push effect, because it shifts all prototypes equally, and therefore can only produce a flat sequential dependency function.

Figure 4 helps provide an intuition for how d1 and d2 work together to produce the push effect. Each graph shows the average movement of the category prototype (units on the y-axis are arbitrary) observed on trial t, for each of the four categories, following presentation of a given category on trial t − 1. Positive values on the y-axis indicate increases in the prototype (movement to the right in Figure 1), and negative values decreases. Each solid vertical bar represents the movement of a given category prototype following a trial in which the exemplar is larger than its current prototype; each open vertical bar represents movement when the exemplar is to the left of its prototype. Notice that all category prototypes get larger or smaller on a given trial. But over the course of the experiment, the exemplar should be larger than the prototype as often as it is smaller, and the two shifts should sum together and partially cancel out. The result is the value indicated by the small horizontal bar along each line. The balance between the shifts in the two directions exactly corresponds to the push effect. Thus, the model produces a push-effect graph, but it is not truly producing a push effect as was originally conceived by the experimentalists. We are currently considering empirical consequences of this simulation result.

Figure 5 shows a trial-by-trial trace from MLGA-Distrib.

Figure 5: Trial-by-trial trace of MLGA-Distrib. (a) Exemplars generated on one run of the simulation; (b) the mean and (c) variance of the class prototype distribution for the 6 classes on one run; (d) mean proportion correct over 100 replications of the simulation; (e) push and pull effects, as measured by changes to the prototype means: the upper (green) curve is the pull of the target prototype mean toward the exemplar, and the lower (red) curve is the push of the nontarget prototype means away from the exemplar, over 100 replications; (f) category posterior of the generated exemplar over 100 replications, reflecting gradient ascent in the posterior.

4.2 Other phenomena accounted for

MLGA-Distrib captures the other phenomena we listed at the outset of this paper. Like all of the other models, MLGA-Distrib readily produces a pull effect, which is shown in the movement of category prototypes in Figure 5e. More observably, a pull effect is manifested when two successive trials of the same category are positively correlated: when trial t − 1 is to the left of the true category prototype, trial t is likely to be to the left as well. In the human data, the correlation coefficient over the experiment is 0.524; in the model, the coefficient is 0.496; the sketch after this paragraph shows how this statistic is computed. The explanation for the pull effect is apparent: moving the category prototype to the exemplar increases the category likelihood. Although many learning effects in humans are based on error feedback, the experimental studies showed that push and pull effects occur even in the absence of errors, as they do in MLGA-Distrib. The model simply assumes that the target category it used to generate an exemplar is the correct category when no feedback to the contrary is provided. As long as the likelihood gradient is nonzero, category prototypes will be shifted.
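A sketch of how the pull-effect coefficient just quoted can be computed, given the true prototypes; the synthetic trial sequence with a slowly drifting response bias is ours and serves only to produce a positive coefficient of the kind the pull effect predicts:

```python
import numpy as np

def pull_correlation(categories, responses, prototypes):
    """Correlation between response deviations on successive trials of the
    same category; a positive coefficient is the observable pull effect."""
    dev = responses - prototypes[categories]       # deviation from prototype
    same = categories[1:] == categories[:-1]       # same-category trial pairs
    return np.corrcoef(dev[:-1][same], dev[1:][same])[0, 1]

rng = np.random.default_rng(1)
cats = rng.integers(0, 4, size=480)
protos = np.array([1.5, 2.5, 3.5, 4.5])
bias = np.cumsum(0.03 * rng.standard_normal(480))  # slow drift shared by all
resp = protos[cats] + bias + 0.2 * rng.standard_normal(480)
print(round(pull_correlation(cats, resp, protos), 3))   # positive
```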
Pull and push effects shrink over the course of the experiment in human studies, as they do in the simulation. Figure 5e shows a reduction in both pull and push, as measured by the shift of the prototype means toward or away from the exemplar. We measured the slope of MLGA-Distrib's push function (Figure 2f) for trials in the first and second half of the simulation. The slope dropped from −0.042 to −0.025, as one would expect from Figure 5e. (These slopes are obtained by combining responses from 100 replications of the simulation. Consequently, each point on the push function was an average over 6000 trials, and therefore the regression slopes are highly reliable.) A quantitative, observable measure of pull is the standard deviation (SD) of responses. As push and pull effects diminish, SDs should decrease. In human subjects, the response SDs in the first and second half of the experiment are 0.43 and 0.33, respectively. In the simulation, the response SDs are 0.51 and 0.38. Shrink reflects the fact that the model is approaching a local optimum in log likelihood, causing gradients, and hence learning steps, to become smaller. Not all model parameter settings lead to shrink; as in any gradient-based algorithm, step sizes that are too large do not lead to convergence. However, such parameter settings make little sense in the context of the learning objective.

4.3 Model predictions

MLGA-Distrib produces greater pull of the target category toward the exemplar than push of the neighboring categories away from the exemplar. In the simulation, the magnitude of the target pull, measured by the movement of the prototype mean, is 0.105, contrasted with the neighbor push, which is 0.017. After observing this robust result in the simulation, we found pertinent experimental data. Using the categorization paradigm, Zotov et al. (2003) found that if the exemplar on trial t is near a category border, subjects are more likely to produce an error if the category on trial t − 1 is repeated (i.e., a pull effect just took place) than if the previous trial is of the neighboring category (i.e., a push effect), even when the distance between exemplars on t − 1 and t is matched. The greater probability of error translates to a greater magnitude of pull than push.

The experimental studies noted a phenomenon termed snap back. If the same target category is presented on successive trials, and an error is made on the first trial, subjects perform very accurately on the second trial, i.e., they generate an exemplar near the true category prototype. It appears as if subjects, realizing they have been slacking, reawaken and snap the category prototype back to where it belongs. We tested the model, but observed a sort of anti snap back: if the model made an error on the first trial, the mean deviation was larger, not smaller, on the second trial: 0.40 versus 0.32. Thus, MLGA-Distrib fails to explain this phenomenon. However, the phenomenon is not inconsistent with the model. One might suppose that on an error trial, subjects become more attentive, and increased attention might correspond to a larger learning rate on an error trial, which should yield a more accurate response on the following trial.
McLaren et al. (1995) studied a phenomenon in humans known as peak shift, in which subjects are trained to categorize unidimensional stimuli into one of two categories. Subjects are faster and more accurate when presented with exemplars far from the category boundary than those near the boundary. In fact, they respond more efficiently to far exemplars than they do to the category prototype. The results are characterized in terms of the prototype of one category being pushed away from the prototype of the other category. It seems straightforward to explain these data in MLGA-Distrib as a type of long-term push effect.

5 Related Work and Conclusions

Stewart, Brown, and Chater (2002) proposed an account of categorization context effects in which responses are based solely on the relative difference between the previous and present exemplars. No representation of the category prototype is maintained. However, classification based solely on relative difference cannot account for diminished bias effects as a function of experience. A long-term stable prototype representation, of the sort incorporated into our models, seems necessary.

We considered four models in our investigation, and the fact that only one accounts for the experimental data suggests that the data are nontrivial. All four models have principled theoretical underpinnings, and the space they define may suggest other elegant frameworks for understanding mechanisms of category learning. The successful model, MLGA-Distrib, offers a deep insight into understanding multiple-category domains: category structure must be considered. MLGA-Distrib exploits knowledge available to subjects performing the task concerning the ordinal relationships among categories. A model without this knowledge, MLGA-Local, fails to explain the data. Thus, the interrelatedness of categories appears to provide a source of constraint that individuals use in learning about the structure of the world.

Acknowledgments

This research was supported by NSF BCS 0339103 and NSF CSE-SMA 0509521. Support for the second author comes from an NSERC fellowship.

References

Jones, M. N., & Mewhort, D. J. K. (2003). Sequential contrast and assimilation effects in categorization of perceptual stimuli. Poster presented at the 44th Meeting of the Psychonomic Society, Vancouver, B.C.

Maybeck, P. S. (1979). Stochastic models, estimation, and control, Volume I. Academic Press.

McLaren, I. P. L., et al. (1995). Prototype effects and peak shift in categorization. JEP:LMC, 21, 662-673.

Stewart, N., Brown, G. D. A., & Chater, N. (2002). Sequence effects in categorization of simple perceptual stimuli. JEP:LMC, 28, 3-11.

Zotov, V., Jones, M. N., & Mewhort, D. J. K. (2003). Trial-to-trial representation shifts in categorization. Poster presented at the 13th Meeting of the Canadian Society for Brain, Behaviour, and Cognitive Science, Hamilton, Ontario.
Learning to classify complex patterns using a VLSI network of spiking neurons

Srinjoy Mitra, Giacomo Indiveri and Stefano Fusi
Institute of Neuroinformatics, UZH|ETH, Zurich
Center for Theoretical Neuroscience, Columbia University, New York
srinjoy|giacomo|fusi@ini.phys.ethz.ch

Abstract

We propose a compact, low-power VLSI network of spiking neurons which can learn to classify complex patterns of mean firing rates on-line and in real-time. The network of integrate-and-fire neurons is connected by bistable synapses that can change their weight using a local spike-based plasticity mechanism. Learning is supervised by a teacher which provides an extra input to the output neurons during training. The synaptic weights are updated only if the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). We present experimental results that demonstrate how this VLSI network is able to robustly classify uncorrelated linearly separable spatial patterns of mean firing rates.

1 Introduction

Spike-driven synaptic plasticity mechanisms have been thoroughly investigated in recent years to solve two important problems of learning: 1) how to modify the synapses in order to generate new memories, and 2) how to protect old memories against the passage of time, and the overwriting of new memories by ongoing activity. Temporal patterns of spikes can be encoded with spike-timing dependent plasticity (STDP) mechanisms (e.g. see [1, 2]). However, STDP in its simplest form is not suitable for learning patterns of mean firing rates [3], and most of the proposed STDP learning algorithms solved the problems of memory encoding and memory preservation only for relatively simple patterns of mean firing rates. Recently a new model of stochastic spike-driven synaptic plasticity has been proposed [4] that is very effective in protecting old learned memories, and captures the rich phenomenology observed in neurophysiological experiments on synaptic plasticity, including STDP protocols. It has been shown that networks of spiking neurons that use this synaptic plasticity model can learn to classify complex patterns of spike trains ranging from stimuli generated by auditory/vision sensors to images of handwritten digits from the MNIST database [4].

Here we describe a neuromorphic VLSI implementation of this spike-driven synaptic plasticity model and present classification experiments using the VLSI device that validate the model's implementation. The silicon neurons and synapses inside the chip are implemented using full custom hybrid analog/digital circuits, and the network's spikes are received in input and transmitted in output using asynchronous digital circuits. Each spike is represented as an Address-Event, where the address encodes either the source neuron or the destination synapse. This device is part of an increasing collection of spike-based computing chips that have been recently developed within the framework of Address-Event Representation (AER) systems [5, 6]. There are even multiple implementations of the same spike-driven plasticity model being investigated in parallel [7, 8]. The focus of this paper is to show that the VLSI device proposed here can successfully classify complex patterns of spike trains, producing results that are in accordance with the theoretical predictions.

Figure 1: Layout of a test chip comprising a network of I&F neurons and plastic synapses. The placement of a single neuron along with its synapses is highlighted in the top part of the figure. Other highlighted circuits are described in the text.
In Section 2 we describe the main features of the spike-based plasticity model and show how they are well suited for future scaled CMOS VLSI technologies; in Section 3 we characterize the functionality of the spike-based learning circuits; in Section 4 we show control experiments on the learning properties of the VLSI network; and in Section 5 we present experimental results on complex patterns of mean firing rates. In Section 6 we present the concluding remarks and point out future outlooks and potential applications of this system.

2 Implementation of the spike-based plasticity mechanism

Physical implementations of long-lasting memories, either biological or electronic, are confronted with two hard limits: the synaptic weights are bounded (they cannot grow indefinitely or become negative), and the resolution of the synapse is limited (i.e. the synaptic weight cannot have an infinite number of states). These constraints, usually ignored by the vast majority of software models, have a strong impact on the classification performance of the network, and on its memory storage capacity. It has been demonstrated that the number of random uncorrelated patterns p which can be classified or stored in a network of neurons connected by bounded synapses grows only logarithmically with the number of synapses [9]. In addition, if each synapse has n stable states (i.e. its weight has to traverse n states to go from the lower bound to the upper bound), then the number of patterns p can grow quadratically in n. However, this can happen only in unrealistic scenarios, where fine tuning of the network's parameters is allowed. In more realistic scenarios where there are inhomogeneities and variability (as is the case for biology and silicon), p is largely independent of n [9]. Therefore, an efficient strategy for implementing long-lasting memories in VLSI networks of spiking neurons is to use a large number of synapses with only two stable states (i.e. n = 2), and to modify their weights in a stochastic manner, with a small probability. This slows down the learning process, but has the positive effect of protecting previously stored memories from being overwritten. Using this strategy we can build large networks of spiking neurons with very compact learning circuits (e.g. that do not require local Analog-to-Digital Converters or floating-gate cells for storing weight values). By construction, these types of devices operate in a massively parallel fashion and are fault-tolerant: even if a considerable fraction of the synaptic circuits is faulty due to fabrication problems, the overall functionality of the chip is not compromised. This can be a very favorable property in view of the potential problems of future scaled VLSI processes.

The VLSI test chip used to carry out classification experiments implementing such a strategy is shown in Fig. 1. The chip comprises 16 low-power integrate-and-fire (I&F) neurons [5] and 2048 dynamic synapses. It was fabricated using a standard 0.35 μm CMOS technology, and occupies an area of 6.1 mm². We use an AER communication infrastructure that allows the chip to receive and transmit asynchronous events (spikes) off-chip to a workstation (for data logging and prototyping) and/or to other neuromorphic event-based devices [10]. An on-chip multiplexer can be used to reconfigure the neuron's internal dendritic tree connectivity.
A single neuron can be connected to 128, 256, 512 or 1024 synapses. Depending on the multiplexer state the number of active neurons decreases from 16 to 2. In this work we configured the chip to use all 16 neurons with 128 synapses per neuron. The synapses are divided into different functional blocks: 4 are excitatory with fixed (externally adjustable) weights, 4 inhibitory and 120 excitatory with local learning circuits.

Every silicon neuron in the chip can be used as a classifier that separates the input patterns into two categories. During training, the patterns to be classified are presented to the pre-synaptic synapses, in parallel with a teacher signal that represents the desired response. The post-synaptic neuron responds with an activity that is proportional to its net input current, generated by the input pattern weighted by the learned synaptic efficacies, and by the teacher signal. If the neuron's mean activity is in accordance with the teacher signal (typically either very high or very low), then the output neuron produces the correct response. In this case the synapses should not be updated. Otherwise, the synapses are updated at the time of arrival of the (Poisson distributed) input spikes, and eventually make a transition to one of the two stable states. Such stochasticity, in addition to the "stop-learning" mechanism which prevents the synapses from being modified when the output is correct, allows each neuron to classify a wide class of highly correlated, linearly separable patterns. Furthermore, by using more than one neuron per class, it is possible to classify also complex non-linearly separable patterns [4].

Figure 2: (a) Plastic synapse circuits belonging to the neuron's dendritic tree. The synaptic weight node w is modified when there is a pre-synaptic input (i.e. when S1 and S2 are on) depending on the values of VUP and VDN. In parallel, the bistable circuit slowly drives the node w toward either of its two stable states depending on its amplitude. The DPI is a pulse integrator circuit that produces an Excitatory Post-Synaptic Current (IEPSC), with an amplitude that depends on the synaptic weight w. (b) Neuron "soma" block diagram with stop-learning module. It comprises a low-power I&F neuron block, a DPI integrator, a voltage comparator and three current comparators (CC). Winner-take-all (WTA) circuits are used as current comparators that set the output to be either the bias current IB, or zero. The voltage comparator enables either the IUP or the IDN block, depending on the value of Vmem with respect to Vmth. The voltages VUP and VDN are used to broadcast the values of IUP and IDN to the neuron's dendritic tree.

3 The VLSI learning circuits

The learning circuits are responsible for locally updating the synaptic weights with the spike-based learning rule proposed in [4]. Upon the arrival of a pre-synaptic spike (an address-event), the plastic synapse circuit updates its weight w according to the spike-driven learning rule. The synapse then produces an Excitatory Post-Synaptic Current (EPSC) with an amplitude proportional to its weight, and with an exponential time course that can be set to last from microseconds to several hundred milliseconds [11].
The EPSC currents of all synapses afferent to the target neuron are summed into the neuron's membrane capacitance, and eventually the I&F neuron's membrane potential exceeds a threshold and the circuit generates an output spike. As prescribed by the model of [4], the post-synaptic neuron's membrane potential, together with its mean firing rate, are used to determine the weight change values Δw. These weight change values are expressed in the chip as subthreshold currents. Specifically, the signal that triggers positive weight updates is represented by an IUP current, and the signal that triggers weight decreases is represented by the IDN current. The weight updates are performed locally at each synapse, in a pre-synaptic weight update module, while the Δw values are computed globally (for each neuron), in a post-synaptic weight control module.

Figure 3: Post-synaptic circuit data. (a) State of the VUP and VDN voltages as a function of the calcium concentration voltage VCa. (b) State of the VUP and VDN voltages as a function of the membrane potential Vmem. This data corresponds to a zoomed version of the data shown in (a) for VCa ≈ 2.8 V.

3.1 Pre-synaptic weight-update module

This module, shown in Fig. 2(a), comprises four main blocks: an input AER interfacing circuit [12], a bistable weight refresh circuit, a weight update circuit, and a log-domain current-mode integrator, dubbed the "diff-pair integrator" (DPI) circuit and fully characterized in [11]. Upon the arrival of an input event (pre-synaptic spike), the asynchronous AER interfacing circuits produce output pulses that activate switches S1 and S2. Depending on the values of IUP and IDN, mirrored from the post-synaptic weight control module, the node w charges up, discharges toward ground, or does not get updated. The same input event activates the DPI circuit, which produces an EPSC current (IEPSC) with an amplitude that depends on the synaptic weight value w. In parallel, the bistable weight refresh circuit slowly drives w toward one of two stable states depending on whether it is higher or lower than a set threshold value. The two stable states are global analog parameters, set by external bias voltages.

3.2 Post-synaptic weight control module

This module is responsible for generating the two global signals VUP and VDN, mirrored to all synapses belonging to the same dendritic tree. Post-synaptic spikes (Vspk), generated in the soma, are integrated by another instance of the DPI circuit to produce a current ICa proportional to the neuron's average spiking activity. This current is compared to three threshold values, Ik1, Ik2, and Ik3 of Fig. 2(b), using three current-mode winner-take-all circuits [13]. In parallel, the instantaneous value of the neuron's membrane potential Vmem is compared to the threshold Vmth (see Fig. 2(b)). The values of IUP and IDN depend on the state of the neuron's membrane potential and its average frequency. Specifically, if Ik1 < ICa < Ik3 and Vmem > Vmth, then IUP = IB. If Ik1 < ICa < Ik2 and Vmem < Vmth, then IDN = IB. Otherwise both IUP and IDN are null.
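The update-gating conditions above, together with the pre-synaptic update and the bistable refresh, can be restated as a short sketch. The step sizes and the stable states at 0 and 1 are illustrative stand-ins for the analog bias parameters, not chip values:

```python
def weight_control(I_Ca, V_mem, V_mth, Ik1, Ik2, Ik3, I_B):
    """Post-synaptic weight control module (Section 3.2): the pair
    (I_UP, I_DN) broadcast to all synapses on the dendritic tree.
    Outside the calcium window no updates are enabled (stop-learning)."""
    if Ik1 < I_Ca < Ik3 and V_mem > V_mth:
        return I_B, 0.0                   # enable weight increases
    if Ik1 < I_Ca < Ik2 and V_mem < V_mth:
        return 0.0, I_B                   # enable weight decreases
    return 0.0, 0.0

def on_pre_spike(w, I_UP, I_DN, dw=0.05):
    """Pre-synaptic weight update module (Section 3.1): on an input event
    the weight node charges up, discharges, or is left untouched."""
    if I_UP > 0.0:
        return w + dw
    if I_DN > 0.0:
        return w - dw
    return w

def bistable_refresh(w, theta=0.5, drift=0.01):
    """Bistable refresh: between events, w drifts slowly toward the
    nearer of its two stable states (taken here to be 0 and 1)."""
    return min(w + drift, 1.0) if w > theta else max(w - drift, 0.0)
```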
To characterize these circuits we injected a step current in the neuron, produced a regular output mean firing rate, and measured the voltages VCa, VUP, and VDN (see Fig. 3(a)). VCa is the gate voltage of the P-FET transistor producing ICa, while VDN and VUP are the gate voltages of the P- and N-FET transistors mirroring IDN and IUP respectively (Fig. 2(a)). The neuron's spikes are integrated and the output current ICa increases with an exponential profile over time (VCa decreases accordingly over time, as shown in Fig. 3(a)). The steady-state asymptotic value depends on the average input frequency, as well as the circuit's bias parameters [11]. As ICa becomes larger than the first threshold Ik1 (VCa decreases below the corresponding threshold voltage) both VUP and VDN are activated. When ICa becomes larger than the second threshold Ik2 the VDN signal is deactivated, and finally as ICa becomes larger than the third threshold Ik3, also the VUP signal is switched off. The small (≈300 mV) changes in VUP and VDN produce subthreshold currents (IUP and IDN) that are mirrored to the synapses (Fig. 2(a)). In Fig. 3(b) the VDN and VUP signals are zoomed in along with the membrane potential of the post-synaptic neuron (Vmem), for values of VCa ≈ 2.8 V. Depending on the state of Vmem, the signals VUP and VDN are activated or inactivated. When not null, the currents IUP and IDN are complementary in nature: only one of the two is equal to IB.

Figure 4: Stochastic synaptic LTP transition: in both sub-figures the non-plastic synapse is stimulated with Poisson distributed spikes at a rate of 250 Hz, making the post-synaptic neuron fire at approximately 80 Hz, and the plastic synapse is stimulated with Poisson distributed spike trains of 100 Hz. (a) The updates in the synaptic weight did not produce any LTP transition during the 250 ms stimulus presentation. (b) The updates in the synaptic weight produced an LTP transition that remains consolidated.

4 Stochastic plasticity

To characterize the stochastic nature of the weight update process we stimulated the neuron's plastic synapses with Poisson distributed spike trains. When any irregular spike train is used as a pre-synaptic input, the synaptic weight voltage crosses the synapse bistability threshold in a stochastic manner, and the probability of crossing the threshold depends on the input's mean frequency. Therefore Long Term Potentiation (LTP) or Long Term Depression (LTD) occurs stochastically even when the mean firing rates of the input and the output are always the same. In Fig. 4 we show two instances of a learning experiment in which the mean input firing rate (bottom row) was 100 Hz, and the mean output firing rate (top row) was 80 Hz. Although these frequencies were the same for both experiments, LTP occurred only in one of the two cases (compare the synaptic weight changes in the middle row of both panels). In this experiment we set the efficacy of the "high" state of all plastic synapses to a relatively low value. In this way the neuron's mean output firing rate depends primarily on the teacher signal, irrespective of the states of the plastic synapses. One essential feature of this learning rule is the non-monotonicity of both the LTP/LTD probabilities as a function of the post-synaptic firing frequency νpost [4]. Such a non-monotonicity is essential to slow down and eventually stop learning when νpost is very high or very low (indicating that the learned synaptic weights are already correctly classifying the input pattern).
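These stochastic transitions are easy to reproduce in simulation. In the sketch below (all constants illustrative and chosen by us), Poisson pre-synaptic spikes drive jumps in the weight, the up/down direction standing in for the instantaneous VUP/VDN state at spike arrival, while a bistable drift consolidates the outcome:

```python
import numpy as np

rng = np.random.default_rng(7)

def presentation(rate_pre=100.0, T=0.25, p_up=0.5, dw=0.12,
                 theta=0.5, drift=0.4, dt=1e-3):
    """One 250 ms stimulus presentation; returns True on an LTP transition.
    On each Poisson pre-synaptic spike, w jumps up with probability p_up
    and down otherwise; between spikes a bistable drift pulls w toward the
    nearer stable state (0 or 1)."""
    w = 0.0                                  # start in the low stable state
    for _ in range(int(T / dt)):
        if rng.random() < rate_pre * dt:     # pre-synaptic spike this step
            w += dw if rng.random() < p_up else -dw
        w += drift * dt if w > theta else -drift * dt   # bistable refresh
        w = min(max(w, 0.0), 1.0)
    return w > theta

# Identical input statistics, stochastic outcome, as in Fig. 4(a) vs. 4(b).
n_ltp = sum(presentation() for _ in range(200))
print(f"LTP transitions: {n_ltp}/200")
```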
In Fig. 5 we show experimental results where we measured the LTP and LTD transitions of 60 synapses over 20 training sessions: for the LTD case (top row) we initialized the synapses to a high state (white pixel) and plotted a black pixel if the final state was low at the end of the training session. The transitions (white to black) are random in nature and occur with a probability that first increases and then decreases with νpost. An analogous experiment was done for the LTP transitions (bottom row), but with complementary settings (the initial state was set to a low value). In Fig. 5(b) we plot the LTD (top row) and LTP (bottom row) probabilities measured for a single synapse. The shape of these curves can be modified by acting on the post-synaptic weight control module bias parameters such as Ik1-k3, or IB.

Figure 5: (a) LTD and LTP transitions of 60 synapses measured across 20 trials, for different values of the post-synaptic frequency νpost (top label on each panel). Each black pixel represents a low synaptic state, and each white pixel a high one. On the x-axis of each panel we plot the trial number (1 to 20) and the y-axis shows the state of the synapses at the end of each trial. In the top row we show the LTD transitions that occur after initializing all the synapses to the high state. In the bottom row we show the LTP transitions that occur after initializing the synapses to the low state. The transitions are stochastic and the LTP/LTD probabilities peak at different frequencies before falling off at higher νpost, validating the stop-learning algorithm. No data was taken for the gray panels. (b) Transition probabilities measured for a single synapse as a function of νpost. The transition probabilities can be reduced by decreasing the value of IB. The probability peaks can also be modified by changing the biases that set Ik1-k3 (Fig. 2(b)).

Figure 6: A typical training scenario with 2 random binary spatial patterns (the schematic shows excitatory plastic, excitatory non-plastic, and inhibitory non-plastic synapses, high (30 Hz) and low (2 Hz) input states, the teacher inputs T+ and T−, and the I&F output neurons for classes C+ and C−). High and low inputs are encoded by Poisson spike trains with mean frequencies of 30 Hz and 2 Hz respectively. Binary patterns are assigned to the C+ or C− class arbitrarily. During training, patterns belonging to the C+ class are combined with a T+ (teacher) input spike train with a 250 Hz mean firing rate. Similarly, patterns belonging to the C− class are combined with a T− spike train of 20 Hz mean firing rate. New Poisson distributed spike trains are generated for each training iteration.

5 Classification of random spatial patterns

In order to evaluate the chip's classification ability, we used spatial binary patterns of activity, randomly generated (see Fig. 6). The neuron's plastic synapses were stimulated with Poisson spike trains of either high (30 Hz) or low (2 Hz) mean firing rates. The high/low binary state of the input was chosen randomly, and the number of synapses used was 60. Each 60-input binary pattern was then randomly assigned to either the C+ or the C− class. During training, spatial patterns belonging to the C+ class are presented to the neuron in conjunction with a T+ teacher signal (i.e. a 250 Hz Poisson spike train). Conversely, patterns belonging to the C− class are combined with a T− teacher signal of 20 Hz. The T+ and T− spike trains are presented to the neuron's non-plastic synapses. Training sessions with C+ and C− patterns are interleaved in a random order, for 50 iterations. Each stimulus presentation lasted 500 ms, with new Poisson distributions generated at each training session.
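Abstracting away the spiking dynamics, the training protocol amounts to a perceptron-like rule with bounded weights and error-gated (stop-learning) updates. A deliberately simplified software caricature, with rates collapsed to binary inputs and all constants ours:

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn, n_pat, n_iter = 60, 4, 50
X = rng.integers(0, 2, size=(n_pat, n_syn)).astype(float)  # high/low inputs
y = np.array([+1, +1, -1, -1])                             # C+ / C- labels

w = np.full(n_syn, 0.5)        # binary and stochastic on the chip;
theta = 0.25 * n_syn           # real-valued and deterministic here
step = 0.05

for _ in range(n_iter):                      # interleaved training sessions
    for i in rng.permutation(n_pat):
        out = +1 if X[i] @ w > theta else -1
        if out != y[i]:                      # stop-learning: update on error
            w = np.clip(w + step * y[i] * X[i], 0.0, 1.0)  # bounded weights

print([(+1 if X[i] @ w > theta else -1) == y[i] for i in range(n_pat)])
```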
After training, the neuron is tested to see if it can correctly distinguish between patterns belonging to the two classes C+ and C−. The binary patterns used during training are presented to the neuron without the teacher signal, and the neuron's mean firing rate is measured. In Fig. 7(a) we plot the responses of two neurons labeled neuron-A and neuron-B. Neuron-A was trained to produce a high output firing rate in response to patterns belonging to class C+, while neuron-B was trained to respond to patterns belonging to class C−. As shown, a single threshold (e.g. at 20 Hz) is enough to classify the output into the C+ (high frequency) and C− (low frequency) classes.

Figure 7: Classification results, after training on 4 patterns. (a) Mean output frequencies of neurons trained to recognize class C+ patterns (neuron-A) and class C− patterns (neuron-B). Patterns 1, 2 belong to class C+, while patterns 3, 4 belong to class C−. (b) Output frequency probability distribution, for all C+ patterns (top) and C− patterns (bottom), computed over 20 independent experiments.

Fig. 7(b) shows the probability distribution of post-synaptic frequencies (of neuron-A) over different classification experiments, each done with new sets of random spatial patterns. To quantify the chip's classification behavior statistically, we employed a Receiver Operating Characteristics (ROC) analysis [14]. Figure 8(a) shows the area under the ROC curve (AUC) plotted on the y-axis for an increasing number of patterns. An AUC magnitude of 1 represents 100% correct classification while 0.5 represents chance level. In Fig. 8(b) the storage capacity p, expressed as the number of patterns with AUC larger than 0.75, is plotted against the number of synapses N. The top and bottom traces show the theoretical predictions from [3], with and without the stop-learning condition, respectively. The performance of the VLSI system with 20, 40 and 60 synapses and the stop-learning condition lies within the two theoretical curves.

6 Conclusions

We implemented in a neuromorphic VLSI device a recently proposed spike-driven synaptic plasticity model that can classify complex patterns of spike trains [4]. We presented results from the VLSI chip that demonstrate the correct functionality of the spike-based learning circuits, and performed classification experiments on random uncorrelated binary patterns that confirm the theoretical predictions. Additional experiments have demonstrated that the chip can be applied to the classification of correlated spatial patterns of mean firing rates as well [15]. To our knowledge, the classification performance achieved with this chip has not been reported for any other silicon system. These results show that the device tested can perform real-time classification of sequences of spikes, and is therefore an ideal computational block for adaptive neuromorphic sensory-motor systems and brain-machine interfaces.
Acknowledgment

This work was supported by the Swiss National Science Foundation grant no. PP00A106556, the ETH grant no. TH02017404, and by the EU grants ALAVLSI (IST-2001-38099) and DAISY (FP6-2005-015803).

Figure 8: (a) Area under the ROC curve (AUC) measured by performing 50 classification experiments. (b) Storage capacity (number of patterns with AUC value ≥ 0.75) as a function of the number of plastic synapses used. The solid line represents the data obtained from the chip, while the top and bottom traces represent the theoretical predictions with and without the stop-learning condition.

References

[1] R. Gütig and H. Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9:420-428, 2006.

[2] R. A. Legenstein, C. Näger, and W. Maass. What can a neuron learn with spike-timing-dependent plasticity? Neural Computation, 17(11):2337-2382, 2005.

[3] S. Fusi and W. Senn. Eluding oblivion with smart stochastic selection of synaptic updates. Chaos, An Interdisciplinary Journal of Nonlinear Science, 16(026112):1-11, 2006.

[4] J. Brader, W. Senn, and S. Fusi. Learning real world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation, 2007. (In press).

[5] G. Indiveri, E. Chicca, and R. Douglas. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Transactions on Neural Networks, 17(1):211-221, Jan 2006.

[6] J. Arthur and K. Boahen. Learning in silicon: Timing is everything. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.

[7] D. Badoni, M. Giulioni, V. Dante, and P. Del Giudice. An aVLSI recurrent network of spiking neurons with reconfigurable and plastic synapses. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 1227-1230. IEEE, May 2006.

[8] G. Indiveri and S. Fusi. Spike-based learning in VLSI networks of integrate-and-fire neurons. In Proc. IEEE International Symposium on Circuits and Systems, ISCAS 2007, pages 3371-3374, 2007.

[9] S. Fusi and L. F. Abbott. Limits on the memory storage capacity of bounded synapses. Nature Neuroscience, 10:485-493, 2007.

[10] E. Chicca, P. Lichtsteiner, T. Delbrück, G. Indiveri, and R. J. Douglas. Modeling orientation selectivity using a neuromorphic multi-chip system. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 1235-1238. IEEE, 2006.

[11] C. Bartolozzi and G. Indiveri. Synaptic dynamics in analog VLSI. Neural Computation, 19:2581-2603, Oct 2007.

[12] K. A. Boahen. Point-to-point connectivity between neuromorphic chips using address-events. IEEE Transactions on Circuits and Systems II, 47(5):416-434, 2000.

[13] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead. Winner-take-all networks of O(n) complexity. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 703-711, San Mateo, CA, 1989. Morgan Kaufmann.

[14] T. Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, (26):861-874, 2006.

[15] S. Mitra, G. Indiveri, and S. Fusi. Robust classification of correlated patterns with a neuromorphic VLSI network of spiking neurons. In IEEE Proceedings on Biomedical Circuits and Systems (BioCAS 2008), 2008. (In press).
Structured Learning with Approximate Inference

Alex Kulesza and Fernando Pereira*
Department of Computer and Information Science
University of Pennsylvania
{kulesza, pereira}@cis.upenn.edu

Abstract

In many structured prediction problems, the highest-scoring labeling is hard to compute exactly, leading to the use of approximate inference methods. However, when inference is used in a learning algorithm, a good approximation of the score may not be sufficient. We show in particular that learning can fail even with an approximate inference method with rigorous approximation guarantees. There are two reasons for this. First, approximate methods can effectively reduce the expressivity of an underlying model by making it impossible to choose parameters that reliably give good predictions. Second, approximations can respond to parameter changes in such a way that standard learning algorithms are misled. In contrast, we give two positive results in the form of learning bounds for the use of LP-relaxed inference in structured perceptron and empirical risk minimization settings. We argue that without understanding combinations of inference and learning, such as these, that are appropriately compatible, learning performance under approximate inference cannot be guaranteed.

1 Introduction

Structured prediction models commonly involve complex inference problems for which finding exact solutions is intractable [1]. There are two ways to address this difficulty. Directly, models used in practice can be restricted to those for which inference is feasible, such as conditional random fields on trees [2] or associative Markov networks with binary labels [3]. More generally, however, efficient but approximate inference procedures have been devised that apply to a wide range of models, including loopy belief propagation [4, 5], tree-reweighted message passing [6], and linear programming relaxations [7, 3], all of which give efficient approximate predictions for graphical models of arbitrary structure.

Since some form of inference is the dominant subroutine for all structured learning algorithms, it is natural to see good approximate inference techniques as solutions to the problem of tractable learning as well. A number of authors have taken this approach, using inference approximations as drop-in replacements during training, often with empirical success [3, 8]. And yet there has been little theoretical analysis of the relationship between approximate inference and reliable learning. We demonstrate with two counterexamples that the characteristics of approximate inference algorithms relevant for learning can be distinct from those, such as approximation guarantees, that make them appropriate for prediction. First, we show that approximations can reduce the expressivity of a model, making previously simple concepts impossible to implement and hence to learn, even though inference meets an approximation guarantee. Second, we show that standard learning algorithms can be led astray by inexact inference, failing to find valid model parameters. It is therefore crucial to choose compatible inference and learning procedures. With these considerations in mind, we prove that LP-relaxation-based approximate inference procedures are compatible with the structured perceptron [9] as well as empirical risk minimization with a margin criterion using the PAC-Bayes framework [10, 11].

* This work is based on research supported by NSF ITR IIS 0428193.
2 Setting

Given a scoring model $S(y|x)$ over candidate labelings $y$ for input $x$, exact Viterbi inference is the computation of the optimal labeling
$$h(x) = \arg\max_y S(y|x). \qquad (1)$$
In a prediction setting, the goal of approximate inference is to compute efficiently a prediction with the highest possible score. However, in learning, a tight relationship between the scoring model and true utility cannot be assumed; after all, learning seeks to find such a relationship. Instead, we assume a fixed loss function $L(y|x)$ that measures the true cost of predicting $y$ given $x$, a distribution $D$ over inputs $x$, and a parameterized scoring model $S_\theta(y|x)$ with associated optimal labeling function $h_\theta$ and inference algorithm $A_\theta$. Exact inference implies $A_\theta = h_\theta$. Learning seeks the risk minimizer:
$$\theta^* = \arg\min_\theta E_{x\sim D}[L(A_\theta(x)|x)]. \qquad (2)$$
Successful learning, then, requires two things: the existence of $\theta$ for which risk is suitably low, and the ability to find such $\theta$ efficiently. In this work we consider the impact of approximate inference on both criteria.

We model our examples as pairwise Markov random fields (MRFs) defined over a graph $G = (V, E)$ with probabilistic scoring model
$$P(y|x) \propto \prod_{i\in V} \phi_i(y_i|x) \prod_{ij\in E} \phi_{ij}(y_i, y_j|x), \qquad (3)$$
where $\phi_i(y_i|x)$ and $\phi_{ij}(y_i, y_j|x)$ are positive potentials. For learning, we use log-linear potentials $\phi_i(y_i|x) = \exp(w \cdot f(x, y_i))$, assuming a feature function $f(\cdot)$ and parameter vector $w$. Since MRFs are probabilistic, we also refer to Viterbi inference as maximum a posteriori (MAP) inference.

3 Algorithmic separability

The existence of suitable model parameters $\theta$ is captured by the standard notion of separability.

Definition 1. A distribution $D$ (which can be empirical) is separable with respect to a model $S_\theta(y|x)$ and loss $L(y|x)$ if there exists $\theta$ such that $E_{x\sim D}[L(h_\theta(x), x)] = 0$.¹

However, approximate inference may not be able to match exactly the separating hypothesis $h_\theta$. We need a notion of separability that takes into account the (approximate) inference algorithm.

Definition 2. A distribution $D$ is algorithmically separable with respect to parameterized inference algorithm $A_\theta$ and loss $L(y|x)$ if there exists $\theta$ such that $E_{x\sim D}[L(A_\theta(x), x)] = 0$.

While separability characterizes data distributions with respect to models, algorithmic separability characterizes data distributions with respect to inference algorithms. Note that algorithmic separability is more general than standard separability for any decidable model, since we can design an (inefficient) algorithm $A_\theta(x) = h_\theta(x)$.² However, we show by counterexample that even algorithms with provable approximation guarantees can make separable problems algorithmically inseparable.

3.1 LP-relaxed inference

Consider the simple Markov random field pictured in Figure 1, a triangle in which each node has as its set of allowed labels a different pair of the three possible labels A, B, and C. Let the node potentials $\phi_i(y_i)$ be fixed to 1 so that labeling preferences derive only from edge potentials. For positive constants $\theta_{ij}$, define edge potentials $\phi_{ij}(y_i, y_j) = \exp(\theta_{ij})$ whenever $y_i = y_j$ and $\phi_{ij}(y_i, y_j) = 1$ otherwise.

¹ Separability can be weakened to allow nonzero risk, but for simplicity we focus on the strict case.
² Note further that algorithmic separability supports inference algorithms that are not based on any abstract model at all; such algorithms can describe arbitrary "black box" functions from parameters to predictions. It seems unlikely, however, that such algorithms are of much use since their parameters cannot be easily learned.
Then the joint probability of a configuration $y = (y_1, y_2, y_3)$ is given by
$$P(y) \propto \prod_{ij: y_i = y_j} \exp(\theta_{ij}) = \exp\Big(\sum_{i,j} I(y_i = y_j)\,\theta_{ij}\Big) \qquad (4)$$
and the MAP labeling is $\arg\max_y \sum_{i,j} I(y_i = y_j)\,\theta_{ij}$. Note that this example is associative; that is, neighboring nodes are encouraged to take identical labels ($\theta_{ij} > 0$). We can therefore perform approximate inference using a linear programming (LP) relaxation and get a multiplicative approximation guarantee [3]. We begin by writing an integer program for computing the MAP labeling; below, $\mu_i(y_i)$ indicates node $i$ taking label $y_i$ (which ranges over the two allowed labels for node $i$) and $\mu_{ij}(y_i, y_j)$ indicates nodes $i$ and $j$ taking labels $y_i$ and $y_j$, respectively.
$$\max_\mu\; \theta_{12}\,\mu_{12}(B,B) + \theta_{23}\,\mu_{23}(C,C) + \theta_{31}\,\mu_{31}(A,A)$$
$$\text{s.t.}\;\; \sum_{y_i} \mu_i(y_i) \le 1 \;\;\forall i, \qquad \mu_{ij}(y_i, y_j) \le \mu_i(y_i) \;\;\forall ij, y_i, y_j, \qquad \mu \in \{0,1\}^{\dim(\mu)}$$

[Figure 1: A simple MRF. Each node is annotated with its allowed labels.]

Integer programming is NP-hard, so we use an LP-relaxation by replacing the integrality constraint with $\mu \ge 0$. Letting $i^*j^* = \arg\max_{ij} \theta_{ij}$, it is easy to see that the correct MAP configuration assigns matching labels to nodes $i^*$ and $j^*$ and an arbitrary label to the third. The score for this configuration is $\theta_{i^*j^*}$. However, the LP-relaxation may generate fractional solutions. In particular, whenever $(\theta_{12} + \theta_{23} + \theta_{31})/2 > \theta_{i^*j^*}$, the configuration that assigns to every node both of its allowed labels in equal proportion, $\mu = 1/2$, is optimal. The fractional labeling $\mu = 1/2$ is the most uninformative possible; it suggests that all labelings are equally valid. Even so, $(\theta_{12} + \theta_{23} + \theta_{31})/2 \le 3\theta_{i^*j^*}/2$ by the definition of $i^*j^*$, so LP-relaxed inference for this MRF has a relatively good approximation ratio of 3/2.

3.2 Learning with LP-relaxed inference

Suppose now that we wish to learn to predict labelings $y$ from instances of the MRF in Figure 1 with positive features given by $x = (x_{12}, x_{23}, x_{31})$. We will parameterize the model using a positive weight vector $w = (w_{12}, w_{23}, w_{31})$, letting $\theta_{ij} = w_{ij} x_{ij}$. Suppose the data distribution gives equal probability to inputs $x = (4, 3, 3)$, $(3, 4, 3)$, and $(3, 3, 4)$, and that the loss function is defined as follows. Given $x$, let $i^*j^* = \arg\max_{ij} x_{ij}$. Then assigning matching labels to nodes $i^*$ and $j^*$ and an arbitrary label to the third node yields a 0-loss configuration. All other configurations have positive loss.

It is clear, first of all, that this problem is separable; if $w = (1, 1, 1)$, then $\theta_{ij} = x_{ij}$ and the solution to the integer program above coincides with the labeling rule. Furthermore, there is margin: any weight vector in a neighborhood of $(1, 1, 1)$ assigns the highest probability to the correct labeling.

Using LP-relaxed inference, however, the problem is impossible to learn. In order to correctly label the instance $x = (4, 3, 3)$ we must have, at a minimum, $\theta_{12} > \theta_{23}, \theta_{31}$ (equivalently $4w_{12} > 3w_{23}, 3w_{31}$), since the 0-loss labeling must have a higher objective score than any other labeling. Reasoning similarly for the remaining instances, any separating weight vector must satisfy $4w_{ij} > 3w_{kl}$ for each pair of edges $(ij, kl)$. Without loss of generality, assume an instance to be labeled has feature vector $x = (4, 3, 3)$. Then,
$$\frac{1}{2}(\theta_{12} + \theta_{23} + \theta_{31}) = \frac{1}{2}(4w_{12} + 3w_{23} + 3w_{31}) > \frac{1}{2}\Big(4w_{12} + 3\cdot\tfrac{3}{4}w_{12} + 3\cdot\tfrac{3}{4}w_{12}\Big) > 4w_{12} = \theta_{12}.$$
As a result, LP-relaxed inference predicts $\mu = 1/2$.
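To see this concretely, the following sketch (assuming NumPy and SciPy are available; the variable layout is our own illustration, not part of the original formulation) solves the relaxed LP for the instance $x = (4, 3, 3)$ with $w = (1, 1, 1)$ and recovers the fractional optimum derived above.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order (our own layout for this sketch):
# [mu1(A), mu1(B), mu2(B), mu2(C), mu3(C), mu3(A), mu12, mu23, mu31]
theta = np.array([4.0, 3.0, 3.0])   # theta_12, theta_23, theta_31
c = np.zeros(9)
c[6:] = -theta                       # linprog minimizes, so negate the objective

A_ub, b_ub = [], []
for i in range(3):                   # sum_y mu_i(y) <= 1 for each node
    row = np.zeros(9)
    row[2 * i] = row[2 * i + 1] = 1
    A_ub.append(row)
    b_ub.append(1.0)
# mu_ij <= mu_i(y_i) for the agreeing label on each side of each edge:
# edge 12 agrees on B (vars 1, 2), edge 23 on C (vars 3, 4), edge 31 on A (vars 5, 0)
for e, pair in enumerate([(1, 2), (3, 4), (5, 0)]):
    for v in pair:
        row = np.zeros(9)
        row[6 + e] = 1
        row[v] = -1
        A_ub.append(row)
        b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * 9)
print(res.x)     # should be 0.5 everywhere: the uninformative fractional vertex
print(-res.fun)  # 5.0, beating the best integral score of 4.0
```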
The data cannot be correctly labeled using an LP-relaxation with any choice of weight vector, and the example is therefore algorithmically inseparable.

4 Insufficiency of algorithmic separability

We cannot expect to learn without algorithmic separability; no amount of training can hope to be successful when there simply do not exist acceptable model parameters. Nevertheless, we could draw upon the usual techniques for dealing with (geometric) inseparability in this case. Approximate inference introduces another complication, however. Learning techniques exploit assumptions about the underlying model to search parameter space; the perceptron, for example, assumes that increasing weights for features present in correct labelings but not incorrect labelings will lead to better predictions. While this is formally true with respect to an underlying linear model, inexact inference methods can disturb and even invert such assumptions.

4.1 Loopy inference

Loopy belief propagation (LBP) is a common approximate inference procedure in which max-product message passing, known to be exact for trees, is applied to arbitrary, cyclic graphical models [5]. While LBP is, of course, inexact, its behavior can be even more problematic for learning. Because LBP does not respond to model parameters in the usual way, its predictions can lead a learner away from appropriate parameters even for algorithmically separable problems.

Consider the simple MRF shown in Figure 2 and discussed previously in [6]. All nodes are binary and take labels from the set $\{-1, 1\}$. Suppose that node potentials are assigned by type, where each node is of type A or B as indicated and $\alpha$ and $\beta$ are real-valued parameters:
$$\phi_A(-1) = 1 \quad \phi_A(1) = e^\alpha \qquad \phi_B(-1) = 1 \quad \phi_B(1) = e^\beta$$

[Figure 2: An MRF on which LBP is inexact.]

Also let edge potentials $\psi_{ij}(y_i, y_j)$ be equal to the constant $\lambda$ when $y_i = y_j$ and 1 otherwise. Define $\lambda$ to be sufficiently positive that the MAP configuration is either $(-1, -1, -1, -1)$ or $(1, 1, 1, 1)$, abbreviated by $-1$ and $1$, respectively. In particular, the solution is $-1$ when $\alpha + \beta < 0$ and $1$ otherwise. With slight abuse of notation we can write $y_{MAP} = \mathrm{sign}(\alpha + \beta)$.

We now investigate the behavior of LBP on this example. In general, max-product LBP on pairwise MRFs requires iterating the following rule to update messages $m_{ij}(y_j)$ from node $i$ to node $j$, where $y_j$ ranges over the possible labels for node $j$ and $N(i)$ is the neighbor set of node $i$:
$$m_{ij}(y_j) = \max_{y_i}\Big(\psi_{ij}(y_i, y_j)\,\phi_i(y_i) \prod_{k\in N(i)\setminus\{j\}} m_{ki}(y_i)\Big). \qquad (5)$$
Since we take $\lambda$ to be suitably positive in our example, we can eliminate the max, letting $y_i = y_j$, and then divide to remove the edge potentials $\psi_{ij}(y_j, y_j) = \lambda$. When messages are initialized uniformly to 1 and passed in parallel, symmetry also implies that messages are completely determined by the types of the relevant nodes. The updates are then as follows.
$$m_{AB}(-1) = m_{BA}(-1) \qquad\qquad m_{AB}(1) = e^\alpha m_{BA}(1)$$
$$m_{BA}(-1) = m_{AB}(-1)\,m_{BB}(-1) \qquad m_{BA}(1) = e^\beta m_{AB}(1)\,m_{BB}(1)$$
$$m_{BB}(-1) = m_{AB}^2(-1) \qquad\qquad m_{BB}(1) = e^\beta m_{AB}^2(1)$$
Note that messages $m_{ij}(-1)$ remain fixed at 1 after any number of updates. Messages $m_{AB}(1)$, $m_{BA}(1)$, and $m_{BB}(1)$ always take the form $\exp(p\alpha + q\beta)$ for appropriate values of $p$ and $q$, and it is easy to show by iterating the updates that, for all three messages, $p$ and $q$ go to $\infty$ while the ratio $q/p$ converges to $\kappa \approx 1.089339$. The label-1 messages, therefore, approach 0 when $\alpha + \kappa\beta < 0$ and $\infty$ when $\alpha + \kappa\beta > 0$. Note that after message normalization ($m_{ij}(-1) + m_{ij}(1) = 1$ for all $ij$) the algorithm converges in either case.
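The convergence of $q/p$ is easy to verify numerically. Here is a minimal sketch (variable names are ours) iterating the exponent form $m(1) = \exp(p\alpha + q\beta)$ of the updates above:

```python
# Exponents (p, q) of the label-1 messages m(1) = exp(p*alpha + q*beta),
# initialized to (0, 0) since all messages start uniformly at 1.
pAB = qAB = pBA = qBA = pBB = qBB = 0.0
for _ in range(200):
    pAB, qAB, pBA, qBA, pBB, qBB = (
        1 + pBA, qBA,               # m_AB(1) = e^alpha * m_BA(1)
        pAB + pBB, 1 + qAB + qBB,   # m_BA(1) = e^beta  * m_AB(1) * m_BB(1)
        2 * pAB, 1 + 2 * qAB,       # m_BB(1) = e^beta  * m_AB(1)^2
    )
print(qAB / pAB, qBA / pBA, qBB / pBB)  # all ratios approach kappa ~ 1.089339

alpha, beta = 1.0, -0.95
print(alpha + beta > 0)                 # True: exact MAP is +1 ...
print(alpha + (qAB / pAB) * beta > 0)   # False: LBP predicts -1
```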
[Figure 3: A two-instance training set, (a) with label $y = -1$ and (b) with label $y = 1$. Within each instance, nodes of the same shading share a feature vector, as annotated. Below each instance is its correct labeling.]

Beliefs are computed from the converged messages as $b_i(y_i) \propto \prod_{j\in N(i)} m_{ji}(y_i)$, so we can express the prediction of LBP as $y_{LBP} = \mathrm{sign}(\alpha + \kappa\beta)$. Intuitively, then, LBP gives a slight preference to the B-type nodes because of their shared edge. If $\alpha$ and $\beta$ are both positive or both negative, or if $\alpha$ and $\beta$ differ in sign but $|\beta| > |\alpha|$ or $|\alpha| > \kappa|\beta|$, LBP finds the correct MAP solution. However, when the strength of the A nodes only slightly exceeds that of the B nodes ($\kappa|\beta| > |\alpha| > |\beta|$), the preference exerted by LBP is significant enough to flip the labels. For example, if $\alpha = 1$ and $\beta = -0.95$, the true MAP configuration is 1 but LBP converges to $-1$.

4.2 Learning with LBP

Suppose now that we wish to use the perceptron algorithm with LBP inference to learn the two-instance data set shown in Figure 3. For each instance the unshaded nodes are annotated with a feature vector $x^\alpha = (x^\alpha_1, x^\alpha_2)$ and the shaded nodes are annotated with a feature vector $x^\beta = (x^\beta_1, x^\beta_2)$. We wish to learn weights $w = (w_1, w_2)$, modeling node potentials as before with $\alpha = w \cdot x^\alpha$ and $\beta = w \cdot x^\beta$. Assume that edge potentials remain fixed using a suitably positive $\lambda$.

By the previous analysis, the data are algorithmically separated by $w^* = (1, -1)$. On instance (a), $\alpha = 1$, $\beta = -0.95$, and LBP correctly predicts $-1$. Instance (b) is symmetric. Note that although the predicted configurations are not the true MAP labelings, they correctly match the training labels. The weight vector $(1, -1)$ is therefore an ideal choice in the context of learning. The problem is also separated in the usual sense by the weight vector $(-1, 1)$.

Since we can think of the MAP decision problem as computing $\mathrm{sign}(\alpha + \beta) = \mathrm{sign}(w \cdot (x^\alpha + x^\beta))$, we can apply the perceptron algorithm with update $w \leftarrow w - \hat y(x^\alpha + x^\beta)$, where $\hat y$ is the sign of the proposed labeling. The standard perceptron mistake bound guarantees that separable problems require only a finite number of iterations with exact inference to find a separating weight vector. Here, however, LBP causes the perceptron to diverge even though the problem is not only separable but also algorithmically separable. Figure 4 shows the path of the weight vector as it progresses from the origin over the first 20 iterations of the algorithm.

[Figure 4: Perceptron learning path.]

During each pass through the data the weight vector is updated twice: once after mislabeling instance (a) ($w \leftarrow w - (1, 0.95)$), and again after mislabeling instance (b) ($w \leftarrow w + (0.95, 1)$). The net effect is $w \leftarrow w + (-0.05, 0.05)$. The weight vector continually moves in the opposite direction of $w^* = (1, -1)$, and learning diverges.

4.3 Discussion

To understand why perceptron learning fails with LBP, it is instructive to visualize the feasible regions of weight space. Exact inference correctly labels instance (a) whenever $w_1 + 0.95w_2 < 0$, and, similarly, instance (b) requires a weight vector with $0.95w_1 + w_2 > 0$. Weights that satisfy both constraints are feasible, as depicted in Figure 5(a). For LBP, the preference given to nodes 2 and 3 is effectively a scaling of $x^\beta$ by $\kappa \approx 1.089339$, so a feasible weight vector must satisfy $w_1 + 0.95\kappa w_2 < 0$ and $0.95\kappa w_1 + w_2 > 0$.
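The divergence of Section 4.2 can be replayed with the closed-form LBP decision $\mathrm{sign}(\alpha + \kappa\beta)$. In the sketch below the feature values are our own reconstruction, chosen so that $w^* = (1, -1)$ yields $\alpha = 1$, $\beta = -0.95$ on instance (a) and the symmetric values on instance (b); they are not given explicitly in the text.

```python
kappa = 1.089339
# (x_alpha, x_beta, label); reconstructed so that w* = (1, -1) gives
# alpha = 1, beta = -0.95 on (a) and alpha = -1, beta = 0.95 on (b)
data = [((1.0, 0.0), (0.0, 0.95), -1),   # instance (a)
        ((0.0, 1.0), (0.95, 0.0), +1)]   # instance (b)

w = [0.0, 0.0]
for epoch in range(20):
    for xa, xb, y in data:
        alpha = w[0] * xa[0] + w[1] * xa[1]
        beta = w[0] * xb[0] + w[1] * xb[1]
        y_hat = 1 if alpha + kappa * beta >= 0 else -1    # LBP prediction
        if y_hat != y:                                    # w <- w - y_hat*(xa + xb)
            w = [w[0] - y_hat * (xa[0] + xb[0]),
                 w[1] - y_hat * (xa[1] + xb[1])]
    print(epoch, w)   # drifts by (-0.05, 0.05) per pass, away from (1, -1)
```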
[Figure 5: The feasible regions of weight space for (a) exact inference and (b) LBP. Each numbered gray halfspace indicates the region in which the corresponding instance is correctly labeled; their intersection is the feasible region, colored black.]

Since $0.95\kappa > 1$, these constraints define a completely different feasible region of weight space, shown in Figure 5(b). It is clear from the figures why the perceptron does not succeed; it assumes that pushing weights into the feasible region of Figure 5(a) will produce correct labelings, while under LBP the exact opposite is required.

Algorithmic separability, then, is necessary for learning but may not be sufficient. This does not imply that no algorithm can learn using LBP; a grid search on weight space, for example, will be slow but successful. Instead, care must be taken to ensure that learning and inference are appropriately matched. In particular, it is generally invalid to assume that an arbitrary choice of approximate inference will lead to useful results when the learning method expects exact feedback.

5 Learning bounds for approximate inference

In contrast to the failure of LBP in Section 4, appropriate pairs of inference and learning algorithms do exist. We give two bounds using LP-relaxed inference for MRFs with log-linear potentials. First, under the assumption of algorithmic separability, we show that the structured perceptron of Collins [9] makes only a finite number of mistakes. Second, we show using the PAC-Bayesian framework [11] that choosing model parameters to minimize a margin-based empirical risk function (assuming "soft" algorithmic separability) gives rise to a bound on the true risk. In both cases, the proofs are directly adapted from known results using the following characterization of LP-relaxation.

Claim 1. Let $z = (z_1, \ldots, z_k)$ be the vector of 0/1 optimization variables for an integer program $P$. Let $Z \subseteq \{0,1\}^{\dim(z)}$ be the feasible set of $P$. Then replacing integrality constraints in $P$ with box constraints $0 \le z_i \le 1$ yields an LP with a feasible polytope having vertices $Z' \supseteq Z$.

Proof. Each $z \in Z$ is integral and thus a vertex of the polytope defined by box constraints alone. The remaining constraints appear in $P$ and by definition do not exclude any element of $Z$. The addition of constraints cannot eliminate a vertex without rendering it infeasible. Thus, $Z \subseteq Z'$. □

We can encode the MAP inference problem for MRFs as an integer program over indicators $z$ with objective $w \cdot \Phi(x, z)$ for some $\Phi$ linear in $z$ (see, for example, [6]). By Claim 1 and the fact that an optimal vertex always exists, LP-relaxed inference given an input $x$ computes
$$\mathrm{LP}_w(x) = \arg\max_{z\in Z'(x)} w \cdot \Phi(x, z). \qquad (6)$$
We can think of this as exact inference over an expanded set of labelings $Z'(x)$, some of which may not be valid (i.e., $z \in Z'(x)$ may be fractional). To simplify notation, we will assume that labelings $y$ are always translated into corresponding indicator values $z$.

5.1 Perceptron

Theorem 1 (adapted from Theorem 1 in [9]). Given a sequence of input/labeling pairs $\{(x_i, z_i)\}$, suppose that there exists a weight vector $w^*$ with unit norm and $\gamma > 0$ such that, for all $i$, $w^* \cdot (\Phi(x_i, z_i) - \Phi(x_i, z)) \ge \gamma$ for all $z \in Z'(x_i) \setminus \{z_i\}$. (The instances are algorithmically separable with margin $\gamma$.) Suppose that there also exists $R$ such that $\|\Phi(x_i, z_i) - \Phi(x_i, z)\| \le R$ for all $z \in Z'(x_i)$. Then the structured perceptron makes at most $R^2/\gamma^2$ mistakes.

Proof sketch. Let $w^k$ be the weight vector before the $k$th mistake; $w^1 = 0$.
Following the proof of Collins without modification, we can show that $\|w^{k+1}\| \ge k\gamma$. We now bound $\|w^{k+1}\|$ in the other direction. If $(x^k, z^k)$ is the instance on which the $k$th update occurs and $z^{LP(k)} = \mathrm{LP}_{w^k}(x^k)$, then by the update rule,
$$\|w^{k+1}\|^2 = \|w^k\|^2 + 2\,w^k \cdot \big(\Phi(x^k, z^k) - \Phi(x^k, z^{LP(k)})\big) + \|\Phi(x^k, z^k) - \Phi(x^k, z^{LP(k)})\|^2 \le \|w^k\|^2 + R^2. \qquad (7)$$
The inequality follows from the fact that LP-relaxed inference maximizes $w \cdot \Phi(x^k, z)$ over all $z \in Z'(x^k)$, so the middle term is nonpositive. Hence, by induction, $\|w^{k+1}\|^2 \le kR^2$. Combining the two bounds, $k^2\gamma^2 \le \|w^{k+1}\|^2 \le kR^2$, hence $k \le R^2/\gamma^2$. □

5.2 PAC-Bayes

The perceptron bound applies when data are perfectly algorithmically separable, but we might also hope to use LP-relaxed inference in the presence of noisy or otherwise almost-separable data. The following theorem adapts an empirical risk minimization bound using the PAC-Bayes framework to show that LP-relaxed inference can also be used to learn successfully in these cases. The measure of empirical risk for a weight vector $w$ over a sample $S = (x_1, \ldots, x_m)$ is defined as follows.
$$\hat R(w, S) = \frac{1}{m}\sum_{i=1}^m \max_{z\in H_w(x_i)} L(z|x_i) \qquad (8)$$
$$H_w(x) = \{z' \in Z'(x) \mid w \cdot (\Phi(x, \mathrm{LP}_w(x)) - \Phi(x, z')) \le |\mathrm{LP}_w(x) - z'|\}$$
Intuitively, $\hat R$ accounts for the maximum loss of any $z$ that is closer in score than in 1-norm to the LP prediction. Such $z$ are considered "confusable" at test time. The PAC-Bayesian setting requires that, after training, weight vectors are drawn from some distribution $Q(w)$; however, a deterministic version of the bound can also be proved.

Theorem 2 (adapted from Theorem 3 in [11]). Suppose that the loss function $L(y|x)$ is bounded between 0 and 1 and can be expanded to $L(z|x)$ for all $z \in Z'(x)$; that is, loss can be defined for every potential value of $\mathrm{LP}(x)$. Let $\ell = \dim(z)$ be the number of indicator variables in the LP, and let $R$ bound the 2-norm of a feature vector for a single clique. Let $Q(w)$ be a symmetric Gaussian centered at $w$ as defined in [11]. Then with probability at least $1 - \delta$ over the choice of a sample $S$ of size $m$ from distribution $D$ over inputs $x$, the following holds for all $w$.
$$E_{x\sim D,\, w'\sim Q(w)}\big[L(\mathrm{LP}_{w'}(x)\,|\,x)\big] \;\le\; \hat R(w, S) + \sqrt{\frac{R^2\|w\|^2 \ln\frac{2\ell m}{R^2\|w\|^2} + \ln\frac{m}{\delta}}{2(m-1)}} \qquad (9)$$
The proof in [11] can be directly adapted; the only significant changes are the use of $Z'$ in place of the set $Y$ of possible labelings and reasoning as above using the definition of LP-relaxed inference.

6 Related work

A number of authors have applied inference approximations to a wide range of learning problems, sometimes with theoretical analysis of approximation quality and often with good empirical results [8, 12, 3]. However, none to our knowledge has investigated the theoretical relationship between approximation and learning performance. Daumé et al. [13] developed a method for using a linear model to make decisions during a search-based approximate inference process. They showed that perceptron updates give rise to a mistake bound under the assumption that parameters leading to correct decisions exist. Such results are analogous to those presented in Section 5 in that performance bounds follow from an (implicit) assumption of algorithmic separability. Wainwright [14] proved that when approximate inference is required at test time due to computational constraints, using an inconsistent (approximate) estimator for learning can be beneficial.
His result suggests that optimal performance is obtained when the methods used for training and testing are appropriately aligned, even if those methods are not independently optimal. In contrast, we consider learning algorithms that use identical inference for both training and testing, minimizing a general measure of empirical risk rather than maximizing data likelihood, and argue for compatibility between the learning method and inference process.

Roth et al. [15] consider learning independent classifiers for single labels, essentially using a trivial form of approximate inference. They show that this method can outperform exact inference learning when algorithmic separability holds, precisely because approximation reduces expressivity; i.e., less complex models require fewer samples to train accurately. When the data are not algorithmically separable, exact inference provides better performance if a large enough sample is available. It is interesting to note that both of our counterexamples involve strong edge potentials. These are precisely the kinds of examples that are difficult to learn using independent classifiers.

7 Conclusion

Effective use of approximate inference for learning depends on two considerations that are irrelevant for prediction. First, the expressivity of approximate inference, and consequently the bias for learning, can vary significantly from that of exact inference. Second, learning algorithms can misinterpret feedback received from approximate inference methods, leading to poor results or even divergence. However, when algorithmic separability holds, the use of LP-relaxed inference with standard learning frameworks yields provably good results.

Future work includes the investigation of alternate inference methods that, while potentially less suitable for prediction alone, give better feedback for learning. Conversely, learning methods that are tailored specifically to particular inference algorithms might show improved performance over those that assume exact inference. Finally, the notion of algorithmic separability and the ways in which it might relate (through approximation) to traditional separability deserve further study.

References

[1] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks (research note). Artif. Intell., 42(2-3):393-405, 1990.
[2] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289, 2001.
[3] Ben Taskar, Vassil Chatalbashev, and Daphne Koller. Learning associative Markov networks. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning, page 102, 2004.
[4] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[5] Kevin Murphy, Yair Weiss, and Michael Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the 15th Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 467-475, 1999.
[6] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Transactions on Information Theory, 51(11):3697-3717, 2005.
[7] D. Roth and W. Yih. A linear programming formulation for global inference in natural language tasks. In Proc.
of the Conference on Computational Natural Language Learning (CoNLL), pages 1-8, 2004.
[8] Charles Sutton and Andrew McCallum. Collective segmentation and labeling of distant entities in information extraction. Technical Report TR # 04-49, University of Massachusetts, 2004.
[9] Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP '02: Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, pages 1-8, 2002.
[10] David A. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5-21, 2003.
[11] David McAllester. Generalization bounds and consistency for structured labeling. In Predicting Structured Data. MIT Press, to appear.
[12] Charles Sutton and Andrew McCallum. Piecewise training of undirected models. In 21st Conference on Uncertainty in Artificial Intelligence, 2005.
[13] Hal Daumé III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In International Conference on Machine Learning (ICML), 2005.
[14] Martin J. Wainwright. Estimating the "wrong" graphical model: Benefits in the computation-limited setting. Journal of Machine Learning Research, 7:1829-1859, 2006.
[15] V. Punyakanok, D. Roth, W. Yih, and D. Zimak. Learning and inference over constrained output. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1124-1129, 2005.
Competition adds complexity

Judy Goldsmith
Department of Computer Science
University of Kentucky
Lexington, KY
goldsmit@cs.uky.edu

Martin Mundhenk
Friedrich-Schiller-Universität Jena
Jena, Germany
mundhenk@cs.uni-jena.de

Abstract

It is known that determining whether a DEC-POMDP, namely, a cooperative partially observable stochastic game (POSG), has a cooperative strategy with positive expected reward is complete for NEXP. It was not known until now how cooperation affected that complexity. We show that, for competitive POSGs, the complexity of determining whether one team has a positive-expected-reward strategy is complete for NEXP^NP.

1 Introduction

From online auctions to Texas Hold'em, AI is captivated by multi-agent interactions based on competition. The problem of finding a winning strategy harks back to the first days of chess programs. Now, we are starting to have the capacity to handle issues like stochastic games, partial information, and real-time video inputs for human player modeling. This paper looks at the complexity of computations involving the first two factors: partially observable stochastic games (POSGs).

There are many factors that could affect the complexity of different POSG models: Do the players, collectively, have sufficient information to reconstruct a state? Do they communicate or cooperate? Is the game zero sum, or do the players' individual utilities depend on other players' utilities? Do the players even have models for other players' utilities? The ultimate question is, what is the complexity of finding a winning strategy for a particular player, with no assumptions about joint observations or knowledge of other players' utilities. Since a special case of this is the DEC-POMDP, where finding an optimal (joint, cooperative) policy is known to be NEXP-hard [1], this problem cannot be any easier than NEXP. We show that one variant of this problem is hard for the class NEXP^NP.

2 Definitions and Preliminaries

2.1 Partially observable stochastic games

A partially observable stochastic game (POSG) describes a multi-player stochastic game with imperfect information by its states and the consequences of the players' actions on the system. We follow the definition from [2] and denote it as a tuple $M = (I, S, s_0, A, O, t, o, r)$, where

- $I$ is the finite set $\{1, 2, \ldots, k\}$ of agents (or players), $S$ is a finite set of states, with distinguished initial state $s_0 \in S$, $A$ is a finite set of actions, and $O$ is a finite set of observations;
- $t: S \times A^k \times S \to [0, 1]$ is the transition probability function, where $t(s, a_1, \ldots, a_k, s')$ is the probability that state $s'$ is reached from state $s$ when each agent $i$ chooses action $a_i$;
- $o: S \times I \to O$ is the observation function, where $o(s, i)$ is the observation made in state $s$ by agent $i$; and
For each agent, a policy describes how to choose actions depending on observations made during the run of the process. A (history-dependent) policy ? chooses an action dependent on all observations made by the agent during the run of the process. This is described as a function ? : O? ? A, mapping each finite sequence of observations to an action. A trajectory ? of length |? | = m for M is a sequence of states ? = ?1 , ?2 , . . . , ?m (m ? 1, ?i ? S) which starts with the initial state of M , i.e. ?1 = s0 . Given policies ?1 , . . . , ?k , each trajectory ? has a probability prob(? , ?1 , . . . , ?k ). We will use some abbreviations in the sequel. For ?1 , . . . , ?k j we will write ?1k , and for ?1 (o(?1 , 1) ? ? ? o(? j , 1)), . . . , ?k (o(?1 , k) ? ? ? o(? j , k)) we will write ?1k (?1 ) accordingly. Then prob(? , ?1 , . . . , ?k ) is defined by prob(? , ?1k ) = |? |?1 ? t(?i , ?1k (?1i ), ?i+1 ) . i=1 We use Tl (s) to denote all length l trajectories which start in the initial state s0 and end in state s. The expected reward Ri (s, l, ?1k ) obtained by agent i in state s after exactly l steps under policies ?1k is the reward obtained in s by the actions according to ?1k weighted by the probability that s is reached after l steps, Ri (s, l, ?1k ) = r(s, ?1k (?1l ), i) ? prob(? , ?1k ) . ? ? ?Tl (s),? =(?1 ,...,?l ) A POSG may behave differently under different policies. The quality of a policy is determined by its performance, i.e. by the sum of expected rewards received on it. We use |M | to denote the size of the representation of M .1 The short-term performance for policies ?1k for agent i with POSG M is the expected sum of rewards received by agent i during the next |M | steps by following the policy ?1k , i.e. perf i (M , ?1k ) = ? Ri (s, |M |, ?1k ) . s?S The performance is also called the expected reward. Agents may cooperate or compete in a stochastic game. We want to know whether a stochastic game can be won by some agents. This is formally expressed in the following decision problems. The cooperative agents problem for k agents: instance: query: a POSG M for k agents are there policies ?1 , . . . , ?k under which every agent has positive performance ? V (I.e. ??1 , . . . , ?k : ki=1 perf i (M , ?1k ) > 0 ?) The competing agents problem for 2k agents: instance: query: a POSG M for 2k agents are there policies ?1 , . . . , ?k under which all agents 1, 2, . . . , k have positive performance independent of which policies agents k + 1, k + 2, . . . , 2k choose? (I.e. V ??1 , . . . , ?k ??k+1 , . . . , ?2k : ki=1 perf i (M , ?12k ) > 0 ?) It was shown by Bernstein et al. [1] that the cooperative agents problem for two or more agents is complete for NEXP. 1 The size of the representation of M is the number of bits to encode the entire model, where the function t, o, and r are encoded by tables. We do not consider smaller representations. In fact, smaller representations may increase the complexity. 2 2.2 NEXPNP A Turing machine M has exponential running time, if there is a polynomial p such that for every input x, the machine M on input x halts after at most 2 p(|x|) steps. NEXP is the class of sets that can be decided by a nondeterministic Turing machine within exponential time. NEXPNP is the class of sets that can be decided by a nondeterministic oracle Turing machine within exponential time, when a set in NP is used as an oracle. 
Similar as for the class NPNP , it turns out that a NEXPNP computation can be performed by an NEXP oracle machine that asks exactly one query to a co NP oracle and accepts if and only if the oracle accepts. 2.3 Domino tilings Domino tiling problems are useful for reductions between different kinds of computations. They have been proposed by Wang [3], and we will use it according to the following definition. Definition 2.1 We use [m] to denote the set {0, 1, 2, . . . , m ? 1}. A tile type T = (V, H) consists of two finite sets V, H ? N ? N. A T -tiling of an m-square (m ? N) is a mapping ? : [m] ? [m] ? N that satisfies both the following conditions. 1. Every pair of two neighboured tiles in the same row is in H. I.e. for all r ? [m] and c ? [m ? 1], (? (r, c), ? (r, c + 1)) ? H. 2. Every pair of two neighboured tiles in the same column is in V . I.e. for all r ? [m ? 1] and c ? [m], (? (r, c), ? (r + 1, c)) ? V . The exponential square tiling problem is the set of all pairs (T, 1k ), where T is a tile type and 1k is a string consisting of k 1s (k ? N), such that there exists a T -tiling of the 2k -square. It was shown by Savelsbergh and van Emde Boas [4] that the exponential square tiling problem is complete for NEXP. We will consider the following variant, which we call the exponential ?2 square tiling problem: given a pair (T, 1k ), does there exist a row w of tiles and a T -tiling of the 2k -square with final row w, such that there exists no T -tiling of the 2k -square with initial row w? The proof technique of Theorem 2.29 in [4], which translates Turing machine computations into tilings, is very robust in the sense that simple variants of the square tiling problem can analogously be shown to be complete for different complexity classes. Together with the above characterization of NEXPNP it can be used to prove the following. Theorem 2.2 The exponential ?2 square tiling problem is complete for NEXPNP . 3 Results POSGs can be seen as a generalization of partially-observable Markov decision processes (POMDPs) in that POMDPs have only one agent and POSGs allow for many agents. Papadimitriou and Tsitsiklis [5] proved that it is PSPACE-complete to decide the cooperative agents problem for POMDPs. The result of Bernstein et al. [1] shows that in case of history-dependent policies, the complexity of POSGs is greater than the complexity of POMDPs. We show that this difference does not appear when stationary policies are considered instead of history-dependent policies. For POMDPs, the problem appears to be NP-complete [6]. A stationary policy is a mapping O ? A from observations to actions. Whenever the same observation is made, the same action is chosen by a stationary policy. Theorem 3.1 For any k ? 2, the cooperative agents problem for k agents for stationary policies is NP-complete. Proof We start with proving NP-hardness. A POSG with only one agent is a POMDP. The problem of deciding, for a given POMDP M , whether there exists a stationary policy such that the short-term performance of M is greater than 0, is NP-complete [6]. Hence, the cooperative agents problem for stationary policies is NP-hard. 3 It remains to show containment in NP. Let M = (I, S, s0 , A, O,t, o, r) be a POSG. We assume that t is represented in a straightforward way as a table. Let ?1 , . . . , ?k be a sequence of stationary policies for the k agents. This sequence can be straightforwardly represented using not more space than the representation of t takes. 
Under a fixed sequence of policies, the performance of the POSG for all of the agents can be calculated in polynomial time. Using a guess and check approach (guess the stationary policies and evaluate the POSG), this shows that the cooperative agents problem for stationary policies is in NP. 2 In the same way we can characterize the complexity of a problem that we will need in the proof of Lemma 3.3. Corollary 3.2 The following problem is coNP-complete. instance: query: a POSG M for k agents do all agents under every stationary policy have positive performance? (I.e. V ?stationary ?1 . . . ?k : ki=1 perf i (M , ?1k ) > 0 ?) The cooperative agents problem was shown to be NEXP-complete by Bernstein et al. [1]. Not surprisingly, if the agents compete, the problem becomes harder. Lemma 3.3 For every k ? 1, the competing agents problem for 2k agents is in NEXPNP . Proof The basic idea is as follows. We guess policies ?1 , ?2 , . . . , ?k for agents 1, 2, . . . , k, and construct a POSG that ?implements? these policies and leaves open the actions chosen by agents k + 1, . . ., 2k. This new POSG has states for all short-term trajectories through the origin POSG. Therefore, its size is exponential in the size of the origin POSG. Because the history is stored in every state, and the POSG is loop-free, it turns out that the new POSG can be taken as a POMDP for which a (joint) policy with positive reward is searched. This problem is known to be NP-complete. Let M = (I, S, s0 , A, O,t, o, r) be a POSG with 2k agents, and let ?1 , . . . , ?k be short-term policies for M . We define a k-agent POSG M ? = (I ? , S? , s?0 , A, O? ,t ? , o? , r? ) as follows2 . In M ? , we have as agents those of M , whose policies are not fixed, i.e. I ? = {k + 1, . . . , 2k}. The set of states of M ? is the cross product of states from M and all trajectories up to length |M | over S, i.e. S? = S ? S?|M |+1. The meaning of state (s, u) ? S? is, that state s can be reached on a trajectory u (that ends with s) through M with the fixed policies. The initial state s?0 is s?0 = (s0 , s0 ). The state (s0 , ? ) is taken as a special sink state. After |M | + 2 steps, the sink state is entered in M ? and it is not left thereafter. All rewards gained in the sink state are 0. Now for the transition probabilities. If s is reached on trajectory u in M and the actions a1 , . . . , ak are according to the fixed policies ?1 , . . . , ?k , then the probabiliy of reaching state s? on trajectory us? according to t in M is the same as to reach (s? , us? ) in M ? from (s, u). In the formal description, the sink state has to be considered, too. t ? ((s, u), ak , . . . , a2k , (s, ? u)) ? = ? ?0, t(s, ?1 (o(us, 1)), ? ? ? , ?k (o(us, k)), ak+1 , . . . , a2k , s), ? ? 1, if u 6= ? and us? 6= u? if u? = us, ? |u| ? ? |M |, u 6= ? if |u| = |M | + 1 or u = ? , and u? = ? The observation in M ? is the sequence of observations made in the trajectory that is contained in each state, i.e. o? ((s, w)) = o(w), where o(? ) is any element of O. Finally, the rewards. Essentially, we are interested in the rewards obtained by the agents 1, 2, . . . , k. The rewards obtained by the other agents have no impact on this, only the actions the other agents choose. Therefore, agent i obtains the rewards in M ? that are obtained by agent i ? k in M . In this way, the agents k + 1, . . . , 2k obtain in M ? the same rewards that are obtained by agents 1, 2, . . . , k in M , and this is what we are interested in. This results in r? ((s, u), ak , . . . 
$$r'((s, u), a_{k+1}, \ldots, a_{2k}, i) = r(s, \pi_1(o(u, 1)), \ldots, \pi_k(o(u, k)), a_{k+1}, \ldots, a_{2k}, i - k) \quad \text{for } i = k+1, \ldots, 2k.$$

² $S^{\le|M|}$ denotes the set of sequences of up to $|M|$ elements from $S$. The empty sequence is denoted by $\varepsilon$. For $w \in S^{\le|M|}$ we use $o(w, i)$ to describe the sequence of observations made by agent $i$ on trajectory $w$. The concatenation of sequences $u$ and $w$ is denoted $uw$. We do not distinguish between elements of sets and sequences of one element.

Notice that the size of $M'$ is exponential in the size of $M$. The sink state in $M'$ is the only state that lies on a loop. This means that on all trajectories through $M'$, the sink state is the only state that may appear more than once. All states other than the sink state contain the full history of how they are reached. Therefore, there is a one-to-one correspondence between history-dependent policies for $M$ and stationary policies for $M'$ (with regard to horizon $|M|$). Moreover, the corresponding policies have the same performances.

Claim 1 Let $\pi_1, \ldots, \pi_{2k}$ be short-term policies for $M$, and let $\tilde\pi_{k+1}, \ldots, \tilde\pi_{2k}$ be their corresponding stationary policies for $M'$. For $|M|$ steps and $i = 1, 2, \ldots, k$, $\mathrm{perf}_i(M, \pi_1^{2k}) = \mathrm{perf}_{i+k}(M', \tilde\pi_{k+1}^{2k})$.

Thus, this yields a NEXP^NP algorithm to decide the competing agents problem. The input is a POSG $M$ for $2k$ agents. In the first step, the policies for the agents $1, 2, \ldots, k$ are guessed. This takes nondeterministic exponential time. In the second step, the POSG $M'$ is constructed from the input $M$ and the guessed policies. This takes exponential time (in the length of the input $M$). Finally, the oracle is queried whether $M'$ has positive performance for all agents under all stationary policies. This problem belongs to coNP (Corollary 3.2). Hence, the algorithm shows the competing agents problem to be in NEXP^NP. □

Lemma 3.4 For every $k \ge 2$, the competing agents problem for $2k$ agents is hard for NEXP^NP.

Proof We give a reduction from the exponential $\Sigma_2$ square tiling problem to the competing agents problem. Let $\mathcal{T} = (T, 1^k)$ be an instance of the exponential $\Sigma_2$ square tiling problem, where $T = (V, H)$ is a tile type. We will show how to construct a POSG $M$ with 4 agents from it, such that $\mathcal{T}$ is a positive instance of the exponential $\Sigma_2$ square tiling problem if and only if (1) agents 1 and 2 have a tiling for the $2^k$-square with final row $w$ such that (2) agents 3 and 4 have no tiling for the $2^k$-square with initial row $w$.

The basic idea for the checking of tilings with POSGs for two agents stems from Bernstein et al. [1], but we give a slight simplification of their proof technique and in fact have to extend it for four agents later on. The POSG is constructed so that on every trajectory each agent sees a position in the square. This position is chosen by the process. The only action of the agent that has impact on the process is putting a tile on the given position. In fact, the same position is observed by the agents in different states of the POSG. From a global point of view, the process splits into two parts.

The first part checks whether both agents know the same tiling, without checking that it is a correct tiling. In the state where the agents are asked to put their tiles on the given position, a high negative reward is obtained if the agents put different tiles on that position. "High negative" means that, if there is at least one trajectory on which such a reward is obtained, then the performance of the whole process will be negative.
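For reference, the global condition of Definition 2.1 that these local checks jointly enforce can be stated directly. The following is a small illustrative sketch; the helper name is ours.

```python
def is_T_tiling(tau, H, V):
    """tau: m x m list of tile numbers; H, V: sets of allowed
    (left, right) and (top, bottom) tile pairs from Definition 2.1."""
    m = len(tau)
    rows_ok = all((tau[r][c], tau[r][c + 1]) in H
                  for r in range(m) for c in range(m - 1))
    cols_ok = all((tau[r][c], tau[r + 1][c]) in V
                  for r in range(m - 1) for c in range(m))
    return rows_ok and cols_ok
```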
The second part checks whether the tiling is correct. The idea is to give both agents neighboured positions in the square and to ask each which tile she puts on that position. Notice that the agents do not know in which part of the process they are. This means that they do not know whether the other agent is asked for the same position, or for its upper or right neighbour. This is why the agents cannot cheat the process. A high negative reward will be obtained if the agents' tiles do not fit together.

For the first part, we need to construct a POSG $P_k$ for two agents that allows both agents to make the same sequence of observations consisting of $2k$ bits. This sequence is randomly chosen and encodes a position in a $2^k \times 2^k$ grid. At the end, state "same" is reached, at which no observation is made. At this state, it will be checked whether both agents put the same tile at this position (see later on). The task of $P_k$ is to provide both agents with the same position. Figure 1 shows an example for a $2^4 \times 2^4$-square. The initial state is $s_4$. Dashed arrows indicate transitions with probability 1/2 independent of the actions. The observation of agent 1 is written on the left hand side of the states, and the observation of agent 2 on the right hand side. In $s_4$, the agents make no observation. In $P_k$ both agents always make the same observations.

The second part is more involved. The goal is to provide both agents with neighboured positions in the square. Eventually, it is checked whether the tiles they put on the neighboured positions are according to the tile type $T$. Because the positions are encoded in binary, we can make use of the following fact about subsequent binary numbers (a short sketch verifying it follows this construction). Let $u = u_1 \ldots u_k$ and $w = w_1 \ldots w_k$ be bitwise representations of numbers. If $n_w = n_u + 1$, then for some index $l$ it holds that (1) $u_i = w_i$ for $i = 1, 2, \ldots, l-1$, (2) $w_l = 1$ and $u_l = 0$, and (3) $w_j = 0$ and $u_j = 1$ for $j = l+1, \ldots, k$.

[Figure 1: $P_4$. Figure 2: $C_{3,4}$. Figure 3: $L_{3,4}$.]

The POSG $C_{l,k}$ is intended to provide the agents with two neighboured positions in the same row, where the index of the leftmost bit of the column encoding on which both positions differ is $l$. (The C stands for column.) Figure 2 shows an example for the $2^4$-square. The "final state" of $C_{l,k}$ is the state "hori", from which it is checked whether the agents put horizontally fitting tiles together. In the same way, a POSG $R_{l,k}$ can be constructed (R stands for row), whose task is to check whether two tiles in neighboured rows correspond to a correct tiling. This POSG has the final state "vert", from which it is checked whether two tiles fit vertically.

Finally, we have to construct the last part of the POSG. It consists of the states "same", "hori", "vert" (as mentioned above), "good", "bad", and "sink". All transitions between these states are deterministic (i.e. with probability 1). From state "same" the state "good" is reached if both agents take the same action; otherwise "bad" is reached. From state "hori" the state "good" is reached if action $a_1$ by agent 1 and action $a_2$ by agent 2 make a pair $(a_1, a_2)$ in $H$, i.e. in the set of horizontally correct pairs of tiles; otherwise "bad" is reached. Similarly, from state "vert" the state "good" is reached if action $a_1$ by agent 1 and $a_2$ by agent 2 make a pair $(a_1, a_2)$ in $V$.
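As promised above, the successor-bit fact used by $C_{l,k}$ is easy to verify exhaustively for small $k$; a quick sketch (names are ours):

```python
def split_index(u, w):
    """For bit strings with int(w, 2) == int(u, 2) + 1: common prefix,
    then u has 0 where w has 1, then u is all 1s and w all 0s."""
    l = next(i for i in range(len(u)) if u[i] != w[i])
    assert u[l] == '0' and w[l] == '1'
    assert all(b == '1' for b in u[l + 1:])
    assert all(b == '0' for b in w[l + 1:])
    return l

for n in range(15):                      # exhaustive check for k = 4
    u, w = format(n, '04b'), format(n + 1, '04b')
    print(u, w, split_index(u, w))
```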
All these transitions carry reward 0. From state "good" the state "sink" is reached on every action with reward 1, and from state "bad" the state "sink" is reached on every action with reward $-(2^{2k+2})$. When the state "sink" is reached, the process stays there on any action, and all agents obtain reward 0. All rewards are the same for both agents. (This part can be seen in the overall picture in Figure 4.)

From these POSGs we construct a POSG $T_{2,k}$ that checks whether two agents know the same correct tiling for a $2^k \times 2^k$ square, as described above. There are $2k + 1$ parts of $T_{2,k}$. The initial state of each part can be reached with one step from the initial state $s_0$ of $T_{2,k}$. The parts of $T_{2,k}$ are as follows.

- $P_{2k}$ with initial state $s$ (checks whether two agents have the same tiling).
- For each $l = 1, 2, \ldots, k$, we take $C_{l,k}$. Let $c_l$ be the initial state of $C_{l,k}$.
- For each $l = 1, 2, \ldots, k$, we take $R_{l,k}$. Let $r_l$ be the initial state of $R_{l,k}$.

[Figure 4: $T_{2,k}$.]

There are $2^{2k} + 2\sum_{l=1}^{k} 2^k \cdot 2^{l-1} =: tr(k)$ trajectories with probability $> 0$ through $T_{2,k}$. Notice that $tr(k) < 2^{2k+2}$. From the initial state $s_0$ of $T_{2,k}$, each of the initial states of the parts is reachable independent of the action chosen by the agents. We assign probabilities to the transitions from $s_0$ to the initial states of the parts in a way that eventually gives each trajectory the same probability:
$$t(s_0, a_1, a_2, s') = \begin{cases} \dfrac{2^{2k}}{tr(k)}, & \text{if } s' = s, \text{ i.e. the initial state of } P_{2k} \\[2mm] \dfrac{2^{k+l-1}}{tr(k)}, & \text{if } s' \in \{r_l, c_l \mid l = 1, 2, \ldots, k\} \end{cases}$$
In the initial state $s_0$ and in the initial states of all parts, the observation $\varepsilon$ is made. When a state "same", "hori", or "vert" is reached, each agent has made $2k + 3$ observations, where the first and last are $\varepsilon$ and the remaining $2k$ are each in $\{0, 1\}$. Such a state is the only one where the actions of the agents have impact on the process. Because of the partial observability, the agents cannot know in which part of $T_{2,k}$ they are. The agents can win if they both know the same correct tiling and interpret the sequence of observations as the position in the grid they are asked to put a tile on. On the other hand, if both agents know different tilings, or the tiling they share is not correct, then at least one trajectory will end in a bad state and has reward $-(2^{2k+2})$. The structure of the POSG is given in Figure 4.

Claim 2 Let $(T, 1^k)$ be an instance of the exponential square tiling problem. (1) There exists a polynomial time algorithm that on input $(T, 1^k)$ outputs $T_{2,k}$. (2) There exists a $T$-tiling of the $2^k$-square if and only if there exist policies for the agents under which $T_{2,k}$ has performance $> 0$.

Part (1) is straightforward. Part (2) is not much harder. If there exists a $T$-tiling of the $2^k$-square, both agents use the same policy according to this tiling. Under these policies, state "bad" will not be reached. This guarantees performance $> 0$ for both agents. For the other direction: if there exist policies for the agents under which $T_{2,k}$ has performance $> 0$, then state "bad" is not reached. Hence, both agents use the same policy. It can be shown inductively that this policy "is" a $T$-tiling of the $2^k$-square.

The POSG for the competing agents problem with 4 agents consists of three parts. The first part is a copy of $T_{2,k}$. It is used to check whether the first square can be tiled correctly (by agents 1 and 2).
In this part, the negative rewards are increased in a way that guarantees the performance of the POSG to be negative whenever agents 1 and 2 do not correctly tile their square. The second part is a modified copy of T_{2,k}. It is used to check whether the second square can be tiled correctly (by agents 3 and 4). Whenever state bad is left in this copy, reward 0 is obtained, and whenever state good is left, reward -1 is obtained. The third part checks whether agent 1 puts the same tiles into the last row of its square as agent 3 puts into the first row of its square. (See L_{3,4} in Figure 3 as an example.) If this succeeds, the performance of the third part equals 0; otherwise it has performance 1. These three parts run in parallel. If agents 1 and 2 have a tiling for the first square, the performance of the first part equals 1.

- If agents 3 and 4 are able to continue this tiling through their square, the performance of the second part equals -1 and the performance of the third part equals 0. Altogether, the performance of the POSG under these policies equals 0.
- If agents 3 and 4 are not able to continue this tiling through their square, then the combined performance of part 2 and part 3 is strictly greater than -1. Altogether, the performance of the POSG under these policies is > 0.

Lemmas 3.3 and 3.4 together yield completeness of the competing agents problem.

Theorem 3.5 For every k ≥ 2, the competing agents problem for 2k agents is complete for NEXP^NP.

4 Conclusion

We have shown that competition makes life, and computation, more complex. However, in order to do so, we needed teamwork. It is not yet clear what the complexity is of determining the existence of a good strategy for Player I in a 2-person POSG, or a 1-against-many POSG. There are other variations that can be shown to be complete for NEXP^NP, a complexity class that, shockingly, has not been well explored. We look forward to further results about the complexity of POSGs, and to additional NEXP^NP-completeness results for familiar AI and ML problems.
Efficient Principled Learning of Thin Junction Trees

Anton Chechetka  Carlos Guestrin
Carnegie Mellon University

Abstract

We present the first truly polynomial algorithm for PAC-learning the structure of bounded-treewidth junction trees, an attractive subclass of probabilistic graphical models that permits both the compact representation of probability distributions and efficient exact inference. For a constant treewidth, our algorithm has polynomial time and sample complexity. If a junction tree with sufficiently strong intra-clique dependencies exists, we provide strong theoretical guarantees in terms of KL divergence of the result from the true distribution. We also present a lazy extension of our approach that leads to very significant speedups in practice, and demonstrate the viability of our method empirically, on several real-world datasets. One of our key new theoretical insights is a method for bounding the conditional mutual information of arbitrarily large sets of variables with only polynomially many mutual information computations on fixed-size subsets of variables, if the underlying distribution can be approximated by a bounded-treewidth junction tree.

1 Introduction

In many applications, e.g., medical diagnosis or datacenter performance monitoring, probabilistic inference plays an important role: to decide on a patient's treatment, it is useful to know the probability of various illnesses given the known symptoms. Thus, it is important to be able to represent probability distributions compactly and perform inference efficiently. Here, probabilistic graphical models (PGMs) have been successful as compact representations for probability distributions. In order to use a PGM, one needs to define its structure and parameter values. Usually, we only have data (i.e., samples from a probability distribution), and learning the structure from data is thus a crucial task. For most formulations, the structure learning problem is NP-complete, cf. [10]. Most structure learning algorithms only guarantee that their output is a local optimum. One of the few notable exceptions is the work of Abbeel et al. [1] on learning the structure of factor graphs, which provides probably approximately correct (PAC) learnability guarantees. While PGMs can represent probability distributions compactly, exact inference in compact models, such as those of Abbeel et al., remains intractable [7].

An attractive solution is to use junction trees (JTs) of limited treewidth, a subclass of PGMs that permits efficient exact inference. For treewidth k = 1 (trees), the most likely (MLE) structure of a junction tree can be learned efficiently using the Chow-Liu algorithm [6], but the representational power of trees is often insufficient. We address the problem of learning JTs for fixed treewidth k > 1. Learning the most likely such JT is NP-complete [10]. While there are algorithms with global guarantees for learning fixed-treewidth JTs [10, 13], there has been no polynomial algorithm with PAC guarantees. The guarantee of [10] is in terms of the difference in log-likelihood of the MLE JT and the model where all variables are independent: the result is guaranteed to achieve at least a constant fraction of that difference. The constant does not improve as the amount of data increases, so it does not imply PAC learnability. The algorithm of [13] has PAC guarantees, but its complexity is exponential. In contrast, we provide a truly polynomial algorithm with PAC guarantees. The contributions of this paper are as follows:
- A theoretical result (Lemma 4) that upper bounds the conditional mutual information of arbitrarily large sets of random variables in polynomial time. In particular, we do not assume that an efficiently computable mutual information oracle exists.
- The first polynomial algorithm for PAC-learning the structure of limited-treewidth junction trees with strong intra-clique dependencies. We provide graceful degradation guarantees for distributions that are only approximately representable by JTs with fixed treewidth.
- A lazy heuristic that makes the algorithm practical.
- Empirical evidence of the viability of our approach on real-world datasets.

[Figure 1: A junction tree. Rectangles denote cliques; separators are marked on the edges.]

Algorithm 1: Naive approach to structure learning
Input: V, oracle I(·, · | ·), treewidth k, threshold ε
1: L ← ∅   // L is a set of "useful components"
2: for S ⊂ V s.t. |S| = k do
3:   for Q ⊆ V−S do
4:     if I(Q, V−SQ | S) ≤ ε then
5:       L ← L ∪ (S, Q)
6: return FindConsistentTree(L)

2 Bounded treewidth graphical models

In general, even to represent a probability distribution P(V) over discrete variables¹ V we need space exponential in the size n of V. However, junction trees of limited treewidth allow compact representation and tractable exact inference. We briefly review junction trees (for details see [7]). Let C = {C_1, ..., C_m} be a collection of subsets of V. Elements of C are called cliques. Let T be a set of edges connecting pairs of cliques such that (T, C) is a tree.

Definition 1. A tree (T, C) is a junction tree iff it satisfies the running intersection property (RIP): ∀C_i, C_j ∈ C and ∀C_k on the (unique) simple path between C_i and C_j, x ∈ C_i ∩ C_j ⇒ x ∈ C_k.

A set S_ij ⊆ C_i ∩ C_j is called the separator corresponding to an edge (i−j) from T. The size of a largest clique in a junction tree minus one is called the treewidth of that tree. For example, in the junction tree in Fig. 1, variable x_2 is contained in both clique 3 and clique 5, so it has to be contained in clique 2, because 2 is on the simple path between 3 and 5. The largest clique in Fig. 1 has size 3, so the treewidth of that junction tree is 2.

A distribution P(V) is representable using junction tree (T, C) if instantiating all variables in a separator S_ij renders the variables on different sides of S_ij independent. Denote the fact that A is independent of B given C by (A ⊥ B | C). Let C_ij^i be the cliques that can be reached from C_i in (T, C) without using edge (i−j), and denote these reachable variables by V_ij^i ≡ (∪_{C_k ∈ C_ij^i} C_k) \ S_ij. For example, in Fig. 1, S_12 = {x_1, x_5}, V_12^1 = {x_4, x_6}, V_12^2 = {x_2, x_3, x_7}.

Definition 2. P(V) factors according to junction tree (T, C) iff ∀(i−j) ∈ T, (V_ij^i ⊥ V_ij^j | S_ij).

If a distribution P(V) factors according to some junction tree of treewidth k, we will say that P(V) is k-JT representable. In this case, a projection P_{(T,C)} of P on (T, C), defined as

P_{(T,C)} = ∏_{C_i ∈ C} P(C_i) / ∏_{(i−j) ∈ T} P(S_ij),   (1)

is equal to P itself. For clarity, we will only consider maximal junction trees, where all separators have size k. If P is k-JT representable, it also factors according to some maximal JT of treewidth k.

In practice the notion of conditional independence is too strong. Instead, a natural relaxation is to require sets of variables to have low conditional mutual information I. Denote by H(A) the entropy of A; then I(A, B | S) ≡
H(A | S) − H(A | B, S) is nonnegative, and zero iff (A ⊥ B | S). Intuitively, I(A, B | S) shows how much new information about A we can extract from B if we already know S.

Definition 3. (T, C) is an ε-junction tree for P(V) iff ∀(i−j) ∈ T: I(V_ij^i, V_ij^j | S_ij) ≤ ε.

¹ Notation note: throughout the paper, we use small letters (x, y) to denote variables, capital letters (V, C) to denote sets of variables, and double-barred font (C, D) to denote sets of sets.

If there exists an ε-junction tree (T, C) for P(V), we will say that P is k-JT ε-representable. In this case, the Kullback-Leibler divergence of the projection (1) of P on (T, C) from P is bounded [13]:

KL(P, P_{(T,C)}) ≤ nε.   (2)

This bound means that if we have an ε-junction tree for P(V), then instead of P we can use its tractable principled approximation P_{(T,C)} for inference. In this paper, we address the problem of learning the structure of such a junction tree from data (samples from P).

3 Structure learning

In this paper, we address the following problem: given data, such as multiple temperature readings from sensors in a sensor network, we treat each datapoint as an instantiation of the random variables V and seek to find a good approximation of P(V). We will assume that P(V) is k-JT ε-representable for some ε and aim to find an ε′-junction tree for P with the same treewidth k and with ε′ as small as possible. Note that the maximal treewidth k is considered to be a constant and not a part of the problem input. The complexity of our approach is exponential in k.

Let us initially assume that we have an oracle I(·, · | ·) that can compute the mutual information I(A, B | C) exactly for any disjoint subsets A, B, C ⊆ V. This is a very strict requirement, which we address in the next section. Using the oracle I, a naive approach would be to evaluate² I(Q, V−QS | S) for all possible Q, S ⊂ V s.t. |S| = k and record all pairs (S, Q) with I(Q, V−QS | S) ≤ ε into a list L. We will say that a junction tree (T, C) is consistent with a list L iff for every separator S_ij of (T, C) it holds that (S_ij, V_ij^i) ∈ L. After L is formed, any junction tree consistent with L would be an ε-junction tree for P(V). Such a tree would be found by some FindConsistentTree procedure, implemented, e.g., using constraint satisfaction. Alg. 1 summarizes this idea.

Algorithm 2: LTCI: find Conditional Independencies in Low-Treewidth distributions
Input: V, separator S, oracle I(·, · | ·), threshold δ, max set size q
1: Q_S ← ∪_{x∈V−S} {x}   // Q_S is a set of singletons
2: for A ⊆ V−S s.t. |A| ≤ q do
3:   if min_{X⊂A} I(X, A−X | S) > δ then   // find min with Queyranne's alg.
4:     merge all Q_i ∈ Q_S s.t. Q_i ∩ A ≠ ∅
5: return Q_S

Algorithms that follow this outline, including ours, form a class of constraint-based approaches. These algorithms use mutual information tests to constrain the set of possible structures and return one that is consistent with the results of the tests. Unfortunately, using Alg. 1 directly is impractical because its complexity is exponential in the total number of variables n. In the following sections we discuss the inefficiencies of Alg. 1 and present efficient solutions.

3.1 Global independence assertions from local tests

One can see two problems with the inner loop of Alg. 1 (lines 3-5). First, for each separator we need to call the oracle exponentially many times (2^{n−k−1}, once for every Q ⊆ V−S). This drawback is addressed in the next section. Second, the mutual information oracle, I(A, B | S), is called on subsets A and B of size O(n).
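To make the source of this blow-up concrete, here is a direct Python transcription of Alg. 1 (our own sketch; mutual_info stands in for the oracle I(·, · | ·) and is assumed to be supplied by the caller). The inner loop enumerates all subsets of V−S, which is exactly the exponential cost discussed above:

    from itertools import chain, combinations

    def nonempty_subsets(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

    def naive_structure_learning(V, mutual_info, k, eps):
        # Direct transcription of Alg. 1; exponential in n = |V|.
        L = []
        for S in combinations(V, k):                 # candidate separators
            rest = [x for x in V if x not in S]
            for Q in nonempty_subsets(rest):         # exponentially many subsets
                complement = [x for x in rest if x not in Q]
                if mutual_info(Q, complement, S) <= eps:
                    L.append((S, Q))
        return L   # a FindConsistentTree procedure would then search L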
Unfortunately, the best known way of computing mutual information (and estimating I from data) has time and sample complexity exponential in |A| + |B| + |S|. Previous work has not addressed this problem. In particular, the approach of [13] has exponential complexity, in general, because it needs to estimate I for subsets of size O(n). Our first new result states that we can limit ourselves to computing mutual information over small subsets of variables:

Lemma 4. Let P(V) be a k-JT ε-representable distribution. Let S ⊂ V, A ⊆ V−S. If ∀X ⊆ V−S s.t. |X| ≤ k + 1 it holds that I(A ∩ X, V−SA ∩ X | S) ≤ δ, then I(A, V−SA | S) ≤ n(ε + δ).

We can thus compute an upper bound on I(A, V−SA | S) using O(n^{k+1}) (i.e., polynomially many) calls to the oracle I(·, · | ·), and each call will involve at most |S| + k + 1 variables. Lemma 4 also bounds the quality of approximation of P by a projection on any junction tree (T, C):

Corollary 5. If the conditions of Lemma 4 hold for P(V) with S = S_ij and A = V_ij^i for every separator S_ij of a junction tree (T, C), then (T, C) is an n(ε + δ)-junction tree for P(V).

3.2 Partitioning algorithm for weak conditional independencies

Now that we have an efficient upper bound for the oracle I(·, · | ·), let us turn to reducing the number of oracle calls made by Alg. 1 from exponential (2^{n−k−1}) to polynomial. In [13], Narasimhan and Bilmes present an approximate solution to this problem, assuming that an efficient approximation of the oracle I(·, · | ·) exists. A key observation that they relied on is that the function F_S(A) ≡ I(A, V−SA | S) is submodular: F_S(A) + F_S(B) ≥ F_S(A∪B) + F_S(A∩B). Queyranne's algorithm [14] allows the minimization of a submodular function F using O(n³) evaluations of F. [13] combines Queyranne's algorithm with a divide-and-conquer approach to partition V−S into conditionally independent subsets using O(n³) evaluations of I(·, · | ·). However, since I(·, · | ·) is computed for sets of size O(n), the complexity of their approach is still exponential in n, in general. Our approach, called LTCI (Alg. 2), in contrast, has polynomial complexity for q = O(1). We will show that q = O(1) suffices in our approach, which uses LTCI as a subroutine.

² Notation note: for any sets A, B, C we will denote A \ (B ∪ C) as A−BC to lighten the notation.

Algorithm 3: Efficient approach to structure learning
Input: V, oracle I(·, · | ·), treewidth k, threshold δ, L = ∅
1: for S ⊂ V s.t. |S| = k do
2:   for Q ∈ LTCI(V, S, I, δ, k + 2) do
3:     L ← L ∪ (S, Q)
4: return FindConsistentTreeDPGreedy(L)

Algorithm 4: FindConsistentTreeDPGreedy
Input: List L of components (S, Q)
1: for (S, Q) ∈ L in the order of increasing |Q| do
2:   greedily check if (S, Q) is L-decomposable
3:   record the decomposition if it exists
4: if ∃S : (S, V−S) is L-decomposable then
5:   return the corresponding junction tree
6: else return no tree found

To gain intuition for LTCI, suppose there exists an ε-junction tree for P(V), such that S is a separator and subsets B and C are on different sides of S in the junction tree. By definition, this means I(B, C | S) ≤ ε. When we look at a subset A ⊆ B ∪ C, the true partitioning is not known, but setting δ = ε, we can test all possible 2^{|A|−1} ways to partition A into two subsets (X and A−X). If none of the possible partitionings have I(X, A−X | S) ≤ δ, we can conclude that all variables in A are on the same side of separator S in any ε-junction tree that includes S as a separator. Notice also that ∀X ⊂ A: I(X, A−X | S) > δ  ⟺
min_{X⊂A} I(X, A−X | S) > δ, so we can use Queyranne's algorithm to evaluate I(·, · | ·) only O(|A|³) times instead of 2^{|A|−1} times for minimization by exhaustive search. LTCI initially assumes that every variable x forms its own partition Q = {x}. If a test shows that two variables x and y are on the same side of the separator, it follows that their container partitions Q_1 ∋ x, Q_2 ∋ y cannot be separated by S, so LTCI merges Q_1 and Q_2 (line 4 of Alg. 2). This process is then repeated for larger sets of variables, of size up to q, until we converge to a set of partitions that are "almost independent" given S.

Proposition 6. The time complexity of LTCI with |S| = k is O(n^q · n · J^{MI}_{k+q}) ⊆ O(n^{q+1} · J^{MI}_{k+q}), where J^{MI}_{k+q} is the time complexity of computing I(A, B | C) for |A| + |B| + |C| = k + q.

It is important that the partitioning algorithm returns partitions that are similar to the connected components V_ij^i of the true junction tree for P(V). Formally, let us define two desirable properties. Suppose (T, C) is an ε-junction tree for P(V), and Q_{S_ij} is the output of the algorithm for separator S_ij and threshold δ. We will say that a partitioning algorithm is correct iff for δ = ε, ∀Q ∈ Q_{S_ij} either Q ⊆ V_ij^i or Q ⊆ V_ij^j. A correct algorithm will never mistakenly put two variables on the same side of a separator. We will say that an algorithm is α-weak iff ∀Q ∈ Q_{S_ij}, I(Q, V−QS_ij | S_ij) ≤ α. For small α, an α-weak algorithm puts variables on different sides of a separator only if the corresponding mutual information between those variables is not too large. Ideally, we want a correct and ε-weak algorithm; for δ = ε it would separate variables that are on different sides of S in a true junction tree, but not introduce any spurious independencies. LTCI, which we use instead of lines 3-5 in Alg. 1, satisfies the first requirement and a relaxed version of the second:

Lemma 7. LTCI, for q ≥ k + 1, is correct and n(ε + (k − 1)δ)-weak.

3.3 Implementing FindConsistentTree using dynamic programming

A concrete form of the FindConsistentTree procedure is the last step needed to make Alg. 1 practical. For FindConsistentTree, we adopt a dynamic programming approach from [2] that was also used in [13] for the same purpose. We briefly review the intuition; see [2] for details. Consider a junction tree (T, C). Let S_ij be a separator in (T, C) and C_ij^i be the set of cliques reachable from C_i without using edge (i−j). Denote by T_ij^i the set of edges from T that connect cliques from C_ij^i. If (T, C) is an ε-junction tree for P(V), then (C_ij^i, T_ij^i) is an ε-junction tree for P(V_ij^i ∪ S_ij). Moreover, the subtree (C_ij^i, T_ij^i) consists of a clique C_i and several sub-subtrees that are each connected to C_i. For example, in Fig. 1 the subtree over cliques 1, 2, 4, 5 can be decomposed into clique 2 and two sub-subtrees: one including cliques {1, 4} and one with clique 5. The recursive structure suggests a dynamic programming approach: given a component (S, Q) such that I(Q, V−QS | S) < ε, check if smaller subtrees can be put together to cover the variables of (S, Q). Formally, we require the following property:

Definition 8. (S, Q) ∈ L is L-decomposable iff ∃D = ∪_i {(S_i, Q_i)} and x ∈ Q s.t.
1. ∀i, (S_i, Q_i) is L-decomposable and ∪_{i=1}^m Q_i = Q \ {x};
2. S_i ⊆ S ∪ {x}, i.e., each subcomponent can be connected directly to the clique (S, x);
3. Q_i ∩ Q_j = ∅, ensuring the running intersection property within the subtree over S ∪ Q.

The set {(S_1, Q_1), ..., (S_m, Q_m)} is called a decomposition of (S, Q).
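Definition 8 can be tested greedily, as described in the paragraph that follows; a minimal Python sketch of that check (our own; components are pairs of frozensets, and L_decomp accumulates components already certified as L-decomposable) is:

    def greedy_decomposable(S, Q, L, L_decomp):
        # Greedy test of Definition 8: try each x in Q as the pivot variable.
        for x in Q:
            covered = frozenset()
            for (Si, Qi) in L:
                if ((Si, Qi) in L_decomp
                        and Si <= (S | {x})           # property 2
                        and Qi <= (Q - {x})           # part of property 1
                        and not (Qi & covered)):      # property 3 (disjointness)
                    covered |= Qi
            if covered == Q - {x}:                    # property 1 (full cover)
                return True
        return False

Running this over L in order of increasing |Q|, adding each success to L_decomp, implements lines 1-3 of Alg. 4; a singleton component (Q = {x}) is trivially decomposable via an empty decomposition.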
Unfortunately, checking whether a decomposition exists is equivalent to an NP-complete exact set cover problem because of the requirement Q_i ∩ Q_j = ∅ in part 3 of Def. 8. This challenging issue was not addressed by [13], where the same algorithm was used. To keep complexity polynomial, we use a simple greedy approach: for every x ∈ Q, starting with an empty candidate decomposition D, add (S_i, Q_i) ∈ L to D if the last two properties of Def. 8 hold for (S_i, Q_i). If eventually Def. 8 holds, return the decomposition D; otherwise return that no decomposition exists. We call the resulting procedure FindConsistentTreeDPGreedy.

Proposition 9. For separator size k, the time complexity of FindConsistentTreeDPGreedy is O(n^{k+2}).

Combining Alg. 2 and FindConsistentTreeDPGreedy, we arrive at Alg. 3. The overall complexity of Alg. 3 is dominated by Alg. 2 and is equal to O(n^{2k+3} · J^{MI}_{2k+2}). In general, FindConsistentTreeDPGreedy with greedy decomposition checks may miss a junction tree that is consistent with the list of components L, but there is a class of distributions for which Alg. 3 is guaranteed to find a junction tree. Intuitively, we require that for every (S_ij, V_ij^i) from an ε-junction tree (T, C), Alg. 2 adds all the components from the decomposition of (S_ij, V_ij^i) to L and nothing else. This requirement is guaranteed for distributions where the variables inside every clique of the junction tree are sufficiently strongly interdependent (have a certain level of mutual information):

Lemma 10. If there exists an ε-JT (T, C) for P(V) s.t. no two edges of T have the same separator, and for every separator S and clique C ∈ C, min_{X⊂C−S} I(X, C−XS | S) > (k + 3)ε (we will call such a (T, C) (k + 3)ε-strongly connected), then Alg. 3, called with δ = ε, will output an nkε-JT for P(V).

4 Sample complexity

So far we have assumed that a mutual information oracle I(·, · | ·) exists for the distribution P(V) and can be efficiently queried. In real life, however, one only has data (i.e., samples from P(V)) to work with. However, we can get a probabilistic estimate of I(A, B | C) that has accuracy ±Δ with probability 1 − γ, using a number of samples and an amount of computation time polynomial in 1/Δ and log(1/γ):

Theorem 11 (Höffgen, [9]). The entropy of a probability distribution over 2k + 2 discrete variables with domain size R can be estimated with accuracy Δ with probability at least (1 − γ) using F(k, R, Δ, γ) ≡ O((R^{4k+4}/Δ²) log²(R^{2k+2}/Δ²) log(R^{2k+2}/γ)) samples from P and the same amount of time.

If we employ this oracle in our algorithms, the performance guarantee becomes probabilistic:

Theorem 12. If there exists a (k + 3)(ε + 2Δ)-strongly connected ε-junction tree for P(V), then Alg. 3, called with δ = ε + Δ and with Î(·, · | ·) based on Thm. 11, using U ≡ F(k, R, Δ, γ/n^{2k+2}) samples and O(n^{2k+3} · U) time, will find a kn(ε + 2Δ)-junction tree for P(V) with probability at least (1 − γ).

Finally, if P(V) is k-JT representable (i.e., ε = 0), and the corresponding junction tree is strongly connected, then we can let both Δ and γ go to zero and use Alg. 3 to find, with probability arbitrarily close to one, a junction tree that approximates P arbitrarily well in time polynomial in 1/Δ and log(1/γ), i.e., the class of strongly connected k-junction trees is probably approximately correctly learnable³.

³ A class P of distributions is PAC learnable if for any P ∈ P, ε > 0, γ > 0 a learning algorithm will output P′ with KL(P, P′) < ε with probability 1 − γ in time polynomial in 1/ε and log(1/γ).
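In practice, the estimate Î(·, · | ·) invoked in Thm. 12 can be a simple plug-in estimator built from empirical counts; a minimal sketch (our own, and it omits the careful accuracy analysis of [9]) is:

    import math
    from collections import Counter

    def entropy(samples, vars_):
        # Plug-in estimate of H(vars_) from samples given as dicts {var: value}.
        counts = Counter(tuple(s[v] for v in vars_) for s in samples)
        n = len(samples)
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    def cond_mutual_info(samples, A, B, S):
        # I(A, B | S) = H(A,S) + H(B,S) - H(A,B,S) - H(S), each term by plug-in.
        A, B, S = list(A), list(B), list(S)
        return (entropy(samples, A + S) + entropy(samples, B + S)
                - entropy(samples, A + B + S) - entropy(samples, S))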
Corollary 13. If there exists an α-strongly connected junction tree for P(V) with α > 0, then for ε < αn, Alg. 3 will learn an ε-junction tree for P with probability at least 1 − γ using O((n^{2k+7}/ε²) log²(n²/ε) log(n/γ)) samples from P(V) and O((n^{4k+10}/ε²) log²(n²/ε) log(n/γ)) computation time.

5 Lazy evaluation of mutual information

Alg. 3 requires the value of the threshold δ as an input. To get tighter quality guarantees, we need to choose the smallest δ for which Alg. 3 finds a junction tree. A priori, this value is not known, so we need a procedure to choose the optimal δ. A natural way to select δ is binary search. For discrete random variables with domain size R, for any P(V), S, x it holds that I(x, V−Sx | S) ≤ log R, so for any δ > log R Alg. 3 is guaranteed to find a junction tree (with all cliques connected to the same separator). Thus, we can restrict the binary search to the range δ ∈ [0, log R]. In binary search, for every value of δ, Alg. 2 checks the result of Queyranne's algorithm minimizing min_{X⊂A} I(X, A−X | S) for every |S| = k, |A| ≤ k + 2, which amounts to O(n^{2k+2}) complexity per value of δ. It is possible, however, to find the optimal δ while only checking min_{X⊂A} I(X, A−X | S) for every S and A once over the course of the search process.

Intuitively, think of the set of partitions Q_S in Alg. 2 as the set of connected components of a graph with variables as vertices, and a hyper-edge connecting all variables from A whenever min_{X⊂A} I(X, A−X | S) > δ. As δ increases, some of the hyper-edges disappear, and the number of connected components (or independent sets) may increase. More specifically, a graph Q_S is maintained for each separator S. For all S and A, add to Q_S a hyper-edge connecting all variables in A, annotated with strength_S(A) ≡ min_{X⊂A} I(X, A−X | S). Until FindConsistentTree(∪_S Q_S) returns a tree, increase δ to min_{S,A: hyperedge_S(A) ∈ Q_S} strength_S(A) (i.e., the strength of the weakest remaining hyper-edge), and remove hyperedge_S(A) from Q_S. Fig. 2(a) shows an example evolution of Q_{x_4} for k = 1.

To further save computation time, we exploit two observations. First, if A is a subset of a connected component Q ∈ Q_S, adding hyperedge_S(A) to Q_S will not change Q_S. Thus, we do not test any hyper-edge A which is contained in a connected component. However, as δ increases, a component may become disconnected, because such an edge was not added. Therefore, we may have more components than we should (inducing incorrect independencies). This issue is addressed by our second insight: if we find a junction tree for a particular value of δ, we only need to recheck the components used in this tree. These insights lead to a simple, lazy procedure: if FindConsistentTree returns a tree (T, C), we check the hyper-edges that intersect the components used to form (T, C). If none of these edges are added, then we can return (T, C) for this value of δ. Otherwise, some of the Q_S have changed; we can iterate this procedure until we find a solution.

6 Evaluation

To evaluate our approach, we applied it to two real-world datasets (sensor network temperature [8] and San Francisco Bay area traffic [11]) and one artificial dataset (samples from the ALARM Bayesian network [4]). Our implementation, called LPACJT, uses the lazy evaluations of I(·, · | ·) from Section 5.
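That lazy schedule reduces, in essence, to a sweep over hyper-edge strengths. The sketch below (our own simplification; it omits the component-rechecking optimization, and the precomputed strengths dictionary and the find_consistent_tree routine are assumed given) raises δ one weakest hyper-edge at a time:

    def lazy_threshold_search(strengths, find_consistent_tree):
        # strengths: dict (S, A) -> strength_S(A) = min over X of I(X, A-X | S).
        # find_consistent_tree: takes the surviving hyper-edges, returns a tree or None.
        alive = set(strengths)
        delta = 0.0
        for key, strength in sorted(strengths.items(), key=lambda kv: kv[1]):
            tree = find_consistent_tree(alive)
            if tree is not None:
                return tree, delta          # smallest delta admitting a tree
            delta = strength                # raise delta to the weakest edge ...
            alive.discard(key)              # ... which then disappears
        return find_consistent_tree(alive), delta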
As baselines for comparison, we used a simple hill-climbing heuristic⁴, a combination of LPACJT with hill-climbing (where intermediate results returned by FindConsistentTree were used as starting points for hill-climbing), the Chow-Liu algorithm, and the algorithms of [10] (denoted Karger-Srebro) and [17] (denoted OBS). All experiments were run on a Pentium D 3.4 GHz, with runtimes capped to 10 hours. The necessary entropies were cached in advance.

ALARM. This discrete-valued data was sampled from a known Bayesian network with treewidth 4. We learned models with treewidth 3 because of computational concerns. Fig. 2(b) shows the per-point log-likelihood of the learned models on test data depending on the amount of training data. We see that on small training datasets LPACJT finds better models than a basic hill-climbing approach, but worse than the OBS algorithm of [17] and Chow-Liu. The implementation of OBS was the only one to use regularization, so this outcome can be expected. We can also conclude that on this dataset our approach overfits more than hill-climbing. For large enough training sets, LPACJT results achieve the likelihood of the true model, despite being limited to models with smaller treewidth. Chow-Liu performs much worse, since it is limited to models with treewidth 1. Fig. 2(c) shows an example of a structure found by LPACJT for the ALARM data. LPACJT only missed 3 edges of the true model.

⁴ Hill-climbing had two kinds of moves available: replace variable x with variable y in a connected sub-junction tree, or replace a leaf clique C_i with another clique (C_i \ S_ij) ∪ S_mr connected to a separator S_mr.

[Figure 2: (a) an example evolution of Q_S (Section 5); (b) ALARM log-likelihood vs. training set size; (c) an ALARM structure learned by LPACJT; (d) TEMPERATURE log-likelihood; (e) a TEMPERATURE sample run (evolution of the test set likelihood over time, 2K training points); (f) TRAFFIC log-likelihood. In 2(c), nodes denote variables and edges connect variables that belong to the same clique; green edges belong to both the true and learned models, blue edges only to the learned model, and red edges only to the true one.]

TEMPERATURE. This data is from a 2-month deployment of 54 sensor nodes (15K datapoints) [8]. Each variable was discretized into 4 bins and we learned models of treewidth 2. Since the locations of the sensors form an ∞-like shape with two loops, the problem of learning a thin junction tree for this data is hard. In Fig. 2(d) one can see that LPACJT performs almost as well as the hill-climbing-based approaches and, on large training sets, much better than the Karger-Srebro algorithm. Again, as expected, LPACJT outperforms the Chow-Liu algorithm by a significant margin if there is enough data available, but overfits on the smallest training sets.
Fig. 2(e) shows the evolution of the test set likelihood of the best (highest training set likelihood) structure identified by LPACJT over time. The first structure was identified within 5 minutes, and the final result within 1 hour.

TRAFFIC. This dataset contains traffic flow information measured every 5 minutes in 8K locations in California for 1 month [11]. We selected 32 locations in the San Francisco Bay area for the experiments, discretized the traffic flow values into 4 bins, and learned models of treewidth 3. All non-regularized algorithms, including LPACJT, give results of essentially the same quality.

7 Relation to prior work and conclusions

For a brief overview of the prior work, we refer the reader to Fig. 3. Most closely related to LPACJT are the approaches for learning factor graphs of [1] and for learning limited-treewidth Markov nets of [13, 10]. Unlike our approach, [1] does not guarantee low treewidth of the result, instead settling for compactness. [13, 10] guarantee low treewidth. However, [10] only guarantees that the difference of the log-likelihood of the result from the fully independent model is within a constant factor of the difference for the most likely JT: LLH(optimal) − LLH(indep.) ≤ 8^k k!² (LLH(learned) − LLH(indep.)). [13] has exponential complexity, in general. Our approach has polynomial complexity and quality guarantees that hold for strongly connected k-JT ε-representable distributions, while those of [13] only hold for ε = 0.

We have presented the first truly polynomial algorithm for learning junction trees with limited treewidth. Based on a new upper bound for conditional mutual information that can be computed using polynomial time and number of samples, our algorithm is guaranteed to find a junction tree that is close in KL divergence to the true distribution, for strongly connected k-JT ε-representable distributions. As a special case of these guarantees, we show PAC-learnability of strongly connected k-JT representable distributions. We believe that the new theoretical insights herein provide a significant step in the understanding of structure learning in graphical models, and are useful for the analysis of other approaches to the problem. In addition to the theory, we have also demonstrated experimentally that these theoretical ideas are viable, and can, in the future, be used in the development of fast and effective structure learning heuristics.
Guarantees column shows whether the result is a local or global optimum, whether there are PAC guarantees, or whether the difference of the log-likelihood of the result from the fully independent model is within a const-factor from the difference of the most likely JT. True distribution shows for what class of distributions the guarantees hold. ? superscript means per-iteration complexity, poly - O(nO(k) ), exp? - exponential in general, but poly for special cases. PAC? and PAC? mean PAC with (different) graceful degradation guarantees. 8 Acknowledgments This work is supported in part by NSF grant IIS-0644225 and by the ONR under MURI N000140710747. C. Guestrin was also supported in part by an Alfred P. Sloan Fellowship. We thank Nathan Srebro for helpful discussions, and Josep Roure, Ajit Singh, CMU AUTON lab, Mark Teyssier, Daphne Koller, Percy Liang and Nathan Srebro for sharing their source code. References [1] P. Abbeel, D. Koller, and A. Y. Ng. Learning factor graphs in polynomial time and sample complexity. JMLR, 7, 2006. [2] S. Arnborg, D. G. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2):277?284, 1987. [3] F. R. Bach and M. I. Jordan. Thin junction trees. In NIPS, 2002. [4] I. Beinlich, J. Suermondt, M. Chavez, and G. Cooper. The ALARM monitoring system: A case study with two probablistic inference techniques for belief networks. In Euro. Conf. on AI in Medicine, 1988. [5] A. Choi, H. Chan, and A. Darwiche. On Bayesian network approximation by edge deletion. In UAI, 2005. [6] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462?467, 1968. [7] R. G. Cowell, P. A. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems (Information Science and Statistics). Springer, May 2003. [8] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-driven data acquisition in sensor networks. In VLDB, 2004. [9] K. U. H?offgen. Learning and robust learning of product distributions. In COLT, 1993. [10] D. Karger and N. Srebro. Learning Markov networks: Maximum bounded tree-width graphs. SODA-01. [11] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. UAI-05. [12] M. Meil?a and M. I. Jordan. Learning with mixtures of trees. JMLR, 1:1?48, 2001. [13] M. Narasimhan and J. Bilmes. PAC-learning bounded tree-width graphical models. In UAI, 2004. [14] M. Queyranne. Minimizing symmetric submodular functions. Math. Programming, 82(1):3?12, 1998. [15] A. Singh and A. Moore. Finding optimal Bayesian networks by dynamic programming. Technical Report CMU-CALD-05-106, Carnegie Mellon University, Center for Automated Learning and Discovery, 2005. [16] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2001. [17] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In UAI, 2005. 8
A Bayesian Framework for Cross-Situational Word-Learning

Michael C. Frank, Noah D. Goodman, and Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
{mcfrank, ndg, jbt}@mit.edu

Abstract

For infants, early word learning is a chicken-and-egg problem. One way to learn a word is to observe that it co-occurs with a particular referent across different situations. Another way is to use the social context of an utterance to infer the intended referent of a word. Here we present a Bayesian model of cross-situational word learning, and an extension of this model that also learns which social cues are relevant to determining reference. We test our model on a small corpus of mother-infant interaction and find it performs better than competing models. Finally, we show that our model accounts for experimental phenomena including mutual exclusivity, fast-mapping, and generalization from social cues.

To understand the difficulty of an infant word-learner, imagine walking down the street with a friend who suddenly says "dax blicket philbin na fivy!" while at the same time wagging her elbow. If you knew any of these words, you might infer from the syntax of her sentence that blicket is a novel noun, and hence the name of a novel object. At the same time, if you knew that this friend indicated her attention by wagging her elbow at objects, you might infer that she intends to refer to an object in a nearby show window. On the other hand, if you already knew that "blicket" meant the object in the window, you might be able to infer these elements of syntax and social cues. Thus, the problem of early word-learning is a classic chicken-and-egg puzzle: in order to learn word meanings, learners must use their knowledge of the rest of language (including rules of syntax, parts of speech, and other word meanings) as well as their knowledge of social situations. But in order to learn about the facts of their language they must first learn some words, and in order to determine which cues matter for establishing reference (for instance, pointing and looking at an object, but normally not waggling your elbow) they must first have a way to know the intended referent in some situations.

For theories of language acquisition, there are two common ways out of this dilemma. The first involves positing a wide range of innate structures which determine the syntax and categories of a language and which social cues are informative. (Though even when all of these elements are innately determined, using them to learn a language from evidence may not be trivial [1].) The other alternative involves bootstrapping: learning some words, then using those words to learn how to learn more. This paper gives a proposal for the second alternative. We first present a Bayesian model of how learners could use a statistical strategy, cross-situational word-learning, to learn how words map to objects, independent of syntactic and social cues. We then extend this model to a true bootstrapping situation: using social cues to learn words while using words to learn social cues. Finally, we examine several important phenomena in word learning: mutual exclusivity (the tendency to assign novel words to novel referents), fast-mapping (the ability to assign a novel word in a linguistic context to a novel referent after only a single use), and social generalization (the ability to use social context to learn the referent of a novel word).
Without adding additional specialized machinery, we show how these can be explained within our model as the result of domain-general probabilistic inference mechanisms operating over the linguistic domain.

[Figure 1: Graphical model describing the generation of words (W_s) from an intention (I_s) and lexicon (ℓ), and of the intention from the objects present in a situation (O_s). The plate indicates multiple copies of the model for different situation/utterance pairs (s). Dotted portions indicate additions to include the generation of social cues S_s from intentions.]

1 The Model

Behind each linguistic utterance is a meaning that the speaker intends to communicate. Our model operates by attempting to infer this intended meaning (which we call the intent) on the basis of the utterance itself and observations of the physical and social context. For the purpose of modeling early word learning, which consists primarily of learning words for simple object categories, we assume that intents are simply groups of objects.

To state the model formally, we assume the non-linguistic situation consists of a set O_s of objects and that utterances are unordered sets of words W_s.¹ The lexicon ℓ is a (many-to-many) map from words to objects, which captures the meaning of those words. (Syntax enters our model only obliquely, by different treatment of words depending on whether they are in the lexicon or not, that is, whether they are common nouns or other types of words.) In this setting the speaker's intention will be captured by a set of objects in the situation to which she intends to refer: I_s ⊆ O_s. This setup is indicated in the graphical model of Fig. 1. Different situation-utterance pairs W_s, O_s are independent given the lexicon ℓ, giving:

P(W | ℓ, O) = ∏_s Σ_{I_s} P(W_s | I_s, ℓ) · P(I_s | O_s).   (1)

We further simplify by assuming that P(I_s | O_s) ∝ 1 (which could be refined by adding a more detailed model of the communicative intentions a person is likely to form in different situations). We will assume that words in the utterance are generated independently given the intention and the lexicon, and that the length of the utterance is observed. Each word is then generated from the intention set and lexicon by first choosing whether the word is a referential word or a non-referential word (from a binomial distribution of weight γ), then, for referential words, choosing which object in the intent it refers to (uniformly). This process gives:

P(W_s | I_s, ℓ) = ∏_{w ∈ W_s} [ (1 − γ) P_NR(w | ℓ) + γ Σ_{x ∈ I_s} (1/|I_s|) P_R(w | x, ℓ) ].   (2)

The probability of word w referring to object x is P_R(w | x, ℓ) ∝ δ_{x ∈ ℓ(w)}, and the probability of word w occurring as a non-referring word is

P_NR(w | ℓ) ∝ 1 if ℓ(w) = ∅, and κ otherwise   (3)

(this probability is a distribution over all words in the vocabulary, not just those in the lexicon ℓ). The constant κ is a penalty for using a word in the lexicon as a non-referring word; this penalty indirectly enforces a light-weight difference between two different groups of words (parts of speech): words that refer and words that do not refer.

Because the generative structure of this model exposes the role of the speaker's intentions, it is straightforward to add non-linguistic social cues.

¹ Note that, since we ignore word order, the distribution of words in a sentence should be exchangeable given the lexicon and situation. This implies, by de Finetti's theorem, that they are independent conditioned on a latent state; we assume that the latent state giving rise to words is the intention of the speaker.
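Eqs. 2 and 3 translate directly into code. A minimal sketch (our own; the parameter names gamma and kappa follow the mixing weight and penalty introduced above, and the lexicon is a dict from words to sets of objects):

    import math

    def utterance_logprob(words, intent, lexicon, vocab, gamma, kappa):
        # log P(W_s | I_s, lexicon), following Eqs. 2-3.
        nr_weight = {v: (1.0 if not lexicon.get(v) else kappa) for v in vocab}
        z_nr = sum(nr_weight.values())          # normalizer for Eq. 3
        logp = 0.0
        for w in words:
            p_nr = nr_weight[w] / z_nr
            p_r = 0.0
            for x in intent:
                # P_R(w | x, lexicon): uniform over words whose entry contains x.
                referring = [v for v in vocab if x in lexicon.get(v, set())]
                if referring and x in lexicon.get(w, set()):
                    p_r += (1.0 / len(intent)) * (1.0 / len(referring))
            logp += math.log((1.0 - gamma) * p_nr + gamma * p_r)
        return logp

For example, with lexicon = {"oink": {"PIG"}}, an utterance containing "oink" scores higher under intents that include PIG than under those that do not.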
We assume that social cues such as pointing are generated from the speaker's intent independently of the linguistic aspects (as shown in the dotted arrows of Fig. 1). With the addition of social cues S_s, Eq. 1 becomes:

P(W | ℓ, O) = ∏_s Σ_{I_s} P(W_s | I_s, ℓ) · P(S_s | I_s) · P(I_s | O_s).   (4)

We assume that the social cues are a set S_i(x) of independent binary (cue present or not) feature values for each object x ∈ O_s, which are generated through a noisy-or process:

P(S_i(x) = 1 | I_s, r_i, b_i) = 1 − (1 − b_i)(1 − r_i)^{δ_{x ∈ I_s}}.   (5)

Here r_i is the relevance of cue i, while b_i is its base rate. For the model without social cues, the posterior probability of a lexicon given a set of situated utterances is:

P(ℓ | W, O) ∝ P(W | ℓ, O) P(ℓ).   (6)

And for the model with social cues the joint posterior over lexicon and cue parameters is:

P(ℓ, r, b | W, O) ∝ P(W | ℓ, r, b, O) P(ℓ) P(r, b).   (7)

We take the prior probability of a lexicon to be exponential in its size, P(ℓ) ∝ e^{−α|ℓ|}, and the prior probability of the social cue parameters to be uniform. Given the model above and the corpus described below, we found the best lexicon (or lexicon and cue parameters) according to Eqs. 6 and 7 by MAP inference using stochastic search.²

² In order to speed convergence we used a simulated tempering scheme with three temperature chains and a range of data-driven proposals.

2 Previous work

While cross-situational word-learning has been widely discussed in the empirical literature, e.g., [2], there have been relatively few attempts to model this process computationally. Siskind [3] created an ambitious model which used deductive rules to make hypotheses about propositional word meanings and their use across situations. This model achieved surprising success in learning word meanings in artificial corpora, but was extremely complex and relied on the availability of fully coded representations of the meaning of each sentence, making it difficult to extend to empirical corpus data.

More recently, Yu and Ballard [4] have used a machine translation model (similar to IBM Translation Model I) to learn word-object association probabilities. In their study, they used a pre-existing corpus of mother-infant interactions and coded the objects present during each utterance (an example from this corpus, illustrated with our own coding scheme, is shown in Fig. 2). They applied their translation model to estimate the probability of an object given a word, creating a table of associations between words and objects. Using this table, they extracted a lexicon (a group of word-object mappings) which was relatively accurate in its guesses about the names of objects that were being talked about. They further extended their model to incorporate prosodic emphasis on words (a useful cue which we will not discuss here) and joint attention on objects. Joint attention was coded by hand, isolating a subset of objects which were attended to by both mother and infant. Their results reflected a sizable increase in recall with the use of social cues.

3 Materials and Assessment Methods

To test the performance of our model on natural data, we used the Rollins section of the CHILDES corpus [5]. For comparison with the model of Yu and Ballard [4], we chose the files me03 and di06, each of which consisted of approximately ten minutes of interaction between a mother and a preverbal infant playing with objects found in a box of toys.
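Each coded situation-utterance pair in such a corpus amounts to a set of visible objects, a word list, and binary cue values per object; the cue portion is scored with the noisy-or of Eq. 5. A minimal sketch (our own, with hypothetical values; r and b are the per-cue relevance and base-rate parameters):

    import math

    def cue_loglik(cues, objects, intent, r, b):
        # log P(S_s | I_s): each binary cue feature follows the noisy-or of Eq. 5.
        ll = 0.0
        for i in range(len(r)):
            for x in objects:
                p_on = 1.0 - (1.0 - b[i]) * ((1.0 - r[i]) if x in intent else 1.0)
                ll += math.log(p_on if cues[i][x] else 1.0 - p_on)
        return ll

    # Hypothetical coded situation: one cue feature, present on the pig only.
    objects = ["PIG", "BOOK"]
    cues = [{"PIG": 1, "BOOK": 0}]
    print(cue_loglik(cues, objects, {"PIG"}, r=[0.8], b=[0.1]) >
          cue_loglik(cues, objects, {"BOOK"}, r=[0.8], b=[0.1]))   # True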
2 Previous work

While cross-situational word learning has been widely discussed in the empirical literature, e.g. [2], there have been relatively few attempts to model this process computationally. Siskind [3] created an ambitious model which used deductive rules to make hypotheses about propositional word meanings from their use across situations. This model achieved surprising success in learning word meanings in artificial corpora, but it was extremely complex and relied on the availability of fully coded representations of the meaning of each sentence, making it difficult to extend to empirical corpus data. More recently, Yu and Ballard [4] have used a machine translation model (similar to IBM Translation Model I) to learn word-object association probabilities. In their study, they used a pre-existing corpus of mother-infant interactions and coded the objects present during each utterance (an example from this corpus, illustrated with our own coding scheme, is shown in Fig. 2). They applied their translation model to estimate the probability of an object given a word, creating a table of associations between words and objects. Using this table, they extracted a lexicon (a group of word-object mappings) which was relatively accurate in its guesses about the names of objects that were being talked about. They further extended their model to incorporate prosodic emphasis on words (a useful cue which we will not discuss here) and joint attention on objects. Joint attention was coded by hand, isolating a subset of objects which were attended to by both mother and infant. Their results reflected a sizable increase in recall with the use of social cues.

3 Materials and Assessment Methods

To test the performance of our model on natural data, we used the Rollins section of the CHILDES corpus [5]. For comparison with the model by Yu and Ballard [4], we chose the files me03 and di06, each of which consisted of approximately ten minutes of interaction between a mother and a preverbal infant playing with objects found in a box of toys. Because we were not able to obtain the exact corpus Yu and Ballard used, we recoded the objects in the videos and added a coding of the social cues co-occurring with each utterance. We annotated each utterance with the set of objects visible to the infant and with a social coding scheme (for an illustrated example, see Figure 2). Our social code included seven features: infant's eyes, infant's hands, infant's mouth, infant touching, mother's hands, mother's eyes, mother touching. For each utterance, this coding created an object by social feature matrix.

Figure 2: A still frame from our corpus showing the coding of objects and social cues. We coded all mid-sized objects visible to the infant as well as social information including what both mother and infant were touching and looking at.

We evaluated all models based on their coverage of a gold-standard lexicon, computing precision (how many of the word-object mappings in a lexicon were correct relative to the gold standard), recall (how many of the total correct mappings were found), and their harmonic mean, F-score. However, the gold-standard lexicon for word learning is not obvious. For instance, should it include the mapping between the plural "pigs" or the sound "oink" and the object PIG? Should a gold-standard lexicon include word-object pairings that are correct but were not present in the learning situation? In the results we report, we included those pairings which would be useful for a child to learn (e.g., "oink" → PIG) but not those pairings which were not observed to co-occur in the corpus (however, modifying these decisions did not affect the qualitative pattern of results).
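For reference, a candidate lexicon can be scored against the gold standard as in the short sketch below; the candidate and gold lexicons are hypothetical sets of (word, object) pairs invented for the example.

def score_lexicon(found, gold):
    # Precision, recall, and F-score (harmonic mean) of a candidate lexicon
    # against a gold standard; both are sets of (word, object) pairs.
    hits = len(found & gold)
    precision = hits / len(found) if found else 0.0
    recall = hits / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f

gold = {("oink", "PIG"), ("pig", "PIG"), ("cow", "COW"), ("ring", "RING")}
found = {("oink", "PIG"), ("cow", "COW"), ("the", "COW")}
print(score_lexicon(found, gold))  # (0.667, 0.5, 0.571)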
4 Results

For the purpose of comparison, we give scores for several other models on the same corpus. We implemented a range of simple associative models based on co-occurrence frequency, conditional probability (both word given object and object given word), and point-wise mutual information. In each of these models, we computed the relevant statistic across the entire corpus and then created a lexicon by including all word-object pairings for which the association statistic met a threshold value. We additionally implemented a translation model (based on Yu and Ballard [4]). Because Yu and Ballard did not include details on how they evaluated their model, we scored it in the same way as the other associative models, by creating an association matrix based on the scores P(O|W) (as given in equation (3) in their paper) and then creating a lexicon based on a threshold value. In order to simulate this type of threshold value for our model, we searched for the MAP lexicon over a range of parameters α in our prior (the larger the prior value, the less probable a larger lexicon; thus this manipulation served to create more or less selective lexicons).

Base model. In Figure 3, we plot the precision and the recall of the lexicons across a range of prior parameter values for our model and the full range of threshold values for the translation model and two of the simple association models (since results for the conditional probability models were very similar but slightly inferior to the performance of mutual information, we did not include them). For our model, we averaged performance at each threshold value across three runs of 5000 search iterations each. Our model performed better than any of the other models on a number of dimensions (best lexicon shown in Table 1), both achieving the highest F-score and showing a better tradeoff between precision and recall at sub-optimal threshold values. The translation model also performed well, increasing precision as the threshold of association was raised. Surprisingly, standard co-occurrence statistics proved to be relatively ineffective at extracting high-scoring lexicons: at any given threshold value, these models included a very large number of incorrect pairs.

Table 1: The best lexicon found by the Bayesian model (α=11, γ=0.2, κ=0.01).
  baby → book; bigbird → bird; bird → rattle; birdie → duck; book → book; hand → hand; hat → hat; meow → kitty; moocow → cow; oink → pig; on → ring; ring → ring; sheep → sheep

Figure 3: Comparison of models on corpus data: we plot model precision vs. recall across a range of threshold values for each model (see text); the legend reports the best F-score achieved by each model (0.54, 0.44, 0.21, and 0.12). Unlike standard ROC curves for classification tasks, the precision and recall of a lexicon depend on the entire lexicon, and irregularities in the curves reflect the small size of the lexicons.

One additional virtue of our model over other associative models is its ability to determine which objects the speaker intended to refer to. In Table 2, we give some examples of situations in which the model correctly inferred the objects that the speaker was talking about.

Social model. While the addition of social cues did not increase corpus performance above that found in the base model, the lexicons which were found by the social model did have several properties that were not present in the base model. First, the model effectively and quickly converged on the social cues that we found subjectively important in viewing the corpus videos. The two cues which were consistently found relevant across the model were (1) the target of the infant's gaze and (2) the caregiver's hand. These data are especially interesting in light of the speculation that infants initially believe their own point of gaze is a good cue to reference, and must learn over the second year that the true cue is the caregiver's point of gaze, not their own [6]. Second, while the social model did not outperform the base model on the full corpus (where many words were paired with their referents several times), on a smaller corpus (taking every other utterance), the social cue model did slightly outperform a model without social cues (max F-score = 0.43 vs. 0.37). Third, the addition of social cues allowed the model to infer the intent of a speaker even in the absence of a word being used. In the right-hand column of Table 2, we give an example of a situation in which the caregiver simply says "see that?" but, from the direction of the infant's eyes and the location of her hand, the model correctly infers that she is talking about the COW, not either of the other possible referents. This kind of inference might lead the way in allowing infants to learn words like pronouns, which serve to pick out an unambiguous focus of attention (one that is so obvious based on social and contextual cues that it does not need to be named).
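The intent inferences illustrated in Table 2 amount to maximizing the per-situation posterior over subsets of the visible objects. A minimal sketch of that computation follows, assuming a likelihood function such as p_utterance from the model sketch above and an optional social-cue scoring function; the names and the exhaustive subset enumeration are illustrative choices, feasible only for the handful of objects visible in a situation.

from itertools import combinations

def infer_intent(words, objects, utterance_lik, social_score=None):
    # MAP intent: the nonempty subset of visible objects maximizing
    # P(W_s | I_s, lexicon) * P(S_s | I_s) (Eq. 4), with P(I_s | O_s) uniform.
    best, best_p = None, -1.0
    for r in range(1, len(objects) + 1):
        for combo in combinations(sorted(objects), r):
            intent = set(combo)
            p = utterance_lik(words, intent)
            if social_score is not None:
                p *= social_score(intent)   # social cues break ties (Table 2)
            if p > best_p:
                best, best_p = intent, p
    return best

# Example: lex = {"moocow": {"COW"}}
# infer_intent(["look", "at", "the", "moocow"], {"COW", "GIRL", "BEAR"},
#              lambda w, i: p_utterance(w, i, lex))  -> {"COW"}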
Finally, in the next section we show that the addition of social cues to the model allows correct performance in experimental tests of social generalization which only children older than 18 months can pass, suggesting perhaps that the social model is closer to the strategy used by more mature word learners.

Table 2: Intentions inferred by the Bayesian model after having learned a lexicon from the corpus (IE = infant's eyes, CH = caregiver's hands).
  Words: "look at the moocow" | Objects: COW GIRL BEAR | Social cues: - | Inferred intention: COW
  Words: "see the bear by the rattle?" | Objects: BEAR RATTLE COW | Social cues: - | Inferred intention: BEAR RATTLE
  Words: "see that?" | Objects: BEAR RATTLE COW | Social cues: IE & CH → COW | Inferred intention: COW

Figure 4: Possible outcomes in (left) a mutual-exclusivity situation and (right) a fast-mapping situation. Each panel reports a situation score, the log probability of the situation (blue dots represent words and objects) under a lexicon (mappings are red lines), a corpus score, the posterior log likelihood of the entire old corpus including both prior and likelihood terms, and their total.

5 Coverage of experimental phenomena

Mutual exclusivity. When children as young as sixteen months hear a request for a novel word (e.g. "where is the dax?") they make a surprising inference: they conclude that the novel word applies to a novel object [7, 8]. This inference is surprising because there seems to be no prima facie reason why children should make it: after all, why shouldn't "dax" simply be another name for a ball? The experimental phenomenon of mutual exclusivity has become a touchstone for theories of word learning: while some authors argue that children use a piece of language-specific knowledge, a principle of mutual exclusivity (that objects do not have two labels), to make this inference [7], others have argued that children's mapping of the novel noun is a consequence of more general social-pragmatic principles [9]. We test whether, instead of following from language-specific knowledge or pragmatic principles, the same inference could simply be a result of the probabilistic structure of our model. We use the model to infer the best lexicon for a simple artificial corpus (similar to that used in [10]). We then present the model with a single new situation, analogous to the mutual exclusivity experiments (left side of Figure 4). This new situation consists of hearing a novel word ("dax") and seeing both a familiar object and a novel object (BALL and DAX). We then compare four different lexicons on their coverage of both this situation and the previous corpus: (1) one that learns nothing new from the new situation, (2) one that maps "dax" to BALL, (3) one that maps "dax" to DAX, and (4) one that maps "dax" to both. We evaluate the scores of these lexicons on both the new situation and the old corpus. While learning both mappings produces the best score on the new situation, explaining with high probability why the word "dax" was produced, it performs worst on the rest of the corpus. In particular, it gives a low probability to the coincidence that, while "dax" meant BALL the entire time, the model happened never to hear "dax" when there was a ball around. In contrast, a lexicon learning no new words scores best on the corpus (because of the prior on smaller lexicons) but has no explanation for why it heard the word "dax" in the new situation. The lexicon which learns "dax" → BALL scores well on neither the corpus nor the new situation: it has no explanation for why it never heard "dax" before, but it also must take into account the fact that "dax" is only half as likely to be spoken when a BALL is present, because the word "ball" could also have been produced. Thus, the correct lexicon, which learns that "dax" → DAX, performs best when taking into account both the current situation and the model's prior experience. The success of this lexicon (robust across a variety of simulations and parameter settings) suggests that explaining the phenomenon of mutual exclusivity may not require appeals to special principles, either pragmatic or language-specific. Instead, the mutual exclusivity phenomenon may come from a general goal: to learn the lexicon which best explains the utterances the learner hears, given their context.
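The following sketch replays this comparison on a toy version of the scenario, reusing p_utterance from the model sketch above. The corpus, candidate lexicons, and parameter values are invented for illustration, so only the ranking of scores, not their magnitudes, mirrors Figure 4.

import math
from itertools import combinations

def log_posterior(lexicon, corpus, alpha=1.0, **kw):
    # log P(lexicon | corpus) up to a constant (Eq. 6): the e^(-alpha*|l|)
    # size prior, plus per situation the log utterance likelihood averaged
    # uniformly over candidate intents. Uses p_utterance from above.
    lp = -alpha * sum(len(objs) for objs in lexicon.values())
    for words, objects in corpus:
        intents = [set(c) for r in range(1, len(objects) + 1)
                   for c in combinations(sorted(objects), r)]
        lp += math.log(sum(p_utterance(words, i, lexicon, **kw)
                           for i in intents) / len(intents))
    return lp

old = [(["ball"], {"BALL"})] * 5          # toy history: "dax" never heard
new = [(["dax"], {"BALL", "DAX"})]        # the mutual-exclusivity trial
candidates = {
    "learn nothing": {"ball": {"BALL"}},
    "dax->BALL":     {"ball": {"BALL"}, "dax": {"BALL"}},
    "dax->DAX":      {"ball": {"BALL"}, "dax": {"DAX"}},
    "dax->both":     {"ball": {"BALL"}, "dax": {"BALL", "DAX"}},
}
for name, lex in candidates.items():
    print(name, round(log_posterior(lex, old + new), 2))
# dax->DAX scores highest of the four, reproducing the preference in Figure 4.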
Figure 5: Possible outcomes in a social generalization experiment. The eye-gaze of the speaker (pointing to the MODI) is the only cue which determines that the word "modi" should be mapped to the MODI object; despite this, our model finds the correct mapping.

Fast-mapping. A second phenomenon which has been much discussed in the psychological literature is fast-mapping [11]. This label refers to the ability of older children to learn a novel label for a novel object in a well-understood sentence frame after only one or a few exposures, and to retain it over a significant delay. There are two surprising components to this task: first, the ability of children to learn from a single exposure, and second, the retention of the word for a long period. Although our model cannot speak to the retention interval, our non-social model predicts that a single, ambiguous situation can give enough evidence to learn a new word. Our scenario is similar to the experimental setup used by Markson and Bloom [12]. We learn a lexicon for a small artificial corpus that contains some number of function words, which do not co-occur regularly with any object. We then present the model with a new situation in which there is a novel referent, three words that had been "function words" in our corpus, and one new word (analogous to seeing a novel object, a KOBA, and hearing the utterance "this is a Koba!"; see the right side of Figure 4 for details). In this scenario, the model strongly favors learning "koba" → KOBA. If it learns nothing, it is penalized for its inability to explain the new situation; if it learns a mapping to a function word, it must explain why this function word was not used referentially in the rest of its experience. Thus, when the other words in the utterance are familiar, our model will learn an appropriate lexical mapping from even a single situation.

Social generalization. By adding the ability to learn social cues, our model gains the ability to learn words even in fully ambiguous situations. An experimental demonstration of this phenomenon with children is given by Hollich, Hirsh-Pasek, and Golinkoff [6].
In one study, they showed children two novel objects while an experimenter said "Look at the modi!" and looked directly at one of the objects. While 12-month-olds were not able to learn that the word "modi" mapped onto the object that the experimenter looked at, both 18- and 24-month-olds correctly made this inference. As pictured in Figure 5, our model shows this same pattern of inference. While the best explanation of this situation was given by assuming that the word "modi" mapped to both novel objects (bottom right), this alternative was not preferred because it added two mappings to the lexicon rather than one. On the other hand, the most parsimonious option according to the prior was not to learn any new words, but this did not account for the new evidence. Of the two remaining options, the mapping of "modi" to the correct object was preferred exclusively on account of the distribution of social cues. Much like the older children in Hollich and colleagues' experiment, our model was first able to learn the relevance of particular social cues over the course of its experience (e.g., by processing the corpus) and then apply this knowledge in a novel, ambiguous situation (Figure 5).

6 Conclusions

We have presented a Bayesian model of cross-situational word learning which outperforms both baseline associative models and a more sophisticated translation model on learning from noisy corpus data. However, the strength of our model is not just its performance on the corpus, but also a more natural formulation which may contribute to the clarity of our understanding of word learning. By organizing our model around determining the speaker's referential intent, we find that several puzzling empirical phenomena in word learning can be explained as consequences of the structure of the model. The first is mutual exclusivity, the tendency to avoid mapping a novel word to a familiar object when a novel object is available. Researchers in the psychological literature have attempted to explain this type of phenomenon in terms of both language-specific constraints and more general social principles. We suggest, however, that mutual exclusivity may be explained as one of a variety of rational inferences that word learners can make when presented with an ambiguous situation. The same principle applies to the phenomenon of fast-mapping: given the evidence against other mappings, a rational word learner would do best to learn the novel mapping. In both of these cases, the relevant phenomena follow from the basic model design and domain-general principles of inference, as do, for instance, the taxonomic inferences observed by Xu & Tenenbaum [13]. Because it is based on a well-posed generative process, the model can be easily extended to account for joint learning with other domains. We have illustrated this by giving an extension to our basic model of social intention, in which social cues independently contribute to establishing the focus of referential intention in a particular situation. A strength of this extension is that the model does not need to know before learning which cues are relevant for establishing referential intention (and indeed, these cues may vary across cultures where pointing is accomplished in different ways). While Yu and Ballard [4] modify their model to incorporate the focus of intention, their social model assumes that the socially-salient objects are externally indicated; it cannot learn what cues signal that focus or their relative weights.
Using these learned social cues, our model is able to succeed in learning words even when there is no consistent pattern of co-occurrence (either because of a lack of data or because of a truly ambiguous situation). This brings us to the question of the psychological status of our model. Our model does not embody a theory about the process or algorithm that children follow to learn words. Instead, our model can be seen as a proposal about the representations and principles underlying word learning. According to this proposal, it is not necessary to represent association probabilities for all word-concept pairs in order to learn words statistically. Instead, learners can learn a lexicon consisting only of guesses about the meanings of words. And by applying principles of probabilistic inference to this lexicon, it may be possible to bootstrap into the broader social, communicative system.

References

[1] S. Pinker. Learnability and Cognition: The Acquisition of Argument Structure. MIT Press, 1989.
[2] L. Gleitman. The structural sources of verb meanings. Language Acquisition, 1:3-55, 1990.
[3] J. M. Siskind. A computational study of cross-situational techniques for learning word-to-meaning mappings. Cognition, 61(1):39-91, 1996.
[4] C. Yu and D. Ballard. A unified model of word learning: Integrating statistical and social cues. Neurocomputing, in press.
[5] B. MacWhinney. The CHILDES Project: Tools for Analyzing Talk. Lawrence Erlbaum, 2000.
[6] G. Hollich, K. Hirsh-Pasek, and R. M. Golinkoff. II. The emergentist coalition model. Monographs of the Society for Research in Child Development, 65(3):17-29, 2000.
[7] E. M. Markman. Categorization and Naming in Children: Problems of Induction. Bradford Books, 1989.
[8] C. B. Mervis and J. Bertrand. Acquisition of the novel name-nameless category (N3C) principle. Child Development, 65(6):1646-1662, 1994.
[9] E. V. Clark. On the logic of contrast. Journal of Child Language, 15:317-335, 1988.
[10] C. Yu and L. Smith. Rapid word learning under uncertainty via cross-situational statistics. Psychological Science, in press.
[11] S. Carey. The child as word learner. In Linguistic Theory and Psychological Reality. MIT Press, 1978.
[12] L. Markson and P. Bloom. Evidence against a dedicated system for word learning in children. Nature, 385(6619):813-815, 1997.
[13] F. Xu and J. B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 2007.
Ultrafast Monte Carlo for Kernel Estimators and Generalized Statistical Summations

Michael P. Holmes, Alexander G. Gray, and Charles Lee Isbell, Jr.
College of Computing, Georgia Institute of Technology, Atlanta, GA 30327
{mph, agray, isbell}@cc.gatech.edu

Abstract

Machine learning contains many computational bottlenecks in the form of nested summations over datasets. Kernel estimators and other methods are burdened by these expensive computations. Exact evaluation is typically O(n²) or higher, which severely limits application to large datasets. We present a multi-stage stratified Monte Carlo method for approximating such summations with probabilistic relative error control. The essential idea is fast approximation by sampling in trees. This method differs from many previous scalability techniques (such as standard multi-tree methods) in that its error is stochastic, but we derive conditions for error control and demonstrate that they work. Further, we give a theoretical sample complexity for the method that is independent of dataset size, and show that this appears to hold in experiments, where speedups reach as high as 10^14, many orders of magnitude beyond the previous state of the art.

1 Introduction

Many machine learning methods have computational bottlenecks in the form of nested summations that become intractable for large datasets. We are particularly motivated by the nonparametric kernel estimators (e.g. kernel density estimation), but a variety of other methods require computations of similar form. In this work we formalize the general class of nested summations and present a new multi-stage Monte Carlo method for approximating any problem in the class with rigorous relative error control. Key to the efficiency of this method is the use of tree-based data stratification, i.e. sampling in trees. We derive error guarantees and sample complexity bounds, with the intriguing result that runtime depends not on dataset size but on statistical features such as variance and kurtosis, which can be controlled through stratification. We also present experiments that validate these theoretical results and demonstrate tremendous speedup over the prior state of the art.

Previous approaches to algorithmic acceleration of this kind fall into roughly two groups: 1) methods that run non-accelerated algorithms on subsets of the data, typically without error bounds, and 2) multi-tree methods with deterministic error bounds. The former are of less interest due to the lack of error control, while the latter are good when exact error control is required, but have built-in overconservatism that limits speedup, and are difficult to extend to new problems. Our Monte Carlo approach offers much larger speedup and a generality that makes it simple to adapt to new problems, while retaining strong error control. While there are non-summative problems to which the standard multi-tree methodology is applicable and our Monte Carlo method is not, our method appears to give greater speedup by many orders of magnitude on problems where both methods can be used.
In summary, this work makes the following contributions: formulation of the class of generalized nested data summations; derivation of recursive Monte Carlo algorithms with rigorous error guarantees for this class of computation; derivation of sample complexity bounds showing no explicit dependence on dataset size; variance-driven tree-based stratified sampling of datasets, which allows Monte Carlo approximation to be effective with small sample sizes; application to kernel regression and kernel conditional density estimation; empirical demonstration of speedups as high as 10^14 on datasets with points numbering in the millions. It is the combination of all these elements that enables our method to perform so far beyond the previous state of the art.

2 Problem definition and previous work

We first illustrate the problem class by giving expressions for the least-squares cross-validation scores used to optimize bandwidths in kernel regression (KR), kernel density estimation (KDE), and kernel conditional density estimation (KCDE):

  S_KR = (1/n) Σ_i [ y_i − ( Σ_{j≠i} K_h(||x_i − x_j||) y_j ) / ( Σ_{j≠i} K_h(||x_i − x_j||) ) ]²

  S_KDE = (1/n) Σ_i [ (1/(n−1)²) Σ_{j≠i} Σ_{k≠i} ∫ K_h(||x − x_j||) K_h(||x − x_k||) dx − (2/(n−1)) Σ_{j≠i} K_h(||x_i − x_j||) ]

  S_KCDE = (1/n) Σ_i [ ( Σ_{j≠i} Σ_{k≠i} K_{h2}(||x_i − x_j||) K_{h2}(||x_i − x_k||) ∫ K_{h1}(y − y_j) K_{h1}(y − y_k) dy ) / ( Σ_{j≠i} Σ_{k≠i} K_{h2}(||x_i − x_j||) K_{h2}(||x_i − x_k||) ) − 2 ( Σ_{j≠i} K_{h2}(||x_i − x_j||) K_{h1}(y_i − y_j) ) / ( Σ_{j≠i} K_{h2}(||x_i − x_j||) ) ]

These nested sums have quadratic and cubic computation times that are intractable for large datasets. We would like a method for quickly approximating these and similar computations in a simple and general way. We begin by formulating an inductive generalization of the problem class:

  B(X_c) ≡ Σ_{i ∈ I(X_c)} f(X_c, X_i)   (1)

  G(X_c) ≡ B(X_c) | Σ_{i ∈ I(X_c)} f(X_c, G_1(X_c, X_i), G_2(X_c, X_i), ...).   (2)

B represents the base case, in which a tuple of constant arguments X_c may be specified and a tuple of variable arguments X_i is indexed by a set I, which may be a function of X_c. For instance, in the innermost leave-one-out summations of S_KR, X_c is the single point x_i while I(X_c) indexes all single points other than x_i. Note that |I| is the number of terms in a summation of type B, and therefore represents the base time complexity. Whenever I consists of all k-tuples or leave-one-out k-tuples, the base complexity is O(n^k), where n is the size of the dataset. The inductive case G is either: 1) the base case B, or 2) a sum where the arguments to the summand function are X_c and a series of nested instances of type G. In S_KR the outermost summation is an example of this. The base complexity here is |I| multiplied by the maximum base complexity among the nested instances, e.g. if, as in S_KR, I is all single points and the most expensive inner G is O(n), then the overall base complexity is O(n²).
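For concreteness, the sketch below evaluates S_KR exactly in Python, assuming a Gaussian kernel and synthetic data (both assumptions of the example, not requirements of the formulation); its loop over queries combined with the inner sums over all other points makes the O(n²) base complexity explicit.

import numpy as np

def skr_exact(X, y, h):
    # Exact leave-one-out cross-validation score S_KR for Nadaraya-Watson
    # kernel regression with a Gaussian kernel of bandwidth h.
    n = len(X)
    total = 0.0
    for i in range(n):
        w = np.exp(-np.sum((X - X[i]) ** 2, axis=1) / (2 * h ** 2))
        w[i] = 0.0                       # leave-one-out: drop the i-th term
        total += (y[i] - w @ y / w.sum()) ** 2
    return total / n

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.1 * rng.normal(size=500)
print(skr_exact(X, y, h=0.5))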
Previous work. Past efforts at scaling this class of computation have fallen into roughly two groups. First are methods where data is simply subsampled before running a non-accelerated algorithm. Stochastic gradient descent and its variants (e.g. [1]) are prototypical here. While these approaches can have asymptotic convergence, there are no error guarantees for finite sample sizes. This is not show-stopping in practice, but the lack of quality assurance is a critical shortcoming. Our approach also exploits the speedup that comes from sampling, but provides a rigorous relative error guarantee and is able to automatically determine the necessary sample size to provide that guarantee.

The other main class of acceleration methods consists of those employing "higher-order divide and conquer" or multi-tree techniques that give either exact answers or deterministic error bounds (e.g. [2, 3, 4]). These approaches apply to a broad class of "generalized n-body problems" (GNPs), and feature the use of multiple spatial partitioning structures such as kd-trees or ball trees to decompose and reuse portions of computational work. While the class of GNPs has yet to be formally defined, the generalized summations we address are clearly related and have at least partial overlap.

The standard multi-tree methodology has three significant drawbacks. First, although it gives deterministic error bounds, the bounds are usually quite loose, resulting in overconservatism that prevents aggressive approximation that could give greater speed. Second, creating a new multi-tree method to accelerate a given algorithm requires complex custom derivation of error bounds and pruning rules. Third, the standard multi-tree approach is conjectured to reduce O(n^p) computations at best to O(n^{log p}). This still leaves an intractable computation for p as small as 4. In [5], the first of these concerns began to be addressed by employing sample-based bounds within a multi-tree error propagation framework. The present work builds on that idea by moving to a fully Monte Carlo scheme where multiple trees are used for variance-reducing stratification. Error is rigorously controlled and driven by sample variance, allowing the Monte Carlo approach to make aggressive approximations and avoid the overconservatism of deterministic multi-tree methods. This yields greater speedups by many orders of magnitude. Further, our Monte Carlo approach handles the class of nested summations in full generality, making it easy to specialize to new problems. Lastly, the computational complexity of our method is not directly dependent on dataset size, which means it can address high degrees of nesting that would make the standard multi-tree approach intractable. The main tradeoff is that Monte Carlo error bounds are probabilistic, though the bound probability is a parameter to the algorithm. Thus, we believe the Monte Carlo approach is superior for all situations that can tolerate minor stochasticity in the approximated output.

3 Single-stage Monte Carlo

We first derive a Monte Carlo approximation for the base case of a single-stage, flat summation, i.e. Equation 1. The basic results for this simple case (up to and including Algorithm 1 and Theorem 1) mirror the standard development of Monte Carlo as in [6] or [7], with some modification to accommodate our particular problem setup. We then move beyond to present novel sample complexity bounds and extend the single-stage results to the multi-stage and multi-stage stratified cases. These extensions allow us to efficiently bring Monte Carlo principles to bear on the entire class of generalized summations, while yielding insights into the dependence of computational complexity on sample statistics and how tree-based methods can improve those statistics.

To begin, note that the summation B(X_c) can be written as n E[f_i] = n μ_f, where n = |I| and the expectation is taken over a discrete distribution P_f that puts mass 1/n on each term f_i = f(X_c, X_i). Our goal is to produce an estimate B̂ that has low relative error with high probability. More precisely, for a specified ε and δ, we want |B̂ − B| ≤ ε|B| with probability at least 1 − δ. This is equivalent to estimating μ_f by μ̂_f such that |μ̂_f − μ_f| ≤ ε|μ_f|. Let μ̂_f be the sample mean of m samples taken from P_f. From the Central Limit Theorem, we have asymptotically μ̂_f ~ N(μ_f, σ̂_f²/m), where σ̂_f² is the sample variance, from which we can construct the standard confidence interval: |μ̂_f − μ_f| ≤ z_{δ/2} σ̂_f/√m with probability 1 − δ. When μ̂_f satisfies this bound, our relative error condition is implied by z_{δ/2} σ̂_f/√m ≤ ε|μ_f|, and we also have |μ_f| ≥ |μ̂_f| − z_{δ/2} σ̂_f/√m. Combining these, we can ensure our target relative error by requiring that z_{δ/2} σ̂_f/√m ≤ ε(|μ̂_f| − z_{δ/2} σ̂_f/√m), which rearranges to:

  m ≥ z_{δ/2}² σ̂_f² (1 + ε)² / (ε² μ̂_f²).   (3)

Equation 3 gives an empirically testable condition that guarantees the target relative error level with probability 1 − δ, given that μ̂_f has reached its asymptotic distribution N(μ_f, σ̂_f²/m). This suggests an iterative sampling procedure in which m starts at a value m_min chosen to make the normal approximation valid, and then is increased until the condition of Equation 3 is met. This procedure is summarized in Algorithm 1, and we state its error guarantee as a theorem.

Theorem 1. Given m_min large enough to put μ̂_f in its asymptotic normal regime, with probability at least 1 − δ Algorithm 1 approximates the summation S with relative error no greater than ε.

Proof. We have already established that Equation 3 is a sufficient condition for ε relative error with probability 1 − δ. Algorithm 1 simply increases the sample size until this condition is met.

Algorithm 1: Iterative Monte Carlo approximation for flat summations.
  MC-Approx(S, X_c, ε, δ, m_min):
    samples ← ∅; m_needed ← m_min
    repeat
      addSamples(samples, m_needed, S, X_c)
      m, μ̂_f, σ̂_f² ← calcStats(samples)
      m_thresh ← z_{δ/2}² (1 + ε)² σ̂_f² / (ε² μ̂_f²)
      m_needed ← m_thresh − m
    until m ≥ m_thresh
    return |S.I| · μ̂_f
  addSamples(samples, m_needed, S, X_c):
    for i = 1 to m_needed:
      X_i ← rand(S.I)
      samples ← samples ∪ S.f(X_c, X_i)
  calcStats(samples):
    m ← count(samples); μ̂_f ← avg(samples); σ̂_f² ← var(samples)
    return m, μ̂_f, σ̂_f²
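A minimal Python rendering of Algorithm 1 follows, treating the summation terms as a vectorized function of sampled indices; it assumes a mean bounded away from zero (as with the positive kernel sums above), and the toy example, which approximates one inner kernel sum at a single query point, is invented for illustration.

import numpy as np
from statistics import NormalDist

def mc_flat_sum(f, n, eps=0.1, delta=0.05, m_min=100, rng=None):
    # Estimate sum_{i<n} f(i) by iterative uniform sampling, stopping once
    # the empirical condition of Eq. 3 holds. f maps an index array to an
    # array of term values.
    rng = rng or np.random.default_rng()
    z = NormalDist().inv_cdf(1 - delta / 2)
    vals = f(rng.integers(n, size=m_min))
    while True:
        m, mu, var = len(vals), vals.mean(), vals.var(ddof=1)
        m_thresh = z**2 * var * (1 + eps)**2 / (eps**2 * mu**2)
        if m >= m_thresh:
            return n * mu            # B-hat = |I| * sample mean
        extra = max(1, int(np.ceil(m_thresh)) - m)
        vals = np.concatenate([vals, f(rng.integers(n, size=extra))])

rng = np.random.default_rng(1)
X = rng.normal(size=(200_000, 3))
f = lambda idx: np.exp(-np.sum(X[idx] ** 2, axis=1) / 2)  # kernel terms at query 0
print(mc_flat_sum(f, len(X), eps=0.05))
print(f(np.arange(len(X))).sum())    # exact value, for comparison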
Sample Complexity. Because we are interested in fast approximations, Algorithm 1 is only useful if it terminates with m significantly smaller than the number of terms in the full summation. Equation 3 gives an empirical test indicating when m is large enough for sampling to terminate; we now provide an upper bound, in terms of the distributional properties of the full set of f_i, for the value of m at which Equation 3 will be satisfied.

Theorem 2. Given m_min large enough to put μ̂_f and σ̂_f in their asymptotic normal regimes, with probability at least 1 − 2δ Algorithm 1 terminates with m ≤ O( σ_f²/μ_f² + (σ_f/|μ_f|) √(μ_4f/σ_f⁴ − 1) ).

Proof. The termination condition is driven by σ̂_f²/μ̂_f², so we proceed by bounding this ratio. First, with probability 1 − δ we have a lower bound on the absolute value of the sample mean: |μ̂_f| ≥ |μ_f| − z_{δ/2} σ_f/√m. Next, because the sample variance is asymptotically distributed as N(σ_f², (μ_4f − σ_f⁴)/m), where μ_4f is the fourth central moment, we can apply the delta method to infer that σ̂_f converges in distribution to N(σ_f, (μ_4f − σ_f⁴)/(4σ_f² m)). Using the normal-based confidence interval, this gives the following 1 − δ upper bound for the sample standard deviation: σ̂_f ≤ σ_f + z_{δ/2} √(μ_4f − σ_f⁴)/(2σ_f √m). We now combine these bounds, but since we only know that each bound individually covers at least a 1 − δ fraction of outcomes, we can only guarantee they will jointly hold with probability at least 1 − 2δ, giving the following 1 − 2δ bound:

  σ̂_f / |μ̂_f| ≤ [ σ_f + z_{δ/2} √(μ_4f − σ_f⁴)/(2σ_f √m) ] / [ |μ_f| − z_{δ/2} σ_f/√m ].

Combining this with Equation 3 and solving for m shows that, with probability at least 1 − 2δ, the algorithm will terminate with m no larger than:

  [ z_{δ/2}² (1+2ε)² σ_f² / (ε² μ_f²) ] · [ ( (1+ε)ε σ_f / (|μ_f|(1+2ε)²) ) √(μ_4f/σ_f⁴ − 1) + √( (σ_f/|μ_f|) ( 2ε(1+ε) σ_f / (|μ_f|(1+2ε)²) ) √(μ_4f/σ_f⁴ − 1) ) ].   (4)

Three aspects of this bound are salient. First, computation time is liberated from dataset size. This is because the sample complexity depends only on the distributional features (σ_f², μ_f, and μ_4f) of the summation terms, and not on the number of terms. For i.i.d. datasets in particular, these distributional features are convergent, which means the sample or computational complexity converges to a constant while speedup becomes unbounded as the dataset size goes to infinity. Second, the bound has sensible dependence on σ_f/|μ_f| and μ_4f/σ_f⁴. The former is a standard dispersion measure known as the coefficient of variation, and the latter is the kurtosis. Algorithm 1 therefore gives greatest speedup for summations whose terms have low dispersion and low kurtosis. The intuition is that sampling is most efficient when values are concentrated tightly in a few clusters, making it easy to get a representative sample set. This motivates the additional speedup we later gain by stratifying the dataset into low-variance regions. Finally, the sample complexity bound indicates whether Algorithm 1 will actually give speedup for any particular problem. For a given summation, let the speedup be defined as the total number of terms n divided by the number of terms evaluated by the approximation. For a desired speedup τ, we need n ≥ τ m_bound, where m_bound is the expression in Equation 4. This is the fundamental characterization of whether speedup will be attained.

4 Multi-stage Monte Carlo

We now turn to the inductive case of nested summations, i.e. Equation 2. The approach we take is to apply the single-stage Monte Carlo algorithm over the terms f_i as before, but with recursive invocation to obtain approximations for the arguments G_j. Algorithm 2 specifies this procedure.

Algorithm 2: Iterative Monte Carlo approximation for nested summations.
  MC-Approx: as in Algorithm 1
  calcStats: as in Algorithm 1
  addSamples(samples, m_needed, S, X_c):
    for i = 1 to m_needed:
      X_i ← rand(S.I(X_c))
      mcArgs ← map(MC-Approx(·, X_c ∪ X_i, ...), ⟨S.G_j⟩)
      samples ← samples ∪ S.f(X_c, mcArgs)
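The sketch below illustrates the recursion of Algorithm 2 on a hypothetical two-level summation, Σ_i (Σ_j K(x_i, x_j))², with the leave-one-out exclusion ignored for simplicity; the names mc_sum and outer_term are invented for the example. The same Algorithm-1-style driver is used at every level, with the inner sum approximated recursively inside each sampled outer term.

import numpy as np
from statistics import NormalDist

def mc_sum(sample_term, n, eps, delta, m_min=100, rng=None):
    # Scalar driver shared by every level of the recursion: sample_term(i)
    # returns one term f_i, which may itself be built from recursively
    # approximated inner sums.
    rng = rng or np.random.default_rng()
    z = NormalDist().inv_cdf(1 - delta / 2)
    vals = [sample_term(int(rng.integers(n))) for _ in range(m_min)]
    while True:
        m, mu, var = len(vals), np.mean(vals), np.var(vals, ddof=1)
        if m >= z**2 * var * (1 + eps)**2 / (eps**2 * mu**2):
            return n * mu
        vals.append(sample_term(int(rng.integers(n))))

X = np.random.default_rng(2).normal(size=(2_000, 2))
K = lambda a, b: float(np.exp(-np.sum((a - b) ** 2) / 2))

def outer_term(i):
    # The inner sum is approximated recursively, then plugged into the
    # outer summand f(G) = G^2.
    inner = mc_sum(lambda j: K(X[i], X[j]), len(X), eps=0.1, delta=0.05)
    return inner ** 2

print(mc_sum(outer_term, len(X), eps=0.1, delta=0.05))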
Theorem 3. Given m_min large enough to put μ̂_f in its asymptotic normal regime, with probability at least 1 − δ Algorithm 2 approximates the summation S with relative error no greater than ε.

Proof. We begin by noting that the proof of correctness for Algorithm 1 rests on 1) the ability to sample from a distribution P_f whose expectation is μ_f = (1/n) Σ_i f_i, and 2) the ability to invoke the CLT on the sample mean μ̂_f in terms of the sample variance σ̂_f². Given these properties, Equation 3 follows as a sufficient condition for relative error no greater than ε with probability at least 1 − δ. We therefore need only establish that Algorithm 2 samples from a distribution having these properties. For each sampled f_i, let Ĝ_j be the recursive approximation for argument G_j. We assume Ĝ_j has been drawn from a CLT-type normal distribution. Because the Ĝ_j are recursively approximated, this is an inductive hypothesis, with the remainder of the proof showing that if the hypothesis holds for the recursive invocations, it also holds for the outer invocation. The base case, where all recursions must bottom out, is the type-B summation already shown to give CLT-governed answers (see the proof of Theorem 1). Let Ĝ_m = (Ĝ_1, Ĝ_2, ...) be the vector of Ĝ_j values after each Ĝ_j has been estimated from m_j samples (Σ m_j = m), and let G be the vector of true G_j values. Since each component Ĝ_j converges in distribution to N(G_j, σ_j²/m_j), Ĝ_m satisfies Ĝ_m ~ N(G, Σ_m). We leave the detailed entries of the covariance Σ_m unspecified, except to note that its jj-th element is σ_j²/m_j, and that its off-diagonal elements may be non-zero if the Ĝ_j are generated in a correlated way (this can be used as a variance reduction technique). Given the asymptotic normality of Ĝ_m, the same arguments used to derive the multivariate delta method can be used, with some modification, to show that f_i(Ĝ_m) ~ N(f_i(G), ∇f(G) Σ_m ∇f(G)ᵀ). Thus, asymptotically, f_i(Ĝ_m) is normally distributed around its true value with a variance that depends on both the gradient of f and the covariance matrix of the approximated arguments in Ĝ_m. This being the case, uniform sampling of the recursively estimated f_i is equivalent to sampling from a distribution P̃_f that gives weight 1/n to a normal distribution centered on each f_i. The expectation over P̃_f is μ_f, and since the algorithm uses a simple sample mean, the CLT does apply. These are the properties we need for correctness, and the applicability of the CLT combined with the proven base case completes the inductive proof.

Note that the variance over P̃_f works out to σ̃_f² = σ_f² + (1/n) Σ_{i∈I} σ_i², where σ_i² = ∇f(G) Σ_m ∇f(G)ᵀ. In other words, the variance with recursive approximation is the exact variance σ_f² plus the average of the variances σ_i² of the approximated f_i. Likewise one could write an expression for the kurtosis μ̃_4f. Because we are still dealing with a sample mean, Theorem 2 still holds in the nested case.

Corollary 2.1. Given m_min large enough to put μ̂_f and σ̂_f in their asymptotic normal regimes, with probability at least 1 − 2δ Algorithm 2 terminates with m ≤ O( σ̃_f²/μ_f² + (σ̃_f/|μ_f|) √(μ̃_4f/σ̃_f⁴ − 1) ).

It is important to point out that the 1 − δ confidences and ε relative error bounds of the recursively approximated arguments do not pass through to or compound in the overall estimator μ̂_f: their influence appears in the variance σ_i² of each sampled f_i, which in turn contributes to the overall variance σ̃_f², and the error from σ̃_f² is independently controlled by the outermost sampling procedure.

Algorithm 3: Iterative Monte Carlo approximation for nested summations with stratification.
  MC-Approx: as in Algorithm 1
  addSamples(strata, samples, m_needed, S, X_c):
    needPerStrat ← optAlloc(samples, strata, m_needed)
    for s = 1 to strata.count:
      m_s ← needPerStrat[s]
      for i = 1 to m_s:
        X_i ← rand(S.I(X_c), strata[s])
        mcArgs ← map(MC-Approx(·, X_c ∪ X_i, ...), ⟨S.G_j⟩)
        samples[s] ← samples[s] ∪ S.f(X_c, mcArgs)
  calcStats(strata, samples):
    m ← count(samples); μ̂_fs ← stratAvg(strata, samples); σ̂_fs² ← stratVar(strata, samples)
    return m, μ̂_fs, σ̂_fs²
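To illustrate the stratified machinery of Algorithm 3, the sketch below pairs a greedy variance-driven splitter (a simplified stand-in for our modified kd-trees) with a stratified estimator whose allocation approximates q_j ∝ p_j σ_j, estimating the unknown σ_j from small pilot draws; the leaf count and pilot sizes are arbitrary choices for the example.

import numpy as np

def split_strata(data, n_leaves):
    # Returns index sets: repeatedly split the stratum with the largest
    # size * mean per-dimension std, at the median of its highest-variance
    # dimension (mimicking the kd-tree expansion heuristic).
    strata = [np.arange(len(data))]
    while len(strata) < n_leaves:
        scores = [len(ix) * data[ix].std(axis=0).mean() for ix in strata]
        ix = strata.pop(int(np.argmax(scores)))
        d = int(np.argmax(data[ix].var(axis=0)))
        mask = data[ix, d] <= np.median(data[ix, d])
        if mask.all() or not mask.any():
            strata.append(ix); break     # degenerate split, stop early
        strata += [ix[mask], ix[~mask]]
    return strata

def stratified_mean(term_strata, m, rng=None):
    # One stratified estimate of the mean (Eq. 5) with near-optimal
    # allocation q_j proportional to p_j * sigma_j.
    rng = rng or np.random.default_rng()
    n = sum(len(t) for t in term_strata)
    p = np.array([len(t) / n for t in term_strata])
    sig = np.array([rng.choice(t, size=30).std(ddof=1) for t in term_strata])
    q = p * sig
    q = q / q.sum() if q.sum() > 0 else p
    return sum(pj * rng.choice(t, size=max(2, int(qj * m))).mean()
               for pj, qj, t in zip(p, q, term_strata))

X = np.random.default_rng(3).normal(size=(100_000, 3))
terms = np.exp(-np.sum(X**2, axis=1) / 2)       # f_i for one fixed query
strata = [terms[ix] for ix in split_strata(X, 16)]
print(len(X) * stratified_mean(strata, m=500), terms.sum())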
5 Variance Reduction

With Algorithm 2 we have coverage of the entire generalized summation problem class, and our focus turns to maximizing efficiency. As noted above, Theorem 2 implies that we need fewer samples when the summation terms are tightly concentrated in a few clusters. We formalize this by spatially partitioning the data to enable a stratified sampling scheme. Additionally, by use of correlated sampling we induce covariance between recursively estimated summations whenever the overall variance can be reduced by doing so. Adding these techniques to recursive Monte Carlo makes for an extremely fast, accurate, and general approximation scheme.

Stratification. Stratification is a standard Monte Carlo principle whereby the values being sampled are partitioned into subsets (strata) whose contributions are separately estimated and then combined. The idea is that strata with higher variance can be sampled more heavily than those with lower variance, thereby making more efficient use of samples than in uniform sampling. Application of this principle requires the development of an effective partitioning scheme for each new domain of interest. In the case of generalized summations, the values being sampled are the f_i, which are not known a priori and cannot be directly stratified. However, since f is generally a function with some degree of continuity, its output is similar for similar values of its arguments. We therefore stratify the argument space, i.e. the input datasets, by use of spatial partitioning structures. Though any spatial partitioning could be used, in this work we use modified kd-trees that recursively split the data along the dimension of highest variance. The approximation procedure runs as it did before, except that the sampling and sample statistics are modified to make use of the trees. Trees are expanded up to a user-specified number of nodes, prioritized by a heuristic of expanding nodes in order of largest size times average per-dimension standard deviation. This heuristic will later be justified by the variance expression for the stratified sample mean. The approximation procedure is summarized in Algorithm 3, and we now establish its error guarantee.

Theorem 4. Given m_min large enough to put μ̂_f in its asymptotic normal regime, with probability at least 1 − δ Algorithm 3 approximates the summation S with relative error no greater than ε.

Proof. Identical to Theorem 3, but we need to establish that 1) the sample mean remains unbiased under stratification, and 2) the CLT still holds under stratification. These turn out to be standard properties of the stratified sample mean and its variance estimator (see [7]):

  μ̂_fs = Σ_j p_j μ̂_j   (5)

  σ̂²(μ̂_fs) = σ̂_fs²/m,  where  σ̂_fs² ≜ m Σ_j (p_j²/m_j) σ̂_j² = Σ_j (p_j²/q_j) σ̂_j²,   (6)

where j indexes the strata, μ̂_j and σ̂_j² are the sample mean and variance of stratum j, p_j is the fraction of summation terms in stratum j, and q_j is the fraction of samples drawn from stratum j. Algorithm 3 modifies the addSamples subroutine to sample in stratified fashion, and computes the stratified μ̂_fs and σ̂_fs² instead of μ̂_f and σ̂_f² in calcStats. Since these estimators satisfy the two conditions necessary for the error guarantee, this establishes the theorem.

The true variance σ²(μ̂_fs) is identical to Equation 6 but with the exact σ_j² substituted for σ̂_j². In [7] it is shown that σ_fs² ≤ σ_f², i.e. stratification never increases variance, and that any refinement of a stratification can only reduce σ_fs².
Although the sample allocation fractions q_j can be chosen arbitrarily, σ_fs² is minimized when q_j ∝ p_j σ_j. With this optimal allocation, σ_fs² reduces to (Σ_j p_j σ_j)². This motivates our kd-tree expansion heuristic, as described above, which tries to first split the nodes with highest p_j σ_j, i.e. the nodes with the highest contribution to the variance under optimal allocation. While we never know the σ_j exactly, Algorithm 3 uses the sample estimates σ̂_j at each stage to approximate the optimal allocation (this is the optAlloc routine). Finally, the Theorem 2 sample complexity still holds for the CLT-governed stratified sample mean.

Corollary 2.2. Given m_min large enough to put μ̂_fs and σ̂_fs in their asymptotic normal regimes, with probability at least 1 − 2δ Algorithm 3 terminates with m ≤ O( σ_fs²/μ_f² + (σ_fs/|μ_f|) √(μ_4fs/σ_fs⁴ − 1) ).

Correlated Sampling. The variance of recursively estimated f_i, as expressed by ∇f(G) Σ_m ∇f(G)ᵀ, depends on the full covariance matrix of the estimated arguments. If the gradient of f is such that the variance of f_i depends negatively (positively) on a covariance Σ_jk, we can reduce the variance by inducing positive (negative) covariance between Ĝ_j and Ĝ_k. Covariance can be induced by sharing sampled points across the estimates of Ĝ_j and Ĝ_k, assuming they both use the same datasets. In some cases the expression for the variance of f_i is such that the effect of correlated sampling is data-dependent; when this happens, it is easy to test whether correlation helps. All experiments presented here benefited from correlated sampling on top of stratification.
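As a small illustration of this effect, the sketch below estimates a hypothetical ratio f(G_1, G_2) = G_1/G_2 of two sample means, with and without shared sample indices; because the gradient of a ratio makes its variance decrease with positive covariance between numerator and denominator, the shared version shows a visibly smaller spread. The data and sample sizes are invented for the example.

import numpy as np

rng = np.random.default_rng(4)
vals = rng.lognormal(size=100_000)
m = 200

def ratio_estimate(shared):
    # G1 = mean of v^2, G2 = mean of v; sharing the indices makes the two
    # estimates positively correlated, which lowers Var(G1/G2).
    i1 = rng.integers(len(vals), size=m)
    i2 = i1 if shared else rng.integers(len(vals), size=m)
    return (vals[i1] ** 2).mean() / vals[i2].mean()

for shared in (False, True):
    reps = [ratio_estimate(shared) for _ in range(2_000)]
    print("shared" if shared else "independent", np.std(reps))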
6 Experiments

We present experimental results in two phases. First, we compare stratified multi-stage Monte Carlo approximations to exact evaluations on tractable datasets. We show that the error distributions conform closely to our asymptotic theory. Second, having verified accuracy to the extent possible, we run our method on datasets containing millions of points in order to show 1) validation of the theoretical prediction that runtime is roughly independent of dataset size, and 2) many orders of magnitude of speedup (as high as 10^14) relative to exact computation. These results are presented for two method-dataset pairs: kernel regression on a dataset containing 2 million 4-dimensional redshift measurements used for quasar identification, and kernel conditional density estimation on an n-body galaxy simulation dataset containing 3.5 million 3-dimensional locations. In the KR case, the fourth dimension is regressed against the other three, while in KCDE the distribution of the third dimension is predicted as a function of the first two. In both cases we are evaluating the cross-validated score functions used for bandwidth optimization, i.e. S_KR and S_KCDE as described in Section 2.

Error Control. The objective of this first set of experiments is to validate the guarantee that relative error will be less than or equal to ε with probability 1 − δ. We measured the distribution of error on a series of random data subsets up to the highest size for which the exact computation was tractable. For the O(n²) S_KR, the limit was n = 10K, while for the O(n³) S_KCDE it was n = 250. For each dataset we randomly chose and evaluated 100 bandwidths with 1 − δ = 0.95 and ε = 0.1. Figure 1 shows the full quantile spreads of the relative errors. The most salient feature is the relationship of the 95% quantile line (dashed) to the threshold line at ε = 0.1 (solid). Full compliance with asymptotic theory would require the dashed line never to be above the solid. This is basically the case for KCDE,¹ while the KR line never goes above 0.134. The approximation is therefore quite good, and could be improved if desired by increasing m_min or the number of strata, but in this case we chose to trade a slight increase in error for an increase in speed.

Speedup. Given the validation of the error guarantees, we now turn to computational performance. As before, we ran on a series of random subsets of the data, this time with n ranging into the millions. At each value of n, we randomly chose and evaluated 100 bandwidths, measuring the time for each evaluation. Figure 2 presents the average evaluation time versus dataset size for both methods. The most striking feature of these graphs is their flatness as n increases by orders of magnitude. This is in accord with Theorem 2 and its corollaries, which predict sample and computational complexity independent of dataset size. Speedups² for KR range from 1.8 thousand at n = 50K to 2.8 million at n = 2M. KCDE speedups range from 70 million at n = 50K to 10^14 at n = 3.5M. This performance is many orders of magnitude better than that of previous methods.

¹ The spike in the max quantile is due to a single outlier point.
² All speedups are relative to extrapolated runtimes based on the O(·) order of the exact computation.

Figure 1: Error distribution vs. dataset size for KR (left) and KCDE (right), shown as quantile bands from min to max with the 95% quantile and the ε = 0.1 threshold marked.

Figure 2: Runtime vs. dataset size for KR (left) and KCDE (right). Error bars are one standard deviation.

7 Conclusion

We have presented a multi-stage stratified Monte Carlo method for efficiently approximating a broad class of generalized nested summations. Summations of this type lead to computational bottlenecks in kernel estimators and elsewhere in machine learning. The theory derived for this Monte Carlo approach predicts: 1) relative error no greater than ε with probability at least 1 − δ, for user-specified ε and δ, and 2) sample and computational complexity independent of dataset size. Our experimental results validate these theoretical guarantees on real datasets, where we accelerate kernel cross-validation scores by as much as 10^14 on millions of points. This is many orders of magnitude faster than the previous state of the art. In addition to applications, future work will likely include automatic selection of stratification granularity, additional variance reduction techniques, and further generalization to other computational bottlenecks such as linear algebraic operations.

References

[1] Nicol N. Schraudolph and Thore Graepel. Combining conjugate direction methods with stochastic approximation of gradients. In Workshop on Artificial Intelligence and Statistics (AISTATS), 2003.
[2] Alexander G. Gray and Andrew W. Moore. N-body problems in statistical learning. In Advances in Neural Information Processing Systems (NIPS) 13, 2000.
Speedup. Given the validation of the error guarantees, we now turn to computational performance. As before, we ran on a series of random subsets of the data, this time with n ranging into the millions. At each value of n, we randomly chose and evaluated 100 bandwidths, measuring the time for each evaluation. Figure 2 presents the average evaluation time versus dataset size for both methods. The most striking feature of these graphs is their flatness as n increases by orders of magnitude. This is in accord with Theorem 2 and its corollaries, which predict sample and computational complexity independent of dataset size. Speedups² for KR range from 1.8 thousand at n = 50K to 2.8 million at n = 2M. KCDE speedups range from 70 million at n = 50K to $10^{14}$ at n = 3.5M. This performance is many orders of magnitude better than that of previous methods.

² All speedups are relative to extrapolated runtimes based on the O(·) order of the exact computation.

Figure 2: Runtime vs. dataset size for KR (left) and KCDE (right). (Average evaluation time in ms; error bars are one standard deviation.)

7 Conclusion

We have presented a multi-stage stratified Monte Carlo method for efficiently approximating a broad class of generalized nested summations. Summations of this type lead to computational bottlenecks in kernel estimators and elsewhere in machine learning. The theory derived for this Monte Carlo approach predicts: 1) relative error no greater than $\epsilon$ with probability at least $1 - \delta$, for user-specified $\epsilon$ and $\delta$, and 2) sample and computational complexity independent of dataset size. Our experimental results validate these theoretical guarantees on real datasets, where we accelerate kernel cross-validation scores by as much as $10^{14}$ on millions of points. This is many orders of magnitude faster than the previous state of the art. In addition to applications, future work will likely include automatic selection of stratification granularity, additional variance reduction techniques, and further generalization to other computational bottlenecks such as linear algebraic operations.
Regularized Boost for Semi-Supervised Learning

Ke Chen and Shihai Wang
School of Computer Science
The University of Manchester
Manchester M13 9PL, United Kingdom
{chen,swang}@cs.manchester.ac.uk

Abstract

Semi-supervised inductive learning concerns how to learn a decision rule from a data set containing both labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes local smoothness constraints among data into account during ensemble learning. In this paper, we introduce a local smoothness regularizer to semi-supervised boosting algorithms based on the universal optimization framework of margin cost functionals. Our regularizer is applicable to existing semi-supervised boosting algorithms to improve their generalization and speed up their training. Comparative results on synthetic, benchmark and real world tasks demonstrate the effectiveness of our local smoothness regularizer. We discuss relevant issues and relate our regularizer to previous work.

1 Introduction

Semi-supervised inductive learning concerns the problem of automatically learning a decision rule from a set of both labeled and unlabeled data; it has received a great deal of attention due to the enormous demand from real-world learning tasks ranging from data mining to medical diagnosis [1]. From different perspectives, a number of semi-supervised learning algorithms have been proposed [1],[2], e.g., self-training, co-training, generative models along with the EM algorithm, transductive learning models and graph-based methods. In semi-supervised learning, the ultimate goal is to find a classification function that not only minimizes classification errors on the labeled training data but is also compatible with the input distribution, as judged by its values on unlabeled data. To work towards this goal, unlabeled data can be exploited to discover how the data is distributed in the input space, and the information acquired from the unlabeled data is then used to find a good classifier. As a generic framework, regularization has been used in semi-supervised learning to exploit unlabeled data by building on the well-known semi-supervised learning assumptions, i.e., the smoothness, cluster, and manifold assumptions [1]; this has led to a number of regularizers applicable to various semi-supervised learning paradigms, e.g., measure-based [3], manifold-based [4], information-based [5] and entropy-based [6] regularizers, harmonic mixtures [7] and graph-based regularization [8]. As a generic ensemble learning framework [9], boosting works by sequentially constructing a linear combination of base learners that concentrate on difficult examples, which has led to great success in supervised learning. Recently, boosting has been extended to semi-supervised learning with different strategies. Within the universal optimization framework of margin cost functionals [9], semi-supervised MarginBoost [10] and ASSEMBLE [11] were proposed by introducing "pseudo-classes" for unlabeled data to characterize difficult unlabeled examples. In essence, such extensions work in a self-training way: the unlabeled data are assigned pseudo-class labels based on the ensemble learner constructed so far, and in turn the pseudo-class labels obtained are used to find a new proper learner to be added to the ensemble. The co-training idea was also extended to boosting, e.g. CoBoost [12].
More recently, the Agreement Boost algorithm [13] has been developed, with a theoretical justification of the benefits of using multiple boosting learners within the co-training framework. To our knowledge, however, none of the aforementioned semi-supervised boosting algorithms takes local smoothness constraints into account. In this paper, we exploit the local smoothness constraints among data by introducing a regularizer into semi-supervised boosting. Based on the universal optimization framework of margin cost functionals for boosting [9], our regularizer is applicable to existing semi-supervised boosting algorithms [10]-[13]. Experimental results on synthetic, benchmark and real world classification tasks demonstrate the effectiveness of our regularizer in semi-supervised boosting learning. In the remainder of this paper, Sect. 2 briefly reviews semi-supervised boosting learning and presents our regularizer. Sect. 3 reports experimental results and the behaviors of regularized semi-supervised boosting algorithms. Sect. 4 discusses relevant issues and the last section draws conclusions.

2 Semi-supervised boosting learning and regularization

In this section, we first briefly review the basic idea behind existing semi-supervised boosting algorithms within the universal optimization framework of margin cost functionals [9] to make the paper self-contained. We then present our Regularized Boost based on this previous work.

2.1 Semi-supervised boosting learning

Given a training set, $S = L \cup U$, of $|L|$ labeled examples, $L = \{(x_1, y_1), \cdots, (x_{|L|}, y_{|L|})\}$, and $|U|$ unlabeled examples, $U = \{x_{|L|+1}, \cdots, x_{|L|+|U|}\}$, we wish to construct an ensemble learner $F(x) = \sum_t w_t f_t(x)$, where the $w_t$ are coefficients of the linear combination and $f_t(x)$ is a base learner, so that $P(F(x) \neq y)$ is small. Since no label information is available for the unlabeled data, the critical idea underlying semi-supervised boosting is to introduce a pseudo-class [11] or pseudo-margin [10] concept for unlabeled data within the universal optimization framework [9]. As in supervised learning, e.g. [14], a multi-class problem can be converted into binary classification form. Our presentation below therefore focuses on the binary classification problem only, i.e. $y \in \{-1, 1\}$. The pseudo-class of an unlabeled example $x$ is typically defined as $y = \mathrm{sign}[F(x)]$ [11], and its corresponding pseudo-margin is $yF(x) = |F(x)|$ [10],[11]. Within the universal optimization framework of margin cost functionals [9], semi-supervised boosting learning seeks $F$ such that the cost functional
$$C(F) = \sum_{x_i \in L} \alpha_i C[y_i F(x_i)] + \sum_{x_i \in U} \alpha_i C[|F(x_i)|] \qquad (1)$$
is minimized for some non-negative, monotonically decreasing cost function $C: \mathbb{R} \to \mathbb{R}$ and weights $\alpha_i \in \mathbb{R}^+$. In the universal optimization framework [9], constructing an ensemble learner amounts to choosing a base learner $f(x)$ that maximizes the inner product $-\langle \nabla C(F), f \rangle$. For unlabeled data, a subgradient of $C(F)$ in (1) has been introduced to handle its non-differentiability [11]; unlabeled data with pseudo-class labels can then be treated in the same way as labeled data in the optimization problem. As a result, finding a proper $f(x)$ amounts to maximizing
$$-\langle \nabla C(F), f \rangle = \sum_{i: f(x_i) \neq y_i} \alpha_i C'[y_i F(x_i)] - \sum_{i: f(x_i) = y_i} \alpha_i C'[y_i F(x_i)], \qquad (2)$$
where $y_i$ is the true class label if $x_i$ is a labeled example, and its pseudo-class label otherwise. After dividing both sides of (2) by $\sum_{i \in S} \alpha_i C'[y_i F(x_i)]$, finding $f(x)$ to maximize $-\langle \nabla C(F), f \rangle$ is equivalent to searching for $f(x)$ to minimize
$$\sum_{i: f(x_i) \neq y_i} D(i) - \sum_{i: f(x_i) = y_i} D(i) = 2 \sum_{i: f(x_i) \neq y_i} D(i) - 1, \qquad (3)$$
where $D(i)$, for $1 \leq i \leq |L| + |U|$, is the empirical data distribution defined as $D(i) = \frac{\alpha_i C'[y_i F(x_i)]}{\sum_{k \in S} \alpha_k C'[y_k F(x_k)]}$. From (3), a proper base learner $f(x)$ can be found by minimizing the weighted error $\sum_{i: f(x_i) \neq y_i} D(i)$. Thus, any boosting algorithm designed for supervised learning [9] becomes applicable to semi-supervised learning with the aforementioned treatment.
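As an illustration, the following is a minimal sketch of one round of this pseudo-label boosting scheme with $C(\rho) = e^{-\rho}$ (the cost used in the experiments later in the paper). The decision-stump base learner and all function names are our own illustrative choices, not the ASSEMBLE implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def semisup_boost_round(X, y_lab, F, alpha=1.0):
    """One round of pseudo-label boosting. X: (n, d) inputs; y_lab: labels in
    {-1, +1} for labeled rows and 0 (unknown) for unlabeled rows;
    F: current ensemble scores F(x_i). Returns (base learner, its weight)."""
    labeled = y_lab != 0
    y = np.where(labeled, y_lab, np.sign(F))          # pseudo-class on U
    y[y == 0] = 1                                     # break ties at F(x) = 0
    margins = y * F                                   # equals |F(x)| on U
    D = alpha * np.exp(-margins)                      # |C'(rho)| for C(rho)=e^{-rho}
    D /= D.sum()                                      # empirical distribution, eq. (3)
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
    eps = D[stump.predict(X) != y].sum()              # weighted error
    w = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))   # AdaBoost-style weight
    return stump, w
```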
For co-training based semi-supervised boosting algorithms [12],[13], the above semi-supervised boosting procedure is applied to each view of the data to build up a component ensemble learner. Instead of self-training, the pseudo-class label of an unlabeled example for a given view is determined by the ensemble learners trained on the other views of that example. For example, Agreement Boost [13] defines the co-training cost functional as
$$C(F^1, \cdots, F^J) = \sum_{j=1}^{J} \sum_{x_i \in L} C[y_i F^j(x_i)] + \eta \sum_{x_i \in U} C[-V(x_i)]. \qquad (4)$$
Here $J$ views of the data are used to train $J$ ensemble learners, $F^1, \cdots, F^J$, respectively. The disagreement of the $J$ ensemble learners on an unlabeled example $x_i \in U$ is $V(x_i) = \frac{1}{J}\sum_{j=1}^{J} [F^j(x_i)]^2 - \left(\frac{1}{J}\sum_{j=1}^{J} F^j(x_i)\right)^2$, and the weight $\eta \in \mathbb{R}^+$. From the standpoint of view $j$, the pseudo-class label of an unlabeled example $x_i$ is determined by $y_i = \mathrm{sign}\left(\frac{1}{J}\sum_{l=1}^{J} F^l(x_i) - F^j(x_i)\right)$. Thus, the minimization of (3) with such pseudo-class labels yields a proper base learner $f^j(x)$ to be added to $F^j(x)$.

2.2 Boosting with regularization

Motivated by the work on the use of regularization in semi-supervised learning [3]-[8], we introduce a local smoothness regularizer into semi-supervised boosting based on the universal optimization framework of margin cost functionals [9], which results in a novel objective function:
$$T(F, f) = -\langle \nabla C(F), f \rangle - \sum_{i: x_i \in S} \lambda_i R(i), \qquad (5)$$
where $\lambda_i \in \mathbb{R}^+$ is a weight, determined by the input distribution as discussed in Sect. 4, associated with each training example, and the local smoothness around an example $x_i$ is measured by
$$R(i) = \sum_{j: x_j \in S,\, j \neq i} W_{ij}\, \tilde{C}(-I_{ij}). \qquad (6)$$
Here, $I_{ij}$ is a class label compatibility function for two different examples $x_i, x_j \in S$, defined as $I_{ij} = |y_i - y_j|$, where $y_i$ and $y_j$ are the true labels of $x_i$ and $x_j$ for labeled data and their pseudo-class labels otherwise. $\tilde{C}: \mathbb{R} \to \mathbb{R}$ is a monotonically decreasing function derived from the cost function adopted in (1), such that $\tilde{C}(0) = 0$. $W_{ij}$ is an affinity measure defined by $W_{ij} = \exp(-\|x_i - x_j\|^2 / 2\sigma^2)$, where $\sigma$ is a bandwidth parameter. To find a proper base learner $f(x)$, we now need to maximize $T(F, f)$ in (5), so as to minimize not only the misclassification errors as before (see Sect. 2.1) but also the local class label incompatibility cost for smoothness.
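A small sketch of the regularizer in (6) with $\tilde{C}(\rho) = e^{-\rho} - 1$ (the choice used in the experiments below, which satisfies $\tilde{C}(0) = 0$); the vectorized form and all names are our own illustration.

```python
import numpy as np

def smoothness_regularizer(X, y, sigma=1.0):
    """R(i) = sum_{j != i} W_ij * C_tilde(-|y_i - y_j|), eq. (6), with
    W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) and C_tilde(rho) = e^{-rho} - 1.
    y holds true labels for labeled points and pseudo-class labels otherwise."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared dists
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                              # exclude j == i
    I = np.abs(y[:, None] - y[None, :])                   # label incompatibility
    C_tilde = np.exp(I) - 1.0                             # C_tilde(-I_ij) = e^{I_ij} - 1
    return (W * C_tilde).sum(axis=1)                      # one R(i) per example
```

Note that $R(i) \geq 0$ always holds here, as required by the derivation that follows.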
In order to use the objective function in (5) for boosting learning, we need the new empirical data distribution and a termination condition. Inserting (2) into (5) yields
$$T(F, f) = \sum_{i: f(x_i) \neq y_i} \alpha_i C'[y_i F(x_i)] - \sum_{i: f(x_i) = y_i} \alpha_i C'[y_i F(x_i)] - \sum_{i: x_i \in S} \lambda_i R(i). \qquad (7)$$
Since an appropriate cost function used in (1) is non-negative and monotonically decreasing, $C'[y_i F(x_i)]$ is always negative, and $R(i)$ is non-negative by its definition in (6). Therefore, we can define our empirical data distribution as
$$\tilde{D}(i) = \frac{\alpha_i C'[y_i F(x_i)] - \lambda_i R(i)}{\sum_{k: x_k \in S} \left(\alpha_k C'[y_k F(x_k)] - \lambda_k R(k)\right)}, \qquad 1 \leq i \leq |L| + |U|. \qquad (8)$$
$\tilde{D}(i)$ is always non-negative, given the definitions of the cost function in (1) and of $R(i)$ in (6). Applying (8) to (7), with a derivation similar to that described in Sect. 2.1, we can show that finding a proper base learner $f(x)$ to maximize $T(F, f)$ is equivalent to finding $f(x)$ to minimize
$$\sum_{i: f(x_i) \neq y_i} \tilde{D}(i) - \sum_{i: f(x_i) = y_i} \tilde{D}(i) - 2 \sum_{i: f(x_i) = y_i} \frac{\lambda_i R(i)}{\sum_{k: x_k \in S} \left(\alpha_k C'[y_k F(x_k)] - \lambda_k R(k)\right)},$$
which is equal to
$$2 \underbrace{\sum_{i: f(x_i) \neq y_i} \tilde{D}(i)}_{\text{misclassification errors}} + 2 \underbrace{\sum_{i: f(x_i) = y_i} \frac{-\lambda_i R(i)}{\sum_{k: x_k \in S} \left(\alpha_k C'[y_k F(x_k)] - \lambda_k R(k)\right)}}_{\text{local class label incompatibility}} - 1. \qquad (9)$$
In (9), the first term refers to misclassification errors, while the second term corresponds to the class label incompatibility of a data point with its nearby data points, even if this data point itself is classified correctly. In contrast to (3), finding a proper base learner $f(x)$ in our Regularized Boost now requires minimizing not only the misclassification errors but also the local class label incompatibility. Accordingly, a new termination condition for our Regularized Boost is derived from (9) as $\varepsilon \geq \frac{1}{2}$, where
$$\varepsilon = \sum_{i: f(x_i) \neq y_i} \tilde{D}(i) + \sum_{i: f(x_i) = y_i} \frac{-\lambda_i R(i)}{\sum_{k: x_k \in S} \left(\alpha_k C'[y_k F(x_k)] - \lambda_k R(k)\right)}.$$
Once an optimal base learner $f_{t+1}(x)$ is found at step $t+1$, we need to choose a proper weight $w_{t+1}$ to form the new ensemble, $F_{t+1}(x) = F_t(x) + w_{t+1} f_{t+1}(x)$. In our Regularized Boost, we choose $w_{t+1} = \frac{1}{2}\log\frac{1-\varepsilon}{\varepsilon}$ by simply treating the pseudo-class labels of unlabeled data the same as the true labels of labeled data, as suggested in [11].
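Putting (8) and the stopping rule together, here is a sketch of the regularized re-weighting step for $C(\rho) = e^{-\rho}$, building on the smoothness_regularizer sketch above; again illustrative, not the authors' code.

```python
import numpy as np

def regularized_distribution(F, y, R, alpha=1.0, lam=0.5):
    """Empirical distribution of eq. (8): the numerator alpha*C'[yF] - lam*R
    is negative (C' = -e^{-rho}), so dividing by its (negative) sum yields
    non-negative weights summing to one."""
    numer = -alpha * np.exp(-y * F) - lam * R
    return numer / numer.sum()

def regularized_error_and_weight(D, correct, y, F, R, alpha=1.0, lam=0.5):
    """Regularized error eps of the new termination rule and the ensemble
    weight w = 0.5*log((1-eps)/eps); boosting stops once eps >= 1/2.
    correct: boolean array with f(x_i) == y_i."""
    Z = (-alpha * np.exp(-y * F) - lam * R).sum()   # the (negative) normalizer
    eps = D[~correct].sum() + (-lam * R[correct] / Z).sum()
    if eps >= 0.5:                                  # termination condition
        return eps, None
    return eps, 0.5 * np.log((1.0 - eps) / eps)
```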
3 Experiments

In this section, we report experimental results on synthetic, benchmark and real data sets. Although our regularizer is applicable to existing semi-supervised boosting algorithms [10]-[13], we mainly apply it to the ASSEMBLE [11], a winning algorithm of the NIPS 2001 Unlabeled Data Competition, on a variety of classification tasks. In addition, our regularizer is also used to train the component ensemble learners of the Agreement Boost [13] on binary classification benchmark tasks, since that algorithm [13] in its original form can cope with binary classification only. In our experiments, we use $C(\rho) = e^{-\rho}$ in (1) and $\tilde{C}(\rho) = C(\rho) - 1$ in (6), and set $\alpha_i = 1$ in (1) and $\lambda_i = \frac{1}{2}$ in (5). For synthetic and benchmark data sets, we always randomly select 20% of the examples as testing data, except where a benchmark data set has a pre-defined training/test split. Accordingly, the remaining examples used as a training set, or those in a pre-defined training set, S, are randomly divided into two subsets, i.e., labeled data (L) and unlabeled data (U); the ratio between labeled and unlabeled data is 1:4 in our experiments. For reliability, each experiment is repeated ten times. To test the effectiveness of our Regularized Boost across different base learners, we perform all experiments with a K-nearest-neighbor (KNN) classifier, a local classifier, and a multi-layer perceptron (MLP), a global classifier; 3NN and a single-hidden-layer MLP are used in our experiments. For comparison, we report results of a semi-supervised boosting algorithm (i.e., ASSEMBLE [11] or Agreement Boost [13]) and its regularized version (i.e., Regularized Boost). In addition, we also provide results of a variant of AdaBoost [14] trained on the labeled data only, for reference. The above experimental method conforms to those used in semi-supervised boosting methods [10]-[13] as well as other empirical studies of semi-supervised learning methods, e.g., [15].

3.1 Synthetic data set

We use a Gaussian mixture model of four components to generate a data set of four categories in the 2-D space; there are 200 examples in each category, as illustrated in Figure 1(a). We wish to test our regularizer on this intuitive multi-class classification task, which has a high optimal Bayes error.

Figure 1: Synthetic data classification task. (a) The data set. (b) Classification results (error rates, %) for AdaBoost, ASSEMBLE and Regularized Boost, with KNN and MLP base learners.

From Figure 1(b), it is observed that the use of unlabeled data improves the performance of AdaBoost, and the use of our regularizer further improves the generalization performance of the ASSEMBLE, achieving an average error rate closer to the optimal Bayes error, no matter what kind of base learner is used. Further inspection via visualization against the ground truth indicates that the use of our regularizer leads to smoother decision boundaries than the original ASSEMBLE, which yields the better generalization performance.

3.2 Benchmark data sets

To assess the performance of our regularizer for semi-supervised boosting algorithms, we perform a series of experiments on benchmark data sets from the UCI machine learning repository [16], without any data transformation. In our experiments, we use the same initialization conditions for all boosting algorithms. Our empirical work suggests that a maximum of 100 boosting steps is sufficient to achieve reasonable performance on these benchmark tasks. Hence, we stop all boosting algorithms at this maximum number of steps, for a sensible comparison. We first apply our regularizer to the ASSEMBLE [11] on five UCI benchmark classification tasks of different categories [16]: BUPA liver disorders (BUPA), Wisconsin Diagnostic Breast Cancer (WDBC), Balance Scale Weight & Distance (BSWD), Car Evaluation Database (CAR), and Optical Recognition of Handwritten Digits (OPTDIGITS), whose data set has been split into fixed training and testing subsets in advance by the data collector.

Table 1: Error rates (mean±dev.)% of AdaBoost, ASSEMBLE and Regularized Boost (RegBoost) with different base learners on five UCI classification data sets.

                    BUPA        WDBC       BSWD       CAR        OPTDIGITS
  KNN  AdaBoost     37.7±3.4    8.3±1.9    22.2±0.9   31.3±1.2   4.9±0.1
       ASSEMBLE     36.1±3.0    4.1±1.0    18.7±0.4   24.4±0.7   3.1±0.5
       RegBoost     34.9±3.1    3.7±2.0    17.4±0.9   23.2±1.1   2.7±0.7
  MLP  AdaBoost     35.1±1.1    9.7±2.0    16.8±2.8   30.6±3.0   6.3±0.2
       ASSEMBLE     31.2±6.7    3.5±0.9    14.4±2.4   20.5±0.9   5.2±0.2
       RegBoost     28.8±5.6    3.2±0.8    13.6±2.6   17.7±1.1   5.0±0.2

Table 1 tabulates the results of the different boosting learning algorithms. It is evident from Table 1 that the use of unlabeled data consistently improves generalization relative to AdaBoost, and that the use of our regularizer in the ASSEMBLE further reduces its error rates on all five data sets, no matter what kind of base learner is used. It is also observed that different base learners yield varying performance across the five data sets; KNN as a base learner yields better performance on the WDBC and OPTDIGITS data sets, whereas the MLP base learner outperforms its KNN counterpart on the other three data sets.
Apparently, the nature of a base learner, e.g. global vs. local classifiers, may determine whether it is suitable for a given classification task. It is worth mentioning that for the OPTDIGITS data set, the lowest error rate achieved by 3NN with the entire training set, i.e., using all 3823 examples as training prototypes, is around 2.2% on the testing set, as reported in the literature [16]. In contrast, the ASSEMBLE [11] on 3NN equipped with our regularizer yields an error rate of 2.7% on average, despite the fact that our Regularized Boost algorithm uses only 765 labeled examples.

Table 2: Error rates (mean±dev.)% of AdaBoost, Agreement Boost and Regularized Boost (RegBoost) on five UCI binary classification data sets.

                    BUPA        WDBC       VOTE       AUSTRALIAN  KR-vs-KP
  AdaBoost-KNN      37.7±3.4    8.3±1.9    9.0±1.5    37.7±1.2    15.6±0.7
  AdaBoost-MLP      35.1±1.1    9.7±2.0    10.6±0.5   21.0±3.4    7.1±0.2
  AgreementBoost    30.4±7.5    3.3±0.7    4.4±0.8    16.7±2.1    6.3±1.3
  RegBoost          28.9±5.8    3.0±0.8    2.8±0.6    15.2±2.8    5.2±1.6

We further apply our regularizer to the Agreement Boost [13]. Due to the limitation of this algorithm [13], we can only use binary classification data sets to test its effectiveness. As a result, we use BUPA and WDBC mentioned above and three additional UCI binary classification data sets [16]: 1984 U.S. Congressional Voting Records (VOTE), Australian Credit Approval (AUSTRALIAN) and Chess End-Game King Rook versus King Pawn (KR-vs-KP). As required by the Agreement Boost [13], the KNN and MLP classifiers are used as base learners to construct two component ensemble learners, without and with our regularizer, corresponding to the original and regularized versions of the Agreement Boost. Table 2 tabulates the results produced by the different boosting algorithms. It is evident from Table 2 that the use of our regularizer in the component ensemble learners always leads the Agreement Boost to improved generalization on the five benchmark tasks, while its original version, trained with labeled and unlabeled data, considerably outperforms AdaBoost trained with labeled data only.

Figure 2: Behaviors of semi-supervised boosting algorithms, the original version vs. the regularized version (error rate, %, vs. number of base learners). (a) The ASSEMBLE with KNN on the OPTDIGITS. (b) The ASSEMBLE with MLP on the OPTDIGITS. (c) The Agreement Boost on the KR-vs-KP.

We investigate the behaviors of regularized semi-supervised boosting algorithms on the two largest data sets, OPTDIGITS and KR-vs-KP. Figure 2 shows the average generalization performance achieved by stopping a boosting algorithm at different boosting steps. From Figure 2, the use of our regularizer, in the ASSEMBLE regardless of the base learner adopted and in the Agreement Boost, always yields faster training. As illustrated in Figures 2(a) and 2(b), the regularized version of the ASSEMBLE with KNN and MLP takes only 22 and 46 boosting steps on average, respectively, to reach the performance of the original ASSEMBLE after 100 boosting steps. Similarly, Figure 2(c) shows that the regularized Agreement Boost takes only 12 steps on average to achieve the performance of its original version after 100 boosting steps.
3.3 Facial expression recognition

Facial expression recognition is a typical semi-supervised learning task, since labeling facial expressions is an extremely expensive process and is very prone to errors due to ambiguities. We test the effectiveness of our regularizer using a facial expression benchmark database, the JApanese Female Facial Expression (JAFFE) database [17], in which 10 female expressers each posed 3 or 4 examples of each of the seven universal facial expressions (anger, disgust, fear, joy, neutral, sadness and surprise), as exemplified in Figure 3(a); 213 pictures of 256×256 pixels were collected in total.

Figure 3: Facial expression recognition on the JAFFE. (a) Exemplar pictures corresponding to the seven universal facial expressions. (b) Classification results (error rates, %): AdaBoost 34.27, ASSEMBLE 32.19, Regularized Boost 26.37.

In our experiments, we first randomly choose 20% of the images (balanced across the seven classes) as testing data; the rest of the images constitute a training set (S), which is randomly split into labeled (L) and unlabeled (U) data of equal size in each trial. We apply independent component analysis and then principal component analysis (PCA) to each image for feature extraction, and use only the first 40 PCA coefficients to form a feature vector. A single-hidden-layer MLP with 30 hidden neurons is used as the base learner. We set a maximum of 1000 boosting rounds for stopping the algorithms if their termination conditions are not met, while the same initialization is used for all boosting algorithms. For reliability, the experiment is repeated 10 times. From Figure 3(b), it is evident that the ASSEMBLE with our regularizer yields a 5.82% error reduction on average; the resulting average error rate of 26.37% is even better than that of some supervised learning methods on the same database, e.g., [18], where around 70% of the images were used to train a convolutional neural network and an average error rate of 31.5% was achieved on the remaining images.

4 Discussions

In this section, we discuss issues concerning our regularizer and relate it to previous work in the context of regularization in semi-supervised learning. As defined in (5), our regularizer has a parameter $\lambda_i$ associated with each training point, which can be used to encode information about the marginal or input distribution $P(x)$ by setting $\lambda_i = \gamma P(x_i)$, where $\gamma$ is a trade-off or regularization parameter. Thus, the use of $\lambda_i$ would make the regularization take effect only in dense regions, although the experiments reported here were carried out with $\lambda_i = \frac{1}{2}$, i.e., under the weak assumption that the data are scattered uniformly throughout the whole space. In addition, (6) uses an affinity metric to measure the proximity of data points and can be extended by incorporating manifold information, if available, into our regularizer. Our local smoothness regularizer plays an important role in re-sampling all training data, labeled and unlabeled, for boosting learning. As uncovered in (9), the new empirical distribution based on our regularizer not only assigns a large probability to a misclassified data point, but may also cause a data point that was classified correctly in the last round of boosting learning yet is located in a "non-smoothing"
region to be assigned a relatively large probability. This distinguishes our approach from existing boosting algorithms, where the distribution for re-sampling training data is determined solely by misclassification errors. For unlabeled data, this effect always works towards the smoothness and cluster assumptions [1], as performed by existing regularization techniques [3]-[8]. For labeled data, it has the effect that labeled data points located in a "non-smoothing" region are more likely to be retained in the next round of boosting learning. As exemplified in Figure 1, such points are often located near boundaries between different classes and are therefore more informative in determining a decision boundary, which is another reason why our regularizer improves the generalization of semi-supervised boosting algorithms. The use of manifold smoothness in a special form of AdaBoost, marginal AdaBoost, has been attempted in [19], where a graph Laplacian regularizer was applied to select base learners through adaptive penalization of base learners according to their decision boundaries and the actual manifold structural information. In essence, the objective of using manifold smoothness in our Regularized Boost is identical to theirs in [19], but we accomplish it in a different way: we encode the manifold smoothness into the empirical data distribution used in boosting algorithms for semi-supervised learning, while their implementation adaptively adjusts the edge offset in the marginal AdaBoost algorithm to achieve a weight decay in the linear combination of base learners [19]. In contrast, our implementation is simpler yet applicable to any boosting algorithm for semi-supervised learning, whereas theirs must be realized via the marginal AdaBoost algorithm, even though their regularized marginal AdaBoost is indeed applicable to both supervised and semi-supervised learning. Compared with existing regularization techniques used in semi-supervised learning, our Regularized Boost is closely related to graph-based semi-supervised learning methods, e.g., [8]. In general, a graph-based method seeks a function that simultaneously satisfies two conditions [2]: a) it should be close to the given labels on the labeled nodes, and b) it should be smooth on the whole graph. In particular, the work in [8] develops a regularization framework that implements this idea by defining global and local consistency terms in its cost function. Similarly, our cost function in (9) has two terms explicitly corresponding to global and local consistency, which resembles theirs [8], although the true labels of labeled data never change during our boosting learning. Nevertheless, a graph-based algorithm is an iterative label propagation process on a graph, where the regularizer is directly involved in label modification over the graph, whereas our Regularized Boost is an iterative process that runs a base learner on various distributions over the training data, where our regularizer simply plays a role in determining those distributions. In general, a graph-based algorithm is applicable to transductive learning only, although it can be combined with other methods, e.g. a mixture model [7], for inductive learning. In contrast, our Regularized Boost is developed for inductive learning.
Finally, it is worth stating that unlike most existing regularization techniques used in semi-supervised learning, e.g., [5],[6], our regularization takes effect on both labeled and unlabeled data, while theirs are based on unlabeled data only.

5 Conclusions

We have proposed a local smoothness regularizer for semi-supervised boosting learning and demonstrated its effectiveness on different types of data sets. In our ongoing work, we are developing a formal analysis to justify the advantage of our regularizer and to explain the behaviors of Regularized Boost, e.g. its fast training, theoretically.

References
[1] Chapelle, O., Schölkopf, B., & Zien, A. (2006) Semi-Supervised Learning. Cambridge, MA: MIT Press.
[2] Zhu, X. (2006) Semi-supervised learning literature survey. Computer Science TR-1530, University of Wisconsin - Madison, U.S.A.
[3] Bousquet, O., Chapelle, O., & Hein, M. (2004) Measure based regularization. In Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press.
[4] Belkin, M., Niyogi, P., & Sindhwani, V. (2004) Manifold regularization: a geometric framework for learning from examples. Technical Report, University of Michigan, U.S.A.
[5] Szummer, M., & Jaakkola, T. (2003) Information regularization with partially labeled data. In Advances in Neural Information Processing Systems 15. Cambridge, MA: MIT Press.
[6] Grandvalet, Y., & Bengio, Y. (2005) Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems 17. Cambridge, MA: MIT Press.
[7] Zhu, X., & Lafferty, J. (2005) Harmonic mixtures: combining mixture models and graph-based methods for inductive and scalable semi-supervised learning. In Proc. Int. Conf. Machine Learning, pp. 1052-1059.
[8] Zhou, D., Bousquet, O., Lal, T., Weston, J., & Schölkopf, B. (2004) Learning with local and global consistency. In Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press.
[9] Mason, L., Bartlett, P., Baxter, J., & Frean, M. (2000) Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. Cambridge, MA: MIT Press.
[10] d'Alché-Buc, F., Grandvalet, Y., & Ambroise, C. (2002) Semi-supervised MarginBoost. In Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press.
[11] Bennett, K., Demiriz, A., & Maclin, R. (2002) Exploiting unlabeled data in ensemble methods. In Proc. ACM Int. Conf. Knowledge Discovery and Data Mining, pp. 289-296.
[12] Collins, M., & Singer, Y. (1999) Unsupervised models for named entity classification. In Proc. SIGDAT Conf. Empirical Methods in Natural Language Processing and Very Large Corpora.
[13] Leskes, B. (2005) The value of agreement, a new boosting algorithm. In Proc. Int. Conf. Algorithmic Learning Theory (LNAI 3559), pp. 95-110, Berlin: Springer-Verlag.
[14] Günther, E., & Pfeiffer, K.P. (2005) Multiclass boosting for weak classifiers. Journal of Machine Learning Research 6:189-210.
[15] Nigam, K., McCallum, A., Thrun, S., & Mitchell, T. (2000) Using EM to classify text from labeled and unlabeled documents. Machine Learning 39:103-134.
[16] Blake, C., Keogh, E., & Merz, C.J. (1998) UCI repository of machine learning databases. University of California, Irvine. [Online] http://www.ics.uci.edu/~mlearn/MLRepository.html
[17] The JAFFE Database. [Online] http://www.kasrl.org/jaffe.html
[18] Fasel, B. (2002) Robust face analysis using convolutional neural networks. In Proc. Int. Conf. Pattern Recognition, vol. 2, pp. 40-43.
[19] Kégl, B., & Wang, L.
(2004) Boosting on manifolds: adaptive regularization of base classifiers. In Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press.
Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Lars Buesing, Wolfgang Maass
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
{lars,maass}@igi.tu-graz.at

Abstract

We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related, or are not related, to some additional target signal $Y_T$. In a biological interpretation, this target signal $Y_T$ (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals.

1 Introduction

The Information Bottleneck (IB) approach [2] allows the investigation of learning algorithms for unsupervised and semi-supervised learning on the basis of clear optimality principles from information theory. Two types of time-varying inputs, X and $Y_T$, are considered. The learning goal is to learn a transformation of X into another signal Y that extracts only those components of X that are related to the relevance signal $Y_T$. In a more global biological interpretation, X might represent, for example, some sensory input, and Y the output of the first cortical processing stage for X. In this article, Y will simply be the spike output of a neuron that receives the spike trains X as inputs. The starting point for our analysis is the first learning rule for IB optimization in this setup, which was recently proposed in [1], [3]. Unfortunately, this learning rule is complicated, restricted to discrete time, and no theoretical analysis of its behavior is feasible. Any online learning rule for IB optimization has to make a number of simplifying assumptions, since true IB optimization can only be carried out in an offline setting. We show here that, with a slightly different set of assumptions than those made in [1] and [3], one arrives at a drastically simpler and intuitively perspicuous online learning rule for IB optimization with spiking neurons. The learning rule in [1] was derived by maximizing the objective function¹ $L_0$:
$$L_0 = -I(X, Y) + \beta I(Y, Y_T) - \gamma D_{KL}(P(Y) \| P(\tilde{Y})), \qquad (1)$$
where $I(\cdot, \cdot)$ denotes the mutual information between its arguments and $\beta$ is a positive trade-off factor. The target signal $Y_T$ was assumed to be given by a spike train. The learning rule from [1] (see [3] for a detailed interpretation) is quite involved and requires numerous auxiliary definitions, hence we do not repeat it here.

¹ The term $D_{KL}(P(Y) \| P(\tilde{Y}))$ denotes the Kullback-Leibler divergence between the distribution $P(Y)$ and a target distribution $P(\tilde{Y})$. This term ensures that the weights remain bounded; it is discussed briefly in [4].
Furthermore, it can only be formulated in discrete time (step size $\Delta t$), for reasons we briefly outline: in the limit $\Delta t \to 0$, the essential contribution to the learning rule, which stems from maximizing the mutual information $I(Y, Y_T)$ between output and target signal, vanishes. This difficulty is rooted in a rather technical assumption, made in Appendix A.4 of [3], concerning the expectation value $\bar{\nu}^k$ at time step $k$ of the neural firing probability $\nu^k$, given the information about the postsynaptic spikes and the target signal spikes up to the preceding time step $k - 1$ (see our detailed discussion in [4])². The restriction to discrete time prevents the application of powerful analytical methods, such as the Fokker-Planck equation, which requires continuous time, to the analysis of the dynamics of the learning rule. In Section 2 of this paper, we propose a much simpler learning rule for IB optimization with spiking neurons, which can also be formulated in continuous time. In contrast to [3], we approximate the critical term $\bar{\nu}^k$ with a linear estimator, under the assumption that X and $Y_T$ are positively correlated. Further simplifications in comparison to [3] are achieved by considering a simpler neuron model (the linear Poisson neuron, see [5]). However, we show through computer simulations in [4] that the resulting simple learning rule performs equally well for the more complex neuron model with refractoriness from [1]-[5]. The learning rule presented here can be analyzed by means of the drift function of the corresponding Fokker-Planck equation. The theoretical results are outlined in Section 3, followed by the consideration of a concrete IB optimization task in Section 4. A link between the presented learning rule and Principal Component Analysis (PCA) is established in Section 5. A more detailed comparison of the learning rule presented here with the one of [3], as well as results of extensive computer tests on common benchmark tasks, can be found in [4].

2 Neuron model and learning rule for IB optimization

We consider a linear Poisson neuron with $N$ synapses of weights $w = (w_1, \ldots, w_N)$. It is driven by the input X, consisting of $N$ spike trains $X_j(t) = \sum_i \delta(t - t_j^i)$, $j \in \{1, \ldots, N\}$, where $t_j^i$ denotes the time of the $i$-th spike at synapse $j$. The membrane potential $u(t)$ of the neuron at time $t$ is given by the weighted sum of the presynaptic activities $\nu(t) = (\nu_1(t), \ldots, \nu_N(t))$:
$$u(t) = \sum_{j=1}^{N} w_j \nu_j(t), \qquad \nu_j(t) = \int_{-\infty}^{t} \epsilon(t - s) X_j(s)\, ds. \qquad (2)$$
The kernel $\epsilon(\cdot)$ models the EPSP of a single spike (in simulations, $\epsilon(t)$ was chosen to be a decaying exponential with a time constant of $\tau_m = 10$ ms). The postsynaptic neuron spikes at time $t$ with probability density $g(t)$:
$$g(t) = \frac{u(t)}{u_0},$$
with $u_0$ being a normalization constant. The postsynaptic spike train is denoted $Y(t) = \sum_i \delta(t - t_i^f)$, with firing times $t_i^f$. We now consider the IB task described in general in [2], which consists of maximizing the objective function $L_{IB}$, in the context of spiking neurons. As in [6], we introduce a further term $L_3$ into the objective function, reflecting the higher metabolic cost for the neuron of maintaining strong synapses; a natural, simple choice is $L_3 = -\gamma \sum_j w_j^2$. Thus the complete objective function $L$ to maximize is:
$$L = L_{IB} + L_3 = -I(X, Y) + \beta I(Y_T, Y) - \gamma \sum_{j=1}^{N} w_j^2. \qquad (3)$$
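For concreteness, here is a minimal discrete-time simulation of this linear Poisson neuron (Euler step dt, exponential EPSP with $\tau_m$ = 10 ms); all function and variable names are our own, not from the paper.

```python
import numpy as np

def simulate_linear_poisson(spikes_in, w, dt=1e-3, tau_m=10e-3, u0=1.0,
                            rng=np.random.default_rng(0)):
    """spikes_in: (T, N) array of 0/1 input spikes; w: (N,) synaptic weights.
    Returns the membrane potential u(t) and the output spike train Y(t)."""
    T, N = spikes_in.shape
    nu = np.zeros(N)                      # presynaptic traces nu_j(t)
    u = np.zeros(T)
    Y = np.zeros(T)
    decay = np.exp(-dt / tau_m)           # exponential EPSP kernel
    for t in range(T):
        nu = nu * decay + spikes_in[t]    # nu_j(t) = int eps(t-s) X_j(s) ds
        u[t] = w @ nu                     # eq. (2)
        # spike with probability density g(t) = u(t)/u0
        if rng.random() < max(u[t], 0.0) / u0 * dt:
            Y[t] = 1.0
    return u, Y
```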
(3) j=1 2 The remedy, proposed in section 3.1 in [3], of replacing the mutual information I(Y, YT ) in L0 by an information rate I(Y, YT )/?t does not solve this problem, as the term I(Y, YT )/?t diverges in the continuous time limit. 2 The objective function L differs slightly from L0 given in (1), which was optimized in [3]; this change turned out to be advantageous for the PCA learning rule given in section 5, without significantly changing the characteristics of the IB learning rule. The online learning rule governing the change of the weights wj (t) at time t is obtained by a gradient ascent of the objective function L: d ?L wj (t) = ? . dt ?wj For small learning rates ? and under the assumption that the presynaptic input X and the target signal YT are stationary processes, the following learning rule can be derived:   d Y (t)?j (t)  wj (t) = ? ? (u(t) ? u(t)) + ? F [YT ](t) ? F [YT ](t) ? ??wj (t), (4) dt u(t)u(t) where the operator (.) denotes the low-pass filter with a time constant ?C (in simulations ?C = 3s), i. e. for a function f :   Z t t?s 1 exp ? f (s)ds. (5) f (t) = ?C ?? ?C The operator F [YT ](t) appearing in (4) is equal to the expectation value of the membrane potential hu(t)iX|YT = E[u(t)|YT ], given the observations (YT (? )|? ? R) of the relevance signal; F is thus closely linked to estimation and filtering theory. For a known joint distribution of the processes X and YT , the operator F could in principal be calculated exactly, but it is not clear how this quantity can be estimated in an online process; thus we look for a simple approximation to F . Under the above assumptions, F is time invariant and can be approximated by a Volterra series (for details see [4]): Z ? Z n X Y hu(t)iX|YT = F [YT ](t) = ? ? ? ?n (t ? t1 , . . . , t ? tn ) YT (ti )dti . (6) n=0 R R i=1 In this article, we concentrate on the situation, where F can be well approximated by its linearization F1 [YT ](t), corresponding to a linear estimator of hu(t)iX|YT . For F1 [YT ](t) we make the following ansatz: Z F [YT ](t) ? F1 [YT ](t) = c ? uT (t) = c ?1 (t ? t1 )YT (t1 )dt1 . (7) R According to (7), F is approximated by a convolution uT (t) of the relevance signal YT and a suitable prefactor c. Assuming positively correlated X and YT , ?1 (t) is chosen to be a non-anticipating decaying exponential exp(?t/?0 )?(t) with a time constant ?0 (in simulations ?0 = 100 ms), where ?(t) is the Heaviside step function. This choice is motivated by the standard models for the impact of neuromodulators (see [7]), thus such a kernel may be implemented in a realistic biological mechanism. It turned out that the choice of ?0 was not critical, it could be varied over a decade ranging from 10 ms to 100 ms. The prefactor c appearing in (7) can be determined from the fact that F1 is the optimal linear estimator of the form given in (7), leading to: c= huT (t), u(t)i . huT (t), uT (t)i The quantity c can be estimated online in the following way: d c(t) = (uT (t) ? uT (t)) [(u(t) ? u(t)) ? c(t)(uT (t) ? uT (t))] . dt Using the above definitions, the resulting learning rule is given by (in vector notation): d Y (t)?(t) w(t) = ? [? (u(t) ? u(t)) + c(t)?(uT (t) ? uT (t))] ? ??w(t). dt u(t)u(t) (8) Equation (8) will be called the spike-based learning rule, as the postsynaptic spike train Y (t) explicitly appears. An accompanying rate-base learning rule can also be derived: ?(t) d w(t) = ? [? (u(t) ? u(t)) + c(t)?(uT (t) ? uT (t))] ? ??w(t). 
dt u0 u(t) 3 (9) 3 Analytical results The learning rules (8) and (9) are stochastic differential equations for the weights wj driven by the processes Y (.), ?j (.) and uT (.), of which the last two are assumed to be stationary with the means h?j (t)i = ?0 and huT (t)i = uT,0 respectively. The evolution of the solutions w(t) to (8) and (9) may be studied via a Master equation for the probability distribution of the weights p(w, t) (see [8]). For small learning rates ?, the stationary distribution p(w) sharply peaks3 at the roots of the drift function A(w) of the corresponding Fokker-Planck equation (the detailed derivation is given in [4]). Thus, for ? ? 1, the temporal evolution of the learning rules (8) and (9) may be studied via the deterministic differential equation: d w ? dt z = A(w) ? =? = N X  1 ?C 0 + ?C 1 w ? ? ??w ? ?0 u 0 z w ?j , (10) (11) j=1 where z is the total weight. The matrix C = ?C 0 + ?C 1 (with the elements Cij ) has two contributions. C 0 is the covariance matrix of the input and the matrix C 1 quantifies the covariance between the activities ?j and the trace uT : 0 Cij = h?i (t), ?j (t)i 1 Cij = h?i (t), uT (t)ihuT (t), ?j (t)i . huT (t), uT (t)i Now the critical points w? of dynamics of (10) are investigated. These critical points, if asymptotically stable, determine the peaks of the stationary distribution p(w) of the weights w; we therefore expect the solutions of the stochastic equations to fluctuate around these fixed points w? . If ? and ? are much larger than one, the term containing the matrix C 0 can be neglected and equation (10) has a unique stable fixed point w? : w? CiT ? CT = h?i (t), uT (t)i . Under this assumption the maximal mutual information between the target signal YT (t) and the output of the neuron Y (t) is obtained by a weight vector w = w? that is parallel to the covariance vector C T . In general, the critical points of equation (10) depend on the eigenvalue spectrum of the symmetric matrix C: If all eigenvalues are negative, the weight vector w ? decays to the lower hard bound 0. In case of at least one positive eigenvalue (which exists if ? is chosen large enough), there is a unique stable fixed point w? : ? w? = b (12) ?u0 ?0 b N X b := bi . i=1 The vector b appearing in (12) is the eigenvector of C corresponding to the largest eigenvalue ?. Thus, a stationary unimodal4 distribution p(w) of the weights w is predicted, which is centered around the value w? . 4 A concrete example for IB optimization A special scenario of interest, that often appears in the literature (see for example [1], [9] and [10]), is the following: The synapses, and subsequently the input spike trains, form M different subgroups 3 It can be shown that the diffusion term in the FP equation scales like O(?), i. e. for small learning rates ?, fluctuations tend to zero and the dynamics can be approximated by the differential equation (10) . 4 Note that p(w) denotes the distribution of the weight vector, not the distribution of a single weight p(wj ). 4 A X 1(t) X 2(t) X N(t) B Output Y(t) Relevance Signal YT(t) C D Figure 1: A The basic setup for the Information Bottleneck optimization. B-D Numerical and analytical results for the IB optimization task described in section 4. The temporal evolution of P the average weights w ?l = 1/M j?Gl wj of the four different synaptic subgroups Gl are shown. B The performance of the spike-based rule (8). 
4 A concrete example for IB optimization

A special scenario of interest, which often appears in the literature (see for example [1], [9] and [10]), is the following: The synapses, and subsequently the input spike trains, form M different subgroups G_l, l ∈ {1, . . . , M}, of the same size N/M ∈ ℕ. The spike trains X_j and X_k, j ≠ k, are statistically independent if they belong to different subgroups; within a subgroup there is a homogeneous covariance term C⁰_{jk} = c_l, j ≠ k, for j, k ∈ G_l, which can be due either to spike-spike correlations or to correlations in rate modulations. The covariance between the target signal Y_T and the spike trains X_j is homogeneous within a subgroup.

As a numerical example, we consider in figure 1 a modification of the IB task presented in figure 2 of [1]. The N = 100 synapses form M = 4 subgroups G_l = {25(l − 1) + 1, . . . , 25l}, l ∈ {1, . . . , 4}. Synapses in G1 receive Poisson spike trains of constant rate ν_0 = 20 Hz, which are mutually spike-spike correlated with a correlation coefficient⁵ of 0.5. The same holds for the spike trains of G2. Spike trains for G3 and G4 are uncorrelated Poisson trains with a common rate modulation, which is equal to low-pass filtered white noise (cut-off frequency 5 Hz) with mean ν_0 and standard deviation (SD) σ = ν_0/2. The rate modulations for G3 and G4 are however independent (though identically distributed). Two spike trains from different synapse subgroups are statistically independent. The target signal Y_T was chosen to be the sum of two Poisson trains. The first is of constant rate ν_0 and has spike-spike correlations with G1 of coefficient 0.5; the second is a Poisson spike train with the same rate modulation as the spike trains of G3, superimposed by additional white noise of SD 2 Hz. Furthermore, the target signal was turned off during random intervals⁶.

The resulting evolution of the weights is shown in figure 1, illustrating the performance of the spike-based rule (8) as well as of the rate-based rule (9). As expected, the weights of G1 and G3 are potentiated, as Y_T has mutual information with the corresponding part of the input. The synapses of G2 and G4 are depressed. The analytical result for the stable fixed point w* obtained from (12) is shown as dashed lines and is in good agreement with the numerical results. Furthermore, the trajectory of the solution w̃(t) to the deterministic equation (10) is plotted. The presented concrete IB task was slightly changed from the one presented in [1], because for the setting used here the largest eigenvalue λ of C and its corresponding eigenvector b can be calculated analytically. The simulation results for the original setting in [1] can also be reproduced with the simpler rules (8) and (9) (not shown).

[Figure 1 has four panels: A, a schematic of the setup with inputs X_1(t), . . . , X_N(t), output Y(t) and relevance signal Y_T(t); B-D, weight trajectories.] Figure 1: A The basic setup for the Information Bottleneck optimization. B-D Numerical and analytical results for the IB optimization task described in section 4. The temporal evolution of the average weights w̄_l = (1/M) Σ_{j∈G_l} w_j of the four different synaptic subgroups G_l is shown. B The performance of the spike-based rule (8). The highest trajectory corresponds to w̄_1; it stays close to its analytically predicted fixed-point value obtained from (12), which is visualized by the upper dashed line. The trajectory just below belongs to w̄_3, for which the fixed-point value is also plotted as a dashed line. The other two trajectories, w̄_2 and w̄_4, decay and eventually fluctuate above the predicted value of zero. C The performance of the rate-based rule (9); results are analogous to the ones of the spike-based rule. D Simulation of the deterministic equation (10).

⁵ Spike-spike correlated Poisson spike trains were generated according to the method outlined in [9].
⁶ These intervals of silence were modeled as random telegraph noise with a time constant of 200 ms and an overall probability of silence of 0.5.
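The spike-spike correlated inputs used above can be produced by thinning a common "mother" Poisson train; this is one standard construction for a prescribed pairwise correlation coefficient, and we have not verified that it reproduces the method of [9] in detail. A sketch with our own names:

```python
import numpy as np

def correlated_poisson(n_trains, rate, corr, dt, duration, rng):
    """Pairwise-correlated Poisson trains by thinning a shared mother train.

    Each train keeps every mother spike independently with probability corr;
    the mother rate is rate/corr, so every train has the target rate and any
    pair has a count correlation of about corr in small bins.
    """
    n_steps = int(round(duration / dt))
    mother = rng.random(n_steps) < (rate / corr) * dt
    keep = rng.random((n_steps, n_trains)) < corr
    return mother[:, None] & keep

# e.g. the 25 synapses of G1 above: 20 Hz, correlation 0.5, 10 s at 1 ms bins
X_G1 = correlated_poisson(25, 20.0, 0.5, 1e-3, 10.0, np.random.default_rng(0))
```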
5 Relevance-modulated PCA with spiking neurons

The presented learning rules (8) and (9) exhibit a close relation to Principal Component Analysis (PCA). A learning rule which enables the linear Poisson neuron to extract principal components from the input X(·) can be derived by maximizing the following objective function:
\[ L_{PCA} = -L_{IB} - \gamma \sum_{j=1}^{N} w_j^2 = +I(X, Y) - \beta I(Y_T, Y) - \gamma \sum_{j=1}^{N} w_j^2, \qquad (13) \]
which just differs from (3) by a change of sign in front of L_IB. The resulting learning rule is in close analogy to (8):
\[ \frac{d}{dt} w(t) = \eta\, \frac{Y(t)\,\nu(t)}{u(t)\,\bar{u}(t)} \Big[ \big(u(t) - \bar{u}(t)\big) - c(t)\,\beta \big(u_T(t) - \bar{u}_T(t)\big) \Big] - \eta\gamma\, w(t). \qquad (14) \]
The corresponding rate-based version can also be derived. Without the trace u_T(·) of the target signal, it can be seen that the solution w̃(t) of the deterministic equation corresponding to (14) (which is of the same form as (10) with the obvious sign changes) converges to an eigenvector of the covariance matrix C⁰. Thus, for β = 0 we expect the learning rule (14) to perform PCA for small learning rates η. The rule (14) without the relevance signal is comparable to other PCA rules, e.g. the covariance rule (see [11]) for non-spiking neurons. The side information given by the relevance signal Y_T(·) can be used to extract specific principal components from the input; thus we call this paradigm relevance-modulated PCA.

Before we consider a concrete example for relevance-modulated PCA, we want to point out a further application of the learning rule (14). The target signal Y_T can also be used to extract different components from the input with different neurons (see figure 2). Consider m neurons receiving the same input X. These neurons have the outputs Y_1(·), . . . , Y_m(·), target signals Y_T^1(·), . . . , Y_T^m(·) and weight vectors w_1(t), . . . , w_m(t), the latter evolving according to (14). In order to prevent all weight vectors from converging towards the same eigenvector of C⁰ (the principal component), the target signal Y_T^i for neuron i is chosen to be the sum of all output spike trains except Y_i:
\[ Y_T^i(t) = \sum_{j=1,\, j \neq i}^{m} Y_j(t). \qquad (15) \]
If one weight vector w_i(t) is already close to the eigenvector e_k of C⁰, then by means of (15) the basins of attraction of e_k for the other weight vectors w_j(t), j ≠ i, are reduced (or even vanish, depending on the value of β). It is therefore less likely (or impossible) that they also converge to e_k. In practice, this setup is sufficiently robust if only a small number (≈ 4) of different components is to be extracted and if the differences between the eigenvalues λ_i of these principal components are not too big⁷. For the PCA learning rule, the time constant τ_0 of the kernel κ_1 (see (7)) had to be chosen smaller than for the IB tasks in order to obtain good performance; we used τ_0 = 10 ms in simulations. This is in the range of time constants for IPSPs. Hence, the signals Y_T^i could probably be implemented via lateral inhibition.

⁷ Note that the input X may well exhibit a much larger number of principal components. However it is only possible to extract a limited number of them by different neurons at the same time.

The learning rule considered in [3] displayed a close relation to Independent Component Analysis (ICA). Because of the linear neuron model used here and the linearization of further terms in the derivation, the resulting learning rule (14) performs PCA instead of ICA. The results of a numerical example are shown in figure 2. The m = 3 neurons for the regular PCA experiment receive the same input X and their weights change according to (14). The weights and input spike trains are grouped into four subgroups G1, . . . , G4, as for the IB optimization discussed in section 4.
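In a discrete-time, rate-based caricature, the interplay of rule (14) with the lateral targets (15) can be written in a few lines; the Euler discretization, the absorption of the low-pass terms into centered variables, and all names below are our simplifications, not the model of the paper:

```python
import numpy as np

def pca_with_lateral_targets(X, m=3, eta=1e-3, beta=5.0, gamma=1e-2,
                             steps=50_000, rng=None):
    """m linear units learning with a caricature of rule (14) and targets (15)."""
    rng = rng or np.random.default_rng(0)
    Xc = X - X.mean(axis=0)            # centered inputs stand in for u - u_bar
    T, N = Xc.shape
    W = rng.uniform(0.0, 0.1, size=(m, N))
    for t in range(steps):
        x = Xc[t % T]
        u = W @ x                      # outputs of the m linear neurons
        u_T = u.sum() - u              # eq. (15): summed output of the others
        # Hebbian drive minus the beta-weighted lateral-target term, plus decay
        W += eta * (np.outer(u - beta * u_T, x) - gamma * W)
        W = np.clip(W, 0.0, None)      # hard lower bound at 0, as in the paper
    return W
```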
[Figure 2 has six panels: A, a schematic of the PCA setup with inputs X_1(t), . . . , X_N(t) feeding m neurons with outputs Y_1(t), . . . , Y_m(t); B-F, weight trajectories.] Figure 2: A The basic setup for the PCA task: The m different neurons receive the same input X and are expected to extract different principal components of it. B-F The temporal evolution of the average subgroup weights w̄_l = (1/25) Σ_{j∈G_l} w_j for the groups G1 (black solid line), G2 (light gray solid line) and G3 (dotted line). B-C Results for the relevance-modulated PCA task: neuron 1 (fig. B) specializes on G2 and neuron 2 (fig. C) on subgroup G3. D-F Results for the regular PCA task: neuron 1 (fig. D) specializes on G1, neuron 2 (fig. E) on G2 and neuron 3 (fig. F) on G3.

The only difference is that all groups (except for G4) receive spike-spike correlated Poisson spike trains, with a correlation coefficient for the groups G1, G2, G3 of 0.5, 0.45, 0.4 respectively. Group G4 receives uncorrelated Poisson spike trains. As can be seen in figure 2 D to F, the different neurons specialize on different principal components, corresponding to potentiated synaptic subgroups G1, G2 and G3 respectively. Without the relevance signals Y_T^i(·), all neurons tend to specialize on the principal component corresponding to G1 (not shown).

As a concrete example for relevance-modulated PCA, we consider the above setup with slight modifications: Now we want m = 2 neurons to extract the components G2 and G3 from the input X, and not the principal component G1. This is achieved with an additional relevance signal Y_T⁰, which is the same for both neurons and has spike-spike correlations with G2 and G3 of 0.45 and 0.4. We add the term δ·I(Y, Y_T⁰) to the objective function (13), where δ is a positive trade-off factor. The resulting learning rule has exactly the same structure as (14), with an additional term due to Y_T⁰. The numerical results are presented in figure 2 B and C, showing that it is possible in this setup to explicitly select the principal components that are extracted (or not extracted) by the neurons.

6 Discussion

We have introduced and analyzed a simple and perspicuous rule that enables spiking neurons to perform IB optimization in an online manner. Our simulations show that this rule works as well as the substantially more complex learning rule that had previously been proposed in [3]. It also performs well for more realistic neuron models, as indicated in [4]. We have shown that the convergence properties of our simplified IB rule can be analyzed with the help of the Fokker-Planck equation (alternatively one may also use the theoretical framework described in A.2 in [12] for its analysis). The investigation of the weight vectors to which this rule converges reveals interesting relationships to PCA. Apparently, very little is known about learning rules that enable spiking neurons to extract multiple principal components from an input stream (a discussion of a basic learning rule performing PCA is given in chapter 11.2.4 of [5]). We have demonstrated both analytically and through simulations that a slight variation of our new learning rule performs PCA. Our derivation of this rule within the IB framework opens the door to new variations of PCA where preferentially those components are extracted from a high-dimensional input stream that are (or are not) related to some external relevance variable.
We expect that a further investigation of such methods will shed light on the unknown principles of unsupervised and semi-supervised learning that might shape and constantly retune the output of lower cortical areas to intermediate and higher cortical areas. The learning rule that we have proposed might in principle be able to extract from high-dimensional sensory input streams X those components that are related to other sensory modalities or to internal expectations and goals. Quantitative biological data on the precise way in which relevance signals Y_T (such as for example dopamine) might reach neurons in the cortex and modulate their synaptic plasticity are still missing. But it is fair to assume that these signals reach the synapse in a low-pass filtered form of the type u_T that we have assumed for our learning rules. From that perspective one can view the learning rules that we have derived (in contrast to the rules proposed in [3]) as local learning rules.

Acknowledgments

Written under partial support by the Austrian Science Fund FWF, project # P17229, project # S9102 and project # FP6-015879 (FACETS) of the European Union.

References

[1] S. Klampfl, R. A. Legenstein, and W. Maass. Information bottleneck optimization and independent component extraction with spiking neurons. In Proc. of NIPS 2006, Advances in Neural Information Processing Systems, volume 19. MIT Press, 2007.
[2] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
[3] S. Klampfl, R. Legenstein, and W. Maass. Spiking neurons can learn to solve information bottleneck problems and to extract independent components. Neural Computation, 2007. In press.
[4] L. Buesing and W. Maass. Journal version. 2007. In preparation.
[5] W. Gerstner and W. M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[6] Taro Toyoizumi, Jean-Pascal Pfister, Kazuyuki Aihara, and Wulfram Gerstner. Optimality model of unsupervised spike-timing dependent plasticity: synaptic memory and weight distribution. Neural Computation, 19(3):639-671, 2007.
[7] Eugene M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb. Cortex, page bhl152, 2007.
[8] H. Risken. The Fokker-Planck Equation. Springer, 3rd edition, 1996.
[9] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through non-linear temporally asymmetric hebbian plasticity. Journal of Neurosci., 23:3697-3714, 2003.
[10] H. Meffin, J. Besson, A. N. Burkitt, and D. B. Grayden. Learning the structure of correlated synaptic subgroups using stable and competitive spike-timing-dependent plasticity. Physical Review E, 73, 2006.
[11] T. J. Sejnowski and G. Tesauro. The Hebb rule for synaptic plasticity: algorithms and implementations. In J. H. Byrne and W. O. Berry, editors, Neural Models of Plasticity, pages 94-103. Academic Press, 1989.
[12] N. Intrator and L. N. Cooper. Objective function formulation of the BCM theory of visual cortical plasticity: statistical connections, stability conditions. Neural Networks, 5:3-17, 1992.
Predicting human gaze using low-level saliency combined with face detection

Jonathan Harel, Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, harel@klab.caltech.edu
Moran Cerf, Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, moran@klab.caltech.edu
Wolfgang Einhäuser, Institute of Computational Science, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland, wolfgang.einhaeuser@inf.ethz.ch
Christof Koch, Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, koch@klab.caltech.edu

Abstract
Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high-level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model's predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.

1 Introduction
Although understanding attention is interesting purely from a scientific perspective, there are numerous applications in engineering, marketing and even art that can benefit from the understanding of both attention per se, and the allocation of resources for attention and eye movements. One accessible correlate of human attention is the fixation pattern in scanpaths [1], which has long been of interest to the vision community [2]. Commonalities between different individuals' fixation patterns allow computational models to predict where people look, and in which order [3]. There are several models for predicting observers' fixations [4], some of which are inspired by putative neural mechanisms. A frequently referenced model for fixation prediction is the Itti et al. saliency map model (SM) [5]. This "bottom-up" approach is based on contrasts of intrinsic image features such as color, orientation, intensity, flicker, motion and so on, without any explicit information about higher order scene structure, semantics, context or task-related ("top-down") factors, which may be crucial for attentional allocation [6]. Such a bottom-up saliency model works well when higher order semantics are reflected in low-level features (as is often the case for isolated objects, and even for reasonably cluttered scenes), but tends to fail if other factors dominate: e.g., in search tasks [7, 8], strong contextual effects [9], or in free-viewing of images without clearly isolated objects, such as forest scenes or foliage [10]. Here, we test how images containing faces - ecologically highly relevant objects - influence variability of scanpaths across subjects. In a second step, we improve the standard saliency model by adding a "face channel" based on an established face detector algorithm.
Although there is an ongoing debate regarding the exact mechanisms which underlie face detection, there is no argument that a normal subject (in contrast to autistic patients) will not interpret a face purely as a reddish blob with four lines, but as a much more significant entity ([11, 12]). In fact, there is mounting evidence of infants' preference for face-like patterns before they can even consciously perceive the category of faces [13], which is crucial for emotion and social processing ([13, 14, 15, 16]). Face detection is a well investigated area of machine vision. There are numerous computer-vision models for face detection with good results ([17, 18, 19, 20]). One widely used model for face recognition is the Viola & Jones [21] feature-based template matching algorithm (VJ). There have been previous attempts to incorporate face detection into a saliency model. However, they have either relied on biasing a color channel toward skin hue [22] (thus being ineffective in many cases and not face-selective per se), or they have suffered from lack of generality [23]. We here propose a system which combines the bottom-up saliency map model of Itti et al. [5] with the Viola & Jones face detector. The contributions of this study are: (1) Experimental data showing that subjects exhibit significantly less variable scanpaths when viewing natural images containing faces, marked by a strong tendency to fixate on faces early. (2) A novel saliency model which combines a face detector with intensity, color, and orientation information. (3) Quantitative results on two versions of this saliency model, including one extended from a recent graph-based approach, which show that, compared to previous approaches, it better predicts subjects' fixations on images with faces, and predicts as well otherwise.

2 Methods
2.1 Experimental procedures
Seven subjects viewed a set of 250 images (1024 × 768 pixels) in a three-phase experiment. 200 of the images included frontal faces of various people; 50 images contained no faces but were otherwise identical, allowing a comparison of viewing a particular scene with and without a face. In the first ("free-viewing") phase of the experiment, 200 of these images (the same subset for each subject) were presented to subjects for 2 s, after which they were instructed to answer "How interesting was the image?" using a scale of 1-9 (9 being the most interesting). Subjects were not instructed to look at anything in particular; their only task was to rate the entire image. In the second ("search") phase, subjects viewed another 200 image subset in the same setup, only this time they were initially presented with a probe image (either a face, or an object in the scene: banana, cell phone, toy car, etc.) for 600 ms, after which one of the 200 images appeared for 2 s. They were then asked to indicate whether that image contained the probe. Half of the trials had the target probe present. In half of those the probe was a face. Early studies suggest that there should be a difference between free-viewing of a scene, and task-dependent viewing of it [2, 4, 6, 7, 24]. We used the second task to test if there are any differences in the fixation orders and viewing patterns between free-viewing and task-dependent viewing of images with faces. In the third phase, subjects performed a 100-image recognition memory task where they had to answer with y/n whether they had seen the image before. 50 of the images were taken from the experimental set and 50 were new. Subjects'
mean performance was 97.5% correct, verifying that they were indeed alert during the experiment. The images were introduced as "regular images that one can expect to find in an everyday personal photo album". Scenes were indoor and outdoor still images (see examples in Fig. 1). Images included faces in various skin colors, age groups, and positions (no image had the face at the center, as this was the starting fixation location in all trials). A few images had face-like objects (see balloon in Fig. 1, panel 3), animal faces, and objects that had irregular faces in them (masks, the Egyptian sphinx face, etc.). Faces also varied in size (percentage of the entire image). The average face was 5% ± 1% (mean ± s.d.) of the entire image - between 1° and 5° of the visual field; we also varied the number of faces in the image between 1-6, with a mean of 1.1 ± 0.48. Image order was randomized throughout, and subjects were naïve to the purpose of the experiment. Subjects fixated on a cross in the center before each image onset. Eye-position data were acquired at 1000 Hz using an Eyelink 1000 (SR Research, Osgoode, Canada) eye-tracking device. The images were presented on a CRT screen (120 Hz), using Matlab's Psychophysics and Eyelink toolbox extensions ([25, 26]). Stimulus luminance was linear in pixel values. The distance between the screen and the subject was 80 cm, giving a total visual angle for each image of 28° × 21°. Subjects used a chin-rest to stabilize their head. Data were acquired from the right eye alone. All subjects had uncorrected normal eyesight.

Figure 1: Examples of stimuli during the "free-viewing" phase. Notice that faces have neutral expressions. Upper 3 panels include scanpaths of one individual. The red triangle marks the first and the red square the last fixation, the yellow line the scanpath, and the red circles the subsequent fixations. Lower panels show scanpaths of all 7 subjects. The trend of visiting the faces first - typically within the 1st or 2nd fixation - is evident. All images are available at http://www.klab.caltech.edu/~moran/db/faces/.

2.2 Combining face detection with various saliency algorithms
We tried to predict the attentional allocation via fixation patterns of the subjects using various saliency maps. In particular, we computed four different saliency maps for each of the images in our data set: (1) a saliency map based on the model of [5] (SM), (2) a graph-based saliency map according to [27] (GBSM), (3) a map which combines SM with face detection via VJ (SM+VJ), and (4) a saliency map combining the outputs of GBSM and VJ (GBSM+VJ). Each saliency map was represented as a positive-valued heat map over the image plane. SM is based on computing feature maps, followed by center-surround operations which highlight local gradients, followed by a normalization step prior to combining the feature channels. We used the "Maxnorm" normalization scheme, which is a spatial competition mechanism based on the squared ratio of the global maximum over the average local maximum. This promotes feature maps with one conspicuous location to the detriment of maps presenting numerous conspicuous locations. The graph-based saliency map model (GBSM) employs spectral techniques in lieu of center-surround subtraction and "Maxnorm" normalization, using only local computations. GBSM has shown more robust correlation with human fixation data compared with standard SM [27]. For face detection, we used the Intel Open Source Computer Vision Library ("OpenCV") [28] implementation of [21].
This implementation rapidly processes images while achieving high detection rates. An efficient classifier built using the AdaBoost learning algorithm is used to select a small number of critical visual features from a large set of potential candidates. Combining classifiers in a cascade allows background regions of the image to be quickly discarded, so that more cycles process promising face-like regions using a template matching scheme. The detection is done by applying a classifier to a sliding search window of 24×24 pixels. The detectors are made of three joined black and white rectangles, either upright or rotated by 45°. The values at each point are calculated as a weighted sum of two components: the pixel sum over the black rectangles and the sum over the whole detector area. The classifiers are combined to make a boosted cascade with classifiers going from simple to more complex, each possibly rejecting the candidate window as "not a face" [28]. This implementation of the facedetect module was used with the standard default training set of the original model. We used it to form a "faces conspicuity map", or "face channel", by convolving delta functions at the (x, y) detected facial centers with 2D Gaussians having standard deviation equal to the estimated facial radius. The values of this map were normalized to a fixed range. For both SM and GBSM, we computed the combined saliency map as the mean of the normalized color (C), orientation (O), and intensity (I) maps [5]:
\[ \frac{1}{3}\big( N(I) + N(C) + N(O) \big) \]
And for SM+VJ and GBSM+VJ, we incorporated the normalized face conspicuity map (F) into this mean (see Fig. 2):
\[ \frac{1}{4}\big( N(I) + N(C) + N(O) + N(F) \big) \]
This is our combined face detector/saliency model. Although we could have explored the space of combinations which would optimize predictive performance, we chose to use this simplest possible combination, since it is the least complicated to analyze, and also provides us with first intuition for further studies.

[Figure 2 schematic: an input image feeds face-detection, color, intensity, and orientation channels (one face-detector false positive is marked); the channels are combined into a saliency map with and without the face channel.] Figure 2: Modified saliency model. An image is processed through standard [5] color, orientation and intensity multi-scale channels, as well as through a trained template-matching face detection mechanism. Face coordinates and radius from the face detector are used to form a face conspicuity map (F), with peaks at facial centers. All four maps are normalized to the same dynamic range, and added with equal weights to a final saliency map (SM+VJ, or GBSM+VJ). This is compared to a saliency map which only uses the three bottom-up feature maps (SM or GBSM).
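The face channel is thus just detections blurred into a map and averaged with the other channels. A minimal sketch using OpenCV's stock frontal-face cascade (which is not necessarily the training set used in the paper); the channel maps I, C, O are assumed precomputed and normalized, and the radius estimate (w + h)/4 is our choice:

```python
import cv2
import numpy as np

def face_conspicuity(gray):
    """Face channel F: Gaussians at detected face centers, sigma = est. radius."""
    det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    F = np.zeros(gray.shape, dtype=np.float32)
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    for (x, y, w, h) in det.detectMultiScale(gray):      # gray: uint8 image
        cx, cy, r = x + w / 2.0, y + h / 2.0, (w + h) / 4.0
        F += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * r ** 2))
    return F / F.max() if F.max() > 0 else F             # normalize the range

def combined_map(I, C, O, F=None):
    """Equal-weight mean of the normalized channels, with or without F."""
    maps = [I, C, O] + ([F] if F is not None else [])
    return sum(maps) / len(maps)
```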
fixations just next to a face do not count as fixations on the face), this shows that faces, if present, are typically fixated on within the first two fixations (327 ms ? 95 ms on average). Furthermore, in addition to finding early fixations on faces, we found that inter-subject scanpath consistency on images with faces was higher. For the free-viewing task, the mean minimum distance to another?s subject?s fixation (averaged over fixations and subjects) was 29.47 pixels on images with faces, and a greater 34.24 pixels on images without faces (different with p < 10?6 ). We found similar results using a variety of different metrics (ROC, Earth Mover?s Distance, Normalized Scanpath Saliency, etc.). To verify that the double spatial bias of photographer and observer ([29] for discussion of this issue) did not artificially result in high fractions of early fixations on faces, we compared our results to an unbiased baseline: for each subject, the fraction of fixations from all images which fell in the ROIs of one particular image. The null hypothesis that we would see the same fraction of first fixations on a face at random is rejected at p < 10?20 (t-test). To test for the hypothesis that face saliency is not due to top-down preference for faces in the absence of other interesting things, we examined the results of the ?search? task, in which subjects were presented with a non-face target probe in 50% of the trials. Provided the short amount of time for the search (2 s), subjects should have attempted to tune their internal saliency weights to adjust color, intensity, and orientation optimally for the searched target [30]. Nevertheless, subjects still tended to fixate on the faces early. A face was fixated on within the first fixation in 24% of trials, within the first two fixations in 52% of trials, and within the three fixations in 77% of the trials. While this is weaker than in free-viewing, where 88.9% was achieved after just two fixations, the difference from what would be expected for random fixation selection (unbiased baseline as above) is still highly significant (p < 10?8 ). Overall, we found that in both experimental conditions (?free-viewing? and ?search?), faces were powerful attractors of attention, accounting for a strong majority of early fixations when present. This trend allowed us to easily improve standard saliency models, as discussed below. Figure 3: Extent of fixation on face regions-of-interest (ROIs) during the ?free-viewing? phase . Left: image with all fixations (7 subjects) superimposed. First fixation marked in blue, second in cyan, remaining fixations in red. Right: Bars depict percentage of trials, which reach a face the first time in the first, second, third, . . . fixation. The solid curve depicts the integral, i.e. the fraction of trials in which faces were fixated on at least once up to and including the nth fixation. 3.2 Assessing the saliency map models We ran VJ on each of the 200 images used in the free viewing task, and found at least one face detection on 176 of these images, 148 of which actually contained faces (only two images with faces were missed). For each of these 176 images, we computed four saliency maps (SM, GBSM, SM+VJ, GBSM+VJ) as discussed above, and quantified the compatibility of each with our scanpath recordings, in particular fixations, using the area under an ROC curve. 
The ROC curves were generated by sweeping over saliency value thresholds, treating the fraction of non-fixated pixels on a map above threshold as false alarms, and the fraction of fixated pixels above threshold as hits [29, 31]. According to this ROC fixation "prediction" metric, for the example image in Fig. 4, all models predict above chance (50%): SM performs worst, and GBSM+VJ best, since including the face detector substantially improves performance in both cases.

Figure 4: Comparison of the area under the curve (AUC) for an image (chosen arbitrarily; subjects' scanpaths are shown in the left panels of figure 1). Top panel: image with the 49 fixations of the 7 subjects (red). First central fixations for each subject were excluded. From left to right: saliency map model of Itti et al. (SM), saliency map with the VJ face detection map (SM+VJ), the graph-based saliency map (GBSM), and the graph-based saliency map with face detection channel (GBSM+VJ). Red dots correspond to fixations. Lower panels depict ROC curves corresponding to each map. Here, GBSM+VJ predicts fixations best, as quantified by the highest AUC.

Across all 176 images, this trend prevails (Fig. 5): first, all models perform better than chance, even over the 28 images without faces. The SM+VJ model performed better than the SM model for 154/176 images. The null hypothesis that this result arises by chance can be rejected at p < 10⁻²² (using a coin-toss sign test for which model does better, with uniform null hypothesis, neglecting the size of effects). Similarly, the GBSM+VJ model performed better than the GBSM model for 142/176 images, a comparably vast majority (p < 10⁻¹⁵) (see Fig. 5, right). For the 148/176 images with faces, SM+VJ was better than SM alone for 144/148 images (p < 10⁻²⁹), whereas VJ alone (equal to the face conspicuity map) was better than SM alone for 83/148 images, a fraction that fails to reach significance. Thus, although the face conspicuity map was surprisingly predictive on its own, fixation predictions were much better when it was combined with the full saliency model. For the 28 images without faces, SM (better than SM+VJ for 18) and SM+VJ (better than SM for 10) did not show a significant difference, nor did GBSM vs. GBSM+VJ (better on 15/28 compared to 13/28, respectively). However, in a recent follow-up study with more non-face images, we found preliminary results indicating that the mean ROC score of VJ-enhanced saliency maps is higher on such non-face images, although the median is slightly lower, i.e. performance is much improved when improved at all, indicating that VJ false positives can sometimes enhance saliency maps. In summary, we found that adding a face detector channel improves fixation prediction in images with faces dramatically, while it does not impair prediction in images without faces, even though the face detector has false alarms in those cases.
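The evaluation metric of this section can be reproduced in a few lines; the endpoint handling and trapezoidal integration below are our choices:

```python
import numpy as np

def fixation_auc(saliency, fix_rows, fix_cols):
    """Area under the fixation ROC curve for one saliency map.

    Sweeping a threshold from high to low, fixated pixels above threshold
    count as hits and non-fixated pixels above threshold as false alarms.
    fix_rows/fix_cols are integer pixel coordinates of the fixations.
    """
    s = saliency.ravel()
    fixated = np.zeros(s.size, dtype=bool)
    fixated[np.ravel_multi_index((fix_rows, fix_cols), saliency.shape)] = True
    thresholds = np.unique(s)[::-1]
    hits = np.array([0.0] + [(s[fixated] >= t).mean() for t in thresholds])
    fas = np.array([0.0] + [(s[~fixated] >= t).mean() for t in thresholds])
    return np.trapz(hits, fas)
```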
Scatterplots depict the area under ROC curves (AUC) for the 176 images in which VJ found a face. Each point represents a single image. Points above the diagonal indicate better prediction of the model including face detection compared to the models without face channel. Blue markers denote images with faces; red markers images without faces (i.e. false positives of the VJ face detector). Histograms of the SM and SM+VJ (GBSM and GBSM+VJ) are depicted to the top and left (binning: 0.05); colorcode as in scatterplots. model, which combined the ?bottom-up? feature channels of color, orientation, and intensity, with a special face-detection channel, based on the Viola & Jones algorithm. The combination was linear in nature with uniform weight distribution for maximum simplicity. In attempting to predict the fixations of human subjects, we found that this additional face channel improved the performance of both a standard and a more recent graph-based saliency model (almost all blue points in Fig. 5 are above the diagonal) in images with faces. In the few images without faces, we found that the false positives represented in the face-detection channel did not significantly alter the performance of the saliency maps ? although in a preliminary follow-up on a larger image pool we found that they boost mean performance. Together, these findings point towards a specialized ?face channel? in our vision system, which is subject to current debate in the attention literature [11, 12, 32]. In conclusion, inspired by biological understanding of human attentional allocation to meaningful objects - faces - we presented a new model for computing an improved saliency map which is more consistent with gaze deployment in natural images containing faces than previously studied models, even though the face detector was trained on standard sets. This suggests that faces always attract attention and gaze, relatively independent of the task. They should therefore be considered as part of the bottom-up saliency pathway. References [1] G. Rizzolatti, L. Riggio, I. Dascola, and C. Umilta. Reorienting attention across the horizontal and vertical meridians: evidence in favor of a premotor theory of attention. Neuropsychologia, 25(1A):31?40, 1987. [2] G.T. Buswell. How People Look at Pictures: A Study of the Psychology of Perception in Art. The University of Chicago press, 1935. [3] M. Cerf, D. R. Cleary, R. J. Peters, and C. Koch. Observers are consistent when rating image conspicuity. Vis Res, 47(25):3017?3027, 2007. [4] S.J. Dickinson, H.I. Christensen, J. Tsotsos, and G. Olofsson. Active object recognition integrating attention and viewpoint control. Computer Vision and Image Understanding, 67(3):239?260, 1997. [5] L. Itti, C. Koch, E. Niebur, et al. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254?1259, 1998. [6] A.L. Yarbus. Eye Movements and Vision. Plenum Press New York, 1967. 7 [7] J.M. Henderson, J.R. Brockmole, M.S. Castelhano, and M. Mack. Visual Saliency Does Not Account for Eye Movements during Visual Search in Real-World Scenes. Eye Movement Research: Insights into Mind and Brain, R. van Gompel, M. Fischer, W. Murray, and R. Hill, Eds., 1997. [8] Gregory Zelinsky, Wei Zhang, Bing Yu, Xin Chen, and Dimitris Samaras. The role of top-down and bottom-up processes in guiding eye movements during visual search. In Y. Weiss, B. Sch?olkopf, and J. 
Platt, editors, Advances in Neural Information Processing Systems 18, pages 1569-1576. MIT Press, Cambridge, MA, 2006.
[9] A. Torralba, A. Oliva, M.S. Castelhano, and J.M. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psych Rev, 113(4):766-786, 2006.
[10] W. Einhäuser and P. König. Does luminance-contrast contribute to a saliency map for overt visual attention? Eur. J Neurosci, 17(5):1089-1097, 2003.
[11] O. Hershler and S. Hochstein. At first sight: a high-level pop out effect for faces. Vision Res, 45(13):1707-24, 2005.
[12] R. Vanrullen. On second glance: Still no high-level pop-out effect for faces. Vision Res, 46(18):3017-3027, 2006.
[13] C. Simion and S. Shimojo. Early interactions between orienting, visual sampling and decision making in facial preference. Vision Res, 46(20):3331-3335, 2006.
[14] R. Adolphs. Neural systems for recognizing emotion. Curr. Op. Neurobiol., 12(2):169-177, 2002.
[15] A. Klin, W. Jones, R. Schultz, F. Volkmar, and D. Cohen. Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism, 2002.
[16] J.J. Barton. Disorders of face perception and recognition. Neurol Clin, 21(2):521-48, 2003.
[17] K.K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):39-51, 1998.
[18] H.A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23-38, 1998.
[19] H. Schneiderman and T. Kanade. Statistical method for 3D object detection applied to faces and cars. Computer Vision and Pattern Recognition, 1:746-751, 2000.
[20] D. Roth, M. Yang, and N. Ahuja. A SNoW-based face detector. In S. A. Solla, T. K. Leen, and K. R. Müller, editors, Advances in Neural Information Processing Systems 13, pages 855-861. MIT Press, Cambridge, MA, 2000.
[21] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. Computer Vision and Pattern Recognition, 1:511-518, 2001.
[22] D. Walther. Interactions of visual attention and object recognition: computational modeling, algorithms, and psychophysics. PhD thesis, California Institute of Technology, 2006.
[23] C. Breazeal and B. Scassellati. A context-dependent attention system for a social robot. 1999 International Joint Conference on Artificial Intelligence, pages 1254-1259, 1999.
[24] V. Navalpakkam and L. Itti. Search goal tunes visual features optimally. Neuron, 53(4):605-617, 2007.
[25] D.H. Brainard. The psychophysics toolbox. Spat Vis, 10(4):433-436, 1997.
[26] F.W. Cornelissen, E.M. Peters, and J. Palmer. The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behav Res Meth Instr Comput, 34(4):613-617, 2002.
[27] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 545-552. MIT Press, Cambridge, MA, 2007.
[28] G. Bradski, A. Kaehler, and V. Pisarevsky. Learning-based computer vision with Intel's open source computer vision library. Intel Technology Journal, 9(1), 2005.
[29] B.W. Tatler, R.J. Baddeley, and I.D. Gilchrist. Visual correlates of fixation selection: effects of scale and time. Vision Res, 45(5):643-59, 2005.
[30] V. Navalpakkam and L. Itti. Search goal tunes visual features optimally. Neuron, 53(4):605-617, 2007.
[31] R.J. Peters, A. Iyer, L. Itti, and C. Koch. Components of bottom-up gaze allocation in natural images. Vision Res, 45(18):2397-2416, 2005.
[32] O. Hershler and S. Hochstein. With a careful look: Still no low-level confound to face pop-out. Authors' reply. Vis Res, 46(18):3028-3035, 2006.
Qualitative structure from motion

Daphna Weinshall
Center for Biological Information Processing
MIT, E25-201, Cambridge MA 02139

Abstract
Exact structure from motion is an ill-posed computation and therefore very sensitive to noise. In this work I describe how a qualitative shape representation, based on the sign of the Gaussian curvature, can be computed directly from motion disparities, without the computation of an exact depth map or the directions of surface normals. I show that humans can judge the curvature sense of three points undergoing 3D motion from two, three and four views with a success rate significantly above chance. A simple RBF net has been trained to perform the same task.

1 INTRODUCTION
When a scene is recorded from two or more different positions in space, e.g. by a moving camera, objects are projected into disparate locations in each image. This disparity can be used to recover the three-dimensional structure of objects that is lost in the projection process. The computation of structure requires knowledge of the 3D motion parameters. Although these parameters can themselves be computed from the disparities, their computation presents a difficult problem that is mathematically ill-posed: small perturbations (or errors) in the data may cause large changes in the solution [9]. This brittleness, or sensitivity to noise, is a major factor limiting the applicability of a number of structure from motion algorithms in practical situations (Ullman, 1983). The problem of brittleness of the structure from motion algorithms that use the minimal possible information may be attacked through two different approaches. One involves using more data, either in the space domain (more corresponding points in each image frame, Bruss & Horn, 1981), or in the time domain (more frames,
One possible object representation is the description of an object as a collection of generic parts, where each part is described by a few parameters. Taking the qualitative approach to vision described in the introduction, the necessity of having a complete depth map for building useful generic representations can be questioned. Indeed, one such representation, a map of the sign of the Gaussian curvature of the object's surface, can be computed directly (and, possibly, more reliably) from motion disparities. The knowledge of the sign of the Gaussian curvature of the surface allows the classification of surface patches as elliptic (convex/concave), hyperbolic (saddle point), cylindrical, or planar. Furthermore, the boundaries between adjacent generic parts are located along lines of zero curvature (parabolic lines). The basic result that allows the computation of the sign of the Gaussian curvature directly from motion disparities is the following theorem (see Weinshall, 1989 for details):

Theorem 1 Let FOE denote the Focus Of Expansion - the location in the image towards (or away from) which the motion is directed. Pick three collinear points in one image and observe the pattern they form in a subsequent image. The sign of the curvature of these three points in the second image relative to the FOE is the same as the sign of the normal curvature of the 3D curve defined by these three points.

The sign of the Gaussian curvature at a given point can be found without knowing the direction of the normal to the surface, by computing the curvature sign of point triads in all directions around the point. The sign of the Gaussian curvature is determined by the number of sign reversals of the triad curvatures encountered around the given point. The exact location of the FOE is therefore not important.

[Figure 1: Experiment 1: perception of curvature from three points in 3D translation. (a) Four naive subjects were shown two, three or four snapshots of the motion sequence. The subjects did not perceive the motion as translation. The total extent and the speed of the motion were identical in each condition. The three points were always collinear in the first frame. The back and forth motion sequence was repeated eight times, after which the subjects were required to decide on the sign of the curvature (see text). The mean performance, 62%, differed significantly from chance (t = 5.55, p < 0.0001). Furthermore, all subjects but one performed significantly above chance. (b) The effect of the number of frames was not significant (χ² = 1.72, p = 0.42). Bars show ±1 standard error of the mean.]

The sign operator described above has biological appeal, since the visual system can compute the deviation of three points from a straight line with precision in the hyperacuity range (that is, by an order of magnitude more accurately than allowed by the distance between adjacent photoreceptors in the retina). In addition, this feature must be important to the visual system, since it appears to be detected preattentively (in parallel over the entire visual field; see Fahle, 1990).
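The sign operator of Theorem 1 is simple to state computationally. The following is a minimal sketch (our own illustration, not code from the paper) that computes the curvature sign of an image triple relative to an assumed FOE, using the signed deviation of the middle point from the chord through the outer points:

    import numpy as np

    def _cross2(u, v):
        # z-component of the 2D cross product: positive if v is counterclockwise of u.
        return u[0] * v[1] - u[1] * v[0]

    def curvature_sign(p1, p2, p3, foe):
        """Sign of the curvature of the image triple (p1, p2, p3) relative to the FOE.

        Returns +1 if the middle point bulges towards the FOE, -1 if it bulges
        away, and 0 if the three points are collinear.
        """
        p1, p2, p3, foe = (np.asarray(p, dtype=float) for p in (p1, p2, p3, foe))
        chord = p3 - p1
        dev_mid = _cross2(chord, p2 - p1)   # signed deviation of the middle point
        dev_foe = _cross2(chord, foe - p1)  # which side of the chord the FOE lies on
        return int(np.sign(dev_mid * dev_foe))

    # A triple that was collinear in the first frame, observed in the second frame:
    print(curvature_sign((0, 0), (1, 0.2), (2, 0), foe=(1, 5)))   # +1: bulges towards the FOE
    print(curvature_sign((0, 0), (1, -0.2), (2, 0), foe=(1, 5)))  # -1: bulges away from the FOE

Applying this operator to triads in several directions around a point and counting sign reversals, as described above, yields the sign of the Gaussian curvature without an explicit depth map.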
It is difficult to determine whether the visual system uses such a qualitative strategy to characterize shape features, since it is possible that complete structure is first recovered, from which the sign of the Gaussian curvature is then computed. In the following experiments I present subjects with impoverished data that is insufficient for exact structure from motion (3 points in 2 frames). If subjects can perform the task, they have to use some strategy different from exact depth recovery.

3 EXPERIMENT 1

In the first experiment four subjects were presented with 120 moving rigid configurations of three points. The number of distinct frames per configuration varied from 2 to 4. The motion was translation only. Subjects had to judge whether the three points were in a convex or a concave configuration, namely, whether the broken 3D line formed by the points was bent towards or away from the subject (figure 1a). The middle point was almost never the closest or the farthest one, so that relative depth was not sufficient for solving the problem. With only two frames the stimulus was ambiguous in that there was an infinity of rigid convex and concave 3D configurations of three points that could have given rise to the images presented. For these stimuli the correct answer is meaningless, and one important question is whether this inherent ambiguity affects the subjects' performance (as compared to their performance with 3 and 4 frames).

The subjects' performance in this experiment was significantly better than chance (figure 1b). The subjects were able to recover partial information on the shape of the stimulus even with 2 frames, despite the theoretical impossibility of a full structure from motion computation.¹ Moreover, the number of frames presented in each trial had no significant effect on the error rate: the subjects performed just as well in the 2 frame trials as in the 3 and 4 frame trials (figure 1b). Had the subjects relied on the exact computation of structure from motion, one would expect a better performance with more frames (Ullman, 1984; Hildreth et al., 1989).

One possible account (reconstructional) of this result is that subjects realized that the motion of the stimuli consisted of pure 3D translation. Three points in two frames are in principle sufficient to verify that the motion is translational and to compute the translation parameters. The next experiment renders this account implausible by demonstrating that the subjects perform as well when the stimuli undergo general motion that includes rotation as well as translation. Another possible (geometrical) account is that the human visual system incorporates the geometrical knowledge expressed by theorem 1, and uses this knowledge in ambiguous cases to select the more plausible answer. However, theorem 1 does not address the ambiguity of the stimulus that stems from the dependency of the result on the location of the Focus Of Expansion. If indeed some knowledge of this theorem is used in performing this task, the ambiguity has to be resolved by "guessing" the location of the FOE. The strategy consistent with human performance in the first experiment is assuming that the FOE lies in the general direction towards which the points in the image are moving. The next experiment is designed to check the use of this heuristic.

¹ I should note that all the subjects were surprised by their good performance. They felt that the stimulus was ambiguous and that they were mostly guessing.
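To make the stimulus construction concrete, here is a sketch (ours, with assumed viewing parameters rather than the paper's actual display geometry) that generates two perspective-projected frames of three points undergoing a rigid 3D translation. The points are collinear in the first image, and the middle point's depth offset makes the 3D configuration convex or concave:

    import numpy as np

    def project(points_3d, focal=1.0):
        """Perspective projection of (n, 3) points onto the image plane z = focal."""
        pts = np.asarray(points_3d, dtype=float)
        return focal * pts[:, :2] / pts[:, 2:3]

    # Outer points at depth 10, middle point closer (depth 9.5): a convex configuration.
    # The chosen x-coordinates make the three projections collinear in the first frame.
    points = np.array([[-10.0, 0.0, 10.0],
                       [  0.0, 0.0,  9.5],
                       [ 10.0, 0.0, 10.0]])
    # A translation with a vertical component makes the middle point's depth difference
    # visible as a deviation from collinearity in the second frame.
    translation = np.array([0.5, 0.3, -1.0])

    frame1 = project(points)                 # three collinear image points
    frame2 = project(points + translation)   # the disparities carry the shape information
    print(frame1)
    print(frame2)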
4 EXPERIMENT 2

This experiment was designed to clarify which of the two proposed explanations of the subjects' good performance in experiment 1 with only 2 frames is more plausible. First, to eliminate completely the cue to exact depth in a translational motion, the stimuli in experiment 2 underwent rotation as well as translation. The 3D motion was set up in such a manner that the projected 2D optical flow could not be interpreted as resulting from pure translational motion. Second, if subjects do use an implicit knowledge of theorem 1, the accuracy of their performance should depend on the correctness of the heuristic used to estimate the location of the FOE as discussed in the previous section. This heuristic yields incorrect results for many instances of general 3D motion. In experiment 2, two types of 3-point 2-frame motion were used: one in which the estimation of the FOE using the above heuristic is correct, and one in which this estimation is wrong. If subjects rely on an implicit knowledge of theorem 1, their judgement should be mostly correct for the first type of motion, and mostly incorrect for the second type.

[Figure 2: Experiment 2: three points in general motion. The horizontal axis is the FOE cue (0 = incorrect, 1 = correct). The same four subjects as in experiment 1 were shown two-frame sequences of back and forth motion that included 3D translation and rotation. The mean performance when the FOE heuristic (see text) was correct, 71%, was significantly above chance (t = 5.71, p < 0.0001). In comparison, the mean performance when the FOE heuristic was misleading, 26%, was significantly below chance (t = -4.90, p < 0.0001). The degree to which the motion could be mistakenly interpreted as pure translation was uncorrelated with performance (r = 0.04, F(1, 318) < 1).]

The performance in experiment 2 was similar to that in experiment 1 (the difference was not significant, χ² < 1). In other words, the performance was as good under general motion as under pure translation. Figure 2 describes the results of experiment 2. As in the first experiment, the subjects performed significantly above chance when the FOE estimation heuristic was correct. When the heuristic was misleading, they were as likely to be wrong as they were likely to be right in the correct heuristic condition. As predicted by the geometrical explanation of the first experiment, seeing general motion instead of pure translation did not seem to affect the performance.

5 LEARNING WITH A NEURAL NETWORK

Computation of qualitative structure from motion, outlined in section 2, can be supported by a biologically plausible architecture based on the application of a three-point hyperacuity operator, in parallel, in different directions around each point and over the entire visual field. Such a computation is particularly suitable to implementation by an artificial neural network. I have trained a Radial Basis Function (RBF) network (Moody & Darken, 1989; Poggio & Girosi, 1990) to identify the sign of Gaussian curvature of three moving points (represented by a coordinate vector of length 6).
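A minimal RBF network of the kind used here can be sketched as follows (our own illustration: Gaussian units with centers drawn from the training set and a linear output layer fit by regularized least squares; the hyperparameters are assumed, not those of the paper). Each input is the length-6 vector of stacked 2D coordinates of the three points, and the target is the curvature sign in {-1, +1}:

    import numpy as np

    def rbf_features(X, centers, width):
        # Gaussian radial basis activations for each (input, center) pair.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def train_rbf(X, y, n_centers=30, width=1.0, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=n_centers, replace=False)]
        Phi = rbf_features(X, centers, width)
        # Linear output weights by regularized least squares.
        w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_centers), Phi.T @ y)
        return centers, w

    def predict_sign(X, centers, width, w):
        return np.sign(rbf_features(X, centers, width) @ w)

    # X: (n, 6) array of image coordinates of the three points; y: signs in {-1, +1}.
    X = np.random.default_rng(1).normal(size=(200, 6))
    y = np.sign(X[:, 1])  # stand-in labels; real labels come from the 3D configurations
    centers, w = train_rbf(X, y)
    print((predict_sign(X, centers, 1.0, w) == y).mean())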
After a supervised learning phase in which the network was trained to produce the correct sign given examples of motion sequences, it consistently achieved a substantial success rate on novel inputs, for a wide range of parameters. Figure 3 shows the success rate (the percentage of correct answers) plotted against the number of examples used in the training phase.

[Figure 3: The correct performance rate of the RBF implementation vs. the number of examples in the training set.]

6 SUMMARY

I have presented a qualitative approach to the problem of recovering object structure from motion information and discussed some of its computational, psychophysical and implementational aspects. The computation of qualitative shape, as represented by the sign of the Gaussian curvature, can be performed by a field of simple operators, in parallel over the entire image. The performance of a qualitative shape detection module, implemented by an artificial neural network, appears to be similar to the performance of human subjects in an identical task.

Acknowledgements

I thank H. Bülthoff, N. Cornelius, M. Dornay, S. Edelman, M. Fahle, S. Kirkpatrick, M. Ross and A. Shashua for their help. This research was done partly in the MIT AI Laboratory. It was supported by a Fairchild postdoctoral fellowship, and in part by grants from the Office of Naval Research (N00014-88-k-0164), from the National Science Foundation (IRI-8719394 and IRI-8657824), and a gift from the James S. McDonnell Foundation to Professor Ellen Hildreth.

References

[1] E. Borjesson and C. von Hofsten. Visual perception of motion in depth: application of a vector model to three-dot motion patterns. Perception and Psychophysics, 13:169-179, 1973.
[2] A. Bruss and B. K. P. Horn. Passive navigation. Computer Vision, Graphics, and Image Processing, 21:3-20, 1983.
[3] M. W. Fahle. Parallel, semi-parallel, and serial processing of visual hyperacuity. In Proc. SPIE Conf. on Electronic Imaging: Science and Technology, Santa Clara, CA, February 1990. To appear.
[4] E. C. Hildreth, N. M. Grzywacz, E. H. Adelson, and V. K. Inada. The perceptual buildup of three-dimensional structure from motion, 1989. Perception & Psychophysics, in press.
[5] J. J. Koenderink and A. J. van Doorn. Local structure of movement parallax of the plane. Journal of the Optical Society of America, 66:717-723, 1976.
[6] J. Moody and C. Darken. Fast learning in networks of locally tuned processing units. Neural Computation, 1:281-289, 1989.
[7] R. C. Nelson and J. Aloimonos. Using flow field divergence for obstacle avoidance: towards qualitative vision. In Proceedings of the 2nd International Conference on Computer Vision, pages 188-196, Tarpon Springs, FL, 1988. IEEE, Washington, DC.
[8] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978-982, 1990.
[9] T. Poggio and C. Koch. Ill-posed problems in early vision: from computational theory to analog networks. Proceedings of the Royal Society of London B, 226:303-323, 1985.
[10] R. Y. Tsai and T. S. Huang. Uniqueness and estimation of three dimensional motion parameters of rigid objects with curved surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:13-27, 1984.
[11] S. Ullman. Computational studies in the interpretation of structure and motion: summary and extension. In J. Beck, B. Hope, and A. Rosenfeld, editors, Human and Machine Vision.
Academic Press, New York, 1983.
[12] S. Ullman. Maximizing rigidity: the incremental recovery of 3D structure from rigid and rubbery motion. Perception, 13:255-274, 1984.
[13] D. Weinshall. Direct computation of 3D shape and motion invariants. A.I. Memo No. 1131, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, May 1989.
Expectation Maximization and Posterior Constraints

João V. Graça
L2F INESC-ID, Lisboa, Portugal

Kuzman Ganchev
Computer & Information Science, University of Pennsylvania, Philadelphia, PA

Ben Taskar
Computer & Information Science, University of Pennsylvania, Philadelphia, PA

Abstract

The expectation maximization (EM) algorithm is a widely used maximum likelihood estimation procedure for statistical models when the values of some of the variables in the model are not observed. Very often, however, our aim is primarily to find a model that assigns values to the latent variables that have intended meaning for our data, and maximizing expected likelihood only sometimes accomplishes this. Unfortunately, it is typically difficult to add even simple a-priori information about latent variables in graphical models without making the models overly complex or intractable. In this paper, we present an efficient, principled way to inject rich constraints on the posteriors of latent variables into the EM algorithm. Our method can be used to learn tractable graphical models that satisfy additional, otherwise intractable constraints. Focusing on clustering and the alignment problem for statistical machine translation, we show that simple, intuitive posterior constraints can greatly improve the performance over standard baselines and be competitive with more complex, intractable models.

1 Introduction

In unsupervised problems where observed data has sequential, recursive, spatial, relational, or other kinds of structure, we often employ statistical models with latent variables to tease apart the underlying dependencies and induce meaningful semantic parts. Part-of-speech and grammar induction, and word and phrase alignment for statistical machine translation in natural language processing, are examples of such aims. Generative models (graphical models, grammars, etc.) estimated via EM [6] are one of the primary tools for such tasks. The EM algorithm attempts to maximize the likelihood of the observed data marginalizing over the hidden variables. A pernicious problem with most models is that the data likelihood is not convex in the model parameters and EM can get stuck in local optima with very different latent variable posteriors. Another problem is that data likelihood may not guide the model towards the intended meaning for the latent variables, instead focusing on explaining irrelevant but common correlations in the data. Very indirect methods such as clever initialization and feature design (as well as ad-hoc procedural modifications) are often used to affect the posteriors of latent variables in a desired manner. By allowing prior information to be specified directly about the posteriors of hidden variables, we can help avoid these difficulties.

A somewhat similar in spirit approach is evident in work on multivariate information bottleneck [8], where extra conditional independence assumptions between latent variables can be imposed to control their "meaning". Similarly, in many semi-supervised approaches, assumptions about smoothness or other properties of the posteriors are often used as regularization [18, 13, 4]. In [17], deterministic annealing was used to explicitly control a particular feature of the posteriors of a grammar induction model. In this paper, we present an approach that effectively incorporates rich constraints on posterior distributions of a graphical model into a simple and efficient EM scheme.
An important advantage of our approach is that the E-step remains tractable in a large class of problems even though incorporating the desired constraints directly into the model would make it intractable. We test our approach on synthetic clustering data as well as statistical word alignment and show that we can significantly improve the performance of simple, tractable models, as evaluated on hand-annotated alignments for two pairs of languages, by introducing intuitive constraints such as limited fertility and the agreement of two models. Our method is attractive in its simplicity and efficiency and is competitive with more complex, intractable models.

2 Expectation Maximization and posterior constraints

We are interested in estimating the parameters θ of a model p_θ(x, z) over observed variables X taking values x ∈ X and latent variables Z taking values z ∈ Z. We are often even more interested in the induced posterior distribution over the latent variables, p_θ(z | x), as we ascribe domain-specific semantics to these variables. We typically represent p_θ(x, z) as a directed or undirected graphical model (although the discussion below also applies to context free grammars and other probabilistic models). We assume that computing the joint and the marginals is tractable and that the model factors across cliques as follows: p_θ(x, z) ∝ ∏_α φ_α(x_α, z_α), where the φ_α(x_α, z_α) are clique potentials or conditional probability distributions. Given a sample S = {x_1, ..., x_n}, EM maximizes the average log likelihood function L_S(θ) via an auxiliary lower bound F(q, θ) (cf. [14]):

  L_S(θ) = E_S[log p_θ(x)] = E_S[log Σ_z p_θ(x, z)] = E_S[log Σ_z q(z | x) (p_θ(x, z) / q(z | x))]    (1)
         ≥ E_S[Σ_z q(z | x) log (p_θ(x, z) / q(z | x))] = F(q, θ),    (2)

where E_S[f(x)] = (1/n) Σ_i f(x_i) denotes the sample average and q(z | x) is non-negative and sums to 1 over z for each x. The lower bound above is a simple consequence of Jensen's inequality for the log function. It can be shown that the lower bound can be made tight for a given value of θ by maximizing over q, and under mild continuity conditions on p_θ(x, z), local maxima (q*, θ*) of F(q, θ) correspond to local maxima θ* of L_S(θ) [14]. Standard EM iteration performs coordinate ascent on F(q, θ) as follows:

  E: q^{t+1}(z | x) = argmax_{q(z|x)} F(q, θ^t) = argmin_{q(z|x)} KL(q(z | x) || p_{θ^t}(z | x)) = p_{θ^t}(z | x);    (3)
  M: θ^{t+1} = argmax_θ F(q^{t+1}, θ) = argmax_θ E_S[Σ_z q^{t+1}(z | x) log p_θ(x, z)],    (4)

where KL(q || p) = E_q[log(q(·)/p(·))] is the Kullback-Leibler divergence. The E step computes the posteriors of the latent variables given the observed variables and current parameters. The M step uses q to "fill in" the values of latent variables z and estimate parameters θ as if the data was complete. This step is particularly easy for exponential models, where θ is a simple function of the (expected) sufficient statistics. This modular split into two intuitive and straightforward steps accounts for the vast popularity of EM. In the following, we build on this simple scheme while incorporating desired constraints on the posteriors over latent variables.

2.1 Constraining the posteriors

Our goal is to allow for finer-level control over posteriors, bypassing the likelihood function. We propose an intuitive way to modify EM to accomplish this and discuss the implications of the new procedure below in terms of the objective it attempts to optimize. We can express our desired constraints on the posteriors as the requirement that p_θ(z | x) ∈ Q(x).
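Before making the constraint sets Q(x) precise, it helps to see the overall shape of the resulting algorithm. The following skeleton is our own sketch, not code from the paper: posterior, project_onto_Q, and m_step are placeholder names for model-specific routines, and only the E-step differs from standard EM:

    def constrained_em(data, theta, posterior, project_onto_Q, m_step, n_iters=20):
        """EM with posterior constraints: the E-step output is KL-projected onto Q(x)."""
        for _ in range(n_iters):
            # E-step: q = argmin_{q in Q(x)} KL(q || p_theta(z | x)) for each example x.
            q = [project_onto_Q(posterior(theta, x), x) for x in data]
            # M-step: unchanged -- maximize the expected complete-data log likelihood under q.
            theta = m_step(data, q)
        return theta

Setting project_onto_Q to the identity recovers standard EM, since the unconstrained minimizer of the KL divergence is the model posterior itself.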
For example, in dependency grammar induction, constraining the average length of dependency attachments is desired [17]; in statistical word alignment, the constraint might involve the expected degree of each node in the alignment [3]. Instead of restricting p directly, which might not be feasible, we can penalize the distance of p to the constraint set Q. As it turns out, we can accomplish this by restricting q to be constrained to Q instead. This results in a very simple modification to the E step of EM, by constraining the set of q over which F(q, θ) is optimized (the M step is unchanged):

  E: q^{t+1}(z | x) = argmax_{q(z|x) ∈ Q(x)} F(q, θ^t) = argmin_{q(z|x) ∈ Q(x)} KL(q(z | x) || p_{θ^t}(z | x)).    (5)

Note that in variational EM, the set Q(x) is usually a simpler inner bound (as in mean field) or outer bound (as in loopy belief propagation) on the intractable original space of posteriors [9]. The situation here is the opposite: we assume the original posterior space is tractable but we add constraints to enforce intended semantics not captured by the simple model. Of course, to make this practical, the set Q(x) needs to be well-behaved. We assume that Q(x) is convex and non-empty for every x so that the problem in Eq. (5) becomes a strictly convex minimization over a non-empty convex set, guaranteed to have a unique minimizer [1]. A natural and general way to specify constraints on q is by bounding expectations of given functions: E_q[f(x, z)] ≤ b (equality can be achieved by adding E_q[−f(x, z)] ≤ −b). Stacking the functions f(·) into a vector f(·) and the constants b into a vector b, the minimization problem in Eq. (5) becomes:

  argmin_q KL(q(z | x) || p_{θ^t}(z | x))  s.t.  E_q[f(x, z)] ≤ b.    (6)

In the next section, we discuss how to solve this optimization problem (also called I-projection in information geometry), but before we move on, it is interesting to consider what this new procedure in Eq. (5) converges to. The new scheme alternately maximizes F(q, θ), but over a subspace of the original space of q, hence using a looser lower bound than original EM. We are no longer guaranteed that the local maxima of the constrained problem are local maxima of the log-likelihood. However, we can characterize the objective maximized at local maxima as log-likelihood penalized by the average KL divergence of the posteriors from Q:

Proposition 2.1 The local maxima of F(q, θ) such that q(z | x) ∈ Q(x), ∀x ∈ S are local maxima of

  E_S[log p_θ(x)] − E_S[KL(Q(x) || p_θ(z | x))],

where KL(Q(x) || p_θ(z | x)) = min_{q(z|x) ∈ Q(x)} KL(q(z | x) || p_θ(z | x)).

Proof: By adding and subtracting E_S[Σ_z q(z | x) log p_θ(z | x)] from F(q, θ), we get:

  F(q, θ) = E_S[Σ_z q(z | x) log (p_θ(x, z) / q(z | x))]    (7)
          = E_S[Σ_z q(z | x) log (p_θ(x, z) / p_θ(z | x))] − E_S[Σ_z q(z | x) log (q(z | x) / p_θ(z | x))]    (8)
          = E_S[Σ_z q(z | x) log p_θ(x)] − E_S[KL(q(z | x) || p_θ(z | x))]    (9)
          = E_S[log p_θ(x)] − E_S[KL(q(z | x) || p_θ(z | x))].    (10)

Since the first term does not depend on q, the second term is minimized by q*(z | x) = argmin_{q(z|x) ∈ Q(x)} KL(q(z | x) || p_θ(z | x)) at local maxima.

This proposition implies that our procedure trades off likelihood and distance to the desired posterior subspace (modulo getting stuck in local maxima) and provides an effective method of controlling the posteriors.

2.2 Computing I-projections onto Q(x)

The KL-projection onto Q(x) in Eq. (6) is easily solved via the dual (cf. [5, 1]):

  argmax_{λ ≥ 0}  λᵀb − log( Σ_z p_{θ^t}(z | x) exp{λᵀ f(x, z)} ).    (11)

Define q_λ(z | x) ∝ p_{θ^t}(z | x) exp{λᵀ f(x, z)}; then at the dual optimum λ*, the primal solution is given by q_{λ*}(z | x).
Such projections become particularly efficient when we assume the constraint functions decompose the same way as the graphical model: f(x, z) = Σ_α f(x_α, z_α). Then q_λ(z | x) ∝ ∏_α φ_α(x_α, z_α) exp{λᵀ f(x_α, z_α)}, which factorizes the same way as p_θ(x, z). In case the constraint functions do not decompose over the model cliques but require additional cliques, the resulting q_λ will factorize over the union of the original cliques and the constraint function cliques, potentially making inference more expensive.

[Figure 1: Synthetic data results. Panels, left to right: initial configuration, output of EM, output of constrained EM. The dataset consists of 9 points drawn as dots and there are three clusters represented by ovals centered at their mean with dimensions proportional to their standard deviation. The EM algorithm clusters each column of points together, but if we introduce the constraint that each column should have at least one of the clusters, we get the clustering to the right.]

[Figure 2: An example of the output of the HMM trained on 100k sentences of the EPPS data, aligning the Spanish sentence "jugaban de una manera animada y muy cordial ." with the English sentence "it was an animated , very convivial game .". Left: baseline model. Middle: substochastic constraints. Right: agreement constraints.]
used for later processing of the image. In that case, we often have special knowledge about the clusters that we expect to see that is difficult to express in the original model. For example, we might know that within each image two features that are of different scales should not be clustered together. As another example, we might know that each image has at least one copy of each cluster. Both of these constraints are easy to capture and implement in our framework. Let zij = 1 represent the event that data point i is assigned to cluster j. If we want to ensure that data point i is not assigned to the same cluster as data point i0 then we need to enforce the constraint E [zij + zi0 j ] ? 1, ?j. To ensure the constraint that each cluster one data point assigned Phas at least  to it from an instance I we need to enforce the constraint E i?I zij ? 1, ?j. We implemented this constraint in a mixture of Gaussians clustering algorithm. Figure 1 compares clustering of synthetic data using unconstrained EM as well as our method with the constraint that each column of data points has at least one copy of each cluster in expectation. 4 4 Statistical word alignment Statistical word alignment, used primarily for machine translation, is a task where the latent variables are intended to have a meaning: whether a word in one language translates into a word in another language in the context of the given sentence pair. The input to an alignment systems is a sentence aligned bilingual corpus, consisting of pairs of sentences in two languages. Figure 2 shows three machine-generated alignments of a sentence pair. The black dots represent the machine alignments and the shading represents the human annotation. Darkly shaded squares with a border represent a sure alignments that the system is required to produce while lightly shaded squares without a border represent possible alignments that the system is optionally allowed to produce. We denote one language the ?source? language and use s for its sentences and one language the ?target? language and use t for its sentences. It will also be useful to talk about an alignment for a particular sentence pair as a binary matrix z, with zij = 1 representing ?source word i generates target word j.? The generative models we consider generate target word j from only P one source word, and so an alignment is only valid from the point of view of the model when i zij = 1, so we can equivalently represent an alignment as an array a of indices, with aj = i ? zij = 1. Figure 2 shows three alignments performed by a baseline model as well as our two modifications. We see that the rare word ?convivial? acts as a garbage collector[2], aligning to words that do not have a simple translation in the target sentence. Both of the constraints we suggest repair this problem to different degrees. We now introduce the baseline models and the constraints we impose on them. 4.1 Baseline models We consider three models below: IBM Model 1, IBM Model 2 [3] and the HMM model proposed by [20]. The three models can be expressed as: Y (12) pd (aj |j, aj?1 )pt (tj |saj ), p(t, a | s) = j with the three models differing in their definition of the distortion probability pd (aj |j, aj?1 ). Model 1 assumes that the positions of the words are not important and assigns uniform distortion probability. 
Model 2 allows a dependence on the positions pd (aj |j, aj?1 ) = pd (aj |j) and the HMM model assumes that the only the distance between the current and previous source word are important pd (aj |j, aj?1 ) = pd (aj |aj ? aj?1 ). All the models are augmented by adding a special ?null? word to the source sentence. The likelihood of the corpus, marginalized over possible alignments is concave for Model 1, but not for the other models [3]. 4.1.1 Substochastic Constraints A common error for our baseline models is to use rare source words as garbage collectors [2]. The models align target words that do not match any of the source words to rare source words rather than to the null word. While this results in higher data likelihood, the resulting alignments are not desirable, since they cannot be interpreted as translations. Figure 2 shows an example. One might consider augmenting the models to disallow this, for example by restricting that the alignments are at most one-to-one. Unfortunately computing the normalization for such a model is a ]P complete problem [19]. Our approach is to instead constrain the posterior distribution over alignments during the E-step. More concretely we enforce the constraint Eq [zij ] ? 1. Another way of thinking of this constraint is that we require the expected fertility of each source word to be at most one. For our hand-aligned corpora Hansards [15] and EPPS [11, 10], the average fertility is around 1 and 1.2, respectively, with standard deviation of 0.01. We will see that these constraints improve alignment accuracy. 4.1.2 Agreement Constraints Another weakness of our baseline models is that they are asymmetric. Usually, a model is trained in each direction and then they are heuristically combined. [12] introduce an objective to train the two models concurrently and encourage them to agree. Unfortunately their objective leads to an intractable E-step and they are forced to use a heuristic approximation. In our framework, we can 5 Language English French Hansards 447 sentences Max Avg. Fertility 30 15.7 6 30 17.4 3 Avg. F. 1.02 1.00 Language English Spanish EPPS 400 sentences Max Avg. Fertility 90 29 218 99 31.2 165 Avg. F. 1.20 1.17 Table 1: Test Corpus statistics. Max and Avg. refer to sentence length. Fertility is the number of words that occur at least twice and have on average at least 1.5 sure alignment when they have any. Avg. F. is the average word fertility. All average fertilities have a standard deviation of 0.01. also enforce agreement in expectation without approximating. Denote one direction the ?forward? ? direction and the other the ?backward? direction. Denote the forward model ? p with hidden variables ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? z ? Z and backward model p with hidden variables z ? Z and note p (? z ) = 0 and ? p (? z)= ? ? ? ? ? ? 1? 1? 0. Define a mixture p(z) = 2 p (z)+ 2 p (z) for z ? Z ? Z . The constraints that enforce agreement in this setup are Eq [f (x, z)] = 0 with ? ? ? ? 1 z ? Z and zij = 1 ? ? fij (x, z) = . ? ?1 z ? Z and zij = 1 0 otherwise 5 Evaluation We evaluated our augmented models on two corpora: the Hansards corpus [15] of English/French and the Europarl corpus [10] with EPPS annotation [11]. Table 1 presents some statistics for the two corpora. Notably, Hansards is a much easier corpus than EPPS. Hansards test sentences are on average only half as long as those of EPPS and only 21% of alignments in Hansards are sure and hence required compared with 69% for EPPS. 
Additionally, more words in EPPS are aligned to multiple words in the other language. Since our models cannot model this ?fertility? we expect their performance to be worse on EPPS data. Despite these differences, the corpora are also similar in some ways. Both are alignments of a Romance language to English and the average distance of an alignment to the diagonal is around 2 for both corpora. The error metrics we use are precision, recall and alignment error rate (AER), which is a weighted combination of precision and recall. Although AER is the standard metric in word alignment is has been shown [7] that it has a weak correlation with the standard MT metric, Bleu, when the alignments are used in a phrase-based translation system. [7] suggest weighted F-Measure1 as an alternative that correlates well with Bleu, so we also report precision and recall numbers. Following prior work [16], we initialize Model 1 translation table with uniform probabilities over word pairs that occur together in same sentence. Model 2 and Model HMM were initialized with the translation probabilities from Model 1 and with uniform distortion probabilities. All models were trained for 5 iterations. We used a maximum length cutoff for training sentences of 40. For the Hansards corpus this leaves 87.3% of the sentences, while for EPPS this leaves 74.5%. Following common practice, we included the unlabeled test and development data during training. We report results for the model with English as the ?source? language when using posterior decoding [12]. Figures 3 shows alignment results for the baselines models as well as the models with additional constraints. We show precision, recall and AER for the HMM model as well as precision and recall for Model 2. We note that both constraints improve all measures of performance for all dataset sizes, with most improvement for smaller dataset sizes. We performed additional experiments to verify that our model is not unfairly aided by the standard but arbitrary choice of 5 iterations of EM. Figure 4 shows AER and data likelihood as a function of the number of EM iterations. We see that the performance gap between the model with and without agreement constraints is preserved as the number of EM iterations increases. Note also that likelihood increases monotonically for all the models and that the baseline model always achieves higher likelihood as expected. 1 ? + defined as ( P recision 1?? ?1 ) Recall with 0.1 ? ? ? 0.4 showing good correlation with Bleu [7]. 6 100 100 95 95 90 90 85 85 80 80 75 75 Precision Recall Agreement Substochastic Baseline 70 65 60 0 20 40 60 30 20 15 10 Precision Recall Agreement Substochastic Baseline 70 65 60 80 Baseline Substochastic Agreement 25 100 10 100 5 0 1000 10 100 100 90 90 80 80 70 70 25 60 60 20 100 45 1000 Baseline Substochastic Agreement 40 35 50 50 Precision Recall Agreement Substochastic Baseline 40 30 0 20 40 60 30 40 30 80 15 Precision Recall Agreement Substochastic Baseline 100 10 100 10 5 0 1000 10 100 1000 Figure 3: Effect of posterior constraints on learning curves for IBM Model 2 and HMM. From left to right: Precision/Recall for IBM Model 2, Precision/Recall for HMM Model and AER for HMM Model. Top: Hansards Bottom: EPPS. Both types of constraints improve all accuracy measures across both datasets and models. 20 Baseline Substochastic Agreement Baseline Substochastic Agreement 18 16 14 12 10 2 4 6 8 10 12 14 2 (a) Hansards negative log Likelihood 4 6 8 10 12 14 (b) Hansards AER Figure 4: Data likelihood and AER vs. 
6 Conclusions

In this paper we described a general and principled way to introduce prior knowledge to guide the EM algorithm. Intuitively, we can view our method as a way to exert flexible control during the execution of EM. More formally, our method can be viewed as a regularization of the expectations of the hidden variables during EM. Alternatively, it can be viewed as an augmentation of the EM objective function with the KL divergence from a set of feasible models. We implemented our method on two different problems: probabilistic clustering using mixtures of Gaussians and statistical word alignment, and tested it on synthetic and real data. We observed improved performance by introducing simple and intuitive prior knowledge into the learning process. Our method is widely applicable to other problems where the EM algorithm is used but prior knowledge about the problem is hard to introduce directly into the model.

7 Acknowledgments

J. V. Graça was supported by a fellowship from Fundação para a Ciência e a Tecnologia (SFRH/BD/27528/2006). K. Ganchev was partially supported by NSF ITR EIA 0205448.

References

[1] D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
[2] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, M. J. Goldsmith, J. Hajic, R. L. Mercer, and S. Mohanty. But dictionaries are data too. In Proc. HLT, 1993.
[3] Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311, 1994.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[5] I. Csiszár. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3, 1975.
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1-38, 1977.
[7] Alexander Fraser and Daniel Marcu. Measuring word alignment quality for statistical machine translation. Comput. Linguist., 33(3):293-303, 2007.
[8] Nir Friedman, Ori Mosenzon, Noam Slonim, and Naftali Tishby. Multivariate information bottleneck. In UAI, 2001.
[9] Michael I. Jordan, Zoubin Ghahramani, Tommi Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[10] Philipp Koehn. Europarl: A multilingual corpus for evaluation of machine translation, 2002.
[11] P. Lambert, A. de Gispert, R. Banchs, and J. B. Mariño. Guidelines for word alignment evaluation and manual alignment. In Language Resources and Evaluation, 39(4):267-285, 2005.
[12] Percy Liang, Ben Taskar, and Dan Klein. Alignment by agreement. In Proc. HLT-NAACL, 2006.
[13] Gideon S. Mann and Andrew McCallum. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proc. ICML, 2007.
[14] R. M. Neal and G. E. Hinton. A new view of the EM algorithm that justifies incremental, sparse and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer, 1998.
[15] Franz Josef Och and Hermann Ney. Improved statistical alignment models. In ACL, 2000.
[16] Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19-51, 2003.
[17] Noah A. Smith and Jason Eisner.
Annealing structural bias in multilingual weighted grammar induction. In Proc. ACL, pages 569-576, 2006.
[18] Martin Szummer and Tommi Jaakkola. Information regularization with partially labeled data. In Proc. NIPS, pages 1025-1032, 2003.
[19] L. G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8:189-201, 1979.
[20] Stephan Vogel, Hermann Ney, and Christoph Tillmann. HMM-based word alignment in statistical translation. In Proc. COLING, 1996.
Mining Internet-Scale Software Repositories

Erik Linstead, Paul Rigor, Sushil Bajracharya, Cristina Lopes and Pierre Baldi
Donald Bren School of Information and Computer Science
University of California, Irvine
Irvine, CA 92697-3435
{elinstea,prigor,sbajrach,lopes,pfbaldi}@ics.uci.edu

Abstract

Large repositories of source code create new challenges and opportunities for statistical machine learning. Here we first develop Sourcerer, an infrastructure for the automated crawling, parsing, and database storage of open source software. Sourcerer allows us to gather Internet-scale source code. For instance, in one experiment, we gather 4,632 Java projects from SourceForge and Apache totaling over 38 million lines of code from 9,250 developers. Simple statistical analyses of the data first reveal robust power-law behavior for package, SLOC, and lexical containment distributions. We then develop and apply unsupervised author-topic probabilistic models to automatically discover the topics embedded in the code and extract topic-word and author-topic distributions. In addition to serving as a convenient summary for program function and developer activities, these and other related distributions provide a statistical and information-theoretic basis for quantifying and analyzing developer similarity and competence, topic scattering, and document tangling, with direct applications to software engineering. Finally, by combining software textual content with structural information captured by our CodeRank approach, we are able to significantly improve software retrieval performance, increasing the AUC metric to 0.84, roughly 10-30% better than previous approaches based on text alone. Supplementary material may be found at: http://sourcerer.ics.uci.edu/nips2007/nips07.html.

1 Introduction

Large repositories of private or public software source code, such as the open source projects available on the Internet, create considerable new opportunities and challenges for statistical machine learning, information retrieval, and software engineering. Mining such repositories is important, for instance, to understand software structure, function, complexity, and evolution, as well as to improve software information retrieval systems and identify relationships between humans and the software they produce. Tools to mine source code for functionality, structural organization, team structure, and developer contributions are also of interest to private industry, where these tools can be applied to such problems as in-house code reuse and project staffing. While some progress has been made in the application of statistics and machine learning techniques to mine software corpora, empirical studies have typically been limited to small collections of projects, often on the order of one hundred projects or less, several orders of magnitude smaller than publicly available repositories (e.g., [1]). Mining large software repositories requires leveraging both the textual and structural aspects of software data, as well as any relevant metadata. Here we develop Sourcerer, a large-scale infrastructure to explore such aspects. We first identify a number of robust power-law behaviors by simple statistical analyses. We then develop and apply unsupervised author-topic probabilistic models to discover the topics embedded in the code and extract topic-word and author-topic distributions. Finally, we leverage the dual textual and graphical nature of software to improve code search and retrieval.
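The CodeRank scores mentioned in the abstract can be illustrated with a PageRank-style iteration over a code dependency graph. This is our own minimal sketch on a made-up toy graph (the damping factor 0.85 is the conventional PageRank choice, not necessarily the paper's setting):

    import numpy as np

    def code_rank(adj, damping=0.85, n_iters=100):
        """PageRank-style scores for code entities from a dependency adjacency matrix.

        adj[i, j] = 1 if entity i references (e.g. calls or imports) entity j.
        """
        n = adj.shape[0]
        out_deg = adj.sum(axis=1, keepdims=True)
        # Row-stochastic transition matrix; entities with no outgoing references
        # are treated as linking uniformly to everything.
        P = np.where(out_deg > 0, adj / np.maximum(out_deg, 1.0), 1.0 / n)
        r = np.full(n, 1.0 / n)
        for _ in range(n_iters):
            r = (1.0 - damping) / n + damping * (P.T @ r)
        return r

    # Toy graph: A -> B, A -> C, B -> C, C -> A.
    adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
    print(code_rank(adj))  # C accumulates the most rank, since everything reaches it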
2 Infrastructure and Data

To allow for the Internet-scale analysis of source code we have built Sourcerer, an extensive infrastructure designed for the automated crawling, downloading, parsing, organization, and storage of large software repositories in a relational database. A highly configurable crawler allows us to specify the number and types of projects desired, as well as the host databases that should be targeted, and to proceed with incremental updates in an automated fashion. Once target projects are downloaded, a depackaging module uncompresses archive files while saving useful metadata (project name, version, etc.). While the infrastructure is general, we apply it here to a sample of projects in Java. Specifically, for the results reported, we download 12,151 projects from SourceForge and Apache and filter out distributions packaged without source code (binaries only). The end result is a repository consisting of 4,632 projects, containing 244,342 source files, with 38.7 million lines of code, written by 9,250 developers. For the software author-topic modeling approach we also employ the Eclipse 3.0 source code as a baseline. Though only a single project, Eclipse is a large, active open source effort that has been widely studied. In this case, we consider 2,119 source files, associated with about 700,000 lines of code, a vocabulary of 15,391 words, and 59 programmers. Methods for extracting and assigning words and programmers to documents are described in the next sections. A complete list of all the projects contained in our repository is available from the supplementary materials web pages.

3 Statistical Analysis

During the parsing process our system performs a static analysis on project source code files to extract code entities and their relationships, storing them in a relational database. For Java these entities consist of packages, classes, interfaces, methods, and fields, as well as more specific constructs such as constructors and static initializers. Relations capture method calls, inheritance, and encapsulation, to name a few. The populated database represents a substantial foundation on which to base statistical analysis of source code. Parsing the multi-project repository described above yields a repository of over 5 million entities organized into 48 thousand packages, 560 thousand classes, and 3.2 million methods, participating in over 23.4 million relations. By leveraging the query capabilities of the underlying database we can investigate other interesting statistics. For example, Table 1 contains the frequencies of Java keywords across all 4,632 projects. Upon examining this data we can see that the "default" keyword occurs about 6 percent less frequently than the "switch" keyword, despite the fact that best practice typically mandates that all switch statements contain a default block. Moreover, the "for" loop is about twice as pervasive as the "while" loop, suggesting that the bound on the number of iterations is more likely to be known or based on the size of a known data structure.
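Keyword statistics of this kind are easy to reproduce without the full database. The following is a minimal sketch, assuming a local directory of .java files and a naive regex tokenizer; unlike the database-backed analysis above, it will also count keywords that occur inside string literals and comments.

```python
import re
from collections import Counter
from pathlib import Path

# The Java keywords of Table 1; 'true', 'false' and 'null' are literals
# but are counted alongside the keywords, as in the table.
JAVA_KEYWORDS = {
    "public", "if", "new", "return", "import", "int", "null", "void",
    "private", "static", "final", "else", "throws", "boolean", "false",
    "case", "true", "class", "protected", "catch", "for", "try", "throw",
    "package", "byte", "extends", "this", "break", "while", "super",
    "instanceof", "double", "long", "implements", "char", "float",
    "abstract", "synchronized", "short", "switch", "interface", "continue",
    "finally", "default", "native", "transient", "do", "assert", "enum",
    "volatile", "strictfp",
}

def keyword_frequencies(root: Path) -> list[tuple[str, float]]:
    """Relative frequency (%) of each Java keyword under `root`."""
    counts = Counter()
    for src in root.rglob("*.java"):
        tokens = re.findall(r"[A-Za-z_$][A-Za-z0-9_$]*",
                            src.read_text(errors="ignore"))
        counts.update(t for t in tokens if t in JAVA_KEYWORDS)
    total = sum(counts.values())
    return [(kw, 100.0 * n / total) for kw, n in counts.most_common()]
```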
Table 1: Frequency of Java keyword occurrence

    Keyword    %      Keyword    %     Keyword       %     Keyword    %
    public     12.53  boolean    2.12  this          0.89  switch     0.19
    if         8.44   false      1.69  break         0.85  interface  0.17
    new        8.39   case       1.60  while         0.63  continue   0.15
    return     7.69   true       1.60  super         0.57  finally    0.14
    import     6.89   class      1.36  instanceof    0.56  default    0.13
    int        6.54   protected  1.33  double        0.55  native     0.08
    null       5.52   catch      1.33  long          0.54  transient  0.06
    void       4.94   for        1.22  implements    0.43  do         0.05
    private    3.66   try        1.22  char          0.30  assert     0.03
    static     3.16   throw      1.16  float         0.28  enum       0.02
    final      3.01   package    0.96  abstract      0.25  volatile   0.04
    else       2.33   byte       0.93  synchronized  0.25  strictfp   2.49E-06
    throws     2.16   extends    0.89  short         0.20

Finally, statistical analyses of distributions also identify several power-law distributions. We have observed power-law distributions governing package, SLOC, and inside relation (lexical containment) counts. For instance, Figure 1 shows the log-log plot for the number of packages across projects. Similar graphs for other distributions are available from the supplemental materials page.

[Figure 1: Approximate power-law distribution for packages over projects (log-log plot of number of packages against project rank).]
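A rank plot such as Figure 1 is straightforward to produce once per-project package counts have been extracted; a minimal sketch, assuming matplotlib and a list of counts:

```python
import matplotlib.pyplot as plt

def rank_plot(package_counts):
    """Log-log rank plot of packages per project (cf. Figure 1).

    `package_counts` holds one integer per project; an approximately
    straight line on log-log axes indicates power-law behaviour.
    """
    ranked = sorted(package_counts, reverse=True)
    ranks = range(1, len(ranked) + 1)
    plt.loglog(ranks, ranked, marker=".", linestyle="none")
    plt.xlabel("Rank")
    plt.ylabel("Number of Packages")
    plt.title("Distribution of Packages over Projects")
    plt.show()
```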
4 Topic and Author-Topic Probabilistic Modeling of Source Code

Automated topic and author-topic modeling have been successfully used in text mining and information retrieval, where they have been applied, for instance, to the problem of summarizing large text corpora. Recent techniques include Latent Dirichlet Allocation (LDA), which probabilistically models text documents as mixtures of latent topics, where topics correspond to key concepts presented in the corpus [2] (see also [3]). Author-Topic (AT) modeling is an extension of topic modeling that captures the relationship of authors to topics in addition to extracting the topics themselves. An extension of LDA to probabilistic AT modeling has been developed in [4]. In the literature [5], these more recent approaches have been found to produce better results than more traditional methods such as latent semantic analysis (LSA) [6]. Despite previous work in classifying code based on concepts [1], applications of LDA and AT models have been limited to traditional text corpora such as academic publications, news reports, corporate emails, and historical documents [7, 8]. At the most basic level, however, a code repository can be viewed as a text corpus, where source files are analogous to documents and developers to authors. Though vocabulary, syntax, and conventions differentiate a programming language from a natural language, the tokens present in a source file are still indicative of its function (i.e. its topics). Thus here we develop and apply probabilistic AT models to software data.

In AT models for text, the data consists of a set of documents. The authors of each document are known, and each document is treated as a bag of words. We let A be the total number of authors, W the total number of distinct words (vocabulary size), and T the total number of topics present in the documents. While non-parametric Bayesian [9] and other [10] methods exist to try to infer T from the data, here we assume that T is fixed (e.g. T = 100), though we explore different values. As in [7], our model assumes that each topic t is associated with a multinomial distribution φ_t over words w, and each author a is associated with a multinomial distribution θ_a over topics. More precisely, the parameters are given by two matrices: a T × A matrix Θ = (θ_ta) of author-topic distributions, and a W × T matrix Φ = (φ_wt) of topic-word distributions. Given a document d containing N_d words with known authors, in generative mode each word is assigned uniformly to one of the authors a of the document, then the corresponding θ_a is sampled to derive a topic t, and finally the corresponding φ_t is sampled to derive a word w. A fully Bayesian model is derived by putting symmetric Dirichlet priors with hyperparameters α and β over the distributions θ_a and φ_t. So, for instance, the prior on θ_a is given by

    D_α(θ_a) = [Γ(Tα) / Γ(α)^T] ∏_{t=1}^{T} θ_ta^{α−1}

and similarly for φ_t. If A is the set of authors of the corpus and document d has A_d authors, it is easy to see that under these assumptions the likelihood of a document is given by

    P(d | Θ, Φ, A) = ∏_{i=1}^{N_d} (1/A_d) ∑_a ∑_{t=1}^{T} φ_{w_i t} θ_ta,

where the sum over a ranges over the authors of d. This can be integrated over Θ and Φ and their Dirichlet distributions to get P(d | α, β, A). The posterior can be sampled efficiently using Markov chain Monte Carlo methods (Gibbs sampling) and, for instance, the Θ and Φ parameter matrices can be estimated by MAP or MPE methods. Once the data is obtained, applying this basic AT model to software requires the development of several tools to facilitate the processing and modeling of source code. In addition to the crawling infrastructure described above, the primary functions of the remaining tools are to extract and resolve author names from source code, as well as to convert the source code to the bag-of-words format.

4.1 Information Extraction from Source Code

Author-Document: The author-document matrix is produced from the output of our author extraction tool. It is a binary matrix where entry [i,j] = 1 if author i contributed to document j, and 0 otherwise. Extracting author information is ultimately a matter of tokenizing the code and associating developer names with file (document) names when this information is available. This process is further simplified for Java software due to the prevalence of javadoc tags, which present this metadata in the form of attribute-value pairs. Exploratory analysis of the Eclipse 3.0 code base, however, shows that most source files are credited to "The IBM Corporation" rather than specific developers. Thus, to generate a list of authors for specific source files, we parsed the Eclipse bug data available in [11]. After pruning files not associated with any author, this input dataset consists of 2,119 Java source files, comprising 700,000 lines of code, from a total of 59 developers. While leveraging bug data is convenient (and necessary) to generate the developer list for Eclipse 3.0, it is also desirable to develop a more flexible approach that uses only the source code itself, and not other data sources. Thus, to extract author names from source code, we also develop a lightweight parser that examines the code for javadoc "@author" tags, as well as free-form labels such as "author" and "developer." Occurrences of these labels are used to isolate and identify developer names. Ultimately, author identifiers may come in the form of full names, email addresses, URLs, or CVS account names. This multitude of formats, combined with the fact that author names are typically labeled in the code header, is key to our decision to extract developer names using our own parsing utilities, rather than the part-of-speech taggers [12] leveraged in other text mining projects.
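A minimal sketch of the tag-based part of such a parser is given below; the regular expressions are illustrative stand-ins, not the patterns of the actual tool.

```python
import re

# Illustrative patterns only: an @author javadoc tag, plus free-form
# "author:"/"developer:" labels. The real parser handles more edge cases.
AUTHOR_TAG = re.compile(r"@author\s+(.+)")
FREE_FORM = re.compile(r"\b(?:author|developer)s?\s*[:=]\s*(.+)", re.IGNORECASE)

def extract_authors(source: str) -> set[str]:
    """Pull candidate author names from a source file's header comments."""
    names = set()
    for line in source.splitlines():
        for pattern in (AUTHOR_TAG, FREE_FORM):
            m = pattern.search(line)
            if m:
                # Strip trailing comment markers and surrounding whitespace.
                names.add(m.group(1).strip(" */").strip())
    return names
```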
A further complication for author name extraction is the fact that the same developer may write his name in several different ways. For example, "John Q. Developer" alternates between "John Developer," "J. Q. Developer," or simply "Developer." To account for this effect, we also implement a two-tiered approach to name resolution using the q-gram algorithm [13]. When an individual project is parsed, a list of contributing developers (and the files they modified) is created. A pairwise comparison of author names is then performed using q-gram similarity, and pairs of names whose similarity is greater than a threshold t1 are merged. This process continues until all pairwise similarities are below the threshold, and the project list is then added to a global list of authors. When parsing is complete for all projects, the global author list is resolved using the same process, but with a new threshold t2, such that t2 > t1. This approach effectively implements more conservative name resolution across projects, in light of the observation that the scope of most developer activities is limited to a relatively small number (1 in many cases) of open source efforts. In practice, we set t1 = 0.65 and t2 = 0.75 (a code sketch of this resolution step follows at the end of this subsection). Running our parser on the multi-project repository yields 9,250 distinct authors.

Word-Document: To produce the word-document matrix for our input data we have developed a comprehensive tokenization tool tuned to the Java programming language. This tokenizer includes language-specific heuristics that follow commonly practiced naming conventions. For example, the Java class name "QuickSort" will generate the words "quick" and "sort". All punctuation is ignored. As an important step in processing source files, our tool removes commonly occurring stop words. We augment a standard list of stop words used for the English language (e.g. and, the, but) to include the names of all classes from the Java SDK (e.g. ArrayList, HashMap). This is done specifically to avoid extracting common topics relating to the Java collections framework. We run the LDA-based AT algorithm on the input matrices and set the total number of topics (100) and the number of iterations by experimentation. For instance, the number of iterations, i, to run the algorithm is determined empirically by analyzing results for i ranging from 500 to several thousand. The results presented in the next section are derived using 3,000 iterations, which were found to produce interpretable topics in a reasonable amount of time (a week or so). Because the algorithm contains a stochastic component, we also verified the stability of the results across multiple runs.
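The promised sketch of the name-resolution step follows. It uses Jaccard overlap of q-gram sets as one common instantiation of q-gram similarity [13] (the paper does not specify the exact variant), with a greedy single-pass merge standing in for the iterated merging described above.

```python
def qgrams(s: str, q: int = 3) -> set[str]:
    """Set of q-grams of a padded, lower-cased string."""
    s = f"{'#' * (q - 1)}{s.lower()}{'#' * (q - 1)}"
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def qgram_similarity(a: str, b: str, q: int = 3) -> float:
    """Jaccard overlap of q-gram sets; 1.0 for identical strings."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)

def resolve(names: list[str], threshold: float) -> list[str]:
    """Greedy merge of name variants whose similarity exceeds `threshold`."""
    merged: list[str] = []
    for name in names:
        for i, canon in enumerate(merged):
            if qgram_similarity(name, canon) >= threshold:
                # Keep the longer variant as the canonical form.
                merged[i] = max(canon, name, key=len)
                break
        else:
            merged.append(name)
    return merged
```

With this sketch, the two tiers amount to `resolve(project_names, 0.65)` per project followed by `resolve(global_names, 0.75)` across the whole repository.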
4.2 Topic and Author-Topic Modeling Results

A representative subset of 6 topics extracted via author-topic modeling on the selected 2,119 source files from Eclipse 3.0 is given in Table 2. Each topic is described by several words associated with the topic concept. To the right of each topic is a list of the most likely authors for each topic, with their probabilities. Examining the topic column of the table, it is clear that various functions of the Eclipse framework are represented. For example, topic 1 clearly corresponds to unit testing, topic 2 to debugging, topic 4 to building projects, and topic 6 to automated code completion. Remaining topics range from package browsing to compiler options.

Table 2: Representative topics and authors from Eclipse 3.0

    Topic 1: junit run listener item suite
      Authors: egamma 0.97065, wmelhem 0.01057, darin 0.00373, krbarnes 0.00144, kkolosow 0.00129
    Topic 2: target source debug breakpoint location
      Authors: jaburns 0.96894, darin 0.02101, lbourlier 0.00168, darins 0.00113, jburns 0.00106
    Topic 3: ast button cplist entries astnode
      Authors: maeschli 0.99161, mkeller 0.00097, othomann 0.00055, tmaeder 0.00055, teicher 0.00046
    Topic 4: nls-1 ant manager listener classpath
      Authors: darins 0.99572, dmegert 0.00044, nick 0.00044, kkolosow 0.00036, maeschli 0.00031
    Topic 5: type length names match methods
      Authors: kjohnson 0.59508, jlanneluc 0.32046, darin 0.02286, johna 0.00932, pmulet 0.00918
    Topic 6: token completion current identifier assist
      Authors: daudel 0.99014, teicher 0.00308, jlanneluc 0.00155, twatson 0.00084, dmegert 0.00046

Table 3 presents 6 representative author-topic assignments from the multi-project repository. This dataset yields a substantial increase in topic diversity. Topics representing major sub-domains of software development are clearly represented, with the first topic corresponding to web applications, the second to databases, the third to network applications, and the fourth to file processing. Topics 5 and 6 are especially interesting, as they correspond to common examples of crosscutting concerns from aspect-oriented programming [14], namely security and logging. Topic 5 is also demonstrative of the inherent difficulty of resolving author names, and of the shortcomings of the q-gram algorithm, as the developer "gert van ham" and the developer "hamgert" are most likely the same person documenting their name in different ways. Several trends reveal themselves when all results are considered. Though the majority of topics can be intuitively mapped to their corresponding domains, some topics are too noisy to associate any functional description with them. For example, one topic extracted from our repository consists of Spanish words unrelated to software engineering, which seem to represent the subset of source files with comments in Spanish. Other topics appear to be very project specific, and while they may indeed describe a function of code, they are not easily understood by those who are only casually familiar with the software artifacts in the codebase. This is especially true of Eclipse, which is limited in both the number and diversity of source files. In general, noise appears to diminish as repository size grows. Noise can be controlled to some degree by tuning the number of topics to be extracted, but of course cannot be eliminated completely. Examining the author assignments (and probabilities) for the various topics provides a simple means by which to discover developer contributions and infer their competencies. It should come as no surprise that the most probable developer assigned to the JUnit framework topic is "egamma", or Erich Gamma. In this case, there is a 97% chance that any source file in our dataset assigned to this topic will have him as a contributor. Based on this rather high probability, we can also infer that he is likely to have extensive knowledge of this topic.
This is of course a particularly attractive example, because Erich Gamma is widely known for being a founder of the JUnit project, a fact which lends credibility to the ability of the topic modeling algorithm to assign developers to reasonable topics. One can interpret the remaining author-topic assignments along similar lines. For example, developer "daudel" is assigned to the topic corresponding to automatic code completion with probability 0.99. Referring back to the Eclipse bug data, it is clear that the overwhelming majority of bug fixes for the codeassist framework were made by this developer. One can infer that this is likely to be an area of expertise of the developer.

Table 3: Representative topics and authors from the multi-project repository

    Topic 1: servlet session response request http
      Authors: craig r mcclanahan 0.19147, remy maucherat 0.08301, peter rossbach 0.04760, greg wilkins 0.04251, amy roh 0.03100
    Topic 2: sql column jdbc type result
      Authors: mark matthews 0.33265, ames 0.02640, mike bowler 0.02033, manuel laflamme 0.02027, gavin king 0.01813
    Topic 3: packet type session snmpwalkmv address
      Authors: brian weaver 0.14015, apache directory project 0.10066, opennms 0.08667, matt whitlock 0.06508, trustin lee 0.04752
    Topic 4: file path dir directory stream
      Authors: adam murdoch 0.02466, peter donald 0.02056, ludovic claude 0.01496, matthew hawthorne 0.01170, lk 0.01106
    Topic 5: token key security param cert
      Authors: werner dittmann 0.09409, apache software foundation 0.06117, gert van ham 0.05153, hamgert 0.05144, jcetaglib.sourceforge.net 0.05133
    Topic 6: service str log config result
      Authors: wayne m osse 0.44638, dirk mascher 0.07339, david irwin 0.04928, linke 0.02823, jason 0.01505

In addition to determining developer contributions, one may also be curious to know the scope of a developer's involvement. Does a developer work across application areas, or are his contributions highly focused? How does the breadth of one developer compare to another? These are natural questions that arise in the software development process. To answer these questions within the framework of author-topic models, we can measure the breadth of an author a by the entropy H(a) = −∑_t θ_ta log θ_ta of the corresponding distribution over topics. Applying this measure to our multi-project dataset, we find that the average is 2.47 bits. The developer with the lowest entropy is "thierry danard," with 0.00076 bits. The developer with the highest entropy is "wdi," with 4.68 bits, 6.64 bits being the maximum possible score for 100 topics. While the entropy measures an author's breadth, the similarity between two authors can be measured by comparing their respective distributions over topics. Several metrics are possible for this purpose, but one of the most natural measures is provided by the symmetrized Kullback-Leibler (KL) divergence.
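Both measures are a few lines given the estimated Θ matrix; a minimal sketch, using base-2 logarithms to match the figures quoted in bits (the smoothing constant `eps` is an implementation convenience, not from the paper):

```python
import numpy as np

def breadth(theta_a: np.ndarray) -> float:
    """Entropy H(a) = -sum_t theta_ta * log2(theta_ta), in bits."""
    p = theta_a[theta_a > 0]
    return float(-(p * np.log2(p)).sum())

def symmetrized_kl(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """0.5 * (KL(p||q) + KL(q||p)) between two topic distributions."""
    p, q = p + eps, q + eps          # smooth away exact zeros
    p, q = p / p.sum(), q / q.sum()
    kl_pq = (p * np.log2(p / q)).sum()
    kl_qp = (q * np.log2(q / p)).sum()
    return float(0.5 * (kl_pq + kl_qp))
```

The pairwise `symmetrized_kl` values over all authors form a dissimilarity matrix that can be fed directly to an MDS implementation (e.g. one configured for precomputed dissimilarities) to produce a layout like Figure 2.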
Multidimensional scaling (MDS) is employed to further visualize author similarities, resulting in Figure 2 for the Eclipse project.

[Figure 2: All 59 Eclipse 3.0 authors clustered by KL divergence of the distribution of an author over topics.]

The boxes represent individual developers, and are arranged such that developers with similar topic distributions are nearest one another. A similar figure, displaying only a subset of the 4,500 SourceForge and Apache authors due to space and legibility constraints, is available in the supplementary materials. This information is especially useful when considering how to form a development team, choosing suitable programmers to perform code updates, or deciding to whom to direct technical questions. Two other important distributions that can be retrieved from the AT modeling approach are the distribution of topics across documents, and the distribution of documents across topics (not shown). The corresponding entropies provide an automated and novel way to precisely formalize and measure topic scattering and document tangling, two fundamental concepts of software design [14], which are important to software architects when performing activities such as code refactoring.

5 Code Search and Retrieval

Sourcerer relies on a deep analysis of code to extract pertinent textual and structural features that can be used to improve the quality and performance of source code search, as well as to augment the ways in which code can be searched. By combining standard text information retrieval techniques with source-specific heuristics and a relational representation of code, we have available a comprehensive platform for searching software components. While there has been progress in developing source-code-specific search engines in recent years (e.g. Koders, Krugle, and Google's CodeSearch), these systems continue to focus strictly on text information retrieval, and do not appear to leverage the copious relations that can be extracted and analyzed from code. Programs are best modeled as graphs, with code entities comprising the nodes and various relations the edges. As such, it is worth exploring ranking methods that leverage the underlying graphs. A natural starting point is Google's PageRank [15], which considers hyperlinks to formulate a notion of popularity among web pages. This can be applied to source code as well, as it is likely that a code entity referenced by many other entities is more robust than one with few references. We used Google's PageRank [15] almost verbatim. The Code Rank of a code entity (package, class, or method) A is given by:

    CR(A) = (1 − d) + d (CR(T_1)/C(T_1) + ... + CR(T_n)/C(T_n)),

where T_1, ..., T_n are the code entities referring to A, C(A) is the number of outgoing links of A, and d is a damping factor. Using the CodeRank algorithm as a basis, it is possible to devise many ranking schemes by building graphs from the many entities and relations stored in our database, or subsets thereof. For example, one may consider the graph of only method call relationships, package dependencies, or inheritance hierarchies. Moreover, graph-based techniques can be combined with a variety of heuristics to further improve code search. For example, keyword hits to the right of the fully-qualified name can be boosted, hits in comments can be discounted, and terms indicative of test articles can be ignored. We are conducting detailed experiments to assess the effectiveness of graph-based algorithms in conjunction with standard IR techniques for searching source code. Current evidence strongly indicates that the best results are ultimately obtained by combining term-based ranking with source-specific heuristics and CodeRank.
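The recurrence can be solved with the same power iteration used for PageRank; a minimal sketch over a generic reference graph, where the damping factor d = 0.85 is a conventional assumption rather than a value reported here:

```python
def code_rank(out_links: dict[str, list[str]], d: float = 0.85,
              iters: int = 50) -> dict[str, float]:
    """Power iteration for CR(A) = (1-d) + d * sum_i CR(T_i)/C(T_i).

    `out_links` maps each code entity (package, class, or method) to the
    entities it references; each referrer distributes its rank evenly
    over its outgoing links.
    """
    nodes = set(out_links) | {t for ts in out_links.values() for t in ts}
    cr = {n: 1.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: 1.0 - d for n in nodes}
        for src, targets in out_links.items():
            if targets:
                share = d * cr[src] / len(targets)
                for t in targets:
                    nxt[t] += share
        cr = nxt
    return cr
```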
After defining a set of 25 control queries with known "best" hits, we compared performance using standard information retrieval metrics, such as area under the curve (AUC). Queries were formulated to represent users searching for specific algorithms, such as "depth first search," as well as users looking to reuse complete components, such as "database connection manager." Best hits were determined manually by a team of 3 software engineers serving as human judges of result quality, modularity, and ease of reuse. Results clearly indicate that the general Google search engine is ineffective for locating relevant source code, with a mean AUC of 0.31 across the queries. By restricting its corpus to code alone, Google's code search engine yields a substantial improvement, with an AUC of approximately 0.66. Despite this improvement, the system essentially relies only on regular expression matching of code keywords. Using a Java-specific keyword and comment parser, our infrastructure yields an immediate improvement, with an AUC of 0.736. By augmenting this further with the heuristics above and CodeRank (consisting of class and method relations), the mean AUC climbs to 0.841. At this time we have conducted extensive experiments for 12 ranking schemes corresponding to various combinations of graph-based and term-based heuristics, and have observed similar improvements. While space does not allow their inclusion, additional results are available from our supplementary materials page.

6 Conclusion

Here we have leveraged a comprehensive code processing infrastructure to facilitate the mining of large-scale software repositories. We conduct a statistical analysis of source code on a previously unreported scale, identifying robust power-law behavior among several code entities. The development and application of author-topic probabilistic modeling to source code allows for the unsupervised extraction of program organization, functionality, developer contributions, and developer similarities, thus providing a new direction for research in this area of software engineering. The methods developed are applicable at multiple scales, from single projects to Internet-scale repositories. Results indicate that the algorithm produces reasonable and interpretable automated topics and author-topic assignments. The probabilistic relationships between authors, topics, and documents that emerge from the models naturally provide an information-theoretic basis to define and compare developer and program similarity, topic scattering, and document tangling, with potential applications in software engineering ranging from bug fix assignment and staffing to software refactoring. Finally, by combining term-based information retrieval techniques with graphical information derived from program structure, we are able to significantly improve software search and retrieval performance.

Acknowledgments: Work supported in part by NSF MRI grant EIA-0321390 and a Microsoft Faculty Research Award to PB, as well as NSF grant CCF-0725370 to CL and PB.

References

[1] S. Ugurel, R. Krovetz, and C. L. Giles. What's the code?: automatic classification of source code archives. In KDD '02: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 632-638, New York, NY, USA, 2002. ACM Press.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, January 2003.
[3] W. Buntine. Open source search: a data mining platform. SIGIR Forum, 39(1):4-10, 2005.
[4] M. Steyvers, P. Smyth, M. Rosen-Zvi, and T. Griffiths. Probabilistic author-topic models for information discovery. In KDD '04: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 306-315, New York, NY, USA, 2004. ACM Press.
[5] D. Newman, C. Chemudugunta, P. Smyth, and M. Steyvers. Analyzing entities and topics in news articles using statistical topic models. In ISI, pages 93-104, 2006.
[6] S. Deerwester, S. Dumais, T. Landauer, G. Furnas, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407, 1990.
[7] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents. In UAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487-494, Arlington, Virginia, United States, 2004. AUAI Press.
[8] D. Newman and S. Block. Probabilistic topic decomposition of an eighteenth-century American newspaper. J. Am. Soc. Inf. Sci. Technol., 57(6):753-767, 2006.
[9] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[10] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. Natl. Acad. Sci. USA, 101(Suppl 1):5228-5235, April 2004.
[11] A. Schröter, T. Zimmermann, R. Premraj, and A. Zeller. If your bug database could talk... In Proceedings of the 5th International Symposium on Empirical Software Engineering, Volume II: Short Papers and Posters, pages 18-20, September 2006.
[12] E. Brill. Some advances in transformation-based part of speech tagging. In National Conference on Artificial Intelligence, pages 722-727, 1994.
[13] E. Ukkonen. Approximate string-matching with q-grams and maximal matches. Theor. Comput. Sci., 92(1):191-211, 1992.
[14] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda, C. Lopes, J. Loingtier, and J. Irwin. Aspect-oriented programming. In Mehmet Akşit and Satoshi Matsuoka, editors, Proceedings of the European Conference on Object-Oriented Programming, volume 1241, pages 220-242. Springer-Verlag, Berlin, Heidelberg, and New York, 1997.
[15] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Stanford Digital Library working paper SIDL-WP-1999-0120 of 11/11/1999 (see: http://dbpubs.stanford.edu/pub/1999-66).
Continuous Time Particle Filtering for fMRI

Lawrence Murray
School of Informatics, University of Edinburgh
lawrence.murray@ed.ac.uk

Amos Storkey
School of Informatics, University of Edinburgh
a.storkey@ed.ac.uk

Abstract

We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study.

1 Introduction

Functional Magnetic Resonance Imaging (fMRI) poses a large-scale, noisy and altogether difficult problem for machine learning algorithms. The Blood Oxygen Level Dependent (BOLD) signal, from which fMR images are produced, is a measure of hemodynamic activity in the brain, and so only an indirect indicator of the neural processes which are of primary interest in most cases. For studies of higher level patterns of activity, such as effective connectivity [1], it becomes necessary to strip away the hemodynamic activity to reveal the underlying neural interactions. In the first instance, this is because interactions between regions at the neural level are not necessarily evident at the hemodynamic level [2]. In the second, analyses increasingly benefit from the temporal qualities of the data, and the hemodynamic response itself is a form of temporal blurring.

We are interested in the application of machine learning techniques to reveal meaningful patterns of neural activity from fMRI. In this paper we construct a model of the processes underlying the BOLD signal that is suitable for use in a filtering framework. The model proposed is close to that of Dynamic Causal Modelling (DCM) [3]. The main innovation over these deterministic models is the incorporation of stochasticity at all levels of the system. This is important; under fixed inputs, DCM reduces to a generative model with steady state equilibrium BOLD activity and independent noise at each time point. Incorporating stochasticity allows proper statistical characterisation of the dependence between brain regions, rather than relying on relating decay rates.¹

Our work has involved applying a number of filtering techniques to estimate the parameters of the model, most notably the Unscented Kalman Filter [4] and various particle filtering techniques. This paper presents the application of a simple particle filter. [5] take a similar filtering approach, applying a local linearisation filter [6] to a model of individual regions. In contrast, the approach here is applied to multiple regions and their interactions, not single regions in isolation. Other approaches to this type of problem are worth noting. Perhaps the most commonly used technique to date is Structural Equation Modelling (SEM) [7; 8] (e.g. [9; 10; 11]).
SEM is a multivariate regression technique where each dependent variable may be a linear combination of both independent and other dependent variables. Its major limitation is that it is static, assuming that all observations are temporally independent and that interactions are immediate and wholly evident within each single observation. Furthermore, it does not distinguish between neural and hemodynamic activity, and in essence identifies interactions only at the hemodynamic level.

¹ A good analogy is the fundamental difference between modelling time series data y_t using an exponentially decaying curve with observational noise, x_t = a x_{t−1} + c, y_t = x_t + ε_t, and using the much more flexible Kalman filter x_t = a x_{t−1} + c + ω_t, y_t = x_t + ε_t (where x_t is a latent variable, a a decay constant, c a constant, and ω and ε Gaussian variables).

The major contributions of this paper are establishing a stochastic model of latent neural and hemodynamic activity, formulating a filtering and smoothing approach for inference in this model, and overcoming the basic practical difficulties associated with this. The estimated neural activity relates to the domain problem and is temporally consistent with the stimulus. The approach is also able to establish connectivity relationships. The ability of this model to establish such connectivity relationships on the basis of stochastic temporal relationships is significant. One problem in using structural equation models for effective connectivity analysis is the statistical equivalence of different causal models. By presuming a temporal causal order, temporal models of this form have no such equivalence problems. Any small amount of temporal connectivity information available in fMRI data is of significant benefit, as it can disambiguate between statically equivalent models.

Section 2 outlines the basis of the hemodynamic model that is used. This is combined with neural, input and measurement models in Section 3 to give the full framework. Inference and parameter estimation are discussed in Section 4, before experiments and analysis in Sections 5 and 6.

2 Hemodynamics

Temporal analysis of fMRI is significantly confounded by the fact that it does not measure brain activity directly, but instead via hemodynamic activity, which (crudely) temporally smooths the activity signal. The quality of temporal analysis therefore depends significantly on the quality of the model used to relate neural and hemodynamic activity. This relationship may be described using the now well established balloon model [12]. This models a venous compartment as a balloon using Windkessel dynamics. The state of the compartment is represented by its blood volume normalised to the volume at rest, v = V/V_0 (blood volume V, rest volume V_0), and deoxyhemoglobin (dHb) content normalised to the content at rest, q = Q/Q_0 (dHb content Q, rest content Q_0). The compartment receives inflow of fully oxygenated arterial blood f_in(t), extracts oxygen from the blood, and expels partially deoxygenated blood f_out(t). The full dynamics may be represented by the differential system:

    dq/dt = (1/τ_0) [ f_in(t) E(t)/E_0 − f_out(v) q/v ]          (1)
    dv/dt = (1/τ_0) [ f_in(t) − f_out(v) ]                       (2)
    E(t) = 1 − (1 − E_0)^{1/f_in(t)}                             (3)
    f_out(v) = v^{1/α}                                           (4)

where τ_0 and α are constants, and E_0 is the oxygen extraction fraction at rest. This base model is driven by the independent input f_in(t). It may be further extended to couple in neural activity z(t) via an abstract vasodilatory signal s [13]:

    df/dt = s                                                    (5)
    ds/dt = ε z(t) − s/τ_s − (f − 1)/τ_f.                        (6)
The complete system defined by Equations 1-6, with f_in(t) = f, is now driven by the independent input z(t). From the balloon model, the relative BOLD signal change over the baseline S at any time may be predicted using [12]:

    ΔS/S = V_0 [ k_1 (1 − q) + k_2 (1 − q/v) + k_3 (1 − v) ].    (7)

Figure 1 illustrates the system dynamics. Nominal values for constants are given in Table 1.

Table 1: Nominal values for constants of the balloon model [12; 13].

    Constant  τ_0   τ_f     τ_s     α     ε    V_0    E_0  k_1    k_2  k_3
    Value     0.98  1/0.65  1/0.41  0.32  0.8  0.018  0.4  7E_0   2    2E_0 − 0.2

[Figure 1: Response of the balloon model (q, v, f, s and BOLD %) to a 1 s burst of neural activity at magnitude 1 (time on x axis, response on y axis).]

3 Model

We define a model of the neural and hemodynamic interactions between M regions of interest. A region consists of neural tissue and a venous compartment. The state x_i(t) of region i at time t is given by:

    z_i(t)  neural activity
    f_i(t)  normalised blood flow into the venous compartment
    s_i(t)  vasodilatory signal
    q_i(t)  normalised dHb content of the venous compartment
    v_i(t)  normalised blood volume of the venous compartment

The complete state at time t is given by x(t) = (x_1(t)^T, ..., x_M(t)^T)^T. We construct a model of the interactions between regions in four parts: the input model, the neural model, the hemodynamic model and the measurement model.

3.1 Input model

The input model represents the stimulus associated with the experimental task during an fMRI session. In general this is a function u(t) with U dimensions. For a simple block design paradigm a one-dimensional box-car function is sufficient.

3.2 Neural model

Neural interactions between the regions are given by:

    dz = Az dt + Cu dt + c + Σ_z dW,                             (8)

where dW is the M-dimensional standard (zero mean, unit variance) Wiener process, A an M × M matrix of efficacies between regions, C an M × U matrix of efficacies between inputs and regions, c an M-dimensional vector of constant terms, and Σ_z an M × M diagonal diffusion matrix with σ_{z_1}, ..., σ_{z_M} along the diagonal. This is similar to the deterministic neural model of DCM expressed as a stochastic differential equation, but excludes the bilinear components allowing modulation of connections between seeds. In theory these can be added; we simply limit ourselves to a simpler model for this early work. In addition, and unlike DCM, nonlinear interactions between regions could also be included to account for modulatory activity. Again it seems sensible to keep to the simplest linear case at this stage of the work, but the potential for nonlinear generalisation is one of the longer term benefits of this approach.

3.3 Hemodynamic model

Within each region, the variables f_i, s_i, q_i, v_i and z_i interact according to a stochastic extension of the balloon model (c.f. Equations 1-6).
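Before adding noise, the deterministic core being extended here is easy to see concretely. The following is a minimal sketch integrating Equations 1-6 with a fixed-step Euler scheme and evaluating the BOLD change of Equation 7, using the Table 1 constants; the paper's own implementation uses an adaptive Runge-Kutta-Fehlberg method instead.

```python
import numpy as np

# Constants from Table 1.
TAU0, TAUF, TAUS = 0.98, 1 / 0.65, 1 / 0.41
ALPHA, EPS, V0, E0 = 0.32, 0.8, 0.018, 0.4
K1, K2, K3 = 7 * E0, 2.0, 2 * E0 - 0.2

def balloon_bold(z, dt=0.01):
    """Euler integration of Equations 1-6 driven by neural input z(t),
    returning the relative BOLD change of Equation 7 at each step."""
    f = v = q = 1.0   # rest state
    s = 0.0
    bold = []
    for zt in z:
        E = 1 - (1 - E0) ** (1 / f)            # Eq. 3
        fout = v ** (1 / ALPHA)                # Eq. 4
        dq = (f * E / E0 - fout * q / v) / TAU0
        dv = (f - fout) / TAU0
        ds = EPS * zt - s / TAUS - (f - 1) / TAUF
        df = s
        q, v, s, f = q + dt * dq, v + dt * dv, s + dt * ds, f + dt * df
        bold.append(V0 * (K1 * (1 - q) + K2 * (1 - q / v) + K3 * (1 - v)))
    return np.array(bold)

# E.g. the response to a 1 s burst of unit activity, as in Figure 1:
t = np.arange(0, 30, 0.01)
response = balloon_bold((t < 1.0).astype(float))
```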
qi + k3 (1 ? vi ) . (13) ?yi = V0 k1 (1 ? qi ) + k2 1 ? vi This may be converted to an absolute measurement yi? for comparison with actual observations by using the baseline signal bi for each seed and an independent noise source ? ? N (0, 1): yi? = bi (1 + ?yi ) + ?yi ?. 4 (14) Estimation The model is completely defined by Equations 8 to 14. This fits nicely into a filtering framework, whereby the input, neural and hemodynamic models define state transitions, and the measurement model predicted observations. For i = 1, . . . , M , ?zi , ?fi , ?si , ?qi and ?vi define the system noise and ?yi the measurement noise. Parameters to estimate are the elements of A, C, c and b. For a sequence of time points t1 , . . . , tT , we are given observations y(t1 ), . . . , y(tT ), where y(t) = (y1 (t), . . . , yM (t))T . We seek to exploit the data as much as possible by estimating P (x(tn ) | y(t1 ), . . . , y(tT )) for n = 1, . . . , T ? the distribution over the state at each time point given all the data. Because of non-Gaussianity and nonlinearity of the transitions and measurements, a two-pass particle filter is proposed to solve the problem. The forward pass is performed using a sequential importance resampling technique similar to C ONDENSATION [15], obtaining P (x(tn ) | y(t1 ), . . . , y(tn )) for n = 1, . . . , T . Resampling at each step is handled using a deterministic resampling method [16]. The transition of particles through the differential system uses a 4th/5th order Runge-Kutta-Fehlberg method, the adaptive step size maintaining fixed error bounds. The backwards pass is substantially more difficult. Naively, we can simply negate the derivatives of the differential system and step backwards to obtain P (x(tn ) | y(tn+1 ), . . . , y(tT )), then fuse these with the results of the forwards pass to obtain the desired posterior. Unfortunately, such a backwards model is divergent in q and v, so that the accumulated numerical errors of the Runge-Kutta can easily cause an explosion to implausible values and a tip-toe adaptive step size to maintain error bounds. This can be mitigated by tightening the error bounds, but the task becomes computationally prohibitive well before the system is tamed. An alternative is a two-pass smoother that reuses particles from the forwards pass [17], reweighting them on the backwards pass so that no explicit backwards dynamics are required. This sidesteps the divergence issue completely, but is computationally and spatially expensive and requires computa(i) (j) (i) (j) tion of p(x(tn ) = stn | x(tn?1 ) = stn?1 ) for particular particles stn and stn?1 . This imposes some limitations, but is nevertheless the method used here. (i) (i) The forwards pass provides a weighted sample set {(st , ?t )} at each time point t = t1 , . . . , tT for i = 1, . . . , P . Initialising with ?tT = ?tT , the backwards step to calculate weights at time tn is 4 as follows [17]2 : ?tn ? tn (i,j) = = p(x(tn+1 ) = stn+1 | x(tn ) = stn ) for i, j = 1, . . . , P ?tn ?tn (i) ? tn ?tn = = ?tTn (?tn+1 ? ?tn ) where ? is element-wise division, ?tn ? ?tn where ? is element-wise multiplication. These are then normalised so that is stored. P (j) (i) (i) (i) ?tn = 1 and the smoothed result {(stn , ?tn )} for i = 1, . . . , P There are numerous means of propagating particles through the forwards pass that accommodate the resampling step and propagation of the Wiener noise through the nonlinearity. 
These include various stochastic Runge-Kutta methods, the Unscented Transformation [4] or a simple Euler scheme using fixed time steps and adding an appropriate portion of noise after each step. The requirement to (i) (j) efficiently make P 2 density calculations of p(x(tn+1 ) = stn+1 | x(tn ) = stn ) during the backwards pass is challenging with such approaches, however. To keep things simple, we instead simply propagate particles noiselessly through the transition function, and add noise from the Wiener process only at times t1 , . . . , tT as if the transition were linear. This reasonably approximates the noise of (j) the system while keeping the density calculations very simple ? transition stn noiselessly to obtain the mean value of a Gaussian with covariance equal to that of the system noise, then calculate the (i) density of this Gaussian at stn+1 . Observe that if system noise is sufficiently tight, ?tn becomes sparse as negligibly small densities round to zero. Implementing ?tn as a sparse matrix can provide significant time and space savings. Propagation of particles through the transition function and density calculations can be performed in parallel. This applies during both passes. For the backwards pass, each particle at tn need only be transitioned once to produce a Gaussian from which the density of all particles at tn+1 can be calculated, filling in one column of ?tn . Finally, the parameters A, C, c and b may be estimated by adding them to the state with artificial dynamics (c.f. [18]), applying a broad prior and small system noise to suggest that they are generally constant. The same applies to parameters of the balloon model, which may be included to allow variation in the hemodynamic response across the brain. 5 Experiments We apply the model to data collected during a simple finger tapping exercise. Using a Siemens Vision at 2T with a TR of 4.1s, a healthy 23-year-old right-handed male was scanned on 33 separate days over a period of two months. In each session, 80 whole volumes were taken, with the first two discarded to account for T1 saturation effects. The experimental paradigm consists of alternating 6TR blocks of rest and tapping of the right index finger at 1.5Hz, where tapping frequency is provided by a constant audio cue, present during both rest and tapping phases. All scans across all sessions were realigned using SPM5 [19] and a two-level random effects analysis performed, from which 13 voxels were selected to represent regions of interest. No smoothing or normalisation was applied to the data. Of the 13 voxels, four are selected for use in this experiment ? located in the left posterior parietal cortex, left M1, left S1 and left premotor cortex. The mean of all sessions is used as the measurement y(t), which consists of M = 4 elements, one for each region. We set t1 = 1TR = 4.1s, . . . , tT = 78TR = 319.8s as the sequence of times, corresponding to the times at which measurements are taken after realignment. The experimental input function u(t) is plotted in Figure 2, taking a value of 0 at rest and 1 during tapping. The error bounds on the Runge-Kutta are set to 10?4 . Measurement noise is set to ?yi = 2 for i = 1, . . . , M and the prior and system noise as in Table 2. With the elements of A, C, c and b included in the state, the state size is 48. P = 106 particles are used for the forwards pass, downsampling to 2.5 ? 104 particles for the more expensive backwards pass. 
[Figure 2: Experimental input u(t); x axis is time t expressed in TRs, alternating 6 TR blocks of 0 (rest) and 1 (tapping).]

[Figure 3: Number of nonzero elements in α_{t_n} (of order 10⁸) for n = 1, ..., 77.]

Table 2: Prior and system noise.

    Parameter                                       Prior mean  Prior σ  Noise σ
    A_{i,i},  i = 1, ..., N                         −1          1/2      10⁻²
    A_{i,j},  i, j = 1, ..., N, i ≠ j               0           1/2      10⁻²
    C_{i,1},  i = 1, ..., N                         0           1/2      10⁻²
    z_i,      i = 1, ..., N                         0           1/2      10⁻¹
    f_i, s_i, q_i, v_i, c_i,  i = 1, ..., N         0           1/2      10⁻²
    b_i,      i = 1, ..., N                         ȳ_i         10       10⁻²

The experiment is run on the Eddie cluster of the Edinburgh Compute and Data Facility (ECDF)³ over 200 nodes, taking approximately 10 minutes real time. The particle filter and smoother are distributed across nodes and run in parallel using the dysii Dynamic Systems Library⁴. After application of the filter, the predicted neural activity is given in Figure 4 and parameter estimates in Figures 6 and 7. The predicted output obtained from the model is in Figure 5, where it is compared to actual measurements acquired during the experiment to assess model fit.

³ http://www.is.ed.ac.uk/ecdf/
⁴ http://www.indii.org/software/dysii/

6 Discussion

The model captures the expected underlying form for neural activity, with all regions distinctly correlated with the experimental stimulus. Parameter estimates are generally constant throughout the length of the experiment, and some efficacies are significant enough in magnitude to provide biological insight. The parameters found typically match those expected for this form of finger tapping task. However, as the focus of this paper is the development of the filtering approach, we reserve a real analysis of the results for a future paper and focus on the issues surrounding the filter and its capabilities and deficiencies. A number of points are worth making in this regard.

Particles stored during the forwards pass do not necessarily support the distributions obtained during the backwards pass. This is particularly obvious towards the extreme left of Figure 4, where the smoothed results appear to become erratic, essentially due to degeneracy in the backwards pass. Furthermore, while the smooth weighting of particles in the forwards pass is informative, that of the backwards pass is often not, potentially relying on heavy weighting of outlying particles and shedding little light on the actual nature of the distributions involved.

Figure 3 provides empirical results as to the sparseness of α_{t_n}. At worst, at least 25% of elements are zero, demonstrating the advantages of a sparse matrix implementation in this case.

The particle filter is able to establish consistent neural activity and parameter estimates across runs. These estimates also come with distributions in the form of weighted sample sets, which enable the uncertainty of the estimates to be understood. This certainly shows the stochastic model and particle filter to be a promising approach for systematic connectivity analysis.

[Figure 4: Neural activity predictions z (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2σ error.]

[Figure 5: Measurement predictions y* (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2σ error, circles actual measurements.]
Figure 6: Parameter estimates A (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2σ error.

Figure 7: Parameter estimates of C (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2σ error.

The authors would like to thank David McGonigle for helpful discussions and detailed information regarding the data set.

References
[1] Friston, K. and Buchel, C. (2004) Human Brain Function, chap. 49, pp. 999–1018. Elsevier.
[2] Gitelman, D. R., Penny, W. D., Ashburner, J., and Friston, K. J. (2003) Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution. NeuroImage, 19, 200–207.
[3] Friston, K., Harrison, L., and Penny, W. (2003) Dynamic causal modelling. NeuroImage, 19, 1273–1302.
[4] Julier, S. J. and Uhlmann, J. K. (1997) A new extension of the Kalman filter to nonlinear systems. The Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Multi Sensor Fusion, Tracking and Resource Management.
[5] Riera, J. J., Watanabe, J., Kazuki, I., Naoki, M., Aubert, E., Ozaki, T., and Kawashima, R. (2004) A state-space model of the hemodynamic approach: nonlinear filtering of BOLD signals. NeuroImage, 21, 547–567.
[6] Ozaki, T. (1993) A local linearization approach to nonlinear filtering. International Journal on Control, 57, 75–96.
[7] Bentler, P. M. and Weeks, D. G. (1980) Linear structural equations with latent variables. Psychometrika, 45, 289–307.
[8] McArdle, J. J. and McDonald, R. P. (1984) Some algebraic properties of the reticular action model for moment structures. British Journal of Mathematical and Statistical Psychology, 37, 234–251.
[9] Schlosser, R., Gesierich, T., Kaufmann, B., Vucurevic, G., Hunsche, S., Gawehn, J., and Stoeter, P. (2003) Altered effective connectivity during working memory performance in schizophrenia: a study with fMRI and structural equation modeling. NeuroImage, 19, 751–763.
[10] Au Duong, M., et al. (2005) Modulation of effective connectivity inside the working memory network in patients at the earliest stage of multiple sclerosis. NeuroImage, 24, 533–538.
[11] Storkey, A. J., Simonotto, E., Whalley, H., Lawrie, S., Murray, L., and McGonigle, D. (2007) Learning structural equation models for fMRI. Advances in Neural Information Processing Systems, 19.
[12] Buxton, R. B., Wong, E. C., and Frank, L. R. (1998) Dynamics of blood flow and oxygenation changes during brain activation: The balloon model. Magnetic Resonance in Medicine, 39, 855–864.
[13] Friston, K. J., Mechelli, A., Turner, R., and Price, C. J. (2000) Nonlinear responses in fMRI: The balloon model, Volterra kernels, and other hemodynamics. NeuroImage, 12, 466–477.
[14] Zarahn, E. (2001) Spatial localization and resolution of BOLD fMRI. Current Opinion in Neurobiology, 11, 209–212.
[15] Isard, M. and Blake, A. (1998) Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29, 5–28.
[16] Kitagawa, G. (1996) Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5, 1–25.
[17] Isard, M. and Blake, A. (1998) A smoothing filter for condensation. Proceedings of the 5th European Conference on Computer Vision, 1, 767–781.
[18] Kitagawa, G. (1998) A self-organising state-space model.
Journal of the American Statistical Association, 93, 1203–1215.
[19] Wellcome Department of Imaging Neuroscience (2006), Statistical parametric mapping. Online at www.fil.ion.ucl.ac.uk/spm/.
Feature Selection Methods for Improving Protein Structure Prediction with Rosetta

Ben Blum, Michael I. Jordan
Department of Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94305
{bblum,jordan}@cs.berkeley.edu

David E. Kim, Rhiju Das, Philip Bradley, David Baker
Department of Genome Sciences
University of Washington
Seattle, WA 98195
{dekim, rhiju, pbradley, dabaker}@u.washington.edu

Abstract

Rosetta is one of the leading algorithms for protein structure prediction today. It is a Monte Carlo energy minimization method requiring many random restarts to find structures with low energy. In this paper we present a resampling technique for structure prediction of small alpha/beta proteins using Rosetta. From an initial round of Rosetta sampling, we learn properties of the energy landscape that guide a subsequent round of sampling toward lower-energy structures. Rather than attempt to fit the full energy landscape, we use feature selection methods (both L1-regularized linear regression and decision trees) to identify structural features that give rise to low energy. We then enrich these structural features in the second sampling round. Results are presented across a benchmark set of nine small alpha/beta proteins, demonstrating that our methods seldom impair, and frequently improve, Rosetta's performance.

1 Introduction

Protein structure prediction is one of the most important unsolved problems in biology today. With the wealth of genome data now available, it is of great interest to determine the structures of the proteins that genes encode. Proteins are composed of long chains of amino acid residues, of which there are twenty natural varieties. A gene encodes a specific amino acid sequence, which, when translated, folds into a unique three-dimensional conformation. The protein structure prediction problem is to predict this conformation (the protein's tertiary structure) from the amino acid sequence (the protein's primary structure). The biological function of a protein is dependent on its structure, so structure prediction is an important step towards function prediction. Potential applications of structure prediction range from elucidation of cellular processes to vaccine design. Experimental methods for protein structure determination are costly and time-intensive, and the number of known protein sequences now far outstrips the capacity of experimentalists to determine their structures. Computational methods have been improving steadily and are approaching the level of resolution attainable in experiments. Structure prediction methods fall into two broad camps: comparative modeling, in which solved protein structures are known for one or more proteins with sequences similar to the target sequence ("homologs"), and ab initio modeling, in which no homologs are known. In this paper we concentrate on ab initio modeling, and specifically on the Rosetta algorithm [3].

Figure 1: Flowchart of the resampling method.

Rosetta is one of the leading methods for ab initio protein structure prediction today. Rosetta uses a Monte Carlo search procedure to minimize an energy function that is sufficiently accurate that the conformation found in nature (the "native" conformation) is generally the conformation with lowest Rosetta energy. Finding the global minimum of the energy function is very difficult because of the high dimensionality of the search space and the very large number of local minima.
Rosetta employs a number of strategies to combat these issues, but the primary one is to perform a large number of random restarts. Thanks to a very large-scale distributed computing platform called Rosetta@home, composed of more than three hundred thousand volunteer computers around the world, up to several million local minima of the energy function ("decoys," in Rosetta parlance) can be computed for each target sequence. Our work begins with the observation that a random-restart strategy throws away a great deal of information from previously computed local minima. In particular, previous samples from conformation space might suggest regions of uniformly lower energy; these are regions in which Rosetta may wish to concentrate further sampling. This observation is applicable to many global optimization problems, and past researchers have proposed a variety of methods for exploiting it, including fitting a smoothed response surface to the local minima already gathered [1] and learning to predict good starting points for optimization [2]. Unfortunately, conformation space is very high-dimensional and very irregular, so response surfaces do not generalize well beyond the span of the points to which they are fitted. Generally, the correct (or "native") structure will not be in the span of the points seen so far; if it were, the first round of Rosetta sampling would already have been successful. We have developed an approach that sidesteps this limitation by explicitly recombining successful features of the models seen so far. No single local minimum computed in the first round of Rosetta search will have all the native features. However, many native features are present in at least some of the decoys. If these features can be identified and combined with each other, then sampling can be improved. Our approach has three steps, each mapping from one structural representation space to another (Figure 1; a sketch of this loop appears after the outline below). In the first step, we project the initial set of Rosetta models from continuous conformation space into a discrete feature space. The structural features that we have designed characterize significant aspects of protein structure and are largely sufficient to determine a unique conformation. In the second step, we use feature selection methods, including both decision trees and Least Angle Regression (LARS) [4], to identify structural features that best account for energy variation in the initial set of models. We can then predict that certain of these features (generally, those associated with low energy) are present in the native conformation. In the third step, we use constrained Rosetta search to generate a set of models enriched for these key features.

2 Outline

In section 3, we outline a response surface approach and its shortcomings, and motivate the move to a feature-based representation. In section 4, we describe the features we use and the way that particular feature values are enforced in Rosetta search. This characterizes the way we map points from our discretized feature space back to continuous conformation space. In section 5, we describe the feature selection techniques we use to determine which features to enforce. In section 6, we show the results of Rosetta search biased towards selected features. In section 7, we conclude with a discussion of the results achieved so far and of further work to be done.
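To fix ideas, here is a minimal sketch of the three-step resampling loop (our illustration, not the authors' code; `rosetta_sample`, `extract_features`, and `constrained_rosetta_sample` are hypothetical stand-ins for the corresponding Rosetta machinery, and the per-run coin flip anticipates the enforcement strategy of section 5.1):

```python
import numpy as np

def resample_round(sequence, n_decoys, select_features,
                   rosetta_sample, extract_features,
                   constrained_rosetta_sample, enforce_prob=0.3):
    """One round of feature-based resampling (steps 1-3 of Figure 1)."""
    # Step 1: unbiased sampling, then project decoys into feature space
    decoys = rosetta_sample(sequence, n_decoys)
    D = np.array([extract_features(d) for d in decoys])  # 0/1 features
    E = np.array([d.energy for d in decoys])             # Rosetta energies

    # Step 2: pick the features that best explain energy variation
    idx, weights = select_features(D, E)
    predicted_native = idx[weights < 0]    # negative weight -> low energy

    # Step 3: for each new run, flip a coin per predicted-native feature
    # and constrain the selected subset of features during search
    new_decoys = []
    for _ in range(n_decoys):
        constraints = [f for f in predicted_native
                       if np.random.rand() < enforce_prob]
        new_decoys.append(constrained_rosetta_sample(sequence, constraints))
    return new_decoys
```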
3 Response Surface Methods

As an initial attempt at developing resampling methods for protein structure prediction, we investigated a response surface fitting approach. Our goal was to fit a smoothed energy surface to the Rosetta models seen so far and then to minimize this surface to find new starting points for local optimization of the Rosetta energy function. The first task was to define the conformation space. The most natural space is defined in terms of the conformational degrees of freedom. Each residue in an amino acid sequence has two primary degrees of freedom: rotation around the Cα-N bond, referred to as the φ torsion angle, and rotation around the Cα-C bond, referred to as the ψ torsion angle. However, it is difficult to fit a response surface in the space of torsion angles because the energy function is highly irregular in this space; a slight change in a single torsion angle typically causes large global structural changes, which in turn cause large energy changes. Instead, we took the three-dimensional coordinates of the backbone atoms as our conformation space, with all models in the set aligned to a reference model. There are four backbone atoms per residue and three coordinates per backbone atom, so an n-residue protein is represented by a 12n-dimensional vector. Even for small proteins of only around 70 residues this space is very high-dimensional, but we found that most of the structural variation in sets of Rosetta models was captured by the first 10 principal components. Data were sufficient to fit a response surface in these 10 dimensions. Along certain directions, energy gradients were detectable that pointed toward the native structure. One such direction was the first principal component for protein 1n0u (Figure 2.a; in this graph, the native structure is represented as an ensemble of Rosetta-minimized structures that started at the native conformation). However, in most directions the gradient did not point toward the natives (Figure 2.b). A response surface fitted to the Rosetta models shown in these graphs will therefore have high energy in the vicinity of the natives. These observations suggest a new strategy: rather than fitting a response surface to all the dimensions jointly, we should identify a few dimensions that are associated with clear score gradients and make no claims about the other dimensions. This motivates a shift in philosophy: rather than predicting energy and minimizing, we wish to predict features of the native structure and then enforce them independently of each other.

Figure 2: (a) Rosetta models (black) and relaxed natives (blue) projected onto the first principal component. (b) Models and natives projected onto the third principal component.

Figure 3: (a) Bins in the Ramachandran plot. (b) Structure of 1dcj. Two helices are visible behind a beta pleated sheet consisting of four strands, the bottommost three paired in the anti-parallel orientation and the topmost two paired in the parallel orientation. In this "cartoon" representation of structure, individual atoms are not rendered.

4 Structural features

For the purpose of the work described in this paper, we make use of two types of structural features: torsion angle features and beta contact features.

4.1 Torsion angle features

The observed values of the φ and ψ angles for a single residue are strongly clustered in the database of solved protein structures (the PDB). Their empirical distribution is shown in a Ramachandran plot.
In order to discretize the possible torsion angles for each residue, we divide the Ramachandran plot into five regions, referred to as "A," "B," "E," "G," and "O" (Figure 3.a). These regions are chosen to correspond roughly to clusters observed in the PDB. A protein with 70 amino acid residues has 70 torsion bin features, each with possible values A, B, E, G, and O. The primary search move in Rosetta is a fragment replacement move: the conformation of a string of three or nine consecutive residues within the target sequence is replaced with the conformation of a similar subsequence from the PDB. A torsion angle feature can be constrained in Rosetta search by limiting the fragments to those which have torsion angles within the given bin at the given residue position. Strings of torsion features are referred to as barcodes in Rosetta, and the apparatus for defining and constraining them was developed in-house by Rosetta developers.

4.2 Beta contact features

Proteins exhibit two kinds of secondary structure, characterized by regular hydrogen bond patterns: alpha helices and beta pleated sheets (Figure 3.b). In alpha helices, the hydrogen bonds are all local, and are predicted fairly consistently by Rosetta. In beta sheets, however, the bonds can be between residues that are quite distant along the chain. A beta contact feature for residues i and j indicates the presence of two backbone hydrogen bonds between i and j. We use the same definition of beta pairing as the standard secondary structure assignment algorithm DSSP [5]. The bonding pattern can be either parallel (as between the red residues in Figure 3.b) or antiparallel (as between the blue residues). Furthermore, the pleating can have one of two different orientations. A beta pairing feature is defined for every triple (i, j, o) of residue numbers i and j and orientations o ∈ {parallel, antiparallel}. The possible values of a beta pairing feature are X, indicating no pairing, and P1 or P2, indicating pleating of orientation 1 or 2, respectively. Beta contact features are enforced in Rosetta by means of a technique called "jumping." A pseudo-backbone-bond is introduced between the two residues to be glued together. This introduces a closed loop into the backbone topology of the protein. Torsion angles within the loop can no longer be altered without breaking the loop, so, in order to permit further fragment replacements, a cut (or "chainbreak") must be introduced somewhere else in the loop. The backbone now takes the form of a tree rather than a chain. After a Rosetta search trajectory terminates, an attempt is made to close the chainbreak with local search over several torsion angles on either side of it.

5 Prediction of native features

Let us transform our set of multi-valued features into a set of 0-1 valued features indicating whether or not a particular value for the feature is present. Let us assume that each binary feature f has an independent energetic effect; if present, it brings with it an average energy bonus b_f. Under these assumptions, the full energy of a conformation d is modelled as

E_0 + Σ_f d_f b_f + N,

where E_0 is a constant offset, d_f is either 1 if the feature is present in d or 0 if it is absent, and N is Gaussian noise. This model is partially justified by the fact that the true energy is indeed a sum of energies from local interactions, and our features capture local structural information. Our hypothesis is that native features have lower energy on average even if other native features are not present.
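To make the projection into feature space concrete, here is a minimal sketch of turning (φ, ψ) angles into the one-hot torsion-bin features d_f that feed the energy model above. This is our illustration: the bin boundaries are rough placeholder choices, since the paper does not reproduce its exact region definitions.

```python
import numpy as np

BINS = "ABEGO"

def torsion_bin(phi, psi):
    """Map a (phi, psi) pair in degrees to one of five Ramachandran bins.

    The boundaries here are illustrative, not the paper's; "O" (other)
    is reserved for out-of-range outliers.
    """
    if abs(phi) > 180 or abs(psi) > 180:
        return "O"
    if abs(psi) > 100:                    # extended, strand-like region
        return "B" if phi < 0 else "E"
    return "A" if phi < 0 else "G"        # right-/left-handed helical

def one_hot_features(phis, psis):
    """0/1 feature vector with one block of five bins per residue."""
    D = np.zeros(len(phis) * len(BINS))
    for r, (phi, psi) in enumerate(zip(phis, psis)):
        D[r * len(BINS) + BINS.index(torsion_bin(phi, psi))] = 1.0
    return D
```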
In order to identify a small set of potentially native features, we use L1 regularization, or lasso regression [6], to find a sparse model. The minimization performed is

argmin_{(b, E_0)} Σ_{d∈D} ( E(d) − E_0 − Σ_f d_f b_f )² + C Σ_f |b_f|,

where E(d) is the computed Rosetta energy of model d and C is a regularization constant. The small set of features that receive non-zero weights are those that best account for energy variations in the population of decoys. These are the features we can most confidently predict to be native. The Least Angle Regression algorithm [4] allows us to efficiently compute the trajectory of solutions for all values of C simultaneously. Experience with Rosetta has shown that constraining more than ten or fifteen torsion features can hamper search more than it helps; if there are very few fragments available for a given position that satisfy all torsion constraints, the lack of mobility at that position can be harmful. We typically take the point in the LARS trajectory that gives fifteen feature values.

5.1 Feature enforcement strategy

LARS gives us a set of feature values that have a strong effect on energy. Our hypothesis is that features strongly associated with lower energies, namely those selected by LARS and given negative weights, are more likely to be native, and that features given positive weights by LARS are more likely to be non-native. This hypothesis is borne out by our experiments on a benchmark set of 9 small alpha/beta proteins. The LARS prediction accuracy is given in Figure 4.a. This chart shows, for each protein, the fraction of LARS-selected features correctly labeled as native or non-native by the sign of the LARS weight. Fifteen LARS features were requested per protein. The more accurate "low energy leaf" predictions will be discussed in the next section. It is clear from Figure 4.a that LARS is informative about native features for most proteins. However, we cannot rely wholly on its predictions. If we were simply to constrain every LARS feature, then Rosetta would never find the correct structure, since some incorrect features would be present in every model. Our resampling strategy is therefore to flip a coin at the beginning of the Rosetta run to decide whether or not to constrain a particular LARS feature. Coins are flipped independently for each LARS feature. Resampling improves on unbiased Rosetta sampling if the number of viable runs (runs in which no non-native features are enforced) is sufficiently high that the benefits from the enforcement of native features are visible. We have achieved some success by enforcing LARS features with probability 30% each, as demonstrated in the results section.
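As an illustration of the selection step above, here is a minimal sketch using scikit-learn's `lars_path` (an implementation choice of ours; the paper does not specify one) to walk the lasso trajectory until fifteen features are active:

```python
import numpy as np
from sklearn.linear_model import lars_path

def select_features(D, E, n_features=15):
    """Return indices and weights of the first `n_features` features
    entering the LARS/lasso path fitted to decoy energies.

    D: (n_decoys, n_total_features) binary feature matrix (the d_f)
    E: (n_decoys,) Rosetta energies E(d)
    """
    # centering E stands in for the constant offset E_0
    alphas, active, coefs = lars_path(D, E - E.mean(), method="lasso")
    for step in range(coefs.shape[1]):      # walk down the trajectory
        b = coefs[:, step]
        if np.count_nonzero(b) >= n_features:
            idx = np.flatnonzero(b)
            return idx, b[idx]              # b < 0: predicted native
    idx = np.flatnonzero(coefs[:, -1])      # path ended early
    return idx, coefs[idx, -1]
```

Features returned with negative weights would then be the enforcement candidates for the coin-flipping strategy described above.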
Figure 4: (a) LARS prediction accuracy when fitted to the total decoy population and to the three decision-tree leaves with lowest 10th-percentile energies, ordered here by average rmsd. (b) Relation of prediction accuracy to resampling improvement in LARS-only runs.

5.2 Decision trees for beta contact features

Beta contact features are less suited to the lasso regression approach than torsion angle features, because independence assumptions are not as valid. For instance, contact (i, j, parallel) and contact (i+1, j+1, parallel) are redundant and will usually co-occur, whereas contact (i, j, parallel) and contact (i−1, j+1, parallel) are mutually exclusive and will never co-occur. For beta contact features, we therefore employ a decision tree approach to divide the decoy population into non-overlapping clusters, each defined by the presence of several beta contacts. Lasso regression is then employed in each cluster separately to determine likely native torsion features. We use decision trees of depth three. At each node, a beta contact feature is selected to use as a split point and a child node is created for each of the three possible values X, P1, and P2. Our strategy is to choose split points which most reduce entropy in the features. The beta contact feature is therefore chosen whose mutual information with the other beta contact features is maximized, as approximated by the sum of the marginal mutual informations with each other feature. Since some clusters are sampled more heavily than others, the lowest energy within a cluster is not a fair measure of its quality, even though, in principle, we care only about the lowest achievable energy. Instead, we use the 10th-percentile energy to evaluate clusters (a short sketch of this ranking follows below). Its advantage as a statistic is that its expectation does not depend on sample size, yet it often gives a reasonably tight upper bound on achievable energy. Our resampling strategy, given a decision tree, is to sample evenly from each of the top three leaves as ranked by 10th-percentile energy. Within the subpopulation of decoys defined by each leaf, we select torsion features using LARS. In our benchmark set, the top three low-energy leaves of the decision tree were generally closer to the native than the population at large. Perhaps as a result, LARS generally achieved greater prediction accuracy when restricted to their associated subpopulations, as seen in Figure 4.a. Leaves are sorted by average rmsd, so "low energy leaf 1," the "best" leaf, consists of decoys which are closest, on average, to the native conformation. The best leaf consisted of only native contacts for all proteins except 1n0u and 1ogw, but in both these cases it contained structures generally lower in rmsd than the population at large, and resampling achieved improvements over plain Rosetta sampling. In general, LARS performed better on the leaves that were closer to the native structure, although there were a few notable exceptions. Ideally, we would concentrate our sampling entirely on the best leaf, but since we cannot generally identify which one it is, we have to hedge our bets. Including more leaves in the resampling round increases the chances of resampling a native leaf but dilutes sampling of the best leaf in the pool. This tradeoff is characteristic of resampling methods.
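A minimal sketch of the leaf-ranking step just described (our illustration; the grouping of decoys into leaves is assumed to have been produced by the depth-three tree):

```python
import numpy as np

def top_leaves(leaf_energies, k=3):
    """Rank decision-tree leaves by 10th-percentile Rosetta energy.

    leaf_energies: dict mapping leaf id -> 1-D array of decoy energies
    Returns the k leaf ids with the lowest 10th-percentile energy,
    which are then resampled evenly.
    """
    score = {leaf: np.percentile(E, 10) for leaf, E in leaf_energies.items()}
    return sorted(score, key=score.get)[:k]
```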
6 Results

We tested two Rosetta resampling schemes over a set of 9 alpha/beta proteins of between 59 and 81 residues. In the first scheme (referred to henceforth as "LARS-only"), 15 LARS-predicted torsion features were constrained at 30% frequency. In the second (referred to henceforth as "decision-tree"), three subpopulations were defined for each protein using a decision tree, and within each subpopulation 15 LARS-predicted torsion features were constrained at frequencies heuristically determined on the basis of several meta-level "features of features," including the rate of the feature's occurrence in the first round of Rosetta sampling and the magnitude of the regression weight for the feature. Each resampling scheme was compared against a control population generated at the same time. Exactly the same number of models were generated for the control and resampled populations. The control and resampled populations for the LARS-only scheme consist of about 200,000 decoys each. The populations for the decision-tree scheme consist of about 30,000 decoys each, due to limitations in available compute time. The difference in quality between the two control populations is partially explained by the different numbers of samples in each, and partially by changes in Rosetta in the time between the generation of the two datasets.

                    RMSD of low-energy decoys                 Lowest RMSD of 25 low-energy decoys
                    Decision tree       LARS only             Decision tree       LARS only
                    Control  Resamp     Control  Resamp       Control  Resamp     Control  Resamp
1di2                 2.35     2.14       2.76     0.97         1.78     1.34       1.82     0.73
1dtj                 3.20     1.53       5.28     1.88         1.46     1.53       1.95     1.59
1dcj                 2.35     3.31       2.34     2.11         2.19     1.86       1.71     1.88
1ogw                 5.22     3.99       3.03     2.80         3.12     2.6        2.08     2.48
2reb                 1.15     1.17       1.07     1.27         0.89     0.93       0.83     0.86
2tif                 5.68     4.57       3.57     6.85         3.32     3.27       3.27     2.61
1n0u                11.89    11.60      11.93     3.54         9.78     3.19       3.54     2.84
1hz6A                2.52     1.06       3.36     4.68         2.38     1.06       1.97     1.19
1mkyA               10.39     8.21       4.60     4.58         3.43     3.25       3.33     4.23
Mean difference          -0.8               -1.03                  -1.04               -0.23
Median difference        -1.11              -0.23                  -0.33               -0.36
Number improved           7/9                6/9                    7/9                 5/9

Our two primary measures of success for a resampling run are both based on root-mean-square distance to the native structure. Root-mean-square distance (rmsd) is a standard measure of discrepancy between two structures. It is defined as the square root of the mean of the squared distances between pairs of corresponding backbone atoms in the two structures, under the alignment that minimizes this quantity. Our first measure of success is the rmsd between the native structure and the lowest scoring model. This measures Rosetta's performance if forced to make a single prediction. Our second measure of success is the lowest rmsd among the twenty-five top-scoring models. This is a smoother measure of the quality of the lowest scoring Rosetta models, and gives some indication of the prediction quality if more sophisticated minima-selection methods are used than Rosetta energy ranking. Structures at 1Å from the native have atomic-level resolution; this is the goal. Structures at between 2Å and 4Å generally have several important structural details incorrect. In proteins the size of those in our benchmark, structures more than 5Å from the native are poor predictions.

Both resampling schemes achieved some success. The performance measures are shown in the table above. The decision-tree scheme performed more consistently and achieved larger improvements on average; it improved the low-energy rmsd in 7 of the 9 benchmark proteins, with a significant median improvement of 1.11Å. Particularly exciting are the atomic-resolution prediction for 1hz6 and the nearly atomic-resolution prediction for 1dtj. In both these cases, plain Rosetta sampling performed considerably worse. The LARS-only scheme was successful as well, providing improved
lowest-energy predictions on 6 of the 9 benchmark proteins, with a median improvement of 0.23Å. The LARS-only low-energy prediction for 1di2 is atomic-resolution, at 0.97Å away from the native structure, as compared to 2.97Å for the control run. In general, improvements correlated with LARS accuracy (Figure 4.b). The two notable exceptions were 2reb, for which plain Rosetta search performs so well that constraints only hurt sampling, and 1n0u, for which plain Rosetta search concentrates almost entirely on a cluster with incorrect topology at 10Å. Certain LARS-selected features, when enforced, switch sampling over to a cluster at around 3Å. Even when incorrect features are enforced within this cluster, sampling is much improved.

The cases in which the decision-tree scheme did not yield improved low-energy predictions are interesting in their own right. In the case of 1dcj, resampling does yield lower rmsd structures (the top-25 low-rms prediction is superior, and the minimum rmsd from the set is 1.35, nearly atomic resolution, as compared to 1.95 for the control run), but the Rosetta energy function does not pick them out. This suggests that better decoy-selection techniques would improve our algorithms. In the case of 2reb, the unbiased rounds of Rosetta sampling were so successful that they would have been difficult to improve on. This emphasizes the point that resampling cannot hurt us too much. If a plain Rosetta sampling round of n decoys is followed by a resampling round of n decoys, then no matter how poor the resampled decoys are, sampling efficiency is decreased by at most a factor of 2 (since we could have generated n plain Rosetta samples in the same time). The danger is that resampling may overconverge to broad, false energy wells, achieving lower energies in the resampling round even though rmsd is higher. This appears to occur with 2tif, in which the LARS-only low-energy prediction has significantly lower energy than the control prediction despite being much farther from the native. Once more, better decoy-selection techniques might help.

7 Discussion and Conclusions

Our results demonstrate that our resampling techniques improve structure prediction on a majority of the proteins in our benchmark set. Our first resampling method significantly improves Rosetta predictions in 3 of the 9 test cases, and marginally improves two or three more. Our second resampling method expands the set of proteins on which we achieve improvements, including an additional atomic-level prediction. It is important to note that significant improvements over Rosetta on any proteins are hard to achieve; if our methods achieved one or two significantly improved predictions, we would count them a success. Rosetta is the state of the art in protein structure prediction, and it has undergone years of incremental advances and optimizations. Surpassing its performance is very difficult. Furthermore, it doesn't hurt Rosetta too badly if a resampling scheme performs worse than unbiased sampling on some proteins, since models from the unbiased sampling round that precedes the resampling round can be used as predictions as well.

There are a number of avenues of future work to pursue. We have designed a number of other structural features, including per-residue secondary structure features, burial features, and side-chain rotamer features, and we hope to incorporate these into our methods. The primary barrier is that each new feature requires a method for constraining it during Rosetta search.
We also plan to further investigate the possibility of detecting which LARS predictions are correct using "features of features," and to apply these methods to discriminate between decision tree leaves as well. It is possible that, with more sampling, the decision-tree runs would yield atomic-resolution predictions. However, computational costs for Rosetta are high; each Rosetta model takes approximately fifteen minutes of CPU time to compute on a 1GHz CPU, and each of the 36 data sets represented here consists of on the order of 100,000 models. The success of our feature selection techniques suggests that the high dimensionality and multiple minima that make high-resolution protein structure prediction difficult to solve using traditional methods provide an excellent application for modern machine learning methods. The intersection between the two fields is just beginning, and we are excited to see further developments.

References
[1] G. E. P. Box and K. B. Wilson. On the experimental attainment of optimum conditions (with discussion). Journal of the Royal Statistical Society Series B, 13(1):1–45, 1951.
[2] Justin Boyan and Andrew W. Moore. Learning evaluation functions to improve optimization by local search. The Journal of Machine Learning Research, 1:77–112, 2001.
[3] Phil Bradley, Lars Malmstrom, Bin Qian, Jack Schonbrun, Dylan Chivian, David E. Kim, Jens Meiler, Kira M. Misura, and David Baker. Free modeling with Rosetta in CASP6. Proteins, 61(S7):128–134, 2005.
[4] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Annals of Statistics (with discussion), 32(2):407–499, 2004.
[5] Wolfgang Kabsch and Chris Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22(12):2577–2637, 1983.
[6] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58(1):267–288, 1996.
An online Hebbian learning rule that performs Independent Component Analysis

Claudia Clopath
School of Computer Science and Brain Mind Institute
Ecole polytechnique federale de Lausanne
1015 Lausanne EPFL
claudia.clopath@epfl.ch

Andre Longtin
Center for Neural Dynamics
University of Ottawa
150 Louis Pasteur, Ottawa
alongtin@uottawa.ca

Wulfram Gerstner
School of Computer Science and Brain Mind Institute
Ecole polytechnique federale de Lausanne
1015 Lausanne EPFL
wulfram.gerstner@epfl.ch

Abstract

Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second-order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule.

1 Introduction

The so-called cocktail party problem refers to a situation where several sound sources are simultaneously active, e.g. persons talking at the same time. The goal is to recover the initial sound sources from the measurement of the mixed signals. A standard method of solving the cocktail party problem is independent component analysis (ICA), which can be performed by a class of powerful algorithms. However, classical algorithms based on higher moments of the signal distribution [1] do not consider temporal correlations, i.e. data points corresponding to different time slices could be shuffled without a change in the results. But time order is important, since most natural signal sources have intrinsic temporal correlations that could potentially be exploited. Therefore, some algorithms have been developed to take those temporal correlations into account, e.g. algorithms based on delayed correlations [2, 3, 4, 5], potentially combined with higher-order statistics [6], based on innovation processes [7], or based on complexity pursuit [8]. However, those methods are rather algorithmic, and most of them are difficult to interpret biologically, e.g. they are not online or not local, or they require a preprocessing of the data. Biological learning algorithms are usually implemented as an online Hebbian learning rule that triggers changes of synaptic efficacy based on the correlations between pre- and postsynaptic neurons. A Hebbian learning rule, like Oja's learning rule [9], combined with a linear neuron model, has been shown to perform principal component analysis (PCA). Simply using a nonlinear neuron combined with Oja's learning rule allows one to compute higher moments of the distributions, which yields ICA if the signals have been preprocessed (whitening) at an earlier stage [1].

Figure 1: The sources s are mixed with a matrix C, x = Cs; the x are the presynaptic signals. Using a linear neuron y = Wx, we want to find the matrix W which allows the postsynaptic signals y to recover the sources, y = Ps, where P is a permutation matrix with different multiplicative constants.

In this paper, we are interested in exploiting the correlations of the signals at different time delays, i.e. a generalization of the theory of Molgedey and Schuster [4].
We will show that a linear neuron model combined with a Hebbian learning rule based on the joint firing rates of the pre- and postsynaptic neurons at different time delays performs ICA by exploiting the temporal correlations of the presynaptic inputs.

2 Mathematical derivation of the learning rule

2.1 The problem

We assume statistically independent autocorrelated source signals s_i with mean ⟨s_i⟩ = 0 (⟨·⟩ denotes averaging over time) and correlations ⟨s_i(t) s_j(t′)⟩ = K_i(|t − t′|) δ_ij. The sources s are mixed by a matrix C,

x = C s,   (1)

where x are the mixed signals recorded by a finite number of receptors (bold notation refers to a vector). We think of the receptors as presynaptic neurons that are connected via a weight matrix W to postsynaptic neurons. We consider linear neurons [9], so that the postsynaptic signals y can be written

y = W x.   (2)

The aim is to find a learning rule that adjusts the weight matrix W to the appropriate value W* (* denotes the value at the solution) so that the postsynaptic signals y recover the independent sources s (Fig 1), i.e. y = P s, where P is a permutation matrix with different multiplicative constants (the sources are recovered in a different order, up to a multiplicative constant), which means that, neglecting P,

W* = C^{-1}.   (3)

To solve this problem we extend the theory of Molgedey and Schuster [4] in order to derive an online biological Hebbian rule.

2.2 Theory of Molgedey and Schuster and generalization

The paper of Molgedey and Schuster [4] considers not only the instantaneous correlation matrix but also the time-delayed correlations M_ij = ⟨x_i(t) x_j(t + τ)⟩ of the incoming signals. Since the correlation matrix M_ij is symmetric, it has up to n(n + 1)/2 independent elements. However, the unknown mixing matrix C has potentially n² elements (for n sources and n detectors). Therefore, we need to evaluate two delayed correlation matrices M and M̃ with two different time delays, defined as

M_ij = ⟨x_i(t) x_j(t + τ₂)⟩,   M̃_ij = ⟨x_i(t) x_j(t + τ₁)⟩,   (4)

to get enough information about the mixing process [10]. From equation (1), we obtain the relation M_ij = Σ_l C_il C_jl Λ_ll and similarly M̃_ij = Σ_l C_il C_jl Λ̃_ll, where Λ_ij = δ_ij K_i(τ₂) and Λ̃_ij = δ_ij K_i(τ₁) are diagonal matrices. Since M = C Λ Cᵀ and M̃ = C Λ̃ Cᵀ, we have

(M̃ M^{-1}) C = C (Λ̃ Λ^{-1}).   (5)

It follows that C can be found from an eigenvalue problem: the columns of C are the eigenvectors of M̃ M^{-1}. Since C is the mixing matrix, a simple algorithmic inversion allows Molgedey and Schuster to recover the original sources [4].
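A minimal numerical sketch of this batch eigenvalue solution (our illustration; the variable names and the symmetrization are implementation choices):

```python
import numpy as np

def delayed_corr(x, tau):
    """Delayed correlation matrix <x(t) x(t+tau)^T>, symmetrized."""
    T = x.shape[1] - tau
    M = x[:, :T] @ x[:, tau:tau + T].T / T
    return 0.5 * (M + M.T)

def molgedey_schuster(x, tau1, tau2=0):
    """Recover the sources (up to permutation and scale) by solving
    the eigenproblem (M~ M^-1) C = C diag(...), eq. (5)."""
    M = delayed_corr(x, tau2)
    M_tilde = delayed_corr(x, tau1)
    _, C = np.linalg.eig(M_tilde @ np.linalg.inv(M))
    C = np.real(C)              # finite samples can add tiny imaginary parts
    W = np.linalg.inv(C)        # unmixing matrix, eq. (3)
    return W @ x                # recovered sources, one per row
```

The online rule derived next replaces this batch eigen-decomposition with a local, incremental update.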
= ?[wT M w (8) The fixed point of this update rule is giving by (7), i.e. w = w? . Furthermore, multiplication of (7) ?w wT M with w yields ? = w TM w . If we insert the definition of M from (2), we obtain the following rule ? = ?[< y(t)x(t + ?1 ) > ?? < y(t)x(t + ?2 ) >], w (9) with a parameter ? given by ?= < y(t)y(t + ?1 ) > . < y(t)y(t + ?2 ) > ? It is possible to show that w? is orthogonal to w. This implies that to first order (in |w/w|), w will keep the same norm during iterations of (9). The rule 9 we derived is a batch-rule, i.e. it averages over all sample signals. We convert this rule into an online learning rule by taking a small learning rate ? and using an online estimate of ?. ?1 y(t)x(t + ?2 )] ?2 ?? ??1 = ??1 + y(t)y(t + ?1 ) ?? ??2 = ??2 + y(t)y(t + ?2 ). ? = ?[y(t)x(t + ?1 ) ? w (10) Note that the rule defined in (10) uses information on the correlated activity xy of pre- and postsynaptic neurons as well as an estimate of the autocorrelation < yy > of the postsynaptic neuron. ?? is taken sufficiently long so as to average over a representative sample of the signals and |?| ? 1 is a small learning rate. Stability properties of updates under rule (10) are discussed in section 4. 3 Performances of the learning rule A simple example of a cocktail party problem is shown in Fig 2 where two signals, a sinus and a ramp (saw-tooth signal), have been mixed. The learning rule converges to a correct set of synaptic 3 A signals autocorrelation Ki(t-t?) B -10 time C -5 0 time 5 10 signals signals D time time Figure 2: A. Two periodic source signals, a sinus (thick solid line) and a ramp (thin solid line), are mixed into the presynaptic signals (dotted lines). B. The autocorrelation functions of the two source signals are shown (the sinus in thick solid line and the ramp in thin solid line). The sources are normalized so that ?(0) = 1 for both. C. The learning rule with ?1 = 3 and ?2 = 0 extracts the sinusoidal output signal (dashed) composed to the two input signals. In agreement with the calculation of stability, ? > 0 , the output is recovering the sinus source because ?sin (3) > ?ramp (3). D. The learning rule with ?1 = 10, ?2 = 0, converges to the other signal (dashed line), i.e. the ramp, because ?ramp (10) > ?sin (10). Note that the signals have been rescalled since the learning rule recovers the signals up to a multiplicative factor. weights so that the postsynaptic signal recovers correctly one of the sources. Postsynaptic neurons with different combinations of ?1 and ?2 are able to recover different signals (see the section 4 on Stability). In the simulations, we find that the convergence is fast and the performance is very accurate and stable. Here we show only a two-sources problem for the sake of visual clarity. However, the rule can easily recover several mixed sources that have different temporal characteristics. Fig 3 shows an ICA problem with sources s(t) generated by an Ornstein-Uhlenbeck process of the form ?si s?i = ?si + ?, where ? is some gaussian noise. The different sources are characterized by different time constants. The learning rule is able to decouple these colored noise signals with gaussian amplitude distribution since they have different temporal correlations. Finally, Fig 4 shows an application with nine different sounds. We used 60 postsynaptic neurons with time delays ?1 chosen uniformly in an interval [1,30ms] and ?2 = 0 . Globally 52 of the 60 neurons recovered exactly 1 source (A, B) and the remaining 8 recovered mixtures of 2 sources (E). 
3 Performances of the learning rule

A simple example of a cocktail party problem is shown in Fig. 2, where two signals, a sinus and a ramp (saw-tooth signal), have been mixed. The learning rule converges to a correct set of synaptic weights, so that the postsynaptic signal correctly recovers one of the sources. Postsynaptic neurons with different combinations of $\tau_1$ and $\tau_2$ are able to recover different signals (see section 4 on stability). In the simulations, we find that the convergence is fast and the performance is very accurate and stable. Here we show only a two-source problem for the sake of visual clarity; however, the rule can easily recover several mixed sources that have different temporal characteristics.

Figure 2: A. Two periodic source signals, a sinus (thick solid line) and a ramp (thin solid line), are mixed into the presynaptic signals (dotted lines). B. The autocorrelation functions of the two source signals (the sinus in thick solid line and the ramp in thin solid line); the sources are normalized so that $K(0) = 1$ for both. C. The learning rule with $\tau_1 = 3$ and $\tau_2 = 0$ extracts the sinusoidal output signal (dashed), compared to the two input signals. In agreement with the stability calculation, for $\eta > 0$ the output recovers the sinus source because $K_{\sin}(3) > K_{\mathrm{ramp}}(3)$. D. The learning rule with $\tau_1 = 10$, $\tau_2 = 0$ converges to the other signal (dashed line), i.e. the ramp, because $K_{\mathrm{ramp}}(10) > K_{\sin}(10)$. Note that the signals have been rescaled, since the learning rule recovers them only up to a multiplicative factor.

Fig. 3 shows an ICA problem with sources $s(t)$ generated by an Ornstein-Uhlenbeck process of the form $\tau_{s_i} \dot{s}_i = -s_i + \xi$, where $\xi$ is Gaussian noise. The different sources are characterized by different time constants. The learning rule is able to decouple these colored noise signals with Gaussian amplitude distribution, since they have different temporal correlations.

Figure 3: A. The 3 source signals (solid lines, generated with the equation $\tau_{s_i} \dot{s}_i = -s_i + \xi$ with different time constants, where $\xi$ is Gaussian noise) are plotted together with the output signal (dashed). The learning rule converges to one of the sources. B. Same as before, but only the one signal (solid) that was recovered is shown together with the neuronal output (dashed).

Finally, Fig. 4 shows an application with nine different sounds. We used 60 postsynaptic neurons with time delays $\tau_1$ chosen uniformly in the interval [1, 30 ms] and $\tau_2 = 0$. Globally, 52 of the 60 neurons recovered exactly one source (A, B) and the remaining 8 recovered mixtures of 2 sources (E). Each postsynaptic neuron recovers one of the sources depending on the source's autocorrelation at times $\tau_1$ and $\tau_2$ (i.e., the source with the biggest autocorrelation at time $\tau_1$, since $\tau_2 = 0$ for all neurons; see the section on stability). A histogram (C) shows how many postsynaptic neurons recover each source. However, as will become clear from the stability analysis below, a few specific postsynaptic neurons tuned to time delays where the autocorrelation functions intersect (D, at time $\tau_1 = 3$ ms and $\tau_2 = 0$) cannot recover one of the sources precisely (E).

Figure 4: Nine different sound sources from [11] were mixed with a random matrix. 60 postsynaptic neurons tuned to different $\tau_1$ and $\tau_2$ were used in order to recover the sources, i.e. $\tau_1$ varies from 1 ms to 30 ms in steps of 0.5 ms and $\tau_2 = 0$ for all neurons. A. One source signal (below) is recovered by one of the postsynaptic neurons (above; for clarity, the output is shifted upward). B. Zoom on one source (solid line) and one output (dashed line). C. Histogram of the number of postsynaptic neurons recovering each source. D. Autocorrelation of the different sources; there are several sources with the biggest autocorrelation at time 3 ms. E. The postsynaptic neuron tuned to $\tau_1 = 3$ ms and $\tau_2 = 0$ (above) is not able to recover properly one of the sources, even though it still performs well except for the low-amplitude parts of the signal (below).

4 Stability of the learning rule

In principle our online learning rule (10) could lead to several solutions corresponding to different fixed points of the dynamics. Fixed points will be denoted by $w^* = e_k$, which are by construction the row vectors of the decoupling matrix $W^*$ (see (5) and (7)). The rule (10) has two parameters, the delays $\tau_1$ and $\tau_2$ (the averaging time $\gamma^{-1}$ is considered fixed). We assume that in our architecture these delays characterize different properties of the postsynaptic neuron: neurons with different choices of $\tau_1$ and $\tau_2$ will potentially recover different signals from the same mixture. The stability analysis will show which fixed point is stable, depending on the autocorrelation functions of the signals and the delays $\tau_1$ and $\tau_2$.

We analyze the stability assuming a small perturbation of the weights, i.e. $w = e_i + \epsilon e_j$, where the $\{e_k\}$, the rows of the matrix $C^{-1}$, are the fixed points. We obtain the expression (see Appendix for calculation details)

$$\dot{\epsilon} = -\eta\, \epsilon\, \frac{\Lambda_{jj}(\tau_1)\, \Lambda_{ii}(\tau_2) - \Lambda_{ii}(\tau_1)\, \Lambda_{jj}(\tau_2)}{\Lambda_{ii}(\tau_2)}, \qquad (11)$$

where $\Lambda(\tau)_{ij} = \langle s_i(t)\, s_j(t+\tau) \rangle$ is the diagonal correlation matrix. To illustrate the stability equation (11), let us take $\tau_2 = 0$ and assume that $\Lambda_{ii}(0) = \Lambda_{jj}(0)$, i.e. all signals have the same zero-time-lag autocorrelation. In this case (11) reduces to $\dot{\epsilon} = -\eta\, \epsilon\, [\Lambda_{jj}(\tau_1) - \Lambda_{ii}(\tau_1)]$. That is, for $\eta > 0$ the solution $e_i$ is stable if $\Lambda_{jj}(\tau_1) < \Lambda_{ii}(\tau_1)$ for all directions $e_j$, i.e. if $e_i$ has the biggest autocorrelation at time $\tau_1$. If $\eta < 0$, the solution $e_i$ is stable for $\Lambda_{jj}(\tau_1) > \Lambda_{ii}(\tau_1)$.
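The stability criterion can be checked numerically: with $\tau_2 = 0$ and normalized sources, a neuron should converge to the source with the largest autocorrelation at lag $\tau_1$. The following sketch computes that prediction from sample source signals; it is our own illustration of Eq. (11), not part of the original simulations.

```python
import numpy as np

def predicted_source(sources, tau1, eta_positive=True):
    """Which source a neuron with delays (tau1, tau2=0) should recover:
    the one with the largest normalized autocorrelation at lag tau1
    (Eq. 11); the smallest one for a negative learning rate."""
    T = sources.shape[1]
    auto = np.array([s[:T - tau1] @ s[tau1:] / (T - tau1) for s in sources])
    auto /= sources.var(axis=1)  # normalize so Lambda_ii(0) = 1
    return int(np.argmax(auto)) if eta_positive else int(np.argmin(auto))
```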
This stability relation is verified in the simulations. Fig. 2 shows two signals with different autocorrelation functions. In this example, we chose $\tau_2 = 0$ and $\Lambda(0) = I$, i.e. the signals are normalized. The learning rule recovers the signal with the biggest autocorrelation at time $\tau_1$, $\Lambda_{kk}(\tau_1)$, for a positive learning rate.

5 Comparison between Spatial ICA and Temporal ICA

One of the algorithms most used to solve ICA is FastICA [1]. It is based on an approximation of negentropy and is purely spatial, i.e. it takes into account only the amplitude distribution of the signal, but not its temporal structure. Therefore we show an example (Fig. 5) where three signals generated by Ornstein-Uhlenbeck processes have the same spatial distribution but different time constants of the autocorrelation. With a spatial algorithm, data points corresponding to different time slices can be shuffled without any change in the results; therefore, it cannot solve this example. We tested our example with FastICA downloaded from [11], and it failed to recover the original sources (Fig. 5). However, to our surprise, FastICA could in a very few trials solve this problem, even though the convergence was not stable. Indeed, since the FastICA algorithm is an iterative online algorithm, it takes the signals in the temporal order in which they arrive; therefore, temporal correlations can in some cases be taken into account, even though this is not part of the theory of FastICA.

Figure 5: Two signals generated by an Ornstein-Uhlenbeck process are mixed. A. The signals have the same spatial distributions. B. The time constants of the autocorrelations are different. C. Our learning rule converges to an output (dashed line) recovering one of the source signals (solid line). D. FastICA (dashed line) does not succeed in recovering the sources (solid line).
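As a sketch of the experimental setup behind Fig. 5, the following generates Ornstein-Uhlenbeck sources whose amplitude distributions match (Gaussian, unit variance) while their autocorrelation time constants differ; the discretization step and the final rescaling are our own assumptions.

```python
import numpy as np

def ou_sources(time_constants, T, dt=1.0, seed=0):
    """Ornstein-Uhlenbeck sources tau_i ds_i/dt = -s_i + xi: identical
    Gaussian amplitude statistics, distinct temporal correlations --
    separable by temporal ICA but not by a purely spatial algorithm."""
    rng = np.random.default_rng(seed)
    s = np.zeros((len(time_constants), T))
    for i, tau in enumerate(time_constants):
        xi = rng.standard_normal(T)
        for t in range(1, T):
            s[i, t] = s[i, t - 1] + dt * (-s[i, t - 1] + xi[t]) / tau
    # rescale to unit variance so the spatial (amplitude) distributions match
    return s / s.std(axis=1, keepdims=True)
```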
6 Discussions and conclusions

We presented a powerful online learning rule that performs ICA by computing joint variations in the firing rates of pre- and postsynaptic neurons at different time delays. This is very similar to a standard Hebbian rule, with the exception of an additional factor $\alpha$, which is an online estimate of the output correlations at different time delays. The different delay times $\tau_1, \tau_2$ are necessary to recover different sources; therefore, properties varying between one postsynaptic neuron and the next could lead to different time delays used in the learning rule. We could assume that the time delays are intrinsic properties of each postsynaptic neuron, due for example to the distance on the dendrite where the synapse is formed [12], i.e. to different signal propagation times. The calculation of stability shows that a postsynaptic neuron will recover the signal with the biggest autocorrelation at the considered delay time, or the smallest, depending on the sign of the learning rate. We assume that for biological signals the autocorrelation functions cross, so that it is possible, with different postsynaptic neurons, to recover all the signals.

The algorithm assumes centered signals. However, for a complete mapping of those signals to neural rates, we have to consider positive signals. Nevertheless, we can easily compute an online estimate of the mean firing rate and remove this mean from the original rates; this way the algorithm still holds taking neural rates as input. Hyvaerinen proposed an ICA algorithm [8] based on complexity pursuit. It uses the nongaussianity of the residuals once the part of the signals that is predictable from the temporal correlations has been removed. The update step of this algorithm has some similarities with our learning rule, even though the approach is completely different, since we want to exploit temporal correlations directly rather than formally removing them by a "predictor". We also do not assume pre-whitened data and are not considering nongaussianity.

Our learning rule considers smooth signals that are assumed to be rates. However, it is commonly accepted that synaptic plasticity takes into account the spike trains of pre- and postsynaptic neurons, looking at the precise timing of the spikes, i.e. Spike Timing Dependent Plasticity (STDP) [13, 14, 15]. Therefore, a spike-based description of our algorithm is currently under study.

Appendix: Stability calculation

By construction, the row vectors $\{e_k,\ k = 1, \ldots, n\}$ of $W^* = C^{-1}$, the inverse of the mixing matrix, are solutions of the batch learning rule (9) ($n$ is the number of sources). Take one of these row vectors, $e_i^T$ (i.e., a fixed point of the dynamics), and consider $w = e_i + \epsilon e_j$, a small perturbation in direction $e_j^T$. Note that $\{e_k\}$ is a basis because $\det(C) \neq 0$ (the matrix must be invertible). The rule (9) becomes

$$\dot{\epsilon}\, e_j = \eta \left[ \langle x(t+\tau_1)\, (e_i + \epsilon e_j)^T x(t) \rangle - \frac{\langle (e_i + \epsilon e_j)^T x(t)\, (e_i + \epsilon e_j)^T x(t+\tau_1) \rangle}{\langle (e_i + \epsilon e_j)^T x(t)\, (e_i + \epsilon e_j)^T x(t+\tau_2) \rangle}\, \langle x(t+\tau_2)\, (e_i + \epsilon e_j)^T x(t) \rangle \right]. \qquad (12)$$

We can expand the terms on the right-hand side to first order in $\epsilon$. Multiplying the stability expression by $e_j^T$ (here we can assume that $e_j^T e_j = 1$, since the sources are recovered only up to a multiplicative constant), we find

$$\dot{\epsilon} = -\eta\, \epsilon\, \frac{[e_j^T C \Lambda(\tau_1) C^T e_j][e_i^T C \Lambda(\tau_2) C^T e_i] - [e_i^T C \Lambda(\tau_1) C^T e_i][e_j^T C \Lambda(\tau_2) C^T e_j]}{e_i^T C \Lambda(\tau_2) C^T e_i} \;-\; \eta\, \epsilon\, \frac{[e_i^T C \Lambda(\tau_1) C^T e_j][e_j^T C \Lambda(\tau_2) C^T e_i]}{e_i^T C \Lambda(\tau_2) C^T e_i}, \qquad (13)$$

where $\Lambda(\tau)_{ij} = \langle s_i(t)\, s_j(t+\tau) \rangle$ is the diagonal matrix. This expression can be simplified because $e_i^T$ is a row of $W^* = C^{-1}$, so that $e_i^T C$ is the unit vector of the form $(0, 0, \ldots, 1, 0, \ldots)$, where the position of the "1" indicates the solution number $i$. Therefore, we have $e_i^T C \Lambda(\tau) C^T e_k = \Lambda(\tau)_{ik}$; in particular, the cross terms $\Lambda(\tau_1)_{ij}\Lambda(\tau_2)_{ji}$ vanish for $i \neq j$. The expression of stability becomes

$$\dot{\epsilon} = -\eta\, \epsilon\, \frac{\Lambda_{jj}(\tau_1)\, \Lambda_{ii}(\tau_2) - \Lambda_{ii}(\tau_1)\, \Lambda_{jj}(\tau_2)}{\Lambda_{ii}(\tau_2)}. \qquad (14)$$

References

[1] A. Hyvaerinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley-Interscience, 2001.
[2] L. Tong, R. Liu, V. C. Soon, and Y. F. Huang. Indeterminacy and identifiability of blind identification. IEEE Trans. on Circuits and Systems, 1991.
[3] A. Belouchrani, K. A. Meraim, J. F. Cardoso, and E. Moulines. A blind source separation technique based on second order statistics. IEEE Trans. on Sig. Proc., 1997.
[4] L. Molgedey and H. G. Schuster. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett., 72:3634-37, 1994.
[5] A. Ziehe and K. Muller. TDSEP - an efficient algorithm for blind separation using time structure.
[6] K. R. Mueller, P. Philips, and A. Ziehe. Jade TD: Combining higher-order statistics and temporal information for blind source separation (with noise). Proc. Int. Workshop on ICA, 1999.
[7] A. Hyvaerinen. Independent component analysis for time-dependent stochastic processes. Proc. Int. Conf. on Art. Neur. Net., 1998.
[8] A. Hyvaerinen. Complexity pursuit: Separating interesting components from time-series. Neural Computation, 13:883-898, 2001.
[9] E. Oja. A simplified neuron model as principal component analyzer. J. Math. Biol., 15:267-273, 1982.
[10] J. J. Hopfield. Olfactory computation and object perception. PNAS, 88:6462-6466, 1991.
[11] H. Gavert, J. Hurri, J. Sarela, and A. Hyvarinen. FastICA and cocktail party demo. http://www.cis.hut.fi/projects/ica/.
[12] R. C. Froemke, M. Poo, and Y. Dan. Spike-timing dependent synaptic plasticity depends on dendritic location. Nature, 434:221-225, 2005.
[13] G. Bi and M. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 2001.
[14] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213-215, 1997.
[15] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner. A neuronal learning rule for sub-millisecond temporal coding. Nature, 383:76-78, 1996.
Non-Parametric Modeling of Partially Ranked Data

Guy Lebanon
Department of Statistics, and School of Elec. and Computer Engineering
Purdue University - West Lafayette, IN
lebanon@stat.purdue.edu

Yi Mao
School of Elec. and Computer Engineering
Purdue University - West Lafayette, IN
ymao@ecn.purdue.edu

Abstract

Statistical models on full and partial rankings of n items are often of limited practical use for large n due to computational considerations. We explore the use of non-parametric models for partially ranked data and derive efficient procedures for their use for large n. The derivations are largely possible through combinatorial and algebraic manipulations based on the lattice of partial rankings. In particular, we demonstrate for the first time a non-parametric coherent and consistent model capable of efficiently aggregating partially ranked data of different types.

1 Introduction

Rankers such as humans, search engines, and classifiers output full or partial rankings representing preference relations over n items. The absence of numeric scores, or the lack of calibration between existing numeric scores output by the rankers, necessitates modeling rankings rather than numeric scores. To effectively analyze ranked data, a statistical model has the following desiderata: (1) handle efficiently a very large number of items n by reverting to partial rather than full rankings; (2) probability assignments to full and partial rankings should be coherent and contradiction-free; (3) conduct inference based on training data consisting of partial rankings of different types; (4) correct retrieval of the underlying process as training data increases (statistical consistency); (5) in the case of large n, convergence of the estimator to the underlying process can be extremely slow for fully ranked data but should be much faster when restricted to simpler partial rankings.

In this paper, we present a model achieving the above requirements without any parametric assumptions on the underlying generative process. The model is based on the non-parametric Parzen window estimator with a Mallows kernel on permutations. By considering partial rankings as censored data, we are able to define the model on both full and partial rankings in a coherent and contradiction-free manner. Furthermore, we are able to estimate the underlying structure based on data containing partial rankings of different types. We demonstrate computational efficiency for partial rankings, even in the case of a very large n, by exploiting the combinatorial and algebraic structure of the lattice of partial rankings. We start below by reviewing basic concepts concerning partially ranked data (see [1] for further details) and the Mallows model, and then proceed to define our non-parametric estimator. We conclude by demonstrating computational efficiency and some experiments.

2 Permutations and Cosets

A permutation $\pi$ is a bijective function $\pi : \{1, \ldots, n\} \to \{1, \ldots, n\}$ associating with each item $i \in \{1, \ldots, n\}$ a rank $\pi(i) \in \{1, \ldots, n\}$. In other words, $\pi(i)$ denotes the rank given to item $i$, and $\pi^{-1}(i)$ denotes the item assigned to rank $i$. We denote a permutation $\pi$ using the following vertical bar notation: $\pi^{-1}(1)|\pi^{-1}(2)|\cdots|\pi^{-1}(n)$. For example, the permutation $\pi(1) = 2$, $\pi(2) = 3$, $\pi(3) = 1$ would be denoted as $3|1|2$. In this notation, the numbers correspond to items, and the locations of the items in their corresponding compartments correspond to their ranks.
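As a small illustration of the vertical bar notation, the following converts a rank function $\pi$ into its bar representation; this is a hypothetical helper of ours, not code from the paper.

```python
def to_bar_notation(pi):
    """pi[i-1] is the rank of item i; returns pi^{-1}(1)|pi^{-1}(2)|...,
    i.e. the items listed in rank order.  to_bar_notation([2, 3, 1]) == '3|1|2'."""
    items_by_rank = sorted(range(1, len(pi) + 1), key=lambda i: pi[i - 1])
    return "|".join(str(i) for i in items_by_rank)
```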
The collection of all permutations of $n$ items forms the non-Abelian symmetric group of order $n$, denoted by $S_n$, using function composition as the group operation $\sigma\pi = \sigma \circ \pi$. We denote the identity permutation by $e$. The concept of inversions and the result below, taken from [7], will be of great use later on.

Definition 1. The inversion set of a permutation $\pi$ is the set of pairs

$$U(\pi) \stackrel{\text{def}}{=} \{(i, j) : i < j,\ \pi(i) > \pi(j)\} \subset \{1, \ldots, n\} \times \{1, \ldots, n\},$$

whose cardinality is denoted by $i(\pi) \stackrel{\text{def}}{=} |U(\pi)|$. For example, $i(e) = |\emptyset| = 0$, and $i(3|2|1|4) = |\{(1,2), (1,3), (2,3)\}| = 3$.

Proposition 1 (e.g., [7]). The map $\pi \mapsto U(\pi)$ is a bijection.

When $n$ is large, the enormous number of permutations raises difficulties in using the symmetric group for modeling rankings. A reasonable solution is achieved by considering partial rankings, which correspond to cosets of the symmetric group. For example, the subgroup of $S_n$ consisting of all permutations that fix the top $k$ positions is denoted $S_{1,\ldots,1,n-k} = \{\pi \in S_n : \pi(i) = i,\ i = 1, \ldots, k\}$. The right coset $S_{1,\ldots,1,n-k}\pi = \{\sigma\pi : \sigma \in S_{1,\ldots,1,n-k}\}$ is the set of permutations consistent with the ordering of $\pi$ on the $k$ top-ranked items. It may thus be interpreted as a partial ranking of the top $k$ items that does not contain any information concerning the relative ranking of the bottom $n-k$ items. The set of all such partial rankings forms the quotient space $S_n / S_{1,\ldots,1,n-k}$. Figure 1 (left) displays the set of permutations that corresponds to a partial ranking of the top 2 out of 4 items. We generalize this concept to arbitrary partial rankings using the concept of composition.

Definition 2. A composition of $n$ is a sequence $\gamma = (\gamma_1, \ldots, \gamma_r)$ of positive integers whose sum is $n$. Note that, in contrast to a partition, in a composition the order of the integers matters.

A composition $\gamma = (\gamma_1, \ldots, \gamma_r)$ corresponds to a partial ranking with $\gamma_1$ items in the first position, $\gamma_2$ items in the second position, and so on. For such a partial ranking it is known that the first set of $\gamma_1$ items is to be ranked before the second set of $\gamma_2$ items etc., but no further information is conveyed about the orderings within each set. The partial ranking $S_{1,\ldots,1,n-k}\pi$ of the top $k$ items is a special case corresponding to $\gamma = (1, \ldots, 1, n-k)$. More formally, let $N_1 = \{1, \ldots, \gamma_1\}$, $N_2 = \{\gamma_1 + 1, \ldots, \gamma_1 + \gamma_2\}$, $\ldots$, $N_r = \{\gamma_1 + \cdots + \gamma_{r-1} + 1, \ldots, n\}$. Then the subgroup $S_\gamma$ contains all permutations $\pi$ for which the set equalities $\pi(N_i) = N_i$ hold for all $i$ (all permutations that only permute within each $N_i$). A partial ranking of type $\gamma$ is equivalent to a coset $S_\gamma\pi = \{\sigma\pi : \sigma \in S_\gamma\}$, $\pi \in S_n$, and the set of such partial rankings forms the quotient space $S_n / S_\gamma$.

The vertical bar notation described above is particularly convenient for denoting partial rankings. We list the items $1, \ldots, n$ separated by vertical bars, indicating that items on the left side of each vertical bar are preferred to (ranked higher than) items on the right side of the bar. For example, the partial ranking displayed in Figure 1 (left) is denoted by $3|1|2,4$. In this notation, the ordering of items not separated by a vertical line is meaningless, and for consistency we use the conventional ordering, e.g., $1|2,3|4$ rather than $1|3,2|4$. The set of all partial rankings

$$W_n \stackrel{\text{def}}{=} \{S_\gamma\pi : \pi \in S_n,\ \forall \gamma\}, \qquad (1)$$

which includes all full rankings $\pi \in S_n$, is a subset of all possible partial orders on $\{1, \ldots, n\}$.
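The inversion set of Definition 1 can be computed directly from the rank function; a minimal helper of ours (quadratic time, sufficient for illustration):

```python
from itertools import combinations

def inversion_set(pi):
    """U(pi) of Definition 1: pairs (i, j), i < j, with pi(i) > pi(j);
    pi[i-1] is the rank of item i.  len(inversion_set(pi)) == i(pi)."""
    n = len(pi)
    return {(i, j) for i, j in combinations(range(1, n + 1), 2)
            if pi[i - 1] > pi[j - 1]}
```

For the example above, inversion_set([3, 2, 1, 4]) returns {(1, 2), (1, 3), (2, 3)}, so $i(3|2|1|4) = 3$.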
While the formalism of partial rankings in $W_n$ cannot realize all partial orderings, it is sufficiently powerful to include many useful naturally occurring orderings as special cases. Furthermore, as demonstrated in later sections, it enables simplification of the otherwise overwhelming computational difficulty. Special cases include the following partial rankings:

- $\pi \in S_n$ corresponds to a permutation or full ordering, e.g. $3|2|4|1$.
- $S_{1,n-1}\pi$, e.g. $3|1,2,4$, corresponds to selection of the top alternative, such as multiclass classification.
- $S_{1,\ldots,1,n-k}\pi$, e.g. $1|3|2,4$, corresponds to a top-$k$ ordering, such as the ranked list of the top $k$ webpages output by search engines.
- $S_{k,n-k}\pi$, e.g. $1,2,4|3,5$, corresponds to a more-preferred versus less-preferred dichotomy, such as multilabel classification.

In the cases above, we often have a situation where $n$ is large (or even approaching infinity, as in the third example above) but $k$ is of manageable size. Traditionally, data from each one of the special cases above was modeled using different tools and was considered fundamentally different. That problem was aggravated as different special cases were usually handled by different communities, such as statistics, computer science, and information retrieval.

In constructing a statistical model on permutations or cosets, it is essential to relate one permutation to another. We do this using a distance function on permutations $d : S_n \times S_n \to \mathbb{R}$ that satisfies the usual metric function properties and, in addition, is invariant under item relabeling or right action of the symmetric group [1]: $d(\pi, \sigma) = d(\pi\tau, \sigma\tau)$ for all $\pi, \sigma, \tau \in S_n$. There have been many propositions for such right-invariant distance functions, the most popular of them being Kendall's tau [3]:

$$d(\pi, \sigma) = \sum_{i=1}^{n-1} \sum_{l > i} I\big(\pi\sigma^{-1}(i) - \pi\sigma^{-1}(l)\big), \qquad (2)$$

where $I(x) = 1$ for $x > 0$ and $I(x) = 0$ otherwise. Kendall's tau $d(\pi, \sigma)$ can be interpreted as the number of pairs of items for which $\pi$ and $\sigma$ have opposing orderings (called discordant pairs), or as the minimum number of adjacent transpositions needed to bring $\pi^{-1}$ to $\sigma^{-1}$ (an adjacent transposition flips a pair of items having adjacent ranks). By right invariance, $d(\pi, \sigma) = d(\pi\sigma^{-1}, e)$, which for Kendall's tau equals the number of inversions $i(\pi\sigma^{-1})$. This is an important observation that will allow us to simplify many expressions concerning Kendall's tau using the theory of permutation inversions from the combinatorics literature.

3 The Mallows Model and its Extension to Partial Rankings

The Mallows model [5] is a simple model on permutations based on Kendall's tau distance, using a location parameter $\sigma$ and a spread parameter $c$ (which we often treat as a constant):

$$p_\sigma(\pi) = \exp\big(-c\, d(\pi, \sigma) - \log \psi(c)\big), \qquad \pi, \sigma \in S_n,\ c \in \mathbb{R}_+. \qquad (3)$$

The normalization term $\psi$ does not depend on $\sigma$ and has the closed form

$$\psi(c) = \sum_{\pi \in S_n} e^{-c\, d(\pi, \sigma)} = (1 + e^{-c})(1 + e^{-c} + e^{-2c}) \cdots (1 + e^{-c} + \cdots + e^{-(n-1)c}), \qquad (4)$$

as shown by the fact that $d(\pi, \sigma) = i(\pi\sigma^{-1})$ and the following proposition.

Proposition 2 (e.g., [7]). For $q > 0$, $\sum_{\pi \in S_n} q^{i(\pi)} = \prod_{j=1}^{n-1} \sum_{k=0}^{j} q^k$.

Model (3) has been motivated on axiomatic grounds by Mallows and has been a major focus of statistical modeling on permutations. A natural extension to partially ranked data is to consider a partial ranking as censored data, equivalent to the set of permutations in its related coset:

$$p_\sigma(S_\gamma\pi) \stackrel{\text{def}}{=} \sum_{\tau \in S_\gamma\pi} p_\sigma(\tau) = \psi^{-1}(c) \sum_{\tau \in S_\gamma\pi} \exp\big(-c\, d(\tau, \sigma)\big). \qquad (5)$$

Fligner and Verducci [2] have shown that in the case of $\gamma = (1, \ldots, 1, n-k)$ the above summation has a closed-form expression.
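A minimal, self-contained sketch of the Mallows log-probability of Eq. (3), with the normalizer computed via Proposition 2; the helper names are ours, and the Kendall distance is evaluated as the count of discordant item pairs, which the text above shows to be equivalent to Eq. (2).

```python
import numpy as np
from itertools import combinations

def kendall_tau(pi, sigma):
    """d(pi, sigma): number of item pairs that pi and sigma order oppositely.
    pi[i-1] and sigma[i-1] are the ranks of item i."""
    n = len(pi)
    return sum(1 for i, j in combinations(range(n), 2)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

def log_psi(c, n):
    """log psi(c) = sum_{j=1}^{n-1} log sum_{k=0}^{j} e^{-kc}  (Eq. 4 / Prop. 2)."""
    return sum(np.log(np.sum(np.exp(-c * np.arange(j + 1)))) for j in range(1, n))

def mallows_log_prob(pi, sigma, c):
    """log p_sigma(pi) = -c d(pi, sigma) - log psi(c)  (Eq. 3)."""
    return -c * kendall_tau(pi, sigma) - log_psi(c, len(pi))
```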
However, the apparent absence of a closed-form formula for more general partial rankings prevented the widespread use of the above model for large $n$ and encouraged more ad-hoc and heuristic models [1, 6]. This has become especially noticeable due to a new surge of interest, especially in the computer science community, in partial ranking models for large $n$. The ranking lattice presented next enables extending Fligner and Verducci's closed form to a more general setting, which is critical to the practicality of our non-parametric estimator.

4 The Ranking Lattice

Partial rankings $S_\gamma\pi$ relate to each other in a natural way, by expressing more general, more specific, or inconsistent orderings. We define below the concepts of partially ordered sets and lattices and then relate them to partial rankings by considering the set of partial rankings $W_n$ as a lattice. Some of the definitions below are taken from [7], where a thorough introduction to posets can be found.

Figure 1: A partial ranking corresponds to a coset or a set of permutations (left). The Hasse diagram of $W_3$; some lines are dotted for 3D visualization purposes (right).

Definition 3. A partially ordered set, or poset, $(Q, \preceq)$ is a set $Q$ endowed with a binary relation $\preceq$ satisfying, for all $x, y, z \in Q$: (i) reflexivity: $x \preceq x$; (ii) anti-symmetry: $x \preceq y$ and $y \preceq x$ imply $x = y$; and (iii) transitivity: $x \preceq y$ and $y \preceq z$ imply $x \preceq z$.

We write $x \prec y$ when $x \preceq y$ and $x \neq y$. We say that $y$ covers $x$ when $x \prec y$ and there is no $z \in Q$ such that $x \prec z \prec y$. A finite poset is completely described by its covering relation. The planar Hasse diagram of $(Q, \preceq)$ is the graph connecting the elements of $Q$ as nodes, using edges that correspond to the covering relation; an additional requirement is that if $y$ covers $x$, then $y$ is drawn higher than $x$. Two elements $x, y$ are comparable if $x \preceq y$ or $y \preceq x$, and otherwise are incomparable.

The set of partial rankings $W_n$ defined in (1) is naturally endowed with the partial order of ranking refinement, i.e. $\pi \preceq \sigma$ if $\pi$ refines $\sigma$, or alternatively, if we can get from $\pi$ to $\sigma$ by dropping vertical lines [4]. Figure 1 (right) shows the Hasse diagram of $W_3$.

A lower bound $z$ of two elements $x, y$ in a poset satisfies $z \preceq x$ and $z \preceq y$. The greatest lower bound of $x, y$, or infimum, is a lower bound of $x, y$ that is greater than or equal to any other lower bound of $x, y$. The infimum, and the analogous concept of supremum, are denoted by $x \wedge y$ and $x \vee y$, or $\bigwedge\{x_1, \ldots, x_k\}$ and $\bigvee\{x_1, \ldots, x_k\}$, respectively. Two elements $x, y \in W_n$ are consistent if there exists a lower bound in $W_n$. Note that consistency is a weaker relation than comparability. For example, $1|2,3|4$ and $1,2|3,4$ are consistent but incomparable, while $1|2,3|4$ and $2|1,3|4$ are both inconsistent and incomparable. Using the vertical bar notation, two elements are inconsistent iff there exist two items $i, j$ that appear on opposing sides of a vertical bar in $x$ and $y$, i.e. $x = \cdots i|j \cdots$ while $y = \cdots j|i \cdots$.

A poset for which $\wedge$ and $\vee$ always exist is called a lattice. Lattices satisfy many useful combinatorial properties, one of which is that they are completely described by the $\wedge$ and $\vee$ operations. While the ranking poset is not a lattice, it may be turned into one by augmenting it with a minimum element $\hat{0}$.
Proposition 3. The union $\hat{W}_n \stackrel{\text{def}}{=} W_n \cup \{\hat{0}\}$ of the ranking poset and a minimum element is a lattice.

Proof. Since $\hat{W}_n$ is finite, it is enough to show the existence of $\wedge, \vee$ for pairs of elements [7]. We begin by showing the existence of $x \wedge y$. If $x, y$ are inconsistent, there is no lower bound in $W_n$, and therefore the unique lower bound $\hat{0}$ is also the infimum $x \wedge y$. If $x, y$ are consistent, their infimum may be obtained as follows. Since $x$ and $y$ are consistent, we do not have a pair of items $i, j$ appearing as $i|j$ in $x$ and $j|i$ in $y$. As a result, we can form a lower bound $z$ to $x, y$ by starting with a list of items and adding the vertical bars that are in either $x$ or $y$; for example, for $x = 3|1,2,5|4$ and $y = 3|2|1,4,5$ we have $z = 3|2|1,5|4$. The resulting $z \in W_n$ is smaller than $x$ and $y$, since by construction it contains all the preferences (encoded by vertical bars) in $x$ and $y$. It remains to show that for every other lower bound $z'$ of $x$ and $y$ we have $z' \preceq z$. If $z'$ is comparable to $z$, then $z' \preceq z$, since removing any vertical bar from $z$ results in an element that is not a lower bound. If $z'$ is not comparable to $z$, then both $z$ and $z'$ contain the vertical bars in $x$ and the vertical bars in $y$, possibly with some additional ones. By construction, $z$ contains only the vertical bars essential to make it a lower bound, and hence $z' \prec z$, contradicting the assumption that $z, z'$ are non-comparable.

By Proposition 3.3.1 of [7], a poset for which an infimum is always defined and that has a supremum element is necessarily a lattice. Since we just proved that $\wedge$ always exists for $\hat{W}_n$, and $1, \ldots, n = \bigvee \hat{W}_n$ is the supremum element, the proof is complete.

Figure 2: Censored data in the Hasse diagram of $\hat{W}_n$ corresponding to two partial rankings with the same (left) and different (right) number of vertical bars. The two big triangles correspond to the Hasse diagram of Figure 1 (right), with permutations occupying the bottom level.
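The constructive meet in the proof of Proposition 3 translates directly into code: sort items by the pair of block indices they occupy in $x$ and $y$, returning $\hat{0}$ (here None) when the pair is inconsistent. This is our own rendering of the construction, under the assumption that a partial ranking is represented as an ordered list of item sets.

```python
def meet(x, y):
    """Infimum x ^ y in the lattice W_n-hat (Proposition 3).
    A partial ranking is a list of blocks, e.g. 3|1,2,5|4 -> [{3},{1,2,5},{4}].
    Returns None for the bottom element when x and y are inconsistent."""
    bx = {i: k for k, blk in enumerate(x) for i in blk}
    by = {i: k for k, blk in enumerate(y) for i in blk}
    items = list(bx)
    # inconsistent iff some pair of items is ordered oppositely by x and y
    if any(bx[i] < bx[j] and by[i] > by[j] for i in items for j in items):
        return None
    keys = sorted({(bx[i], by[i]) for i in items})
    return [{i for i in items if (bx[i], by[i]) == key} for key in keys]
```

For the example in the proof, meet([{3}, {1,2,5}, {4}], [{3}, {2}, {1,4,5}]) returns [{3}, {2}, {1, 5}, {4}], i.e. $3|2|1,5|4$.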
5 Non-Parametric Models on the Ranking Lattice

The censored data approach to partial ranking described by Equation (5) may be generalized to arbitrary probability models $p$ on $S_n$, by extending a probability model $p$ on $S_n$ to $\hat{W}_n$ (defining it to be zero on $\hat{W}_n \setminus S_n$) and considering the partial ranking model

$$g(S_\gamma\pi) = \sum_{\tau \in S_\gamma\pi} p(\tau), \qquad S_\gamma\pi \in \hat{W}_n. \qquad (6)$$

The function $g$, when restricted to partial rankings of the same type $G = \{S_\gamma\pi : \pi \in S_n\}$, constitutes a distribution over $G$. The relationship between $p$ and $g$ may be more elegantly described through Möbius inversion on lattices: for the functions $p, g : \hat{W}_n \to [0, 1]$ defined above, we have

$$g(\tau) = \sum_{\tau' \preceq \tau} p(\tau') \quad \text{iff} \quad p(\tau) = \sum_{\tau' \preceq \tau} g(\tau')\, \mu(\tau', \tau), \qquad \tau, \tau' \in \hat{W}_n, \qquad (7)$$

where $\mu : \hat{W}_n \times \hat{W}_n \to \mathbb{R}$ is the Möbius function of the lattice $\hat{W}_n$ [7].

For large $n$, modeling partial rather than full rankings is a computational necessity. It is tempting to construct a statistical model on partial rankings directly, without reference to an underlying permutation model, e.g. [1, 6]. However, doing so may lead to contradicting probabilities at the permutation level, i.e. there exists no distribution $p$ on $S_n$ consistent with the specified values of $g$ at $g(S_\gamma\pi)$ and $g(S_\lambda\sigma)$, $\gamma \neq \lambda$. Figure 2 illustrates this problem for partial rankings with the same (left) and different (right) number of vertical bars. Verifying that no contradictions exist involves solving a lengthy and complicated set of equations. The alternative we present, of starting with a permutation model $p : S_n \to \mathbb{R}$ and extending it to $g$ via the Möbius inversion, is a simple and effective way of avoiding such lack of coherence.

Identifying partially ranked training data $D = \{S_{\gamma_i}\pi_i : i = 1, \ldots, m\}$ as censored data, a non-parametric Parzen window estimator based on the Mallows kernel is

$$\hat{p}(\pi) = \frac{1}{m\, \psi(c)} \sum_{i=1}^{m} \frac{1}{|S_{\gamma_i}|} \sum_{\tau \in S_{\gamma_i}\pi_i} \exp\big(-c\, d(\pi, \tau)\big), \qquad \pi \in S_n, \qquad (8)$$

where we used the fact that $|S_{\gamma_i}\pi_i| = |S_{\gamma_i} e| = |S_{\gamma_i}|$; its censored data extension is

$$\hat{g}(S_\gamma\pi) = \frac{1}{m\, \psi(c)} \sum_{i=1}^{m} \frac{1}{|S_{\gamma_i}|} \sum_{\sigma \in S_\gamma\pi}\ \sum_{\tau \in S_{\gamma_i}\pi_i} \exp\big(-c\, d(\sigma, \tau)\big), \qquad S_\gamma\pi \in \hat{W}_n. \qquad (9)$$

Model (8) and its partial ranking extension (9) satisfy requirement 3 in Section 1, since $D$ contains partial rankings of possibly different types. Similarly, by the censored data interpretation of partial rankings, they satisfy requirement 2. Requirement 4 holds as $m, c \to \infty$ by standard properties of the Parzen window estimator. Requirement 5 holds since $\hat{g}$ in (9), restricted to $G = \{S_\gamma\pi : \pi \in S_n\}$, becomes a consistent model on a much smaller probability space. Requirement 1 is demonstrated in the next section by deriving an efficient computation of (9). In the case of a very large number of items, reverting to partial rankings of type $\gamma$ is a crucial element. The coherence between $\hat{p}$, $\hat{g}$ and the nature of $D$ are important factors in modeling partially ranked data. In the next section we show that even for $n \to \infty$ (as is nearly the case for web search), the required computation is feasible, as it depends only on the complexity of the composition $\gamma$ characterizing the data $D$ and the partial rankings on which $\hat{g}$ is evaluated.

6 Efficient Computation and Inversion Combinatorics

Computational efficiency of the inner summations in Equations (8)-(9) is crucial to the practical application of the estimators $\hat{p}, \hat{g}$. By considering how the pairs constituting $i(\tau)$ decompose with respect to certain cosets, we can obtain efficient computational schemes for (8), (9).

Proposition 4. The following decomposition of $i(\tau)$ with respect to a composition $\gamma$ holds:

$$i(\tau) = \sum_{k=1}^{r} a_k^\gamma(\tau) + \sum_{k=1}^{r} \sum_{l=k+1}^{r} b_{kl}^\gamma(\tau), \qquad \forall \tau \in S_n, \qquad (10)$$

where

$$a_k^\gamma(\tau) \stackrel{\text{def}}{=} \Big| \Big\{ (s, t) : s < t,\ \textstyle\sum_{j=1}^{k-1} \gamma_j < \tau^{-1}(t) < \tau^{-1}(s) \le \sum_{j=1}^{k} \gamma_j \Big\} \Big|, \qquad (11)$$

$$b_{kl}^\gamma(\tau) \stackrel{\text{def}}{=} \Big| \Big\{ (s, t) : s < t,\ \textstyle\sum_{j=1}^{k-1} \gamma_j < \tau^{-1}(t) \le \sum_{j=1}^{k} \gamma_j,\ \sum_{j=1}^{l-1} \gamma_j < \tau^{-1}(s) \le \sum_{j=1}^{l} \gamma_j \Big\} \Big|. \qquad (12)$$

Proof. First note that by the right invariance of Kendall's tau, $d(\pi, \sigma) = i(\pi\sigma^{-1})$, we have $i(\tau) = i(\tau^{-1})$, and we may decompose $i(\tau^{-1})$ instead of $i(\tau)$. The set appearing in the definition of $a_k^\gamma(\tau)$ contains all label pairs $(s, t)$ that are inversions of $\tau^{-1}$ and that appear in the $k$-th compartment of the decomposition $\gamma$. The set appearing in the definition of $b_{kl}^\gamma(\tau)$ contains label pairs $(s, t)$ that are inversions of $\tau^{-1}$ and for which $s$ and $t$ appear in the $l$-th and $k$-th compartments of $\gamma$, respectively. Since any inversion pair appears in either one or two compartments, the decomposition holds.

Decomposition (10) is actually a family of decompositions, as it holds for all possible compositions $\gamma$. For example, $i(\tau) = 4$ for $\tau = 4|1|3|2 \in S_{(2,2)}\pi = 1,4|2,3$, with inversions $(4,1), (4,3), (4,2), (3,2)$ for $\tau^{-1}$. The first compartment $1,4$ contains the inversion $(4,1)$, so $a_1^\gamma(\tau) = 1$. The second compartment $2,3$ contains the inversion $(3,2)$, so $a_2^\gamma(\tau) = 1$. The cross-compartment inversions are $(4,3), (4,2)$, making $b_{12}^\gamma(\tau) = 2$.

The significance of (10) is that, as we sum over all representatives of the coset $\tau \in S_\gamma\pi$, the cross-compartmental inversions $b_{kl}^\gamma(\tau)$ remain constant, while the within-compartmental inversions $a_k^\gamma(\tau)$ vary over all possible combinations.
This leads to powerful extensions of Proposition 2, which in turn lead to efficient computation of (8), (9).

Proposition 5. For $\pi \in S_n$, $q > 0$, and a composition $\gamma$, we have

$$\sum_{\tau \in S_\gamma\pi} q^{i(\tau)} = q^{\sum_{k=1}^{r} \sum_{l=k+1}^{r} b_{kl}^\gamma(\pi)} \prod_{s=1}^{r} \prod_{j=1}^{\gamma_s - 1} \sum_{k=0}^{j} q^k. \qquad (13)$$

Proof.

$$\sum_{\tau \in S_\gamma\pi} q^{i(\tau)} = \sum_{\tau \in S_\gamma\pi} q^{\sum_k a_k^\gamma(\tau) + \sum_k \sum_{l>k} b_{kl}^\gamma(\tau)} = q^{\sum_k \sum_{l>k} b_{kl}^\gamma(\pi)} \sum_{\tau \in S_\gamma\pi} q^{\sum_k a_k^\gamma(\tau)} = q^{\sum_k \sum_{l>k} b_{kl}^\gamma(\pi)} \prod_{s=1}^{r} \sum_{\tau_s \in S_{\gamma_s}} q^{i(\tau_s)} = q^{\sum_k \sum_{l>k} b_{kl}^\gamma(\pi)} \prod_{s=1}^{r} \prod_{j=1}^{\gamma_s - 1} \sum_{k=0}^{j} q^k.$$

Above, we used two ideas: (i) discordant pairs between two different compartments of the coset $S_\gamma\pi$ are invariant under a change of the coset representative, and (ii) the number of discordant pairs within a compartment varies over all possible choices, enabling the replacement of the summation by a sum over a lower-order symmetric group.

An important feature of (13) is that only the first and relatively simple term $q^{\sum_{k=1}^{r}\sum_{l=k+1}^{r} b_{kl}^\gamma(\pi)}$ depends on $\pi$. The remaining terms depend only on the partial ranking type $\gamma$ and thus may be pre-computed and tabulated for efficient computation. The following two corollaries generalize the well-known Proposition 2 to arbitrary cosets, enabling efficient computation of (8), (9).
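For small $n$, Proposition 5 can be verified by brute force. The sketch below enumerates a coset, sums $q^{i(\tau)}$ directly, and compares with the closed form; a ranking is represented by its list of items in rank order (i.e. $\tau^{-1}$), using the fact that $i(\tau) = i(\tau^{-1})$. All helper names are ours.

```python
from itertools import combinations, permutations, product

def inversions(order):
    """i(tau) via i(tau) = i(tau^{-1}); order = items in rank order."""
    return sum(1 for a, b in combinations(order, 2) if a > b)

def coset(order, gamma):
    """S_gamma pi: shuffle items freely within each compartment of gamma."""
    blocks, start = [], 0
    for g in gamma:
        blocks.append(order[start:start + g])
        start += g
    for choice in product(*(permutations(b) for b in blocks)):
        yield [x for blk in choice for x in blk]

def lhs(order, gamma, q):
    """Direct evaluation of the left side of Eq. (13)."""
    return sum(q ** inversions(t) for t in coset(order, gamma))

def rhs(order, gamma, q):
    """Closed form of Eq. (13): q^{sum_kl b_kl} * prod_s prod_j sum_k q^k."""
    comp = [k for k, g in enumerate(gamma) for _ in range(g)]
    block_of = {item: comp[r] for r, item in enumerate(order)}
    cross = sum(1 for a, b in combinations(order, 2)
                if a > b and block_of[a] != block_of[b])
    prod = 1
    for g in gamma:
        for j in range(1, g):
            prod *= sum(q ** k for k in range(j + 1))
    return q ** cross * prod

# For the example above: lhs([1, 4, 2, 3], (2, 2), 0.5) == rhs([1, 4, 2, 3], (2, 2), 0.5)
```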
The parameters of the Mallows model are estimated by maximum likelihood. The figure illustrates the advantage of using a non-parametric estimator over the parametric Mallows model given enough training data. Also note when c increases, the non-parametric model approaches the empirical histogram thus performing worse for small datasets and better for large datasets. To visualize the advantage of the non-parametric model over the Mallows model we display in Figure 3 (bottom row) their estimated probabilities by scaling the vertices of the permutation polytope proportionally. The displayed polytope has vertices corresponding to rankings of 4 items and whose edges correspond to an adjacent transposition (Kendall?s tau distance is the shortest path between two vertices). In this case the four ranked items are movies no. 357, 1356, 440, 25 from the EachMovie dataset containing rankings of 1628 movies. Note how the probabilities assigned by the Mallows model (left) form a unimodal function centered at 2|1|3|4 while the non-parametric estimator (right) discovers the true modes 2|3|1|4 and 4|1|2|3 that were undetected by the Mallows model. 7 Figure 3 (top right) demonstrates modeling partial rankings of a much larger n. We used 10043 rankings from the Jester dataset which contains user rankings of n = 100 jokes. We kept the partial ranking type of the testing data fixed at (5, n ? 5) and experimented with different censoring of the training data. The figure illustrates the slower consistency rate for fully ranked training data and the statistical benefit in censoring full rankings in the training data. This striking statistical advantage demonstrates the achievement of property 5 in Section 1 and is independent of the computational advantage obtained from censoring the training data. ?4.65 ?17.5 ?18 ?4.7 (k(6),n?k(6)) (k(0),n?k(0)) (1,1,n?2) (k(8),n?k(8)) (1,n?1) average log?likelihood average log?likelihood ?18.5 ?4.75 ?4.8 (1,1,1,n?3) (1,1,1,1,1,n?5) fully ranked ? ?20 ?40 ?60 ?4.85 mallows c=1 c=2 c=5 ?4.9 800 1600 2400 3200 ?80 1000 4000 2000 3000 # of samples 3241 3241 2431 5000 6000 7000 2431 2314 2314 3214 3214 3421 4000 # of samples 2341 2341 3421 4231 4231 2413 2413 2134 2134 4321 4321 2143 2143 4213 4213 3124 3124 3412 3412 1234 1234 4312 4312 3142 3142 1324 1324 1243 1243 4123 4123 4132 4132 1342 1342 1423 1423 1432 1432 Figure 3: Top row: Average test log-likelihood as a function of the training size: Mallows model vs. non-parametric model for APA election data (left) and non-parametric model with different partial ranking types for Jester data (right). Bottom row: Visualizing estimated probabilities for EachMovie data by permutation polytopes: Mallows model (left) and non-parametric model for c = 2 (right). 8 Discussion In this paper, we demonstrate for the first time a non-trivial effective modeling framework satisfying properties 1-5 in Section 1. The key component is our ability to efficiently compute (14) for simple partial ranking types and large n. Table 1 indicates the resulting complexity scales up with complexity of the composition k but is independent of n which is critical for modeling practical situations of k  n partial rankings. Experiments show the statistical advantage of the non-parametric partial ranking modeling in addition to its computational feasibility. References [1] D. E. Critchlow. Metric Methods for Analyzing Partially Ranked Data. Springer, 1986. [2] M. A. Fligner and J. S. Verducci. Distance based ranking models. 
[3] M. G. Kendall. A new measure of rank correlation. Biometrika, 30, 1938.
[4] G. Lebanon and J. Lafferty. Conditional models on the ranking poset. In Advances in Neural Information Processing Systems, 15, 2003.
[5] C. L. Mallows. Non-null ranking models. Biometrika, 44:114-130, 1957.
[6] J. I. Marden. Analyzing and Modeling Rank Data. CRC Press, 1996.
[7] R. P. Stanley. Enumerative Combinatorics, volume 1. Cambridge University Press, 2000.
Discriminative K-means for Clustering

Jieping Ye
Arizona State University
Tempe, AZ 85287
jieping.ye@asu.edu

Zheng Zhao
Arizona State University
Tempe, AZ 85287
zhaozheng@asu.edu

Mingrui Wu
MPI for Biological Cybernetics
Tübingen, Germany
mingrui.wu@tuebingen.mpg.de

Abstract

We present a theoretical study of the discriminative clustering framework, recently proposed for simultaneous subspace selection via linear discriminant analysis (LDA) and clustering. Empirical results have shown its favorable performance in comparison with several other popular clustering algorithms. However, the inherent relationship between subspace selection and clustering in this framework is not well understood, due to the iterative nature of the algorithm. We show in this paper that this iterative subspace selection and clustering is equivalent to kernel K-means with a specific kernel Gram matrix. This provides significant new insights into the nature of this subspace selection procedure. Based on this equivalence relationship, we propose the Discriminative K-means (DisKmeans) algorithm for simultaneous LDA subspace selection and clustering, as well as an automatic parameter estimation procedure. We also present the nonlinear extension of DisKmeans using kernels. We show that the learning of the kernel matrix over a convex set of pre-specified kernel matrices can be incorporated into the clustering formulation. The connection between DisKmeans and several other clustering algorithms is also analyzed. The presented theories and algorithms are evaluated through experiments on a collection of benchmark data sets.

1 Introduction

Applications in various domains, such as text/web mining and bioinformatics, often lead to very high-dimensional data. Clustering such high-dimensional data sets is a contemporary challenge, due to the curse of dimensionality. A common practice is to project the data onto a low-dimensional subspace through unsupervised dimensionality reduction, such as Principal Component Analysis (PCA) [9] and various manifold learning algorithms [1, 13], before the clustering. However, the projection may not necessarily improve the separability of the data for clustering, due to the inherent separation between subspace selection (via dimensionality reduction) and clustering.

One natural way to overcome this limitation is to integrate dimensionality reduction and clustering in a joint framework. Several recent works [5, 10, 16] incorporate supervised dimensionality reduction, such as Linear Discriminant Analysis (LDA) [7], into the clustering framework, performing clustering and LDA dimensionality reduction simultaneously. The algorithm, called Discriminative Clustering (DisCluster) in the following discussion, works in an iterative fashion, alternating between LDA subspace selection and clustering. In this framework, clustering generates the class labels for LDA, while LDA provides the subspace for clustering. Empirical results have shown the benefits of clustering in a low-dimensional discriminative space rather than in the principal component space (generative). However, the integration between subspace selection and clustering in DisCluster is not well understood, due to the intertwined and iterative nature of the algorithm.

In this paper, we analyze this discriminative clustering framework by studying several fundamental and important issues: (1) What do we really gain by performing clustering in a low-dimensional discriminative space?
(2) What is the nature of its iterative process alternating between subspace selection and clustering? (3) Can this iterative process be simplified and improved? (4) How can the parameter involved in the algorithm be estimated?

The main contributions of this paper are summarized as follows: (1) We show that the LDA projection can be factored out from the integrated LDA subspace selection and clustering formulation. This results in a simple trace maximization problem associated with a regularized Gram matrix of the data, which is controlled by a regularization parameter $\lambda$. (2) The solution to this trace maximization problem leads to the Discriminative K-means (DisKmeans) algorithm for simultaneous LDA subspace selection and clustering. DisKmeans is shown to be equivalent to kernel K-means, where discriminative subspace selection essentially constructs a kernel Gram matrix for clustering. This provides new insights into the nature of this subspace selection procedure. (3) The DisKmeans algorithm depends on the value of the regularization parameter $\lambda$. We propose an automatic parameter tuning process (model selection) for the estimation of $\lambda$. (4) We propose the nonlinear extension of DisKmeans using kernels. We show that the learning of the kernel matrix over a convex set of pre-specified kernel matrices can be incorporated into the clustering formulation, resulting in a semidefinite program (SDP) [15]. We evaluate the presented theories and algorithms through experiments on a collection of benchmark data sets.

2 Linear Discriminant Analysis and Discriminative Clustering

Consider a data set consisting of $n$ data points $\{x_i\}_{i=1}^n \subset \mathbb{R}^m$. For simplicity, we assume the data is centered, that is, $\sum_{i=1}^n x_i / n = 0$. Denote $X = [x_1, \ldots, x_n]$ as the data matrix whose $i$-th column is given by $x_i$. In clustering, we aim to group the data $\{x_i\}_{i=1}^n$ into $k$ clusters $\{C_j\}_{j=1}^k$. Let $F \in \mathbb{R}^{n \times k}$ be the cluster indicator matrix defined as follows:

$$F = \{f_{i,j}\}_{n \times k}, \quad \text{where } f_{i,j} = 1 \text{ iff } x_i \in C_j. \qquad (1)$$

We can define the weighted cluster indicator matrix as follows [4]:

$$L = [L_1, L_2, \ldots, L_k] = F\, (F^T F)^{-1/2}. \qquad (2)$$

It follows that the $j$-th column of $L$ is given by

$$L_j = (0, \ldots, 0, \overbrace{1, \ldots, 1}^{n_j}, 0, \ldots, 0)^T / n_j^{1/2}, \qquad (3)$$

where $n_j$ is the sample size of the $j$-th cluster $C_j$. Denote $\mu_j = \sum_{x \in C_j} x / n_j$ as the mean of the $j$-th cluster $C_j$. The within-cluster scatter, between-cluster scatter, and total scatter matrices are defined as follows [7]:

$$S_w = \sum_{j=1}^{k} \sum_{x_i \in C_j} (x_i - \mu_j)(x_i - \mu_j)^T, \qquad S_b = \sum_{j=1}^{k} n_j\, \mu_j \mu_j^T = X L L^T X^T, \qquad S_t = X X^T. \qquad (4)$$

It follows that $\mathrm{trace}(S_w)$ captures the intra-cluster distance, and $\mathrm{trace}(S_b)$ captures the inter-cluster distance. It can be shown that $S_t = S_w + S_b$.

Given the cluster indicator matrix $F$ (or $L$), Linear Discriminant Analysis (LDA) aims to compute a linear transformation (projection) $P \in \mathbb{R}^{m \times d}$ that maps each $x_i$ in the $m$-dimensional space to a vector $\hat{x}_i = P^T x_i$ in the $d$-dimensional space ($d < m$), such that the objective function $\mathrm{trace}\big((P^T S_w P)^{-1} P^T S_b P\big)$ is maximized [7]. Since $S_t = S_w + S_b$, the optimal transformation matrix $P$ is also given by maximizing the following objective function:

$$\mathrm{trace}\big((P^T S_t P)^{-1} P^T S_b P\big). \qquad (5)$$

For high-dimensional data, the estimation of the total scatter (covariance) matrix is often not reliable. The regularization technique [6] is commonly applied to improve the estimation:

$$\tilde{S}_t = S_t + \lambda I_m = X X^T + \lambda I_m, \qquad (6)$$

where $I_m$ is the identity matrix of size $m$ and $\lambda > 0$ is a regularization parameter.
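A small sketch of the weighted cluster indicator matrix of Eqs. (1)-(3), which reappears throughout the derivations below: since $F^T F = \mathrm{diag}(n_1, \ldots, n_k)$, forming $L$ amounts to scaling each indicator column by $n_j^{-1/2}$. Function and variable names are ours.

```python
import numpy as np

def weighted_indicator(labels, k):
    """L = F (F^T F)^{-1/2}: column j is the 0/1 indicator of cluster j
    divided by sqrt(n_j), so that L^T L = I_k  (Eqs. 1-3).
    labels[i] in {0, ..., k-1} is the cluster of point i."""
    n = len(labels)
    F = np.zeros((n, k))
    F[np.arange(n), labels] = 1.0
    return F / np.sqrt(F.sum(axis=0, keepdims=True))  # assumes no empty cluster
```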
In Discriminative Clustering (DisCluster) [5, 10, 16], the transformation matrix $P$ and the weighted cluster indicator matrix $L$ are computed by maximizing the following objective function:

$f(L, P) \equiv \mathrm{trace}\left((P^T \tilde{S}_t P)^{-1} P^T S_b P\right) = \mathrm{trace}\left((P^T (X X^T + \lambda I_m) P)^{-1} P^T X L L^T X^T P\right)$.   (7)

The algorithm works in an intertwined and iterative fashion, alternating between the computation of $L$ for a given $P$ and the computation of $P$ for a given $L$. More specifically, for a given $L$, $P$ is given by the standard LDA procedure. Since $\mathrm{trace}(AB) = \mathrm{trace}(BA)$ for any two matrices [8], for a given $P$, the objective function $f(L, P)$ can be expressed as:

$f(L, P) = \mathrm{trace}\left(L^T X^T P (P^T (X X^T + \lambda I_m) P)^{-1} P^T X L\right)$.   (8)

Note that $L$ is not an arbitrary matrix, but a weighted cluster indicator matrix, as defined in Eq. (3). The optimal $L$ can be computed by applying the gradient descent strategy [10] or by solving a kernel K-means problem [5, 16] with $X^T P (P^T (X X^T + \lambda I_m) P)^{-1} P^T X$ as the kernel Gram matrix [4]. The algorithm is guaranteed to converge in terms of the value of the objective function $f(L, P)$, as the value of $f(L, P)$ monotonically increases and is bounded from above.

Experiments [5, 10, 16] have shown the effectiveness of DisCluster in comparison with several other popular clustering algorithms. However, the inherent relationship between subspace selection via LDA and clustering is not well understood, and there is need for further investigation. We show in the next section that the iterative subspace selection and clustering in DisCluster is equivalent to kernel K-means with a specific kernel Gram matrix. Based on this equivalence relationship, we propose the Discriminative K-means (DisKmeans) algorithm for simultaneous LDA subspace selection and clustering.

3 DisKmeans: Discriminative K-means with a Fixed $\lambda$

Assume that $\lambda$ is a fixed positive constant. Let's consider the maximization of the function in Eq. (7):

$f(L, P) = \mathrm{trace}\left((P^T (X X^T + \lambda I_m) P)^{-1} P^T X L L^T X^T P\right)$.   (9)

Here, $P$ is a transformation matrix and $L$ is a weighted cluster indicator matrix as in Eq. (3). It follows from the Representer Theorem [14] that the optimal transformation matrix $P \in \mathbb{R}^{m \times d}$ can be expressed as $P = X H$, for some matrix $H \in \mathbb{R}^{n \times d}$. Denote $G = X^T X$ as the Gram matrix, which is symmetric and positive semidefinite. It follows that

$f(L, P) = \mathrm{trace}\left(\left(H^T (G G + \lambda G) H\right)^{-1} H^T G L L^T G H\right)$.   (10)

We show that the matrix $H$ can be factored out from the objective function in Eq. (10), thus dramatically simplifying the optimization problem in the original DisCluster algorithm. The main result is summarized in the following theorem:

Theorem 3.1. Let $G$ be the Gram matrix defined as above and $\lambda > 0$ be the regularization parameter. Let $L^*$ and $P^*$ be the optimal solution to the maximization of the objective function $f(L, P)$ in Eq. (7). Then $L^*$ solves the following maximization problem:

$L^* = \arg\max_L \; \mathrm{trace}\left(L^T \left(I_n - \left(I_n + \tfrac{1}{\lambda} G\right)^{-1}\right) L\right)$.   (11)

Proof. Let $G = U \Sigma U^T$ be the Singular Value Decomposition (SVD) [8] of $G$, where $U \in \mathbb{R}^{n \times n}$ is orthogonal, $\Sigma = \mathrm{diag}(\sigma_1, \cdots, \sigma_t, 0, \cdots, 0) \in \mathbb{R}^{n \times n}$ is diagonal, and $t = \mathrm{rank}(G)$. Let $U_1 \in \mathbb{R}^{n \times t}$ consist of the first $t$ columns of $U$ and $\Sigma_t = \mathrm{diag}(\sigma_1, \cdots, \sigma_t) \in \mathbb{R}^{t \times t}$. Then

$G = U \Sigma U^T = U_1 \Sigma_t U_1^T$.   (12)

Denote $R = (\Sigma_t^2 + \lambda \Sigma_t)^{-\frac{1}{2}} \Sigma_t U_1^T L$ and let $R = M \Sigma_R N^T$ be the SVD of $R$, where $M$ and $N$ are orthogonal and $\Sigma_R$ is diagonal with $\mathrm{rank}(\Sigma_R) = \mathrm{rank}(R) = q$.
Define the matrix $Z = U \, \mathrm{diag}\left((\Sigma_t^2 + \lambda \Sigma_t)^{-\frac{1}{2}} M, \; I_{n-t}\right)$, where $\mathrm{diag}(A, B)$ is a block diagonal matrix. It follows that

$Z^T G L L^T G Z = \begin{pmatrix} \Delta & 0 \\ 0 & 0 \end{pmatrix}$, $\quad Z^T (G G + \lambda G) Z = \begin{pmatrix} I_t & 0 \\ 0 & 0 \end{pmatrix}$,   (13)

where $\Delta = (\Sigma_R)^2$ is diagonal with non-increasing diagonal entries. It can be verified that

$f(L, P) \le \mathrm{trace}\left((G G + \lambda G)^{+} G L L^T G\right) = \mathrm{trace}\left(L^T G (G G + \lambda G)^{+} G L\right) = \mathrm{trace}\left(L^T \left(I_n - \left(I_n + \tfrac{1}{\lambda} G\right)^{-1}\right) L\right)$,   (14)

where the equality holds when $P = X H$ and $H$ consists of the first $q$ columns of $Z$.

3.1 Computing the Weighted Cluster Matrix L

The weighted cluster indicator matrix $L$ solving the maximization problem in Eq. (11) can be computed by solving a kernel K-means problem [5] with the kernel Gram matrix given by

$\tilde{G} = I_n - \left(I_n + \tfrac{1}{\lambda} G\right)^{-1}$.   (15)

Thus, DisCluster is equivalent to a kernel K-means problem. We call the algorithm Discriminative K-means (DisKmeans).

3.2 Constructing the Kernel Gram Matrix via Subspace Selection

The kernel Gram matrix in Eq. (15) can be expressed as

$\tilde{G} = U \, \mathrm{diag}\left(\sigma_1/(\lambda + \sigma_1), \; \sigma_2/(\lambda + \sigma_2), \; \cdots, \; \sigma_n/(\lambda + \sigma_n)\right) U^T$.   (16)

Recall that the original DisCluster algorithm involves alternating LDA subspace selection and clustering. The analysis above shows that the LDA subspace selection in DisCluster essentially constructs a kernel Gram matrix for clustering. More specifically, all the eigenvectors of $G$ are kept unchanged, while the following transformation is applied to the eigenvalues: $\phi(\sigma) = \sigma/(\lambda + \sigma)$. This elucidates the nature of the subspace selection procedure in DisCluster. The clustering algorithm is dramatically simplified by removing the iterative subspace selection. We thus address issues (1)-(3) in Section 1. The last issue will be addressed in Section 4 below.

3.3 Connection with Other Clustering Approaches

Consider the limiting case when $\lambda \to \infty$. It follows from Eq. (16) that $\tilde{G} \approx G/\lambda$. The optimal $L$ is thus given by solving the following maximization problem: $\arg\max_L \; \mathrm{trace}(L^T G L)$. The solution is given by standard K-means clustering [4, 5].

Consider the other extreme case when $\lambda \to 0$. It follows from Eq. (16) that $\tilde{G} \approx U_1 U_1^T$. Note that the columns of $U_1$ form the full set of (normalized) principal components [9]. Thus, the algorithm is equivalent to clustering in the (full) principal component space.

4 DisKmeans$_\lambda$: Discriminative K-means with Automatically Tuned $\lambda$

Our experiments show that the value of the regularization parameter $\lambda$ has a significant impact on the performance of DisKmeans. In this section, we show how to incorporate the automatic tuning of $\lambda$ into the optimization framework, thus addressing issue (4) in Section 1. The maximization problem in Eq. (11) is equivalent to the minimization of the following function:

$\mathrm{trace}\left(L^T \left(I_n + \tfrac{1}{\lambda} G\right)^{-1} L\right)$.   (17)

It is clear that a small value of $\lambda$ leads to a small value of the objective function in Eq. (17). To overcome this problem, we include an additional penalty term to control the eigenvalues of the matrix $I_n + \tfrac{1}{\lambda} G$. This leads to the following optimization problem:

$\min_{L, \lambda} \; g(L, \lambda) \equiv \mathrm{trace}\left(L^T \left(I_n + \tfrac{1}{\lambda} G\right)^{-1} L\right) + \log \det\left(I_n + \tfrac{1}{\lambda} G\right)$.   (18)

Note that the objective function in Eq. (18) is closely related to the negative log marginal likelihood function in Gaussian Processes [12] with $I_n + \tfrac{1}{\lambda} G$ as the covariance matrix. We have the following main result for this section:

Theorem 4.1. Let $G$ be the Gram matrix defined above and let $L$ be a given weighted cluster indicator matrix. Let $G = U \Sigma U^T = U_1 \Sigma_t U_1^T$ be the SVD of $G$ with $\Sigma_t = \mathrm{diag}(\sigma_1, \cdots, \sigma_t)$ as in Eq. (12), and let $a_i$ be the $i$-th diagonal entry of the matrix $U_1^T L L^T U_1$.
Then for a fixed $L$, the optimal $\lambda^*$ solving the optimization problem in Eq. (18) is given by minimizing the following objective function:

$\sum_{i=1}^{t} \left( \frac{\lambda a_i}{\lambda + \sigma_i} + \log\left(1 + \frac{\sigma_i}{\lambda}\right) \right)$.   (19)

Proof. Let $U = [U_1, U_2]$, that is, $U_2$ is the orthogonal complement of $U_1$. It follows that

$\log \det\left(I_n + \tfrac{1}{\lambda} G\right) = \log \det\left(I_t + \tfrac{1}{\lambda} \Sigma_t\right) = \sum_{i=1}^{t} \log\left(1 + \sigma_i/\lambda\right)$,   (20)

$\mathrm{trace}\left(L^T \left(I_n + \tfrac{1}{\lambda} G\right)^{-1} L\right) = \mathrm{trace}\left(L^T U_1 \left(I_t + \tfrac{1}{\lambda} \Sigma_t\right)^{-1} U_1^T L\right) + \mathrm{trace}\left(L^T U_2 U_2^T L\right) = \sum_{i=1}^{t} \left(1 + \sigma_i/\lambda\right)^{-1} a_i + \mathrm{trace}\left(L^T U_2 U_2^T L\right)$.   (21)

The result follows as the second term in Eq. (21), $\mathrm{trace}(L^T U_2 U_2^T L)$, is a constant.

We can thus solve the optimization problem in Eq. (18) iteratively as follows: for a fixed $\lambda$, we update $L$ by minimizing the objective function in Eq. (17), which is equivalent to the DisKmeans algorithm; for a fixed $L$, we update $\lambda$ by minimizing the objective function in Eq. (19), which is a single-variable optimization and can be solved efficiently using a line search method. We call the algorithm DisKmeans$_\lambda$; its solution depends on the initial value of $\lambda$.

5 Kernel DisKmeans: Nonlinear Discriminative K-means using kernels

The DisKmeans algorithm can be easily extended to deal with nonlinear data using the kernel trick. Kernel methods [14] work by mapping the data into a high-dimensional feature space $\mathcal{F}$ equipped with an inner product through a nonlinear mapping $\phi: \mathbb{R}^m \to \mathcal{F}$. The nonlinear mapping can be implicitly specified by a symmetric kernel function $K$, which computes the inner product of the images of each data pair in the feature space. For a given training data set $\{x_i\}_{i=1}^n$, the kernel Gram matrix $G_K$ is defined as follows: $G_K(i, j) = (\phi(x_i), \phi(x_j))$. For a given $G_K$, the weighted cluster matrix $L = [L_1, \cdots, L_k]$ in kernel DisKmeans is given by minimizing the following objective function:

$\mathrm{trace}\left(L^T \left(I_n + \tfrac{1}{\lambda} G_K\right)^{-1} L\right) = \sum_{j=1}^{k} L_j^T \left(I_n + \tfrac{1}{\lambda} G_K\right)^{-1} L_j$.   (22)

The performance of kernel DisKmeans is dependent on the choice of the kernel Gram matrix. Following [11], we assume that $G_K$ is restricted to be a convex combination of a given set of kernel Gram matrices $\{G_i\}_{i=1}^{\ell}$ as $G_K = \sum_{i=1}^{\ell} \theta_i G_i$, where the coefficients $\{\theta_i\}_{i=1}^{\ell}$ satisfy $\sum_{i=1}^{\ell} \theta_i \, \mathrm{trace}(G_i) = 1$ and $\theta_i \ge 0$ for all $i$. If $L$ is given, the optimal coefficients $\{\theta_i\}_{i=1}^{\ell}$ may be computed by solving a semidefinite programming (SDP) problem as follows:

Theorem 5.1. Let $G_K$ be constrained to be a convex combination of a given set of kernel matrices $\{G_i\}_{i=1}^{\ell}$ as $G_K = \sum_{i=1}^{\ell} \theta_i G_i$ satisfying the constraints defined above. Then the optimal $G_K$ minimizing the objective function in Eq. (22) is given by solving the following SDP problem:

$\min_{t_1, \cdots, t_k, \theta} \; \sum_{j=1}^{k} t_j \quad \text{s.t.} \quad \begin{pmatrix} I_n + \frac{1}{\lambda} \sum_{i=1}^{\ell} \theta_i G_i & L_j \\ L_j^T & t_j \end{pmatrix} \succeq 0 \; \text{ for } j = 1, \cdots, k, \quad \theta_i \ge 0 \; \forall i, \quad \sum_{i=1}^{\ell} \theta_i \, \mathrm{trace}(G_i) = 1$.   (23)

Proof. It follows as $L_j^T \left(I_n + \tfrac{1}{\lambda} G_K\right)^{-1} L_j \le t_j$ is equivalent to $\begin{pmatrix} I_n + \frac{1}{\lambda} \sum_{i=1}^{\ell} \theta_i G_i & L_j \\ L_j^T & t_j \end{pmatrix} \succeq 0$.

This leads to an iterative algorithm alternating between the computation of the kernel Gram matrix $G_K$ and the computation of the cluster indicator matrix $L$. The parameter $\lambda$ can also be incorporated into the SDP formulation by treating the identity matrix $I_n$ as one of the kernel Gram matrices, as in [11]. The algorithm is named Kernel DisKmeans$_\lambda$. Note that unlike the kernel learning in [11], class label information is not available in our formulation.
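To make the constructions of Sections 3 and 4 concrete, here is a minimal Python sketch of DisKmeans for a fixed $\lambda$ and of the DisKmeans$_\lambda$ tuning loop, assuming NumPy, SciPy, and scikit-learn are available. The function names, and the use of an explicit spectral feature map in place of a dedicated kernel K-means routine, are our own illustrative choices, not code from the paper.

```python
# A minimal sketch of DisKmeans (Eqs. 15-16) and DisKmeans_lambda (Eq. 19),
# assuming NumPy/SciPy/scikit-learn. All helper names are ours.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.cluster import KMeans

def diskmeans(X, k, lam):
    """Cluster centered data X (m x n) into k clusters for a fixed lambda."""
    G = X.T @ X                               # Gram matrix G = X^T X
    sigma, U = np.linalg.eigh(G)              # eigen-decomposition of G
    sigma = np.clip(sigma, 0.0, None)         # guard tiny negative eigenvalues
    phi = sigma / (lam + sigma)               # eigenvalue map phi of Eq. (16)
    # Rows of Y satisfy Y Y^T = G_tilde, so plain K-means on Y is exactly
    # kernel K-means with the Gram matrix G_tilde of Eq. (15).
    Y = U * np.sqrt(phi)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(Y)
    return labels, sigma, U

def lambda_objective(lam, sigma, a):
    """Single-variable objective of Eq. (19), over the nonzero sigma_i."""
    return np.sum(lam * a / (lam + sigma) + np.log1p(sigma / lam))

def diskmeans_auto(X, k, lam0=1.0, n_iter=5):
    """DisKmeans_lambda: alternate the L update with the lambda line search."""
    n, lam = X.shape[1], lam0
    labels = None
    for _ in range(n_iter):
        labels, sigma, U = diskmeans(X, k, lam)
        L = np.zeros((n, k))                  # weighted indicator L of Eq. (3)
        for j in range(k):
            idx = labels == j
            L[idx, j] = 1.0 / np.sqrt(max(idx.sum(), 1))
        a = np.sum((U.T @ L) ** 2, axis=1)    # a_i = diag(U^T L L^T U)
        keep = sigma > 1e-10                  # the sum in Eq. (19) runs over i <= t
        res = minimize_scalar(lambda_objective, bounds=(1e-6, 1e6),
                              args=(sigma[keep], a[keep]), method='bounded')
        lam = res.x
    return labels, lam
```

Under this sketch, the limits of Section 3.3 can be checked numerically: a very large lam scales the map toward $G/\lambda$ (recovering standard K-means), while a very small lam pushes all nonzero eigenvalues toward one (clustering on the principal components).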
6 Empirical Study

In this section, we empirically study the properties of DisKmeans and its variants, and evaluate the performance of the proposed algorithms in comparison with several other representative algorithms, including Locally Linear Embedding (LLE) [13] and Laplacian Eigenmap (Leigs) [1].

Experiment Setup: All algorithms were implemented using Matlab and experiments were conducted on a PENTIUM IV 2.4G PC with 1.5GB RAM. We test these algorithms on eight benchmark data sets: five UCI data sets [2] (banding, soybean, segment, satimage, pendigits); one biological data set, leukemia (http://www.upo.es/eps/aguilar/datasets.html); and two image data sets, ORL (http://www.uk.research.att.com/facedatabase.html, sub-sampled to a size of 100*100 = 10000, from 10 persons) and USPS (ftp://ftp.kyb.tuebingen.mpg.de/pub/bs/data/). See Table 1 for more details.

Table 1: Summary of benchmark data sets
Data Set    # DIM (m)   # INST (n)   # CL (k)
banding          29          238          2
soybean          35          562         15
segment          19         2309          7
pendigits        16        10992         10
satimage         36         6435          6
leukemia       7129           72          2
ORL           10304          100         10
USPS            256         9298         10

To make the results of different algorithms comparable, we first run K-means, and the clustering result of K-means is used to construct the set of $k$ initial centroids for all experiments. This process is repeated 50 times with different sub-samples from the original data sets. We use two standard measurements, accuracy (ACC) and normalized mutual information (NMI), to measure the performance.

[Figure 1: The effect of the regularization parameter $\lambda$ on DisKmeans and DisCluster. Each panel plots ACC against $\lambda$ (from $10^{-6}$ to $10^{6}$) on one of the eight data sets (banding, soybean, segment, pendigits, satimage, leukemia, ORL, USPS), comparing K-means, DisCluster, and DisKmeans.]

Effect of the regularization parameter $\lambda$: Figure 1 shows the accuracy (y-axis) of DisKmeans and DisCluster for different $\lambda$ values (x-axis). We can observe that $\lambda$ has a significant impact on the performance of DisKmeans. This justifies the development of an automatic parameter tuning process in Section 4. We can also observe from the figure that when $\lambda \to \infty$, the performance of DisKmeans approaches that of K-means on all eight benchmark data sets. This is consistent with our theoretical analysis in Section 3.3. It is clear that in many cases $\lambda = 0$ is not the best choice.

Effect of parameter tuning in DisKmeans$_\lambda$: Figure 2 shows the accuracy of DisKmeans$_\lambda$ on 4 data sets. In the figure, the x-axis denotes the different $\lambda$ values used as the starting point for DisKmeans$_\lambda$. The result of DisKmeans (without parameter tuning) is also presented for comparison.
We can observe from the figure that in many cases the tuning process is able to significantly improve the performance. We observe similar trends on the other four data sets, and those results are omitted.

[Figure 2: The effect of the parameter tuning in DisKmeans$_\lambda$ on 4 data sets (satimage, pendigits, USPS, ORL). The x-axis denotes the different $\lambda$ values used as the starting point for DisKmeans$_\lambda$; each panel plots ACC for DisKmeans$_\lambda$ against DisKmeans without tuning.]

Figure 2 also shows that the tuning process is dependent on the initial value of $\lambda$ due to its nonconvex optimization, and when $\lambda \to \infty$, the effect of the tuning process becomes less pronounced. Our results show that a value of $\lambda$ that is neither too large nor too small works well.

[Figure 3: Comparison of the trace value achieved by DisKmeans and DisCluster on 4 data sets (segment, pendigits, satimage, USPS). The x-axis denotes the number of iterations in DisCluster; the trace value of DisCluster is bounded from above by that of DisKmeans.]

DisKmeans versus DisCluster: Figure 3 compares the trace value achieved by DisKmeans with the trace value achieved in each iteration of DisCluster on 4 data sets for a fixed $\lambda$. It is clear that the trace value of DisCluster increases in each iteration but is bounded from above by that of DisKmeans. We observe a similar trend on the other four data sets, and those results are omitted. This is consistent with our analysis in Section 3 that both algorithms optimize the same objective function, and DisKmeans is a direct approach to the trace maximization without the iterative process.

Clustering evaluation: Table 2 presents the accuracy (ACC) and normalized mutual information (NMI) results of the various algorithms on all eight data sets. In the table, "max" and "ave" stand for the maximal and average performance achieved by DisKmeans and DisCluster using $\lambda$ from a wide range of values between $10^{-6}$ and $10^{6}$. We can observe that DisKmeans$_\lambda$ is competitive with the other algorithms. It is clear that the average performance of DisKmeans$_\lambda$ is robust against different initial values of $\lambda$. We can also observe that the average performance of DisKmeans and DisCluster is quite similar, while DisCluster is less sensitive to the value of $\lambda$.

7 Conclusion

In this paper, we analyze the discriminative clustering (DisCluster) framework, which integrates subspace selection and clustering. We show that the iterative subspace selection and clustering in DisCluster is equivalent to kernel K-means with a specific kernel Gram matrix. We then propose the DisKmeans algorithm for simultaneous LDA subspace selection and clustering, as well as an automatic parameter tuning procedure. The connection between DisKmeans and several other clustering algorithms is also studied.
The presented analysis and algorithms are verified through experiments on a collection of benchmark data sets. We present the nonlinear extension of DisKmeans in Section 5. Our preliminary studies have shown the effectiveness of Kernel DisKmeans$_\lambda$ in learning the kernel Gram matrix. However, the SDP formulation is limited to small-sized problems. We plan to explore efficient optimization techniques for this problem. Partial label information may also be incorporated into the proposed formulations, leading to semi-supervised clustering [3]. We plan to examine various semi-supervised learning techniques within the proposed framework and their effectiveness for clustering from both labeled and unlabeled data.

Table 2: Accuracy (ACC) and Normalized Mutual Information (NMI) results on 8 data sets. "max" and "ave" stand for the maximal and average performance achieved by DisKmeans and DisCluster using $\lambda$ from a wide range of values between $10^{-6}$ and $10^{6}$. We present the result of DisKmeans$_\lambda$ with different initial $\lambda$ values. LLE stands for Locally Linear Embedding and LEI for Laplacian Eigenmap. "AVE" stands for the mean of ACC or NMI over the 8 data sets for each algorithm.

ACC
Data Set    DisKmeans(max)  DisCluster(max)  DisKmeans_lambda: 10^-2  10^-1   10^0    10^1    LLE     LEI     DisKmeans(ave)  DisCluster(ave)
banding         0.771           0.771            0.771               0.771   0.771   0.771   0.648   0.764       0.768           0.767
soybean         0.641           0.633            0.639               0.639   0.638   0.637   0.630   0.649       0.634           0.632
segment         0.687           0.676            0.664               0.659   0.671   0.680   0.594   0.663       0.664           0.672
pendigits       0.699           0.696            0.700               0.696   0.696   0.697   0.599   0.697       0.690           0.690
satimage        0.701           0.654            0.696               0.712   0.696   0.683   0.627   0.663       0.651           0.642
leukemia        0.775           0.738            0.738               0.753   0.738   0.738   0.714   0.686       0.763           0.738
ORL             0.744           0.739            0.749               0.743   0.748   0.748   0.733   0.317       0.738           0.738
USPS            0.712           0.692            0.684               0.702   0.680   0.684   0.631   0.700       0.628           0.683
AVE             0.716           0.700            0.705               0.709   0.705   0.705   0.647   0.642       0.692           0.695

NMI
Data Set    DisKmeans(max)  DisCluster(max)  DisKmeans_lambda: 10^-2  10^-1   10^0    10^1    LLE     LEI     DisKmeans(ave)  DisCluster(ave)
banding         0.225           0.225            0.225               0.225   0.225   0.225   0.093   0.213       0.221           0.219
soybean         0.707           0.698            0.706               0.707   0.704   0.704   0.691   0.709       0.701           0.696
segment         0.632           0.615            0.629               0.625   0.628   0.632   0.539   0.618       0.612           0.608
pendigits       0.669           0.660            0.661               0.658   0.658   0.660   0.577   0.645       0.656           0.654
satimage        0.593           0.551            0.597               0.608   0.596   0.586   0.493   0.548       0.537           0.541
leukemia        0.218           0.163            0.163               0.185   0.163   0.163   0.140   0.043       0.199           0.163
ORL             0.794           0.789            0.800               0.795   0.801   0.800   0.784   0.327       0.789           0.788
USPS            0.647           0.629            0.612               0.637   0.609   0.612   0.569   0.640       0.544           0.613
AVE             0.561           0.541            0.549               0.555   0.548   0.548   0.486   0.468       0.532           0.535

Acknowledgments

This research is sponsored by the National Science Foundation Grant IIS-0612069.

References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. In NIPS, 2003.
[2] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998.
[3] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. The MIT Press, 2006.
[4] I. S. Dhillon, Y. Guan, and B. Kulis. A unified view of kernel k-means, spectral clustering and graph partitioning. Technical report, Department of Computer Sciences, University of Texas at Austin, 2005.
[5] C. Ding and T. Li. Adaptive dimension reduction using discriminant analysis and k-means clustering. In ICML, 2007.
[6] J. H. Friedman. Regularized discriminant analysis. JASA, 84(405):165-175, 1989.
[7] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, 1990.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins Univ. Press, 1996.
[9] I.T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002.
[10] F. De la Torre Frade and T. Kanade. Discriminative cluster analysis. In ICML, pages 241-248, 2006.
[11] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27-72, 2004.
[12] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[13] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[14] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
[15] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38:49-95, 1996.
[16] J. Ye, Z. Zhao, and H. Liu. Adaptive distance metric learning for clustering. In CVPR, 2007.
Exponential Family Predictive Representations of State

David Wingate
Computer Science and Engineering, University of Michigan
wingated@umich.edu

Satinder Singh
Computer Science and Engineering, University of Michigan
baveja@umich.edu

Abstract

In order to represent state in controlled, partially observable, stochastic dynamical systems, some sort of sufficient statistic for history is necessary. Predictive representations of state (PSRs) capture state as statistics of the future. We introduce a new model of such systems called the "Exponential family PSR," which defines as state the time-varying parameters of an exponential family distribution which models $n$ sequential observations in the future. This choice of state representation explicitly connects PSRs to state-of-the-art probabilistic modeling, which allows us to take advantage of current efforts in high-dimensional density estimation, and in particular, graphical models and maximum entropy models. We present a parameter learning algorithm based on maximum likelihood, and we show how a variety of current approximate inference methods apply. We evaluate the quality of our model with reinforcement learning by directly evaluating the control performance of the model.

1 Introduction

One of the basic problems in modeling controlled, partially observable, stochastic dynamical systems is representing and tracking state. In a reinforcement learning context, the state of the system is important because it can be used to make predictions about the future, or to control the system optimally. Often, state is viewed as an unobservable, latent variable, but models with predictive representations of state [4] propose an alternative: PSRs represent state as statistics about the future. The original PSR models used the probability of specific, detailed futures called tests as the statistics of interest. Recent work has introduced the more general notion of using parameters that model the distribution of length-$n$ futures as the statistics of interest [8].

To clarify this, consider an agent interacting with the system. It observes a series of observations $o_1 \ldots o_t$, which we call a history $h_t$ (where subscripts denote time). Given any history, there is some distribution over the next $n$ observations: $p(O_{t+1} \ldots O_{t+n} | h_t) \equiv p(F^n | h_t)$ (where $O_{t+i}$ is the random variable representing an observation $i$ steps in the future, and $F^n$ is a mnemonic for future). We emphasize that this distribution directly models observable quantities in the system. Instead of capturing state with tests, the more general idea is to capture state by directly modeling the distribution $p(F^n | h_t)$. Our central assumption is that the parameters describing $p(F^n | h_t)$ are sufficient for history, and therefore constitute state (as the agent interacts with the system, $p(F^n | h_t)$ changes because $h_t$ changes; therefore the parameters, and hence state, change). As an example of this, the Predictive Linear-Gaussian (PLG) model [8] assumes that $p(F^n | h_t)$ is jointly Gaussian; state therefore becomes its mean and covariance. Nothing is lost by defining state in terms of observable quantities: Rudary et al. [8] proved that the PLG is formally equivalent to the latent-variable approach in linear dynamical systems. In fact, because the parameters are grounded, statistically consistent parameter estimators are available for PLGs.

Thus, as part of capturing state in a dynamical system in our method, $p(F^n | h_t)$ must be estimated. This is a density estimation problem.
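As a concrete illustration of the estimation target, the following small Python sketch forms the length-$n$ futures from a raw observation sequence; each row is one sample $f_t = [o_{t+1}, \ldots, o_{t+n}]$, whose conditional density $p(F^n | h_t)$ the model must track. The function name is ours, for illustration only.

```python
# A minimal sketch, assuming observations arrive as a T x d NumPy array
# [o_1, ..., o_T] (0-based: obs[i] holds o_{i+1}).
import numpy as np

def stack_futures(obs, n):
    """Return a (T - n + 1) x (n * d) array whose t-th row is f_t = [o_{t+1} ... o_{t+n}]."""
    T = obs.shape[0]
    return np.stack([obs[t:t + n].ravel() for t in range(T - n + 1)])
```

Everything downstream, from structure induction to parameter learning, treats these rows as the observed data.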
In systems with rich observations (say, camera images), $p(F^n | h_t)$ may have high dimensionality. As in all high-dimensional density estimation problems, structure must be exploited. It is therefore natural to connect to the large body of recent research dealing with high-dimensional density estimation, and in particular, graphical models. In this paper, we introduce the Exponential Family PSR (EFPSR), which assumes that $p(F^n | h_t)$ is a standard exponential family distribution. By selecting the sufficient statistics of the distribution carefully, we can impose graphical structure on $p(F^n | h_t)$, and therefore make explicit connections to graphical models, maximum entropy modeling, and Boltzmann machines. The EFPSR inherits both the advantages and disadvantages of graphical exponential family models: inference and parameter learning in the model is generally hard, but all existing research on exponential family distributions is applicable (in particular, work on approximate inference).

Selecting the form of $p(F^n | h_t)$ and estimating its parameters to capture state is only half of the problem. We must also model the dynamical component, which describes the way that the parameters vary over time (that is, how the parameters of $p(F^n | h_t)$ and $p(F^n | h_{t+1})$ are related). We describe a method called "extend-and-condition," which generalizes many state update mechanisms in PSRs.

Importantly, the EFPSR has no hidden variables, but can still capture state, which sets it apart from other graphical models of sequential data. It is not directly comparable to latent-variable models such as HMMs, CRFs [3], or Maximum-entropy Markov Models (MEMMs) [5], for example. In particular, EM-based procedures used in the latent-variable models for parameter learning are unnecessary, and indeed, impossible. This is a consequence of the fact that the model is fully observed: all statistics of interest are directly related to observable quantities. We refer the reader to [11] for an extended version of this paper.

2 The Exponential Family PSR

We now present the Exponential Family PSR (EFPSR) model. The next sections discuss the specifics of the central parts of the model: the state representation, and how we maintain that state.

2.1 Standard Exponential Family Distributions

We first discuss exponential family distributions, which we use because of their close connections to maximum entropy modeling and graphical models. We refer the reader to Jaynes [2] for detailed justification, but briefly, he states that the maximum entropy distribution "agrees with everything that is known, but carefully avoids assuming anything that is not known," which "is the fundamental property which justifies its use for inference." The standard exponential family distribution is the form of the maximum entropy distribution under certain constraints. For a random variable $X$, a standard exponential family distribution has the form $p(X = x; s) = \exp\{s^T \phi(x) - Z(s)\}$, where $s$ is the canonical (or natural) vector of parameters and $\phi(x)$ is a vector of features of the variable $x$. The vector $\phi(x)$ also forms the sufficient statistics of the distribution. The term $Z(s)$ is known as the log-partition function, and is a normalizing constant which ensures that $p(X; s)$ defines a valid distribution: $Z(s) = \log \int \exp\{s^T \phi(x)\} \, dx$. By carefully selecting the features $\phi(x)$, graphical structure may be imposed on the distribution.

2.2 State Representation and Dynamics

State. The EFPSR defines state as the parameters of an exponential family distribution modeling $p(F^n | h_t)$. To emphasize that these parameters represent state, we will refer to them as $s_t$:

$p(F^n = f^n | h_t; s_t) = \exp\{s_t^T \phi(f^n) - \log Z(s_t)\}$,   (1)

with both $\phi(f^n), s_t \in \mathbb{R}^{l \times 1}$. We emphasize that $s_t$ changes with history, but $\phi(f^n)$ does not.

Maintaining State. In addition to selecting the form of $p(F^n | h_t)$, there is a dynamical component: given the parameters of $p(F^n | h_t)$, how can we incorporate a new observation to find the parameters of $p(F^n | h_t, o_{t+1})$? Our strategy is to extend and condition, as we now explain.
The EFPSR defines state as the parameters of an exponential family distribution modeling p(F n |ht ). To emphasize that these parameters represent state, we will refer to them as st :  n p(F n = f n |ht ; st ) = exp s? (1) t ?(f ) ? log Z(st ) , with both { ?(f n ), st } ? Rl?1 . We emphasize that st changes with history, but ?(f n ) does not. Maintaining State. In addition to selecting the form of p(F n |ht ), there is a dynamical component: given the parameters of p(F n |ht ), how can we incorporate a new observation to find the parameters of p(F n |ht , ot+1 )? Our strategy is to extend and condition, as we now explain. 2 Extend. We assume that we have the parameters of p(F n |ht ), denoted st . We extend the distribution of F n |ht to include Ot+n+1 , which forms a new variable F n+1 |ht , and we assume it has the distribution p(F n , Ot+n+1 |ht ) = p(F n+1 |ht ). This is a temporary distribution with (n + 1)d random variables. In order to add the new variable Ot+n+1 , we must add new features which describe Ot+n+1 and its relationship to F n . We capture this with a new feature vector ?+ (f n+1 ) ? Rk?1 , k?1 and define the vector s+ to be the parameters associated with this feature vector. We thus t ? R have the following form for the extended distribution:  +? + n+ p(F n+1 = f n+1 |ht ; s+ ) ? log Z(s+ t ) = exp st ? (f t ) . To define the dynamics, we define a function which maps the current state vector to the parameters of the extended distribution. We call this the extension function: s+ t = extend(st ; ?), where ? is a vector of parameters controlling the extension function (and hence, the overall dynamics). The extension function helps govern the kinds of dynamics that the model can capture. For example, in the PLG family of work, a linear extension allows the model to capture linear dynamics [8], while a non-linear extension allows the model to capture non-linear dynamics [11]. Condition. Once we have extended the distribution to model the n + 1?st observation in the future, we then condition on the actual observation ot+1 , which results in the parameters of a distribution over observations from t + 1 through t + n + 1: st+1 = condition(s+ t , ot+1 ), which are precisely the statistics representing p(F n |ht+1 ), which is our state at time t + 1. By extending and conditioning, we can maintain state for arbitrarily long periods. Furthermore, for many choices of features and extension function, the overall extend-and-condition operation does not involve any inference, mean that tracking state is computationally efficient. There is only one restriction on the extension function: we must ensure that after extending and conditioning the distribution, the resulting distribution can be expressed as: p(F n = f n |ht+1 ; st+1 ) = n exp{s? t+1 ?(f ) ? log Z(st+1 )}. This looks like exactly like Eq. 1, which is the point: the feature vector ? did not change between timesteps, which means the form of the distribution does not change. For example, if p(F n |ht ) is a Gaussian, then p(F n |ht+1 ) will also be a Gaussian. 2.3 Representational Capacity The EFPSR model is quite general. It has been shown that a number of popular models can be unified under the umbrella of the general EFPSR: for example, every PSR can be represented as an EFPSR (implying that every POMDP, MDP, and k-th order Markov model can also be represented as an EFPSR); and every linear dynamical system (Kalman filter) and some nonlinear dynamical systems can also be represented by an EFPSR. 
These different models are obtained with different choices of the features ? and the extension function, and are possible because many popular distributions (such as multinomials and Gaussians) are exponential family distributions [11]. 3 The Linear-Linear EFPSR We now choose specific features and extension function to generate an example model designed to be analytically tractable. We select a linear extension function, and we carefully choose features so that conditioning is always a linear operation. We restrict the model to domains in which the observations are vectors of binary random variables. The result is named the Linear-Linear EFPSR. Features. Recall that the features ?() and ?+ () do not depend on time. This is equivalent to saying that the form of the distribution does not vary over time. If the features impose graphical structure on the distribution, it is also equivalent to saying that the form of the graph does not change over time. Because of this, we will now discuss how we can use a graph whose form is independent of time to help define structure on our distributions. We construct the feature vectors ?() and ?+ () as follows. Let each Ot ? {0, 1}d ; therefore, each i F n |ht ? {0, 1}nd . Let (F n ) be the i?th random variable in F n |ht . We assume that we have an undirected graph G which we will use to create the features in the vector ?(), and that we have another graph G+ which we will use to define the features in the vector ?+ (). Define G = (V, E) i where V = {1, ..., nd} are the nodes in the graph (one for each F n |ht ), and (i, j) ? E are the 3 G+ Observation features G t+1 t+2 t+n Distribution of next n observations p(F n |ht ) t+1 t+2 t+n G t+n+1 Extended distribution p(F n , Ot+n+1 |ht ) t+1 t+2 t+n t+n+1 Conditioned distribution p(F n |ht , ot+1 ) Figure 1: An illustration of extending and conditioning the distribution. edges. Similarly, we define G+ = (V +, E+) where V + = {1, ..., (n + 1)d} are the nodes in the i graph (one for each (F n+1 |ht ) ), and (i, j) ? E+ are the edges. Neither graph depends on time. To use the graph to define our distribution, we will let entries in ? be conjunctions of atomic observation variables (like the standard Ising model): for i ? V , there will be some feature k in the vector such that ?(ft )k = fti . We also create one feature for each edge: if (i, j) ? E, then there will be some feature k in the vector such that ?(ft )k = fti ftj . Similarly, we use G+ to define ?+ (). As discussed previously, neither G nor G+ (equivalently, ? and ?+ ) can be arbitrary. We must ensure that after conditioning G+ , we recover G. To accomplish this, we ensure that both temporally shifted copies and conditioned versions of each feature exist in the graphs (seen pictorially in Fig. 1). Because all features are either atomic variables or conjunctions of variables, conditioning the distribution can be done with an operation which is linear in the state (this is true even if the random variables are discrete or real-valued). We therefore define the linear conditioning operator G(ot+1 ) + to be a matrix which transforms s+ t into st+1 : st+1 = G(ot+1 )st . See [11] for details. Linear extension. In general, the function extend can take any form. We choose a linear extension: s+ t = Ast + B where A ? Rk?l and B ? Rk?1 are our model parameters. The combination of a linear extension and a linear conditioning operator can be rolled together into a single operation. 
Without loss of generality, we can permute the indices in our state vector such that $s_{t+1} = G(o_{t+1})(A s_t + B)$. Note that although this is linear in the state, it is nonlinear in the observation.

4 Model Learning

We have defined our concept of state, as well as our method for tracking that state. We now address the question of learning the model from data. There are two things which can be learned in our model: the structure of the graph, and the parameters governing the state update. We briefly address each in the next two subsections. We assume we are given a sequence of $T$ observations, $[o_1 \cdots o_T]$, which we stack to create a sequence of samples from the $F^n | h_t$'s: $f_t | h_t = [o_{t+1} \cdots o_{t+n} | h_t]$.

4.1 Structure Learning

To learn the graph structure, we make the approximation of ignoring the dynamical component of the model. That is, we treat each $f_t$ as an observation, and try to estimate the density of the resulting unordered set, ignoring the $t$ subscripts (we appeal to density estimation because many good algorithms have been developed for structure induction). We therefore ignore temporal relationships across samples, but we preserve temporal relationships within samples. For example, if observation $a$ is always followed by observation $b$, this fact will be captured within the $f_t$'s. The problem therefore becomes one of inducing graphical structure for a non-sequential data set, which is a problem that has already received considerable attention. In all of our experiments, we used the method of Della Pietra et al. [7]. Their method iteratively evaluates a set of candidate features and adds the one with the highest expected gain in log-likelihood. To enforce the temporal
n t=1 p(ft |ht ) (the likelihoods are not the same because the likelihood of the ft ?s counts a single observation n times; the approximate equality is because the first n and last n are counted fewer than n times). The expected log-likelihood of the training ft ?s under the model defined in Eq. 1 is ! T 1 X ? LL = ?s ?(ft ) ? log Z(st ) T t=1 t (2) Our goal is to maximize this quantity. Any optimization method can be used to maximize the loglikelihood. Two popular choices are gradient ascent and quasi-Newton methods, such as (L-)BFGS. We use both, for different problems (as discussed later). However, both methods require the gradient of the likelihood with respect to the parameters, which we will now compute. Using the chain rule of derivatives, we can compute the derivative with respect to the parameters A: T ?LL X ?LL ? ?st = (3) ?A ?st ?A t=1 First, we compute the derivative of the log-likelihood with respect to each state:  ?LL ?  ? = ?st ?(ft ) ? log Z(st ) = Est [?(F n |ht )] ? ?(ft ) ? ?t (4) ?st ?st where Est [?(F n |ht )] ? Rl?1 is the vector of expected sufficient statistics at time t. Computing this is a standard inference problem in exponential family models, as discussed in Section 5. This gradient tells us that we wish to adjust each state to make the expected features of the next n observations closer to the observed features however, we cannot adjust st directly; instead, we must adjust it implicitly by adjusting the transition parameters A and B. We now compute the gradients of the state with respect to each parameter:   ? ?st?1 ?st = G(ot+1 ) (Ast?1 + B) = G(ot+1 ) A + s? ? I . t?1 ?A ?A ?A where ? is the Kronecker product, and I is an identity matrix the same size as A. The gradients of the state with respect to B are given by   ?st ? ?st?1 = G(ot+1 ) (Ast?1 + B) = G(ot+1 ) A +I ?B ?B ?B These gradients are temporally recursive ? they implicitly depend on gradients from all previous timesteps. It might seem prohibitive to compute them: must an algorithm examine all past t1 ? ? ? tt?1 data points to compute the gradient at time t? Fortunately, the answer is no: the necessary statistics can be computed in a recursive fashion as the algorithm walks through the data. 5 Training LL Testing LL True LL Naive LL Log?likelihood ?1.4 ?1.6 p ?2.07 1?p A q B 1?q ?1.8 ?2.08 0 10 20 ?2 0 10 20 0 10 20 0 10 20 Iterations of optimization (a) (b) Figure 2: Results on two-state POMDPs. The right shows the generic model used. By varying the transition and observation probabilities, three different POMDPs were generated. The left shows learning performance on the three models. Likelihoods for naive predictions are shown as a dotted line near the bottom; likelihoods for optimal predictions are shown as a dash-dot line near the top. Problem Paint Network Tiger # of states 16 7 2 # of obs. 2 2 2 # of actions 4 4 3 Naive LL 6.24 6.24 6.24 True LL 4.66 4.49 5.23 Training set LL % 4.67 99.7 4.50 99.5 5.24 92.4 Test set LL % 4.66 99.9 4.52 98.0 5.25 86.0 Figure 3: Results on standard POMDPs. See text for explanation. 5 Inference In order to compute the gradients needed for model learning, the expected sufficient statistics E[?(F n |ht )] at each timestep must be computed (see Eq. 4): Z E [?(F n |ht )] = ?(ft )p(F n |ht )dft = ?Z(s). This quantity, also known as the mean parameters, is of central interest in standard exponential families, and has several interesting properties. 
For example, each possible set of canonical parameters s induces one set of mean parameters; assuming that the features are linearly independent, each set of valid mean parameters is uniquely determined by one set of canonical parameters [9]. Computing these marginals is an inference problem. This is repeated T times (the number of samples) in order to get one gradient, which is then used in an outer optimization loop; because inference must be repeatedly performed in our model, computational efficiency is a more stringent requirement than accuracy. In terms of inference, our model inherits all of the properties of graphical models, for better and for worse. Exact inference in our model is generally intractable, except in the case of fully factorized or tree-structured graphs. However, many approximate algorithms exist: there are variational methods such as naive mean-field, tree-reweighted belief propagation, and log-determinant relaxations [10]; other methods include Bethe-Kikuchi approximations, expectation propagation, (loopy) belief propagation, MCMC methods, and contrastive divergence [1]. 6 Experiments and Results Two sets of experiments were conducted to evaluate the quality of our model and learning algorithm. The first set tested whether the model could capture exact state, given the correct features and exact inference. We evaluated the learned model using exact inference to compute the exact likelihood of the data, and compared to the true likelihood. The second set tested larger models, for which exact inference is not possible. For the second set, bounds can be provided for the likelihoods, but may be so loose as to be uninformative. How can we assess the quality of the final model? One objective gauge is control performance: if the model has a reward signal, reinforcement learning can be used to determine an optimal policy. Evaluating the reward achieved becomes an objective measure of model quality, even though approximate likelihood is the learning signal. 6 EFPSR/VMF 0.15 EFPSR/LBP 0.1 EFPSR/LDR POMDP 0.05 Reactive 0 1 2 3 4 5 Steps of optimization 6 Random Average Reward Average Reward 0.2 0.15 0.1 0.05 0 1 2 3 4 5 Steps of optimization 6 Figure 4: Results on Cheesemaze (left) and Maze 4x3 (right) for different inference methods. First set. We tested on three two-state problems, as well as three small, standard POMDPs. For each problem, training and test sets were generated (using a uniformly random policy for controlled systems). We used 10,000 samples, set n = 3 and used structure learning as explained in Section 4.1. We used exact inference to compute the E[?(F n |ht )] term needed for the gradients. We optimized the likelihood using BFGS. For each dataset, we computed the log-likelihood of the data under the true model, as well as the log-likelihood of a ?naive? model, which assigns uniform probability to every possible observation. We then learned the best model possible, and compared the final log-likelihood under the learned and true models. Figure 2 (a) shows results for three two-state POMDPs with binary observations. The left panel of Fig. 2 (a) shows results for a two-state MDP. The likelihood of the learned model closely approaches the likelihood of the true model (although it does not quite reach it; this is because the model has trouble modeling deterministic observations, because the weights in the exponential need to be infinitely large [or small] to generate a probability of one [or zero]). 
The middle panel shows results for a moderately noisy POMDP; again, the learned model is almost perfect. The third panel shows results for a very noisy POMDP, in which the naive and true LLs are very close; this indicates that prediction is difficult, even with a perfect model. Figure 3 shows results for three standard POMDPs, named Paint, Network and Tiger1 . The table conveys similar information to the graphs: naive and true log-likelihoods, as well as the loglikelihood of the learned models (on both training and test sets). To help interpret the results, we also report a percentage (highlighted in bold), which indicates the amount of the likelihood gap (between the naive and true models) that was captured by the learned model. Higher is better; again we see that the learned models are quite accurate, and generalize well. Second set. We also tested on a two more complicated POMDPs called Cheesemaze and Maze 4x31 . For both problems, exact inference is intractable, and so we used approximate inference. We experimented with loopy belief propagation (LBP) [12], naive mean field (or variational mean field, VMF), and log-determinant relaxations (LDR) [10]. Since the VMF and LDR bounds on the loglikelihood were so loose (and LBP provides no bound), it was impossible to assess our model by an appeal to likelihood. Instead, we opted to evaluate the models based on control performance. We used the Natural Actor Critic (or NAC) algorithm [6] to test our model (see [11] for further experiments). The NAC algorithm requires two things: a stochastic, parameterized policy which operates as a function of state, and the gradients of the log probability of that policy. We used a softmax function of a linear projection of the state: the probability of taking action ai from state st  ?  P|A| given the policy parameters ? is: p(ai ; st , ?) = exp s? t ?i / j=1 exp st ?j . The parameters ? are to be determined. For comparison, we also ran the NAC planner with the POMDP belief state: we used the same stochastic policy and the same gradients, but we used the belief state of the true POMDP in place of the EFPSR?s state (st ). We also tested NAC with the first-order Markov assumption (or reactive policy) and a totally random policy. Results. Figure 4 shows the results for Cheesemaze. The left panel shows the best control performance obtained (average reward per timestep) as a function of steps of optimization. The ?POMDP? line shows the best reward obtained using the true belief state as computed under the true model, the ?Random? line shows the reward obtained with a random policy, and the ?Reactive? line shows the best reward obtained by using the observation as input to the NAC algorithm. The lines ?VMF,? ?LBP,? and ?LDR? correspond to the different inference methods. 1 From Tony Cassandra?s POMDP repository at http://www.cs.brown.edu/research/ai/pomdp/index.html 7 The EFPSR models all start out with performance equivalent to the random policy (average reward of 0.01), and quickly hop to of 0.176. This is close to the average reward of using the true POMDP state at 0.187. The EFPSR policy closes about 94% of the gap between a random policy and the policy obtained with the true model. Surprisingly, only a few iterations of optimization were necessary to generate a usable state representation. Similar results hold for the Maze 4x3 domain, although the improvement over the first order Markov model is not as strong: the EFPSR closes about 77.8% of the gap between a random policy and the optimal policy. 
We conclude that the EFPSR has learned a model which successfully incorporates information from history into the state representation, and that it is this information which the NAC algorithm uses to obtain better-than-reactive performance. This implies that the model and learning algorithm are useful even with approximate inference methods, and even in cases where we cannot compare to the exact likelihood. 7 Conclusions We have presented the Exponential Family PSR, a new model of controlled, stochastic dynamical systems which provably unifies other models with predictively defined state. We have also discussed a specific member of the EFPSR family, the Linear-Linear EFPSR, and a maximum likelihood learning algorithm. We were able to learn almost perfect models of several small POMDP systems, both from a likelihood perspective and from a control perspective. The biggest drawback is computational: the repeated inference calls make the learning process very slow. Improving the learning algorithm is an important direction for future research. While slow, the learning algorithm generates models which can be accurate in terms of likelihood and useful in terms of control performance. Acknowledgments David Wingate was supported under a National Science Foundation Graduate Research Fellowship. Satinder Singh was supported by NSF grant IIS-0413004. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. References [1] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771?1800, 2002. [2] E. T. Jaynes. Notes on present status and future prospects. In W. Grandy and L. Schick, editors, Maximum Entropy and Bayesian Methods, pages 1?13, 1991. [3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning (ICML), 2001. [4] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Neural Information Processing Systems (NIPS), pages 1555?1561, 2002. [5] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In International Conference on Machine Learning (ICML), pages 591?598, 2000. [6] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In European Conference on Machine Learning (ECML), pages 280?291, 2005. [7] S. D. Pietra, V. D. Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380?393, 1997. [8] M. Rudary, S. Singh, and D. Wingate. Predictive linear-Gaussian models of stochastic dynamical systems. In Uncertainty in Artificial Intelligence (UAI), pages 501?508, 2005. [9] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, UC Berkeley, 2003. [10] M. J. Wainwright and M. I. Jordan. Log-determinant relaxation for approximate inference in discrete Markov random fields. IEEE Transactions on Signal Processing, 54(6):2099?2109, 2006. [11] D. Wingate. Exponential Family Predictive Representations of State. PhD thesis, University of Michigan, 2008. [12] J. S. Yedida, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Technical Report TR-2001-22, Mitsubishi Electric Research Laboratories, 2001. 8